---
title: "`r params$set_title`"
output: html_document
params:
  set_title: "The Task Space: An Integrative Framework for Team Research"
---
*Management Science*, Accepted October 2025
Authors: Xinlan Emily Hu, Mark E. Whiting, Linnea Gandhi, Duncan J. Watts, and Abdullah Almaatouq
# Read the <a href="https://osf.io/preprints/psyarxiv/543sz" target="_blank">Paper</a>!
Access the <a href="https://osf.io/5dhu2/files/rjwhu" target="_blank">Appendices and Supplementary Materials</a> for details about the tasks, annotation process, experimental design, and analyses.
# Abstract
Research on teams spans many contexts, but integrating knowledge from heterogeneous sources is challenging because studies typically examine different tasks that cannot be directly compared. Most investigations involve teams working on just one or a handful of tasks, and researchers lack principled ways to quantify how similar or different these tasks are from one another. We address this challenge by introducing the “Task Space,” a multidimensional space in which tasks—and the distances between them—can be represented formally, and use it to create a “Task Map” of 102 crowd-annotated tasks from the published experimental literature. We then demonstrate the Task Space’s utility by performing an integrative experiment that addresses a fundamental question in team research: *when do interacting groups outperform individuals?* Our experiment samples 20 diverse tasks from the Task Map at three complexity levels and recruits 1,231 participants to work either individually or in groups of three or six (180 experimental conditions). We find striking heterogeneity in group advantage, with groups performing anywhere from three times worse to 60% better than the best individual working alone, depending on the task context. Critically, the Task Space makes this heterogeneity predictable: it significantly outperforms traditional typologies in predicting group advantage on unseen tasks. Our models also reveal theoretically meaningful interactions between task features; for example, group advantage on creative tasks depends on whether the answers are objectively verifiable. We conclude by arguing that the Task Space enables researchers to integrate findings across different experiments, thereby building cumulative knowledge about team performance.
# Understanding teams requires understanding their tasks.
> Is a larger group more effective than a smaller group? Are groups more productive than individuals? Can AI enhance team performance?
The answers to many critical questions about teamwork depend on the task being performed. Consequently, researchers and practitioners alike need a practical way to quantify how similar or different two tasks are from one another --- and hence to understand how findings from one task might generalize to another.
The Task Space provides a framework to do exactly that. It allows researchers to represent any task as a point in a 24-dimensional space, where each dimension corresponds to a theoretically motivated feature of the task.
Potential applications of the Task Space include:
- Intelligently **selecting tasks for experiments** (for example, to [maximize the diversity of tasks selected](https://github.com/Watts-Lab/task-mapping/blob/master/task%20space%20resources/choose-10-tasks.ipynb), or to focus on tasks with special properties);
- **Quantifying task similarities and differences**, both to understand the generalizability of findings and to control for task features in statistical analysis;
- **Building "task-sensitive" theories** that account for how team behavior varies as a function of task features;
- **Designing team environments** that are tailored to the specific task at hand (for example, assigning actors to tasks for which they have a comparative advantage).
# Using the Task Space
As a starting point, we built a "map" of the space: a repository of 102 laboratory tasks from the interdisciplinary literature on group performance, rated along the 24 Task Space dimensions using a crowd annotation process.
## 1. Download the Task Map
The Task Map (102 x 24 matrix) can be downloaded here: [`task_map.csv`](https://raw.githubusercontent.com/Watts-Lab/task-mapping/master/data/task_map.csv).
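Once downloaded, the Task Map can be treated as an ordinary feature matrix. The sketch below (a minimal illustration, not our analysis code; the task names, dimension columns, and values are stand-ins for the real CSV's layout) shows how to compute pairwise distances between tasks in the feature space:

```r
# Toy stand-in for task_map.csv: one row per task, one column per dimension.
# In practice, replace this with:
# task_map <- read.csv("https://raw.githubusercontent.com/Watts-Lab/task-mapping/master/data/task_map.csv")
task_map <- data.frame(
  task = c("Sudoku", "Brainstorming", "Typing game"),
  dim1 = c(0.9, 0.1, 0.2),
  dim2 = c(0.2, 0.8, 0.3),
  dim3 = c(0.5, 0.6, 0.1)
)

# Euclidean distance between every pair of tasks in the feature space
d <- dist(task_map[, -1], method = "euclidean")
dist_matrix <- as.matrix(d)
rownames(dist_matrix) <- colnames(dist_matrix) <- task_map$task

dist_matrix["Sudoku", "Brainstorming"]
```

Distances like these are what underpin applications such as diverse task sampling and controlling for task similarity in analyses.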
## 2. 24 Task Dimensions
The 24 dimensions are detailed here: [`24_dimensions_clean.csv`](https://raw.githubusercontent.com/Watts-Lab/task-mapping/master/task%20space%20resources/24_dimensions_clean.csv).
## 3. 102 Tasks
The 102 tasks and their descriptions are detailed here: [`102_tasks_with_sources_clean.csv`](https://raw.githubusercontent.com/Watts-Lab/task-mapping/master/task%20space%20resources/102_tasks_with_sources_clean.csv).
# Annotating New Tasks
We also provide documentation on how to annotate new tasks, including the annotation rubric and the code used to generate the Task Space from raw annotations.
- The questions displayed to raters can be found here: [`questions_displayed_to_raters.csv`](https://raw.githubusercontent.com/Watts-Lab/task-mapping/master/task%20space%20resources/questions_displayed_to_raters.csv).
- Resources for rater training and managing the annotation pipeline can be found here: [`rating pipelines/`](https://github.com/Watts-Lab/task-mapping/tree/master/task%20space%20resources/rating%20pipelines).
- The code used to generate the Task Space from raw annotations can be found here: [`generate_task_map_from_raw.Rmd`](https://github.com/Watts-Lab/task-mapping/blob/master/analysis/analysis_task_space/generate_task_map_from_raw.Rmd).
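The core aggregation step can be sketched as follows. This is a toy illustration of averaging crowd ratings per task and dimension, not the repository's actual pipeline (see `generate_task_map_from_raw.Rmd` for that); the column names and scores here are assumptions for the example:

```r
library(dplyr)
library(tidyr)

# Toy raw annotations: one row per (rater, task, dimension) judgment
raw <- data.frame(
  rater     = c("r1", "r2", "r1", "r2"),
  task      = c("Sudoku", "Sudoku", "Sudoku", "Sudoku"),
  dimension = c("Conceptual", "Divisible", "Divisible", "Conceptual"),
  score     = c(0.8, 0.4, 0.2, 0.6)
)

# Average ratings across raters, then pivot to one row per task
# (the shape of the Task Map matrix)
task_row <- raw |>
  group_by(task, dimension) |>
  summarise(score = mean(score), .groups = "drop") |>
  pivot_wider(names_from = dimension, values_from = score)
```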
# Exploring Our Data
Please use the interactive tools below to explore the data associated with our paper!
## Task Cluster Explorer
The following interactive visualizer allows you to explore the Task Map in two dimensions using PCA, and to conduct k-means clustering on the underlying 24-dimensional space. You can choose different values of *k*, and mouse over each dot to see the name of the task. You can use this tool to get an intuitive sense of the relationships between tasks.
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
library(tidyverse)  # loads ggplot2 and dplyr
library(e1071)
library(plotly)
```
```{css, echo=FALSE}
iframe {
width: 100% !important;
border: none;
}
```
```{r echo=F}
knitr::include_app("https://xehu.shinyapps.io/interactive-task-map/")
```
## Group Advantage Explorer (20 Experimental Tasks)
In our paper, we demonstrate the use of the Task Space by conducting a large-scale integrative experiment measuring the phenomenon of *group advantage* --- in which an interacting group outperforms individuals working alone. In our experiment, we define two types of group advantage:
- **Strong Group Advantage** is the ratio of an interacting group's performance to that of the *best* individual in an equivalently sized "nominal" team (a statistical aggregation of participants who worked alone, used to account for the resource advantage of having more people).
- **Weak Group Advantage** is the ratio of an interacting group's performance to that of a *randomly selected* individual in an equivalently sized "nominal" team.
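The two ratios can be sketched in a few lines. This is a toy illustration with made-up scores (variable names are ours, not the paper's), assuming higher scores mean better performance; for the weak ratio, we use the nominal team's mean score as a simple stand-in for a randomly selected individual:

```r
# Toy scores for one task/condition: one interacting group of three,
# and a "nominal" team of three participants who worked alone
group_score    <- 12
nominal_scores <- c(10, 7, 9)

# Strong advantage: group vs. the best individual in the nominal team
strong_advantage <- group_score / max(nominal_scores)

# Weak advantage: group vs. a randomly selected individual, here
# approximated by the nominal team's mean score
weak_advantage <- group_score / mean(nominal_scores)
```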
Our experiment involved 20 tasks sampled from the Task Space, each implemented at three levels of complexity (low, medium, and high) and completed by groups of two sizes (3 and 6). In the following interactive panel, you can explore how group advantage varies across these experimental conditions: toggle between Strong/Weak Advantage, filter by complexity and group size, and hover to see task names.
A key takeaway is that **group advantage is incredibly heterogeneous**; there is no single answer to whether groups outperform individuals. Importantly, however, these differences are explainable: the Task Space features explain 43% of the variance in group advantage, demonstrating that our framework can systematically account for variation in group outcomes like this one.
If you are interested in learning more, our data, code, and materials are available in our [GitHub repository](https://github.com/Watts-Lab/task-mapping).
```{r echo=F}
knitr::include_app("https://xehu-gh.shinyapps.io/interactive-task-map-group-advantage/")
```
# Accessing our Full Reproduction Package
Our full reproduction package is hosted on [GitHub](https://github.com/Watts-Lab/task-mapping).
# Team
The paper's authors are listed below. For feedback, questions, or suggestions for new tasks and dimensions, please reach out to the Corresponding Authors.
- [Xinlan Emily Hu](https://xinlanemilyhu.com) (Corresponding Author)
- [Mark Whiting](https://whiting.me/)
- [Linnea Gandhi](https://www.linneagandhi.com/)
- [Duncan J. Watts](https://duncanjwatts.com/)
- [Abdullah Almaatouq](http://amaatouq.io/) (Corresponding Author)
We also acknowledge that this work was created with the support of many other people, including research assistants at the University of Pennsylvania and the labor of Amazon Mechanical Turk workers.
This project is part of the [research on group dynamics and integrative experiments](https://css.seas.upenn.edu/project/integrative-experiments/) at the [Computational Social Science Lab at Penn](https://css.seas.upenn.edu). You can learn more about our lab [here](https://css.seas.upenn.edu/people/).