Online communities, from question-and-answer sites to general-purpose discussion forums, are increasingly working to solve hard problems together, often through open sharing and discussion of ideas and information. The premise is that large groups increase the diversity of available information and thereby reach better outcomes. This premise may not hold in practice: research shows that in small groups, people often fail to share relevant knowledge and tend to overlook minority points of view. This proposal's goal is to examine how these research findings about small groups transfer to larger, online "crowds", which typically have more members and different contribution patterns than small groups working closely together; further, participation in a crowd platform is typically voluntary, so a crowd's membership can change quickly during a discussion. To understand how these differences affect information sharing behavior, the research team will first adapt the well-known hidden profile task, used to study information sharing in small groups, to fit crowd contexts. The team will then run a series of studies that vary the size, participation requirements, task duration, and composition of the crowd, measuring how both the sharing of information and the quality of the crowd's responses change as the design of the task changes. This will lead to both a deeper understanding of information sharing behavior in groups and design guidelines for building crowd platforms better suited to problem solving. The work will also provide training opportunities for both PhD and undergraduate researchers, materials for teaching courses on social computing, and software that will be shared with other researchers who want to study online crowds.
The work will be based on classic hidden profile tasks, in which information is unequally distributed among group members. Because group sizes in such studies are generally small owing to the practicalities of recruitment, typical hidden profile designs make each piece of information known either to a majority of the group or to one unique member. For crowd contexts, the team will introduce a third category of rare clues, known to more than one member but fewer than a majority, since this is likely to be a common phenomenon in large groups; the relative proportions of shared, rare, and unique clues can be adjusted to set the difficulty of a particular case. The team plans to conduct four sets of experiments on a real online crowd platform, allowing them to recruit large groups subject to the actual incentives of crowd participation. Each set of experiments will manipulate one major difference between crowds and smaller groups that theory suggests might affect the crowd's information sharing behavior: group size, the ability to choose a preferred task, the length of the discussion (minutes versus days), and the amount of turnover in the crowd. Doing this will lead to a richer theoretical picture of how large groups perform information sharing, discussion, and analysis tasks, and will generate practical guidelines for the design of both specific crowd tasks and crowd platforms more generally.
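To make the three-category clue design concrete, the allocation of clues to participants can be sketched as follows. This is an illustrative sketch only, not the team's actual materials: the function name, the category proportions, and the exact size ranges for "rare" clues are assumptions chosen for illustration.

```python
import random

def assign_clues(n_members, clues, p_shared=0.4, p_rare=0.3, seed=0):
    """Distribute clues among crowd members by category.

    shared: known to a majority of members;
    rare:   known to more than one member but fewer than a majority;
    unique: known to exactly one member.
    Adjusting p_shared and p_rare (unique takes the remainder)
    changes how hard the hidden profile is to uncover.
    """
    rng = random.Random(seed)
    members = list(range(n_members))
    majority = n_members // 2 + 1
    assignment = {}  # clue -> set of member ids who receive it
    for clue in clues:
        r = rng.random()
        if r < p_shared:
            # Shared clue: at least a majority receives it.
            k = rng.randint(majority, n_members)
        elif r < p_shared + p_rare:
            # Rare clue: a handful of members, short of a majority.
            k = rng.randint(2, max(2, majority - 1))
        else:
            # Unique clue: exactly one member holds it.
            k = 1
        assignment[clue] = set(rng.sample(members, k))
    return assignment
```

Under this sketch, a task designer could dial `p_shared` up to make the crowd's pooled discussion redundant (easy cases) or shift weight toward rare and unique clues so that the correct answer emerges only if members surface information few others hold (hard cases).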