Generative AI and Collective Intelligence

Several CCI research projects examine how generative AI systems can enhance the collective intelligence of human-computer groups.

Supermind Ideator

Previous efforts to support creative problem-solving have included techniques (such as brainstorming and design thinking) to stimulate creative ideas, and software tools to record and share these ideas. Now, generative AI technologies can suggest new ideas that might never have occurred to the users, and users can then select from these ideas or use them to stimulate even more ideas.

Supermind Ideator was created to supercharge creative ideation by harnessing these new generative capabilities. The system uses a large language model and adds prompting, fine-tuning, and a user interface specifically designed to help people use creative problem-solving techniques. Some of these techniques can be applied to any problem; others are specifically intended to help generate innovative ideas about how to design groups of people and/or computers (“superminds”).
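
To make this concrete, here is a minimal sketch of how technique-specific prompting of a large language model might look. The "moves", prompt wording, model choice, and OpenAI client usage below are illustrative assumptions for exposition, not Supermind Ideator's actual implementation.

# A minimal sketch of technique-specific prompting, assuming the OpenAI
# Python client. All prompts, moves, and model names below are
# hypothetical placeholders, not the system's published internals.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical creative problem-solving "moves", loosely inspired by the
# Supermind Design methodology (zoom out, analogize, etc.).
MOVES = {
    "zoom_out": "Restate the problem more generally, then propose ideas for that broader problem.",
    "analogize": "List other domains with a similar problem and adapt their solutions to this one.",
}

def ideate(problem: str, move: str, n_ideas: int = 5) -> str:
    """Ask the model to apply one creative problem-solving move to a problem."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": "You are a creative problem-solving assistant."},
            {"role": "user", "content": f"{MOVES[move]}\n\nProblem: {problem}\n\nGive {n_ideas} distinct ideas."},
        ],
    )
    return response.choices[0].message.content

print(ideate("Reduce food waste in university dining halls", "zoom_out"))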

We have conducted a preliminary evaluation to understand people's early experiences with the system, and we are continuing more thorough empirical evaluations of how Ideator augments individuals and teams engaged in creative problem-solving and ideation. We have also created a waitlist through which we have been gradually opening the application to beta testers, who have now created more than 10,000 ideas using the system.

Researchers
Steven Rick, Gianni Giacomelli, Haoran Wen, Robert Laubacher, Nancy Taubenslag, Jennifer Heyman, Max Sina Knicker, Younes Jeddi, Hendrik Maier, Stephen Dwyer, Pranav Ragupathy, Thomas Malone

Publications
Supermind Ideator: Exploring generative AI to support creative problem-solving, arXiv, November 2023, https://arxiv.org/abs/2311.01937

Links
Supermind Design Methodology
Supermind Ideator system

Turing Test for Human-Computer Systems

The Turing test for comparing computer performance to that of humans is well known, but, surprisingly, there is no widely used test for comparing how much better human-computer systems perform relative to humans alone, computers alone, or other baselines. Here, we show how to perform such a test using the ratio of means as a measure of effect size. We demonstrate the use of this test in three ways.
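
To illustrate, here is a minimal sketch of computing a ratio-of-means effect size, with a simple percentile bootstrap for a confidence interval. The sample numbers and the bootstrap procedure are illustrative assumptions, not the paper's exact data or analysis.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical performance scores (higher is better). The effect size is
# the ratio of the mean performance of the human-computer system to the
# mean performance of a baseline (humans alone, computers alone, etc.).
human_computer = np.array([8.1, 7.4, 9.0, 6.8, 8.5, 7.9])
baseline       = np.array([6.9, 7.1, 7.6, 6.2, 7.3, 6.8])

ratio = human_computer.mean() / baseline.mean()
print(f"ratio of means: {ratio:.2f}")  # > 1 means the combined system did better

# Simple percentile bootstrap for a 95% confidence interval (an
# illustrative choice; other interval constructions are possible).
boot = [
    rng.choice(human_computer, human_computer.size).mean()
    / rng.choice(baseline, baseline.size).mean()
    for _ in range(10_000)
]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% CI: [{lo:.2f}, {hi:.2f}]")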

First, in an analysis of 79 recently published experimental results, we find that, surprisingly, over half of the studies report a decrease in performance, that the mean and median ratios of performance improvement are both approximately 1 (corresponding to no improvement at all), and that the maximum ratio is 1.36 (a 36% improvement).

Second, we experimentally investigate whether a higher performance improvement ratio is obtained when 100 human programmers generate software using GPT-3, a massive, state-of-the-art AI system. In this case, we find a speed improvement ratio of 1.27 (a 27% improvement).

Third, we find that 50 human non-programmers using GPT-3 can perform the task about as well as—and less expensively than—the human programmers. In this case, neither the non-programmers nor the computer would have been able to perform the task alone, so this is an example of a very strong form of human-computer synergy.

In this project, we continue to synthesize results from human-AI experiments, search for patterns of synergy in human-AI groups, and conduct new studies that use generative AI to augment human abilities.

Researchers
Michelle Vaccaro, Andres Campero, Jaeyoon Song, Haoran Wen, Abdullah Almaatouq, Thomas W. Malone

Publications
Andres Campero, Michelle Vaccaro, Jaeyoon Song, Haoran Wen, Abdullah Almaatouq, Thomas W. Malone, A test for evaluating performance in human-computer systems, arXiv, June 2022.

Videos
What is an Analogue of the Turing Test for Human-Computer Systems?

DesignAID

Designers often struggle to sufficiently explore large design spaces, which can lead to design fixation and suboptimal outcomes. To help address this problem, we designed DesignAID, a generative AI tool that supports broader exploration of the design space.

By using large language models to produce a range of diverse ideas expressed in words, and then using image-generation software to turn these words into images, we create human-computer teams that can rapidly generate a diverse set of visual concepts without time-consuming drawing.
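
As an illustration, here is a minimal sketch of such a words-then-images pipeline, assuming the OpenAI Python client for both steps. The model names, prompts, and output parsing are placeholders; DesignAID's actual implementation may differ.

# A minimal sketch of the words-then-images pipeline. Model names and
# prompts are placeholder assumptions, not DesignAID's actual setup.
from openai import OpenAI

client = OpenAI()

def diverse_ideas(design_brief: str, n: int = 5) -> list[str]:
    """Step 1: have a language model propose several distinct ideas in words."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[{
            "role": "user",
            "content": f"Propose {n} distinct, varied design concepts, one per line, for: {design_brief}",
        }],
    )
    lines = response.choices[0].message.content.splitlines()
    return [line.strip("-• ").strip() for line in lines if line.strip()]

def render(idea: str) -> str:
    """Step 2: turn one idea's text into an image and return its URL."""
    result = client.images.generate(model="dall-e-3", prompt=idea, n=1, size="1024x1024")
    return result.data[0].url

for idea in diverse_ideas("a chair for small apartments"):
    print(idea, "->", render(idea))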

In a study with 87 crowdsourced designers, we found that designers rated the automatic generation of images from words as significantly more inspirational, enjoyable, and useful than a conventional baseline condition of image search using Pinterest.

Surprisingly, however, we found that automatically generating highly diverse sets of ideas added less value. For image generation, the high-diversity condition was somewhat better on inspiration but no better on the other dimensions, and for image search it was significantly worse on all dimensions.
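
For readers curious how a "highly diverse" set of ideas can be constructed, here is a minimal sketch of one common way to operationalize semantic diversity: greedy farthest-point selection over sentence embeddings. The embedding model and selection procedure are illustrative assumptions, not necessarily the method used in the paper.

# A minimal sketch of embedding-based diversity selection (an assumption,
# not necessarily DesignAID's method), using the OpenAI Python client.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    """Embed each idea so semantic distance between ideas can be measured."""
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])

def pick_diverse(ideas: list[str], k: int) -> list[str]:
    """Greedy farthest-point selection: repeatedly add the idea farthest
    (in cosine distance) from everything already chosen."""
    vecs = embed(ideas)
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)  # unit-normalize
    chosen = [0]  # start from the first idea (arbitrary)
    while len(chosen) < k:
        dist_to_chosen = 1 - vecs @ vecs[chosen].T  # cosine distances
        nearest = dist_to_chosen.min(axis=1)        # distance to nearest chosen idea
        nearest[chosen] = -1                        # never re-pick an idea
        chosen.append(int(nearest.argmax()))
    return [ideas[i] for i in chosen]

ideas = ["a folding wall-mounted chair", "a stackable stool",
         "an inflatable lounge chair", "a hammock chair", "a modular floor cushion"]
print(pick_diverse(ideas, 3))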

Researchers
Alice Cai, Steven Rick, Jennifer Heyman, Yanxia Zhang, Alexandre Filipowicz, Matthew Hong, Matthew Klenk, Thomas Malone

Publications
Alice Cai, Steven R Rick, Jennifer L Heyman, Yanxia Zhang, Alexandre Filipowicz, Matthew Hong, Matt Klenk, Thomas W. Malone. DesignAID: Using Generative AI and Semantic Diversity for Design Inspiration, ACM Collective Intelligence Conference 2023, https://dl.acm.org/doi/10.1145/3582269.3615596

Links
Presentation at 2023 ACM Collective Intelligence Conference

Press
MIT Sloan School of Management, A generative AI tool to inspire creative workers, February 14, 2024