Why This Study?
The fashion industry remains one of the world’s most pressing sustainability challenges: it accounts for 10% of global CO₂ emissions and is the second-largest consumer of clean water [1]. Meanwhile, users exchange 18 billion messages with ChatGPT every week, and about 380 million of those messages involve shopping [2]. Conversational agents based on large language models are thus rapidly becoming an important channel for online shopping.
These patterns raise an important question: What is the impact of an AI-based sustainable conversational shopping assistant on consumers’ purchasing decisions regarding environmentally sustainable clothing products compared to a control conversational shopping assistant?
ManyPrompts addresses this question through a large-scale manydesign study (cf. [3]) in which research teams (RTs) from around the world design theoretically grounded system prompts intended to promote more sustainable purchasing decisions for clothing. The system prompts are integrated into a conversational online shopping platform, making it possible to investigate and compare their effects on shopping decisions.
The project coordinators (PCs) will centrally manage the study and analyze the effects of the different prompt designs. Based on these findings, the PCs will write a paper. All RTs whose prompts are implemented and analyzed in the study will be co-authors of the paper.
This manydesign study is fully funded by the Nuremberg Institute for Market Decisions (NIM e.V.), a non-profit research institute at the interface of academia and practice. We invite you to subscribe to the mailing list to stay up to date.
What is Conversational Shopping?
Conversational shopping refers to online shopping interactions that are guided or supported by conversational agents—such as chatbots or AI assistants—that communicate with users through natural, dialogue-based exchanges rather than traditional graphical interfaces. This allows shoppers to ask questions, compare options, and receive personalized assistance in real time [4].
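To make the interaction model concrete, the sketch below shows how such a dialogue-based exchange is commonly represented in chat-based systems as a list of role-tagged messages. This is purely illustrative: the roles follow the widespread system/user/assistant convention, and all texts are hypothetical rather than taken from the study platform.

# Illustrative only: a generic chat-style representation of a conversational
# shopping exchange. Roles follow the common system/user/assistant convention;
# all texts are hypothetical examples, not study materials.
messages = [
    {"role": "system",
     "content": "You are a shopping assistant for an online clothing store."},
    {"role": "user",
     "content": "I need a warm jacket under 100 euros. Any recommendations?"},
    {"role": "assistant",
     "content": "Here are three jackets within your budget. Would you like to compare their materials?"},
]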
Procedure
Phase 0: Project Preparation and Pre-Registration
The PCs launch the study website and open the mailing list registration. During this stage, the final study design is developed, the pre-analysis plan (PAP) is finalized and preregistered, and the conversational shopping platform (playground) is implemented and tested to ensure technical and procedural readiness.
Phase 1: Research Team Registration and Eligibility Screening
RTs register their interest in participating in the study by completing a short online registration form. The PCs screen all registrations based on predefined eligibility criteria and notify eligible RTs, who are then invited to proceed to the prompt design and submission stage.
Phase 2: System Prompt Design and Submission
Eligible RTs receive access to the centrally provided conversational shopping platform, which serves as a sandbox environment for testing system-level prompts. RTs design and submit a pair of theory-based system prompts (baseline and treatment), along with a brief description of the underlying theoretical rationale. All submissions are treated as final.
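As a rough illustration of what such a pair might look like, consider the minimal sketch below. The field names, team ID, theory label, and prompt texts are all hypothetical placeholders; the actual submission format and materials are defined by the PCs.

# Hypothetical sketch of a baseline/treatment system prompt pair.
# All field names and texts are placeholders, not the real submission format.
submission = {
    "team_id": "RT-example",  # hypothetical identifier
    "theory": "Descriptive social norms (illustrative rationale)",
    "baseline_prompt": (
        "You are a shopping assistant for an online clothing store. "
        "Answer product questions and help users compare options."
    ),
    "treatment_prompt": (
        "You are a shopping assistant for an online clothing store. "
        "Answer product questions and help users compare options. "
        "When relevant, note that many shoppers in this store consider "
        "the environmental footprint of the items they buy."
    ),
}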
Phase 3: Design Pre-Screening and Selection
The PCs conduct a procedural pre-screening of all submitted designs to ensure compliance with the predefined design constraints. If the number of eligible designs exceeds the study capacity, a preregistered and transparent random selection procedure is used to determine which designs are included in the study.
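One simple way to implement such a transparent lottery is to draw from the eligible designs with a publicly documented seed, as in the minimal sketch below. The design IDs, capacity, and seed are hypothetical; the binding procedure is whatever is specified in the preregistration.

# Minimal sketch of a seeded, reproducible lottery (illustrative only;
# the binding procedure is the one specified in the preregistration).
import random

eligible = [f"design_{i:02d}" for i in range(1, 31)]  # placeholder design IDs
CAPACITY = 20                                         # placeholder study capacity

rng = random.Random("preregistered-public-seed")      # hypothetical public seed
selected = sorted(rng.sample(eligible, CAPACITY))     # reproducible draw
print(selected)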
Phase 4: Prompt Implementation and Design Evaluation
All selected prompt designs are centrally implemented by the PCs within the conversational shopping platform. In parallel, RTs evaluate a subset of other submitted designs with respect to expected effect sizes, theoretical fit, and intervention techniques. These evaluations are used for descriptive and analytical purposes only and do not affect the experimental implementation.
Phase 5: Data Collection
Data are collected through a large-scale online experiment conducted on the centralized platform. Participants are recruited and randomly assigned to one design and, within each design, to either the treatment or the baseline condition. All aspects of experimental execution and data collection are centrally coordinated by the PCs.
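The two-stage assignment (first to a design, then to a condition within that design) could look like the minimal sketch below. The function name, participant IDs, and the use of a per-participant seed are illustrative assumptions, not the preregistered assignment mechanism.

# Illustrative sketch of two-stage random assignment: design first,
# then baseline vs. treatment within that design. Not the actual
# preregistered assignment mechanism.
import random

designs = ["design_01", "design_02", "design_03"]  # placeholder design IDs

def assign(participant_id: str) -> tuple[str, str]:
    rng = random.Random(participant_id)  # deterministic per participant (assumption)
    design = rng.choice(designs)                       # stage 1: prompt design
    condition = rng.choice(["baseline", "treatment"])  # stage 2: condition
    return design, condition

print(assign("participant-0001"))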
Phase 6: Joint Publication
The PCs will write up a paper summarizing the results of the study. All RTs whose prompt designs were implemented and who contributed throughout the project will be co-authors of the paper.
Project Timeline
Launch of website (mailing list registration); finalize study design, pre-registration, and playground implementation/testing
Official start of the ManyPrompts study: send out invitations, open RT registration, screen RTs for eligibility, and notify them of acceptance
RTs engage with the conversational shopping playground and submit their prompt designs with theoretical grounding
Pre-screening and lottery for prompt design selection (if required)
RTs estimate prompt design effects; central implementation of all prompt designs
Data collection
Analysis and write-up of joint paper
[1] Niinimäki, K., Peters, G., Dahlbo, H., Perry, P., Rissanen, T., & Gwilt, A. (2020). The environmental price of fast fashion. Nature Reviews Earth & Environment, 1(4), 189–200. https://doi.org/10.1038/s43017-020-0039-9
[2] Chatterji, A., Cunningham, T., Deming, D. J., Hitzig, Z., Ong, C., Shan, C. Y., & Wadman, K. (2025). How People Use ChatGPT (Working Paper No. 34255). National Bureau of Economic Research. https://doi.org/10.3386/w34255
[3] Duckworth, A. L., & Milkman, K. L. (2022). A guide to megastudies. PNAS Nexus, 1(5), pgac214. https://doi.org/10.1093/pnasnexus/pgac214
[4] Gnewuch, U., Morana, S., & Maedche, A. (2017). Towards designing cooperative and social conversational agents for customer service. ICIS 2017 Proceedings. https://aisel.aisnet.org/icis2017/HCI/Presentations/1