
The Evolutionary Logic of Generative AI in Consumer Research: From Democratization to the Average Trap and Model Collapse

Generative AI democratization in consumer research: broader participation brings new voices into the data ecosystem.

Today, a consumer researcher beginning a new project may start not with a database, but with a generative AI system. Within seconds, a structured framework emerges. Literature streams are synthesized, research questions are articulated, and potential hypotheses are outlined. The efficiency is undeniable.


Yet Huang and Rust argue that efficiency is not the central issue. In "The GenAI Future of Consumer Research," they propose a "democratization–average trap–model collapse" trajectory that describes how increasing GenAI intensity gradually reshapes consumers, data distributions, and research paradigms. Rather than a simple story of technological progress, this trajectory traces a movement from expansion to convergence and, potentially, to self-referential decline.


The first stage is democratization. Through interactive prompt–response mechanisms, GenAI lowers barriers to participation in both consumption and research activities. Scholars can generate ideas more easily; consumers can engage in creative or digital markets previously inaccessible to them. Eapen et al. (2023), Epstein et al. (2023), and Schmitt et al. (2024) highlight how GenAI expands innovation and creative access. Maciel and Weinberger (2024) show how technological infrastructures broaden marketplace participation. As more marginalized consumers and rare consumption events enter the data ecosystem, the tails of the distribution become thicker, increasing representativeness.


However, democratization does not eliminate bias. It makes data more reflective of real-world human experience, and real-world experience includes inequality. Kotek et al. (2023) find that large language models reinforce occupational gender stereotypes, and Ukanwa and Rust (2020) demonstrate that algorithms can produce exclusionary outcomes even without explicit bias. Thus, democratization produces more truthful data, but not necessarily more socially desirable data. This tension between representation and bias defines the first structural challenge.

The average trap of generative AI: outputs converge into generic, one-size-fits-all consumer insights.

As GenAI usage intensifies, the trajectory moves into the average trap. Because generative models rely on autoregressive next-token prediction, they generate statistically likely outputs. Huang and Rust (2024, 2025) argue that this mechanism naturally pulls responses toward the mean of the data distribution. Huang and Rust (2025) formally define this as the “average trap,” describing how outputs become generic and undifferentiated.


Even if marginalized consumers are included in the dataset, their influence remains limited unless they significantly reshape the overall distribution. Statistical convergence compresses individual nuance. Models excel at providing general recommendations but struggle to generate deeply personalized solutions (Huang and Rust, 2024). Within consumer research, this convergence risks narrowing research questions and theoretical explanations, gradually eroding originality.
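The pull toward the mode can be made concrete with a deliberately minimal sketch (the corpus, words, and counts below are all hypothetical; Python standard library only): a "model" that always emits its statistically most likely continuation returns the majority answer on every query, so minority answers present in the training data never surface in the output.

```python
from collections import Counter

# Hypothetical training data: consumer answers to one survey question.
# The majority answer is generic; the distinctive answers sit in the tail.
corpus = ["reliable"] * 70 + ["affordable"] * 20 + ["handcrafted"] * 8 + ["upcycled"] * 2

# "Train" by counting relative frequencies, as a one-token language model would.
probs = {word: count / len(corpus) for word, count in Counter(corpus).items()}

# Mode-seeking (most-likely) decoding: every query yields the same answer.
outputs = [max(probs, key=probs.get) for _ in range(100)]

print(set(outputs))       # → {'reliable'}: only the majority answer survives
print(probs["upcycled"])  # → 0.02: the tail exists in the data,
                          #   but never appears in the outputs
```

Sampling instead of taking the argmax softens this, but as long as decoding favors high-probability continuations, low-frequency consumer voices remain underrepresented in the output relative to the data.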

Model collapse in generative AI: recursive machine-generated data creates a self-referential loop that drifts away from human sensibility

The third stage, model collapse, represents a deeper structural risk. As more consumption decisions are delegated to GenAI agents, machine-generated data increasingly dominate training datasets. Shumailov et al. (2024) define model collapse as the progressive loss of distributional tails when models are recursively trained on their own synthetic outputs. Wenger (2024) illustrates how repeated self-training can eliminate diversity and distort meaning.
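The tail-loss mechanism Shumailov et al. describe can be illustrated with a toy simulation (all numbers hypothetical; this is a sketch of the recursion, not their experimental setup): recursively fit a normal distribution to a finite sample of a model's own output, resample, and repeat. Across generations the fitted spread drifts toward zero, so rare events in the tails vanish long before the mean moves noticeably.

```python
import random
import statistics

def retrain(data, n):
    """Fit a normal distribution to `data`, then emit n synthetic samples."""
    mu, sigma = statistics.fmean(data), statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(42)
data = [random.gauss(0, 1) for _ in range(20)]  # generation 0: "human" data

spreads = [statistics.stdev(data)]
for _ in range(500):                 # each generation trains only on the
    data = retrain(data, 20)         # previous generation's synthetic output
    spreads.append(statistics.stdev(data))

# The spread collapses toward zero, so tail events (e.g. |x| > 2)
# become effectively impossible even though they existed in generation 0.
print(spreads[0], spreads[-1])
```

Mixing fresh human data into each generation, as Gerstgrasser et al. (2024) study, breaks this recursion; that is the formal counterpart of the hybrid-data remedy discussed below.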


In consumer research, this suggests a potential shift from studying human behavior to modeling machine behavior. Huang and Rust (2022) introduce the concept of “AI as Customer,” highlighting AI’s emerging role as a market actor. When predictions center on machine actions rather than human preferences, outputs may lose “human sensibilities” (Huang and Rust, 2024). At that point, interpretation risks drifting away from lived consumer experience.


Huang and Rust propose clear research responses. Sustaining data tails requires continued attention to marginalized consumers and transformative research traditions (Mick, 2006; Price et al., 2024). Fine-tuning and response engineering embed domain knowledge and human insight into outputs, mitigating convergence (Huang and Rust, 2024, 2025). Training on hybrid human–machine data and preserving human agency in long interaction chains can delay or prevent collapse (Gerstgrasser et al., 2024; Arora et al., 2025; Valenzuela et al., 2024).


The trajectory is not technological destiny but conditional evolution. Democratization need not lead to convergence, and convergence need not culminate in collapse. Yet without deliberate intervention, statistical logic will compress diversity, and recursive machine learning may distance research from human meaning.


The future of consumer research depends not on whether GenAI is adopted, but on whether human complexity and theoretical originality can be sustained within generative systems.


Quoting Huang and Rust, “Big data can tell us what people do, but it cannot tell us why. Only human-centric research can capture deeper consumer insights.” Research is a long and rigorous process of inquiry. Although this may run counter to the efficiency pursued by machine automation, it remains the only path toward truth.




References:

Arora, N., Chakraborty, I., & Nishimura, Y. (2025). AI–Human Hybrids for Marketing Research: Leveraging Large Language Models (LLMs) as Collaborators. Journal of Marketing, 89(2), 43–70. https://doi.org/10.1177/00222429241276529 

Eapen, T. T., Finkenstadt, D. J., Folk, J., & Venkataswamy, L. (2023). How Generative AI Can Augment Human Creativity. Harvard Business Review, 101(4), 56–64.

Epstein, Z., Hertzmann, A., Akten, M., Farid, H., Fjeld, J., Frank, M. R., Groh, M., Herman, L., Leach, N., Mahari, R., Pentland, A. "Sandy," Russakovsky, O., Schroeder, H., & Smith, A. (2023). Art and the science of generative AI. Science, 380(6650), 1110–1111. https://doi.org/10.1126/science.adh4451

Huang, M.-H., & Rust, R. T. (2022). AI as customer. Journal of Service Management, 33(2), 210–220. https://doi.org/10.1108/JOSM-11-2021-0425

Huang, M.-H., & Rust, R. T. (2024). The Caring Machine: Feeling AI for Customer Care. Journal of Marketing, 88(5), 1–23. https://doi.org/10.1177/00222429231224748

Huang, M.-H., & Rust, R. T. (2025). Creative AI for Marketing Strategy: A Triple Engineering Approach. Working paper.

Huang, M.-H., & Rust, R. T. (2025). The GenAI Future of Consumer Research. Journal of Consumer Research, 52(1), 4–17. https://doi.org/10.1093/jcr/ucaf013

Gerstgrasser, M., Schaeffer, R., Dey, A., Rafailov, R., Sleight, H., Hughes, J., Korbak, T., Agrawal, R., Pai, D., Gromov, A., Roberts, D. A., Yang, D., Donoho, D. L., & Koyejo, S. (2024). Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data. arXiv.

Kotek, H., Dockum, R., & Sun, D. Q. (2023). Gender bias and stereotypes in Large Language Models. arXiv. https://doi.org/10.48550/arxiv.2308.14921

Maciel, A. F., & Weinberger, M. F. (2024). Crowdfunding as a Market-Fostering Gift System. Journal of Consumer Research, 50(6), 1221–1242. https://doi.org/10.1093/jcr/ucad052

Mick, D. G. (2006). Presidential Address: Meaning and Mattering Through Transformative Consumer Research. Advances in Consumer Research, 33, 1.

Price, L., Hill, R., & Bone, S. (2024). Disadvantaged and Vulnerable Consumers. Journal of Service Research, call for papers.

Schmitt, B., Cotte, J., Giesler, M., Stephen, A. T., & Wood, S. (2024). Will We Be the Last Human Editors of JCR? Journal of Consumer Research, 51(3), 451–454. https://doi.org/10.1093/jcr/ucae053

Shumailov, I., Shumaylov, Z., Zhao, Y., Papernot, N., Anderson, R., & Gal, Y. (2024). AI models collapse when trained on recursively generated data. Nature, 631(8022), 755–759. https://doi.org/10.1038/s41586-024-07566-y

Ukanwa, K., & Rust, R. T. (2020). Algorithmic Discrimination in Service. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3654943

Valenzuela, A., Puntoni, S., Hoffman, D., Castelo, N., De Freitas, J., Dietvorst, B., Hildebrand, C., Huh, Y. E., Meyer, R., Sweeney, M. E., Talaifar, S., Tomaino, G., & Wertenbroch, K. (2024). How Artificial Intelligence Constrains the Human Experience. Journal of the Association for Consumer Research, 9(3), 241–256. https://doi.org/10.1086/730709 

Wenger, E. (2024). AI produces gibberish when trained on too much AI-generated data. Nature, 631(8022), 742–743. https://doi.org/10.1038/d41586-024-02355-z

