From Insight to Inspiration: How GenAI Is Redefining Consumer Research
- Yuan Ren
- May 27
- 4 min read

Introduction
Generative AI is profoundly reshaping consumer research: enhancing creative ideation, supporting qualitative analysis, and redefining the field’s trajectory. Three research articles and one review published in the Journal of Consumer Research (Oxford Academic) offer complementary frameworks and practical guidance, revealing both the vast potential and the pitfalls of human–AI collaboration.
Creative Ideation: The Double-Edged Sword of LLMs
Creative ideation is a cornerstone of consumer research, yet how it can be carried out in collaboration with GenAI remains underexplored in academia. De Freitas, Nave, and Puntoni (2025) define ideation as a process that treats each idea as a unit with two properties: originality and appropriateness. Originality gauges an idea’s novelty relative to existing concepts, while appropriateness assesses its practical utility in solving the problem at hand (Melumad and Pham, 2020; Amabile, 1982; Harvey and Berry, 2023). In examining how large language models (LLMs) support creativity, the researchers identify two core LLM properties: productivity, the ability to generate vast quantities of ideas rapidly and iteratively, which raises originality; and semantic breadth, the capacity to draw on diverse domains and connect seemingly unrelated concepts to transcend conventional thinking.
However, these strengths carry risks: voluminous outputs can become repetitive and homogeneous. To preserve diversity and depth, the authors recommend techniques such as fine-tuning, few-shot prompting, prompt variation, hybrid prompting, and chain-of-thought prompting. They further introduce the metaphor of ideation roles to clarify LLM functions: when an LLM acts as the key ideator, it can design more rigorous, objective experiments and produce more creative, ethical, and readable content; when humans serve as the key ideators, the LLM adopts the role of interviewer, posing a series of probing questions to spark human creativity, or of actor, simulating consumer interviews to elicit ideas. Finally, they propose ten guidelines for using LLMs in ideation: balancing productivity and semantic breadth, safeguarding originality through diverse co-creation, and evaluating outputs on originality, relevance, and impact (De Freitas et al., 2025).
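To make these prompting techniques concrete, here is a minimal Python sketch combining few-shot prompting, prompt variation (rotating personas), and a chain-of-thought style instruction for ideation. It assumes the OpenAI Python SDK; the model name, personas, example ideas, and prompt wording are illustrative assumptions, not the authors’ protocol.

```python
# Illustrative sketch: prompt variation + few-shot + chain-of-thought prompting
# to broaden semantic breadth and limit repetitive, homogeneous ideas.
# Assumes the OpenAI Python SDK; model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FEW_SHOT_EXAMPLES = (
    "Example idea 1: a subscription service that repairs rather than replaces products.\n"
    "Example idea 2: a loyalty program that rewards customers for returning packaging.\n"
)

# Prompt variation: rotate personas/domains to push the model beyond "average" ideas.
PERSONAS = [
    "a frugal retiree",
    "a professional chef",
    "an urban planner",
    "a teenage gamer",
]

def generate_ideas(problem: str, n_per_persona: int = 3) -> list[str]:
    ideas = []
    for persona in PERSONAS:
        prompt = (
            f"{FEW_SHOT_EXAMPLES}\n"
            f"Adopt the perspective of {persona}.\n"
            f"Problem: {problem}\n"
            # Chain-of-thought style instruction: reason before answering.
            f"First reason step by step about unmet needs, then propose "
            f"{n_per_persona} original and appropriate ideas, one per line."
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,      # higher temperature for more varied output
        )
        ideas.append(response.choices[0].message.content)
    return ideas

if __name__ == "__main__":
    for block in generate_ideas("Reduce single-use plastic in grocery delivery"):
        print(block, "\n---")
```

The point is simply that varying the framing across calls and asking the model to reason before answering tends to widen semantic breadth and reduce near-duplicate ideas; any comparable LLM client could be used in the same way.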

Qualitative Consumer Culture Research: Four Collaboration Contexts
Epp and Humphreys (2025) focus on GenAI in qualitative consumer culture research, comparing human–AI collaboration across four contexts:
- In the theoretical context, human researchers apply clear theories, hypothesis-driven inquiry, and embodied experience to deepen insights through friction between data and theory; GenAI, by contrast, offers an abstract, “view from nowhere” overview of massive amounts of information, without autonomous goals or deep conceptualization.
- In the embodied context, humans leverage sensory perception, rich life experience, and cultural nuance; AI, trained on its “biography” of datasets, synthesizes multiple perspectives efficiently but cannot truly perceive context or subtle cultural depth.
- In the empirical context, human immersion in real-world settings, inductive reasoning, and sensitivity to emergent details drive discoveries; AI uses decontextualized information and automated coding to identify patterns quickly but often misses the most novel observations.
- In the historical context, human scholars’ deep understanding of specific eras deciphers cultural evolution and contemporary meaning, whereas AI’s aggregation across eras tends to flatten historical context.
To address these trade-offs, Epp and Humphreys advocate four principles and guidelines for AI collaboration: transparency (describing each stage of the research in which GenAI was used), verifiability (reporting the sources of the GenAI model’s training data and other model details), privacy protection (considering and reporting measures taken to protect privacy), and faithful reproduction (ensuring the authenticity of research findings).

Macro Trends: Three-Stage GenAI Trajectory
At the macro level, Huang and Rust (2025) map a three-stage GenAI trajectory in consumer research: democratization, the average trap, and model collapse. Initially, GenAI democratizes participation yet also inherits the biases of human data; next, next-token prediction pulls models toward average responses, eroding individuality and diversity; ultimately, research risks becoming self-referential as models learn increasingly from their own outputs, detaching from authentic human behavior. Key challenges include the amplification of human biases, the homogenization of outputs, and the layering of machine bias on top of them. To mitigate these risks, the authors recommend enriching data sources to preserve long-tail distributions, applying response and prompt engineering with context-specific fine-tuning, and integrating human–machine hybrid data and “theory of mind” frameworks to re-embed human factors and improve the quality of AI output.
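One practical way to monitor the homogenization and “average trap” that Huang and Rust describe is to measure how similar a batch of AI-generated responses is to itself. The sketch below is a rough illustration rather than part of their framework; it uses scikit-learn’s TF-IDF vectors and cosine similarity, and the 0.5 threshold and sample answers are arbitrary assumptions.

```python
# Illustrative homogenization check: flag a batch of AI-generated responses
# whose average pairwise similarity is suspiciously high ("average trap").
# Uses scikit-learn TF-IDF; the 0.5 threshold is an arbitrary assumption.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def mean_pairwise_similarity(responses: list[str]) -> float:
    """Average cosine similarity between all pairs of responses."""
    tfidf = TfidfVectorizer().fit_transform(responses)
    sims = cosine_similarity(tfidf)
    # Exclude the diagonal (each response's similarity to itself).
    n = len(responses)
    return float((sims.sum() - n) / (n * (n - 1)))

def flag_homogenization(responses: list[str], threshold: float = 0.5) -> bool:
    """Return True if responses look too alike to reflect real consumer diversity."""
    return mean_pairwise_similarity(responses) > threshold

if __name__ == "__main__":
    synthetic_answers = [
        "I buy the cheapest brand because price matters most to me.",
        "Price is the main thing I look at; I pick the cheapest option.",
        "I usually choose the lowest-priced brand available.",
    ]
    print(mean_pairwise_similarity(synthetic_answers))
    print("Homogenized?", flag_homogenization(synthetic_answers))
```

A rising average similarity across successive batches would be one warning sign that synthetic responses are collapsing toward the mean rather than reflecting the long tail of real consumer behavior.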

Looking Ahead: GenAI vs. Human Researchers
Schmitt’s (2025) review reflects on GenAI not merely as a tool but as a potential autonomous “research agent.” He highlights recurring themes, namely enhanced creativity, homogenization risks, and the indispensability of human oversight, and poses critical questions: Will future research be co-creative or AI-led? Can synthetic participants replace real ones? How should consumer research adapt ethically and methodologically to rapid GenAI evolution? Schmitt argues that current understanding remains nascent, necessitating ongoing dialogue and critical examination.
Conclusion
With GenAI tools proliferating in consumer culture research, we stand at an exciting yet challenging juncture. Researchers must balance AI’s efficiency with safeguarding diversity, ensuring that marginalized groups in the data “tail” receive authentic representation rather than stereotyped treatment. By fine-tuning models, optimizing prompt engineering, and adopting human–AI dialogue principles, we can break free from the next-token “average trap” and avert model collapse. Intriguingly, LLMs may soon not only generate ideas but also help us select the most promising research concepts and predict a paper’s impact, transcending traditional citation metrics. Ultimately, consumer research scholars must treat AI as more than an experimental tool, critically examining its underlying norms and discourse and jointly exploring how to preserve originality while leveraging AI’s novel perspectives and methods to forge more inclusive and impactful research paths.
References:
Amabile, T. M. (1982). Social psychology of creativity: A consensual assessment technique. Journal of Personality and Social Psychology, 43(5), 997–1013.
De Freitas, J., Nave, G., & Puntoni, S. (2025). Ideation with Generative AI—in Consumer Research and Beyond. Journal of Consumer Research, 52(1), 18–31. https://doi.org/10.1093/jcr/ucaf012
Epp, A. M., & Humphreys, A. (2025). Collaborating with Generative AI in Consumer Culture Research. Journal of Consumer Research, 52(1), 32–48. https://doi.org/10.1093/jcr/ucaf014
Harvey, S., & Berry, J. W. (2023). Toward a meta-theory of creativity forms: How novelty and usefulness shape creativity. Academy of Management Review, 48(3), 504–529.
Huang, M.-H., & Rust, R. T. (2025). The GenAI future of consumer research. Journal of Consumer Research, 52(1), 4–17. https://doi.org/10.1093/jcr/ucaf013
Melumad, S., & Pham, M. T. (2020). The smartphone as a pacifying technology. Journal of Consumer Research, 47(2), 237–255.
Schmitt, B. (2025). GenAI and consumer research: Are we the last generation of human consumer researchers? Journal of Consumer Research, 52(1), 1–3. https://doi.org/10.1093/jcr/ucaf015