When Generative AI Enters Consumer Culture Research: Gains and Unexamined Assumptions

How GenAI Transforms Qualitative Data Analysis

Generative Artificial Intelligence (GenAI) is reshaping qualitative consumer culture research by introducing new efficiencies while raising epistemological tradeoffs across theoretical, embodied, empirical, and historical dimensions. In other words, when consumer culture researchers upload interview transcripts, fieldnotes, or archival materials into large language models (LLMs), what changes is not merely research efficiency. Epp and Humphreys (2025) argue that the emergence of GenAI compels us to reconsider the role of the researcher, the sources of knowledge, and the epistemological foundations upon which cultural analysis depends. The question is no longer simply whether AI can help summarize data, but whether, within the context of human–AI collaboration, the very logic of cultural understanding is undergoing subtle yet profound transformation.


From Fieldnotes to Foundation Models

Technology has long intervened in cultural research. Early ethnographers documented rituals, speech patterns, and lifeworlds through handwritten notes (Jones, 2010; Roldán, 2002). Mauss developed theoretical insights based on archived printed materials rather than direct field immersion (Fournier, 2006), demonstrating that cultural knowledge production has always been mediated through technological forms.


In the twentieth century, computational text analysis gradually entered the social sciences, beginning with IBM mainframe–based content analysis (Stone et al., 1966), followed by word-counting software (Carley, 1994), and later advancing to topic modeling and word embeddings (Berger et al., 2020; Humphreys and Wang, 2018; Mikolov et al., 2013a, 2013b). LLMs extend this technological lineage by generating language through predicting the “next most probable word” (Brown et al., 2020; Puntoni et al., 2021). Unlike earlier tools, however, GenAI does not merely analyze text; it can generate responses, synthesize speech, and produce language with theoretical structure. This capability gradually shifts its role from analytical assistant to potential “research partner.”
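The "next most probable word" mechanism can be illustrated with a toy sketch. The vocabulary and scores below are invented for illustration only; a real LLM produces scores over tens of thousands of tokens from billions of learned parameters.

```python
import math

# Toy illustration of next-token prediction: the model assigns a score
# (logit) to each candidate word, converts scores to probabilities with
# softmax, and a greedy decoder picks the most probable word.
vocab = ["culture", "market", "ritual", "data"]
logits = [2.1, 0.3, 1.4, -0.5]  # hypothetical unnormalized scores

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
next_word = vocab[probs.index(max(probs))]  # greedy choice: "culture"
```

The example is, of course, a drastic simplification: it shows only the final selection step, not the contextual representation that produces the scores.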

Evolution of Automated Text-based Analysis

To understand how researchers actually collaborate with GenAI, the authors conducted interviews with academic and industry-based consumer culture researchers and complemented these with participatory experimentation using LLMs (Huberman and Miles, 1994; Strauss and Corbin, 2014). Cultural research is inherently iterative and recursive, continually moving between theory and data (McCracken, 1988; Peirce, 1903/1934; Sætre and Van de Ven, 2021). The introduction of GenAI does not eliminate this structure but instead clarifies fundamental differences between human researchers and machines across four dimensions: theoretical, embodied, empirical, and historical.


Theoretical Dimension: Goal-Directed Perspective versus the “View from Nowhere”

Consumer culture research is premised on the researcher as a theoretical subject. Researchers enter the field with questions, interpretive goals, and theoretical frameworks; this directed perspective constitutes the core of cultural analysis (Daston, 1992; Nagel, 1986). By contrast, LLMs embody what Daston calls the “view from nowhere”—an abstracted, generalized standpoint devoid of independent motivation or inquiry (Barandiaran and Almendros, 2024).


Consequently, GenAI demonstrates notable strengths in summarization and structured integration, such as aggregating interview themes or comparing theoretical positions. Yet when asked to generate novel insights with genuine theoretical tension, its outputs tend to become smooth and flattened. Theoretical breakthroughs often arise from friction between prior assumptions and empirical data, whereas LLMs transform tension into statistical coherence. In this sense, GenAI may accelerate researchers’ entry into empirical materials, but theoretical judgment remains dependent upon a goal-directed human subject.


Embodied Dimension: Lived Experience versus Synthetic Experience

Cultural understanding is rooted in embodied experience and the lifeworld (Merleau-Ponty, 1945/2013; Johnson, 2015). During interviews, researchers construct meaning through tone, pauses, affect, and environmental awareness, elements that resist full abstraction into linguistic patterns. While LLMs can generate the voice of a “composite consumer,” such a voice emerges from statistical synthesis of discourse rather than lived experience. As the authors emphasize, generative interview texts cannot substitute for real interviews because they lack contextual and embodied complexity (Pollio et al., 1997).


More importantly, the scope of a model’s training data defines the boundaries of what it can “voice.” If certain communities are underrepresented in textual corpora, the model’s outputs cannot adequately reflect their experiences. In this sense, the model’s “biography” is not background information but an analytic condition that researchers must incorporate into interpretation.


Empirical Dimension: Deep Familiarity versus Decontextualized Knowledge

An epistemological distinction also separates human researchers from LLMs. Researchers develop deep familiarity with specific field contexts through long-term immersion, whereas LLMs possess broad but decontextualized textual knowledge. The authors note that researchers may use models dialogically, querying themes, validating coding schemes, or triangulating findings (Than et al., 2025). However, positioning the machine as primary analyst risks overlooking themes that gradually emerge through repeated reading and contextual observation.


Cultural research is not merely pattern recognition; it is interpretation of meaning-making processes. Through iterative movement between theory and experience, researchers cultivate sensitivity to cultural nuance. Under current technological conditions, this dimension of inquiry cannot be fully outsourced to machines.


Historical Dimension: Temporality and the “Flattening” of Context

Cultural meaning is embedded in specific historical contexts (Hall, 2016; Holt, 2004). Because LLMs are trained on corpora spanning broad temporal periods, their generative logic may attenuate historical specificity, compressing diverse cultural moments into generalized expression patterns. Researchers may attempt to increase temporal sensitivity by constraining training data to specific historical corpora, such as constructing period-specific linguistic dictionaries (Brysbaert et al., 2014), or applying models to endangered language preservation (Koc, 2025).


Yet for emergent cultural practices that have not stabilized into textual form, models often struggle to identify meaningful structures (Price et al., 2024). Culture unfolds at the margins, in everyday rituals and micro-interactions that may not yet be sedimented in archives. GenAI’s historical lens is anchored in prior associations, rendering it comparatively less attuned to meanings in formation.

Theoretical, Embodied, Empirical, and Historical Tradeoffs of Generative AI in Consumer Culture Research

Principles for Human–AI Collaboration: Methodological Positioning, Not Checklists

In response to these tradeoffs, the authors propose four guiding principles: transparency, provenance, privacy, and verisimilitude. Researchers should document when and how AI tools were used, including prompts and outputs; report the provenance and characteristics of training data; protect participant privacy through walled or sandboxed environments; and ensure verisimilitude by triangulating AI-generated insights with raw data and human interpretation (Arnould and Epp, 2006; Belk et al., 1989).


These principles are not merely procedural requirements. They represent methodological positioning. Transparency clarifies how AI shapes interpretation. Provenance situates outputs within their training contexts. Privacy safeguards embodied participants. Verisimilitude ensures that findings resonate with lived experience. Collaboration with GenAI must remain accountable to the epistemic commitments of qualitative inquiry.


Model selection and parameter configuration likewise carry methodological implications. Different tools vary in privacy safeguards, reasoning capacities, and data-processing affordances. Hyperparameters such as “temperature” influence the randomness and creativity of outputs (Arnold et al., 2024; Talamadupula, 2024). Researchers must calibrate models in accordance with research objectives rather than treating them as neutral instruments.
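The effect of temperature can be made concrete with a minimal sketch. The logits are invented for illustration; what matters is the mechanism: dividing scores by the temperature before softmax sharpens the distribution at low values and flattens it at high values, which is why low temperatures yield near-deterministic outputs and high temperatures yield more varied, "creative" ones.

```python
import math

# Sketch of how the "temperature" hyperparameter reshapes the
# next-token probability distribution. Logits are hypothetical.
logits = [2.0, 1.0, 0.1]

def softmax_with_temperature(scores, temperature):
    scaled = [s / temperature for s in scores]  # divide before softmax
    exps = [math.exp(s) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

low_t = softmax_with_temperature(logits, 0.2)   # sharper: near-deterministic
high_t = softmax_with_temperature(logits, 2.0)  # flatter: more varied sampling
```

For a researcher, this means a low-temperature setting is better suited to reproducible summarization tasks, while a higher setting may be defensible when deliberately probing the model's range of plausible framings.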

Human-AI Collaboration in Consumer Research

Conclusion: Reconsidering the Researcher-as-Instrument Paradigm

Ultimately, Epp and Humphreys emphasize that consumer culture theory has long operated within a researcher-as-instrument paradigm. GenAI is valuable only insofar as it facilitates human meaning-making rather than obscuring researcher judgment. At the same time, AI itself becomes an important object of cultural inquiry, warranting sustained investigation into its role within consumer practices and technological discourse (Giesler, 2008; Kozinets, 2008; Huang and Rust, 2025).


When researchers hand text over to machines, the issue has never been efficiency alone. The deeper question is whether, in this collaboration, we are quietly redefining what it means to understand.




References

  • Arnold, C., Biedebach, L., Küpfer, A., & Neunhoeffer, M. (2024). The role of hyperparameters in machine learning models and how to tune them. Political Science Research and Methods, 12(4), 841–848. https://doi.org/10.1017/psrm.2023.61

  • Arnould, E. J., & Epp, A. M. (2006). Deep engagement with consumer experience: Listening and learning with qualitative data. In R. Grover & M. Vriens (Eds.), The SAGE handbook of marketing research (pp. 51–82). Sage Publications.

  • Barandiaran, X. E., & Almendros, L. S. (2024). Transforming agency: On the mode of existence of large language models. arXiv.

  • Belk, R. W., Wallendorf, M., & Sherry, J. F. (1989). The Sacred and the Profane in Consumer Behavior: Theodicy on the Odyssey. The Journal of Consumer Research, 16(1), 1–38. https://doi.org/10.1086/209191 

  • Berger, J., Humphreys, A., Ludwig, S., Moe, W. W., Netzer, O., & Schweidel, D. A. (2020). Uniting the Tribes: Using Text for Marketing Insight. Journal of Marketing, 84(1), 1–25. https://doi.org/10.1177/0022242919873106

  • Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., … Amodei, D. (2020). Language Models are Few-Shot Learners. arXiv.

  • Brysbaert, M., Warriner, A. B., & Kuperman, V. (2014). Concreteness ratings for 40 thousand generally known English word lemmas. Behavior Research Methods, 46(3), 904–911. https://doi.org/10.3758/s13428-013-0403-5

  • Carley, K. (1994). Extracting culture through textual analysis. Poetics, 22(4), 291–312.

  • Daston, L. (1992). Objectivity and the Escape from Perspective. Social Studies of Science, 22(4), 597–618. https://doi.org/10.1177/030631292022004002

  • Epp, A. M., & Humphreys, A. (2025). Collaborating with Generative AI in Consumer Culture Research. The Journal of Consumer Research, 52(1), 32–48. https://doi.org/10.1093/jcr/ucaf014

  • Fournier, M. (2006). Marcel Mauss: A biography. Princeton University Press.

  • Giesler, M. (2008). Conflict and Compromise: Drama in Marketplace Evolution. The Journal of Consumer Research, 34(6), 739–753. https://doi.org/10.1086/522098

  • Hall, S. (2016). Cultural studies 1983: A theoretical history (J. D. Slack & L. Grossberg, Eds.). Duke University Press.

  • Holt, D. B. (2004). How brands become icons: The principles of cultural branding. Harvard Business School Press.

  • Huang, M.-H., & Rust, R. T. (2025). The GenAI Future of Consumer Research. The Journal of Consumer Research, 52(1), 4–17. https://doi.org/10.1093/jcr/ucaf013

  • Huberman, A. M., & Miles, M. B. (1994). Data management and analysis methods. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 428–444). Sage.

  • Humphreys, A., & Wang, R. J.-H. (2018). Automated Text Analysis for Consumer Research. The Journal of Consumer Research, 44(6), 1274–1306. https://doi.org/10.1093/jcr/ucx104

  • Johnson, M. (2015). Embodied understanding. Frontiers in Psychology, 6, 875. https://doi.org/10.3389/fpsyg.2015.00875

  • Jones, J. S. O. (2010). Origins and ancestors: A brief history of ethnography. In Ethnography in social science practice (pp. 29–43). Routledge.

  • Koc, V. (2025). GenAI and large language models in language preservation: Opportunities and challenges. arXiv. https://doi.org/10.48550/arXiv.2501.11496

  • Kozinets, R. V. (2008). Technology/ideology: How ideological fields influence consumers’ technology narratives. Journal of Consumer Research, 34(6), 865–881.

  • McCracken, G. (1988). The long interview. Sage.

  • Merleau-Ponty, M. (2013). Phenomenology of perception (D. A. Landes, Ed.). Routledge. (Original work published 1945)

  • Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013a). Efficient Estimation of Word Representations in Vector Space. arXiv.

  • Mikolov, T., Sutskever, I., Chen, K., Corrado, G., & Dean, J. (2013b). Distributed Representations of Words and Phrases and their Compositionality. arXiv.

  • Nagel, T. (1986). The view from nowhere. Oxford University Press.

  • Peirce, C. S. (1934). Collected papers of Charles Sanders Peirce (Vol. 5). Harvard University Press. (Original work published 1903)

  • Pollio, H. R., Henley, T. B., & Thompson, C. J. (1997). The phenomenology of everyday life: Empirical investigations of human experience. Cambridge University Press.

  • Price, L. L., Moisio, R., & Arnould, E. J. (2024). Making context matter even more: Tools for leveraging contexts for insights. In R. W. Belk & C. Otnes (Eds.), Handbook of qualitative research methods in marketing (pp. 471–485). Edward Elgar Publishing.

  • Puntoni, S., Reczek, R. W., Giesler, M., & Botti, S. (2021). Consumers and Artificial Intelligence: An Experiential Perspective. Journal of Marketing, 85(1), 131–151. https://doi.org/10.1177/0022242920953847

  • Roldán, A. Á. (2002). Writing ethnography. Malinowski’s fieldnotes on Baloma. Social Anthropology, 10(3), 377–393. https://doi.org/10.1111/j.1469-8676.2002.tb00065.x

  • Stone, P. J., Dunphy, D. C., & Smith, M. S. (1966). The General Inquirer: A computer approach to content analysis. MIT Press.

  • Strauss, A. L., & Corbin, J. M. (2014). Basics of qualitative research: Grounded theory procedures and techniques (4th ed.). Sage.

  • Sætre, A. S., & Van de Ven, A. (2021). Generating Theory by Abduction. The Academy of Management Review, 46(4), 684–701. https://doi.org/10.5465/amr.2019.0233

  • Talamadupula, K. (2024, March 4). A guide to LLM hyperparameters. Symbl.ai.

  • Than, N., Fan, L., Law, T., Nelson, L. K., & McCall, L. (2025). Updating “The Future of Coding”: Qualitative Coding with Generative Large Language Models. Sociological Methods & Research, 54(3), 849–888. https://doi.org/10.1177/00491241251339188
