Conversational Analysis Reimagined: Integrating Generative AI into Qualitative Research
- Yuan Ren

The rapid emergence of generative artificial intelligence (genAI) tools is fundamentally reshaping how researchers engage with unstructured data, particularly within the domain of qualitative analysis. While these technological advancements offer exciting possibilities, they also bring significant challenges. Traditional qualitative coding methods, though rigorous, are often extremely time-intensive, whereas general-purpose chatbots and large language models (LLMs) offer immense speed but frequently fall short of the rigor, transparency, and traceability crucial for sound qualitative inquiry. As Susanne Friese notes in her 2026 article, many current applications claiming to automate qualitative analysis perform tasks closer to classification proxies than to the deep, interpretive work characteristic of the field.
In this context of technological change, Friese (2026) questions the long-standing reliance on coding as the dominant analytic procedure in many qualitative research traditions. She argues that coding, once considered an essential cornerstone of qualitative analysis, may now be effectively replaced by AI-supported data retrieval combined with dialogue-based interpretation. To this end, she introduces a novel methodological framework: Conversational Analysis to the Power of AI (CAAI), proposing a shift toward a more interactive, co-analytic engagement with qualitative data. This framework reimagines analysis as a process of iterative questioning, synthesis, and reflexive interpretation rather than simple segmentation and categorization.
The Debate: Qualitative Coding vs. LLM-Driven Classifications
As the use of LLMs in qualitative research gains momentum, it becomes increasingly important to distinguish between real qualitative coding and what may more accurately be described as "classification proxies". Researchers from computer science, data science, or human-computer interaction (HCI) departments (e.g., Gao et al., 2023; Xiao et al., 2023; Dunivin, 2024; Zhang et al., 2024) typically adopt an engineering mindset, treating qualitative data as text to be segmented, labeled, or clustered using models. These projects frequently rely on custom tools and pre-processed datasets, reducing analysis to a set of classification tasks that often require "chunking" the text, which destroys the continuity essential for the interpretive leaps at the heart of qualitative analysis.
By contrast, qualitative researchers from fields like sociology, education, health, or communication (e.g., Goyanes et al., 2025; Hayes, 2025; Hitch, 2024; Lixandru, 2024; Perkins & Roe, 2024) tend to emphasize interpretation, reflexivity, and transparency. In these studies, coding is not merely a function of labeling text but a theoretical, iterative, and contextual process. Real qualitative coding involves repeated immersion, reflection, memoing, and reconfiguration across full transcripts, often resulting in 80 to 250 distinct codes. When LLMs are used to simplify or flatten these methods into labeling exercises, reflexivity, nuance, and methodological rigor are sacrificed. Therefore, recognizing the difference between classification proxies and real qualitative coding is essential to ensure methodological integrity in this rapidly evolving AI era.
Epistemological Shifts: The Triadic Interpretive Space
Qualitative inquiry has traditionally been grounded in the interpretive paradigm, where meaning does not reside directly in the data but emerges from a dialogic and recursive relationship between the analyst, the context, and the text. When AI is added to this equation, it does not replace the researcher but becomes part of a triadic interpretive space. As Krähnke et al. (2025) and Schäffer and Lieder (2023) argue, a hermeneutic epistemology is particularly well-suited to conceptualizing how AI might be meaningfully integrated into qualitative analysis. The process of understanding becomes a navigation between perspectives: those of the participant, the AI model, and the researcher’s own evolving framework.
While AI models lack Seinsverbundenheit, the ontological embeddedness in lived socio-historical worlds described by Gadamer (1960) and Mannheim (1936), their outputs can still offer viable insights because they are trained on vast, socially-situated corpora. LLMs can serve as "abductive catalysts," helping researchers gain unexpected insights, challenging established assumptions, and inviting new interpretive directions. In this mode, the knowledge produced by AI is collective and distributed, an aggregated echo of meaning-making across diverse social, cultural, and discursive contexts. While LLM training data may overrepresent Western discourse, in a data-grounded dialogic use, the model is tasked with retrieving and reorganizing meaning strictly within the boundaries of the material provided by the researcher, anchoring the interpretive space in the actual voices of participants.

The Rise and Practice of Post-Coding Analysis
Several researchers are already charting pathways toward post-coding analysis, demonstrating that coding-free analysis is already emerging in practice. Hayes (2025) exemplifies a hybrid approach, using GPT-4 or Claude 3.5 to generate thematic frameworks and subsequently prompting these models to perform more dynamic tasks, such as comparing perspectives across cases or simulating policy debates. Perkins and Roe (2024) move further by decoupling theme development from data segmentation, using chatbots for pattern recognition and plausibility checks while human researchers validate the outputs.
Morgan’s (2026) Query-Based Analysis (QBA) offers a structured three-step process, starting with broad initial queries to elicit high-level themes, followed by specific queries to elaborate on sub-themes, and finally substantiating findings with original quotations. Additionally, Nguyen-Trung and Nguyen’s (2026) Narrative-Integrated Thematic Analysis (NITA) generates synthesis through the construction of individual narrative profiles and cross-case integration, allowing meaning to arise from a holistic engagement with participant accounts rather than by labeling discrete data fragments. Together, these practices suggest that dialogue, not classification, may become the foundation of future qualitative analysis.
The Five Core Steps of the CAAI Framework
The CAAI framework is grounded in a relational-constructivist ontology, viewing meaning as a product of interaction. The framework is organized around four core iterative steps, with an optional fifth step for theoretical development.
Step 1 is getting to know the data. Researchers must develop familiarity with the material, potentially using AI-generated summaries for initial orientation. Due to the non-deterministic nature of LLMs, it is advised to conduct theme generation three times to determine which topics to interrogate further.
Step 2 is preparing for analysis. Researchers select a topic and develop a set of exploratory questions (inductive or deductive), which function as analytical scaffolding and provide a basis for transparency and replicability.
Step 3 focuses on selecting data and asking questions. To avoid generalized summarization, it is best to work with a focused subset of data (e.g., 4 to 6 interviews) and engage in iterative dialogue. During this process, abductive reasoning can be used to probe anomalies or contradictions, and the model can brainstorm plausible explanations.
Step 4 is synthesizing insights. After rapid AI exploration, researchers slow down to deeply engage with the conversation records and write a synthesis. This process can be done independently or co-developed with the AI, forming an "assemblage" where meaning arises through relations.
The optional Step 5 involves elevating the analysis. This stage integrates findings into existing theoretical frameworks, ensuring what Strübing et al. (2018) describe as "theoretical pervasiveness," allowing research to contribute original and robust insights to the scientific conversation.
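The workflow above can be sketched in code. This is a purely illustrative model of the CAAI loop, not software that Friese prescribes; all class and method names here are hypothetical, chosen only to show how the question set (Step 2), the logged dialogue (Step 3), and the synthesis over the full conversation record (Step 4) relate to one another:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: names and structure are illustrative,
# not part of Friese's (2026) framework itself.

@dataclass
class CAAISession:
    topic: str
    question_set: list[str]  # Step 2: exploratory questions as scaffolding
    chat_protocol: list[dict] = field(default_factory=list)  # audit trail

    def ask(self, question: str, answer: str) -> None:
        # Step 3: every question/answer pair is logged, so the
        # interpretive pathway stays visible and reviewable.
        self.chat_protocol.append({"question": question, "answer": answer})

    def synthesize(self, researcher_notes: str) -> str:
        # Step 4: the researcher slows down and writes the synthesis
        # from the complete conversation record.
        record = "\n".join(
            f"Q: {t['question']}\nA: {t['answer']}" for t in self.chat_protocol
        )
        return f"Topic: {self.topic}\n{record}\nSynthesis: {researcher_notes}"
```

The key design point the sketch tries to capture is that the chat protocol, not a codebook, is the persistent analytic artifact: it is what makes the process transparent and reviewable.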
Adapting Traditional Methods and Reconstructing Integrity

The CAAI framework is highly flexible and allows for the integration of procedural steps from traditions such as Grounded Theory (GT), Qualitative Content Analysis, or Thematic Analysis (TA). When adapting Grounded Theory, researchers no longer assign formal codes but "break open" the data through exploratory questioning, mirroring the open coding phase described by Corbin and Strauss (2015) and using dialogue for constant comparison and category development. Qualitative content analysis can also be reconstructed: rather than fitting data segments into predefined slots, researchers first build a rich narrative understanding through dialogue, only abstracting categories and their dimensions at the end of the process.
To ensure rigor in CAAI, several strategies are recommended. First, researchers should prioritize tools based on retrieval-augmented generation (RAG), as these architectures explicitly ground outputs in the user’s data, greatly reducing hallucination risk. Second, topic-based question sets act as analytical scaffolds, and the full chat protocol provides an audit trail, making interpretive pathways visible and reviewable. Furthermore, cross-person comparison between analysts, or temporal comparison for solo researchers, helps assess the stability of interpretations. Configuring model parameters, such as a low temperature setting between 0.2 and 0.5, further supports output consistency.
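In practice, the data-grounded use described above amounts to constraining every query to the researcher's own material and keeping generation conservative. The helper below is a minimal, hypothetical sketch of that idea; the function name and prompt wording are assumptions, and the resulting prompt would be sent to whatever chat-completion client the researcher uses, together with a temperature in the suggested 0.2–0.5 range:

```python
# Hypothetical helper: constrains an LLM query to researcher-supplied
# interview excerpts (the RAG-style, data-grounded use described above).
# The prompt wording is illustrative, not a tested or recommended prompt.

LOW_TEMPERATURE = 0.3  # within the 0.2-0.5 range suggested for consistency

def build_grounded_prompt(excerpts: list[str], question: str) -> str:
    # Number the excerpts so the model can cite its sources.
    sources = "\n\n".join(
        f"[Interview {i + 1}]\n{text}" for i, text in enumerate(excerpts)
    )
    return (
        "Answer strictly from the interview excerpts below. "
        "Quote the excerpts verbatim as evidence, and if they do not "
        "address the question, say so rather than speculate.\n\n"
        f"{sources}\n\nQuestion: {question}"
    )
```

Anchoring the prompt in numbered excerpts also makes the audit trail easier to check: any claim in the model's answer can be traced back to a specific interview.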

Ethics, Authorship, and the Ultimate Paradigmatic Shift
In the CAAI approach, the researcher remains the primary author. Even when language is co-generated, the significance, the choice of theoretical lens, and the final interpretive judgment belong entirely to the human researcher. Authorship lies not in who generates the text, but in who confers significance. Transparency is essential, requiring researchers to disclose the question sets and analytic trails used.
Ultimately, CAAI represents a deeper paradigmatic shift in qualitative analysis. It challenges the long-standing assumption that segmentation and classification are preconditions for rigorous interpretation, replacing them with a logic of questioning, synthesis, interpretive flow, and iterative refinement. As Krähnke et al. (2025) suggest, while LLMs act as assistants in the conversation, the human researcher remains the analytic engine. By embracing AI as a generative mediator within the interpretive process, CAAI not only offers efficiency and flexibility but also deepens interpretation and expands the epistemic horizons of qualitative research in the digital age.
References:
Corbin, J. M., & Strauss, A. L. (2015). Basics of qualitative research: Techniques and procedures for developing grounded theory (4th ed.). Sage.
Friese, S. (2026). From Coding to Conversation: A New Methodological Framework for AI-Assisted Qualitative Analysis. Qualitative Inquiry. https://doi.org/10.1177/10778004251412871
Gadamer, H.-G. (1960). Hermeneutik 1. Wahrheit und Methode. Grundzüge einer philosophischen Hermeneutik. Mohr-Siebeck.
Gao, J., Choo, K. T. W., Cao, J., Lee, R. K. W., & Perrault, S. (2023). CoAIcoder: Examining the effectiveness of AI-assisted human-to-human collaboration in qualitative analysis. arXiv.
Goyanes, M., Lopezosa, C., & Jordá, B. (2025). Thematic analysis of interview data with ChatGPT: designing and testing a reliable research protocol for qualitative research. Quality & Quantity, 59(6), 5491–5510. https://doi.org/10.1007/s11135-025-02199-3
Hayes, A. S. (2025). “Conversing” With Qualitative Data: Enhancing Qualitative Research Through Large Language Models (LLMs). International Journal of Qualitative Methods, 24. https://doi.org/10.1177/16094069251322346
Hitch, D. (2024). Artificial Intelligence Augmented Qualitative Analysis: The Way of the Future? Qualitative Health Research, 34(7), 595–606. https://doi.org/10.1177/10497323231217392
Krähnke, U., Pehl, T., & Dresing, T. (2025). Hybride Interpretation textbasierter Daten mit dialogisch integrierten LLMs: Zur Nutzung generativer KI in der qualitativen Forschung. SSOAR. https://nbn-resolving.org/urn:nbn:de:0168-ssoar-99389-7
Lixandru, I.-D. (2024). The use of artificial intelligence for qualitative data analysis: ChatGPT. Informatica Economica, 28(1), 57–67. https://doi.org/10.24818/issn14531305/28.1.2024.05
Mannheim, K. (1936). Ideology and utopia: An introduction to the sociology of knowledge (L. Wirth & E. Shils, Trans.). Harcourt, Brace & Company.
Morgan, D. L. (2026). Query-Based Analysis: A Strategy for Analyzing Qualitative Data Using ChatGPT. Qualitative Health Research, 36(2–3), 206–217. https://doi.org/10.1177/10497323251321712
Nguyen-Trung, K., & Nguyen, N. L. (2026). Narrative-Integrated Thematic Analysis (NITA): How can LLMs support theme generation without coding? Qualitative Research in Psychology, 1–37. https://doi.org/10.1080/14780887.2026.2638348
Perkins, M., & Roe, J. (2024). The use of generative AI in qualitative analysis: Inductive thematic analysis with ChatGPT. Journal of Applied Learning & Teaching, 7(1), 390–405. https://doi.org/10.37074/jalt.2024.7.1.22
Schäffer, B., & Lieder, F. R. (2023). Distributed interpretation - teaching reconstructive methods in the social sciences supported by artificial intelligence. Journal of Research on Technology in Education, 55(1), 111–124. https://doi.org/10.1080/15391523.2022.2148786
Strübing, J., Hirschauer, S., Ayaß, R., Krähnke, U., & Scheffer, T. (2018). Gütekriterien qualitativer Sozialforschung. Ein Diskussionsanstoß. Zeitschrift Für Soziologie, 47(2), 83–100. https://doi.org/10.1515/zfsoz-2018-1006
Xiao, Z., Yuan, X., Liao, Q. V., Abdelghani, R., & Oudeyer, P.-Y. (2023). Supporting qualitative analysis with large language models: Combining codebook with GPT-3 for deductive coding. arXiv. https://doi.org/10.48550/arxiv.2304.10548
Dunivin, Z. O. (2024). Scalable qualitative coding with LLMs: Chain-of-thought reasoning matches human performance in some hermeneutic tasks. arXiv.
Zhang, H., Wu, C., Xie, J., Rubino, F., Graver, S., Kim, C., Carroll, J. M., & Cai, J. (2024). When qualitative research meets large language model: Exploring the potential of QualiGPT as a tool for qualitative coding. arXiv.