Using AI in Qualitative Research
- Lille My

- Oct 26
- 11 min read

On October 15, New Scholars hosted a webinar featuring Stine Grodal and Henri Schildt on using AI for qualitative studies. What follows is a best-practice guide to AI-augmented qualitative research, based on Stine and Henri's paper and their webinar.
1.0 Introduction: A New Frontier for Qualitative Inquiry
New artificial intelligence (AI) tools are fundamentally reshaping the landscape of knowledge work, creating both significant opportunities and profound challenges for the qualitative research community. Just as professions like law and translation have been transformed, our field now stands at a critical juncture. The risk of widespread "deskilling" and the proliferation of superficial "AI slop" are real threats. Yet these same tools hold the potential to drive new waves of theoretical innovation. The practices we develop now are likely to have a lasting, path-dependent impact, shaping both our methods and the future of the technology itself; the choices made by this generation of scholars will prove profoundly consequential.
For today's researcher, the central choice is not if they will engage with these tools, but how. This decision represents a strategic fork in the road, leading down one of two distinct paths. The first is automation, a tempting route that promises speed and efficiency but risks producing superficial, descriptive findings by sidelining the researcher's critical interpretive faculties. The second path is augmentation, a more demanding but far more rewarding approach where AI serves as a powerful collaborator, enhancing the human craft of qualitative analysis and creating the potential for deeper, more innovative theory.
This guide provides a practical framework for adopting an augmented approach. Its purpose is to equip researchers with the principles and practices needed to ethically and effectively leverage AI to enhance, not replace, their own expertise. By mastering these methods, researchers can maintain intellectual control over the analytical process, ensuring that technology serves the ultimate goal of rigorous and impactful theory development. To begin, we must first establish the foundational choice between automation and augmentation.
2.0 The Foundational Choice: Automation vs. Augmentation
Understanding the distinction between automating and augmenting research tasks is the single most critical step for any researcher considering AI tools. This foundational choice dictates not only the quality and depth of the analytical output but also the legitimacy of the findings and the fundamental role of the researcher. It is the difference between treating AI as a black box that delivers "answers" and engaging it as a partner in a complex, iterative process of discovery. Making this choice deliberately is essential for safeguarding the craft and values of qualitative inquiry in the age of AI.
The automated approach is one in which "machines take over a human task" (Raisch & Krakowski, 2021, p. 192). In this model, the researcher offloads core analytical work, treating the AI as a producer of objective results. The allure is undeniable: it promises objectivity, speed, and ease, seemingly removing human bias and dramatically reducing laborious coding time. However, this approach carries severe risks. Findings are often superficial, descriptive, and limited to the most common patterns, overlooking the rare observations that lead to valuable insights. With legitimacy residing in the tool itself, the researcher abdicates responsibility for the final interpretation.
In stark contrast, the augmented approach is one in which "humans collaborate closely with machines to perform a task" (Raisch & Krakowski, 2021, p. 192). Here, the AI functions as a powerful assistant under the direct intellectual control of the researcher. The upside is the potential for a deeper, richer understanding of data and the generation of novel theory. The downside is that this approach is more time-consuming and cognitively demanding. Crucially, in the augmented model, the final intellectual and ethical responsibility—and thus the legitimacy of the findings—resides firmly and unequivocally with the researcher.
To use AI effectively as an augmentation tool, one must first understand its specific capabilities and inherent limitations.
3.0 Understanding AI's Capabilities: The Smart but Erratic Colleague
AI tools are not magic boxes capable of independent thought; they are powerful but fallible instruments with specific "affordances"—the enabling and constraining effects that shape how they can be used in research. A useful metaphor is to think of an AI as an "eccentric colleague." This colleague is always available, can process information at incredible speed, and sometimes offers brilliant insights. However, they are also prone to error, lack deep contextual understanding, and will never take responsibility for their suggestions. As with such a colleague, you must always question, verify, and ultimately own the final work.
Core Affordances for Qualitative Research
The generic capabilities of AI translate into specific, powerful affordances for qualitative analysis. These must be leveraged to support, rather than replace, core research practices.
• Rapidly identify themes and meanings: AI excels at processing large datasets to identify recurring patterns and concepts in text. This affordance directly supports the foundational qualitative tasks of open and closed coding. In an augmented approach, a researcher can use AI to quickly identify passages related to an emerging interpretation and then use those passages to support, refine, or contradict their ideas.
• Categorization and grouping of passages: AI can group identified passages or codes based on semantic similarity or user-defined criteria. This capability is highly useful for the formation and assessment of second-order categories. After forming initial ideas, a researcher can triangulate their findings by comparing them to the groupings suggested by the AI, uncovering new connections or challenging initial assumptions.
• Summarization of longer texts: LLMs are adept at condensing large volumes of text into coherent summaries. This affordance can be used for the creation of narrative summaries of interviews or field notes. For researchers managing large datasets, these summaries provide an efficient way to grasp key aspects of the context and navigate the material more effectively.
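As a rough illustration of the grouping affordance, the sketch below clusters short passages by lexical overlap (Jaccard similarity). This is a deliberately transparent, deterministic stand-in for the semantic grouping an LLM would perform, and it can serve the same triangulation role: compare its groupings against your own second-order categories. The passages, threshold, and function names are all illustrative assumptions, not part of the paper or webinar.

```python
def jaccard(a: str, b: str) -> float:
    """Lexical overlap between two passages, from 0.0 (disjoint) to 1.0 (identical)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def group_passages(passages, threshold=0.25):
    """Greedily place each passage into the first group it sufficiently overlaps with."""
    groups = []
    for p in passages:
        for g in groups:
            if any(jaccard(p, q) >= threshold for q in g):
                g.append(p)
                break
        else:
            groups.append([p])  # no match: start a new group
    return groups

passages = [
    "managers resisted the new reporting tool",
    "several managers resisted the reporting tool at first",
    "junior staff embraced remote work quickly",
]
for g in group_passages(passages):
    print(g)
```

A real AI-assisted workflow would group by meaning rather than shared words, but the researcher's task is the same either way: inspect each machine-suggested grouping against the raw data before letting it shape a category.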
Essential Conditions for Effective Use
For an AI language model to function effectively in a qualitative research context, three essential conditions must be met. Failure in any of these areas will lead to unreliable or misleading outputs.
1. The relevant "language game" is contained in the model’s training data. AI models learn from vast text corpora. If the specific jargon or communication style of your research context (e.g., a highly specialized professional community) is not well-represented in that data, the model's ability to interpret meaning accurately will be limited.
2. The prompt provides relevant context to infer which language game is being played. A well-crafted prompt is crucial. It must give the AI enough information to understand the context of the data and the specific analytical task required. Vague or poorly contextualized prompts will yield generic and often useless results.
3. The model does not miss important moves, such as non-verbal communication or shared history. AI can only analyze the text it is given. It is blind to the rich, unwritten context that humans grasp intuitively, such as body language, tone of voice, or unspoken cultural understandings. This limitation underscores why human oversight is irreplaceable.
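Condition 2 above is largely a matter of prompt construction. The hypothetical helper below shows one way to bundle the study context, the analytical task, and the passage into a single prompt so the model can infer which "language game" is being played. The field names and wording are assumptions for illustration, not a prescribed template.

```python
def build_coding_prompt(study_context: str, task: str, passage: str) -> str:
    """Assemble a prompt that tells the model which 'language game' is in play."""
    return (
        f"Context: {study_context}\n"
        f"Task: {task}\n"
        "Base your answer only on the passage below; "
        "quote the exact words that support each code.\n"
        f"Passage: \"{passage}\""
    )

prompt = build_coding_prompt(
    study_context="Interviews with ER nurses about a new triage system, 2023.",
    task="Suggest up to three open codes for this passage.",
    passage="Honestly, the new system slows us down during night shifts.",
)
print(prompt)
```

Note what the context line does: without it, "the new system" could be read as software, politics, or physiology; with it, the model can situate the utterance in the right professional setting.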
With a clear understanding of these capabilities, researchers can now apply them within a structured, multi-phase process.
4.0 A Framework for AI-Augmented Analysis: From Proto-Theory to Contribution
To maintain intellectual control and ensure theoretical rigor, researchers must embed AI within a structured and iterative research workflow. This framework is not merely a workflow; it is a strategic discipline designed to subordinate the tool to the theorizing process. The following four-phase model integrates AI into the established practice of qualitative theory building, ensuring the researcher remains in control and the focus stays on developing insightful contributions.
Phase 1: Formulating Proto-theories
The initial goal of qualitative research is to develop novel explanations that move beyond existing knowledge. This process begins with the formation of proto-theories—the coarse and fragmented theoretical explanations that emerge from early engagement with the data. In an augmented workflow, AI can be a powerful ally in this phase. Researchers can use AI tools to rapidly explore the dataset, surface initial themes, and see how theoretical concepts might apply. This allows for a much broader initial exploration than manual coding alone, helping to generate a richer set of ideas to pursue.
Phase 2: Elaborating Proto-theories
This critical phase is an iterative cycle of deep reflection that is least amenable to AI involvement and relies almost entirely on human cognition. After generating initial proto-theories, the researcher must step back to elaborate on them. This involves two key activities: reflecting on the theory-data fit by revisiting raw data to question and refine the AI-generated patterns, and comparing proto-theories with existing theory to assess their novelty. This is where the researcher's expertise, creativity, and deep understanding of the literature are paramount. An automated approach bypasses this essential reflective work, leading to findings that lack theoretical depth.
Phase 3: Supporting Proto-theories with AI
Once a promising proto-theory has been elaborated, the researcher must support it with convincing empirical evidence. Here, AI's speed and scale become invaluable again. Researchers can use AI tools to systematically comb through large datasets to find further evidence that supports, refines, or contradicts their emerging explanation. This aligns with the core qualitative principle of "showing not telling," where theoretical claims are substantiated with direct quotes. AI can help create comprehensive data tables or quantify the prevalence of certain themes, increasing the researcher's confidence and making the final write-up more compelling.
Phase 4: Moving from Proto-theories to Theory
In the final phase, the goal is to generalize the findings, disassociating the refined proto-theory from its specific empirical setting to specify its broader theoretical contribution. This involves relating the new explanation back to the existing literature and clearly articulating its novelty. While this is primarily an intellectual task for the researcher, AI can serve as a useful "sparring partner." For example, a researcher can prompt an AI to generate baseline explanations for their research question. If the AI’s response is close to the study’s current contribution, it may signal that the argument needs further refinement to be truly novel.
By following this framework, researchers can strategically apply different reasoning styles within a controlled process.
5.0 Applying Reasoning Approaches with AI: Induction, Deduction, and Abduction
Qualitative inquiry employs three core reasoning approaches, and AI's role differs significantly for each. Applying an augmented model is possible across all three, but abduction emerges as the superior strategy for leveraging AI's capabilities while ensuring the researcher remains firmly in command of the analytical process.
The Inductive Approach
Induction reasons from specific empirical observations to formulate a general, novel claim or "rule." In an augmented inductive process, a researcher can leverage AI to rapidly identify potential themes, akin to a supercharged form of open coding, using these as a starting point for deeper interpretation. The primary risk of an automated inductive approach is that it produces atheoretical, descriptive results that merely report the most common patterns, overlooking theoretically rich anomalies.
The Deductive Approach
Deduction uses existing theoretical concepts to explain empirical findings. AI can augment this work by rapidly assessing the match between a theory and a large dataset, for instance, by finding all passages related to a predefined construct. The risk of an automated deductive approach is that it is often merely duplicative, confirming what is already known without generating new theory, which is the primary goal of qualitative research.
The Abductive Approach
Abduction is a creative inferential process that produces new hypotheses to explain surprising or anomalous evidence. This approach is the most powerful and synergistic with AI augmentation because it ensures the researcher remains in control. The process starts with a human-identified puzzle, positioning AI as a tool to explore and evaluate explanations for the anomaly. Researchers can use AI's speed to test different theoretical lenses and search for related evidence, engaging in a dynamic form of "alternative casing" that amplifies their creativity without ceding intellectual authority. Abduction hard-wires an augmented workflow, making it the ideal approach for rigorous AI-assisted inquiry.
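The "alternative casing" move can even be organized programmatically: draft one prompt per candidate theoretical lens for the same human-identified anomaly, then compare the resulting explanations side by side. The lenses, puzzle, and wording below are illustrative, not drawn from the paper.

```python
def casing_prompts(puzzle: str, lenses: list[str]) -> dict[str, str]:
    """One prompt per theoretical lens for the same human-identified anomaly."""
    return {
        lens: (
            f"Anomaly: {puzzle}\n"
            f"Using {lens} as a lens, propose one explanation "
            "and list the evidence that would support or refute it."
        )
        for lens in lenses
    }

puzzle = "High performers quit shortly after receiving promotions."
prompts = casing_prompts(puzzle, ["institutional theory", "identity work", "sensemaking"])
for lens in prompts:
    print(lens)
```

The crucial point is the ordering: the puzzle comes from the researcher's engagement with the data, and the AI's lens-by-lens responses are raw material for human judgment, not answers.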
The table below provides practical examples of how to apply AI within each reasoning approach.
Successfully applying these methods requires a deep commitment to the ethical responsibilities that govern all research.
6.0 Researcher Responsibility and Ethical Considerations in the AI Era
As powerful as AI tools are, they are simply that: tools. They are not co-authors, they possess no judgment, and they cannot take responsibility. It is therefore imperative to remember that final ethical and intellectual accountability for any research output remains entirely and unequivocally with the human researcher. Just as you would not let an erratic colleague co-author a paper without scrutiny, you cannot cede responsibility to an AI.
The Prime Directive: You Are Responsible
The most critical ethical guideline is straightforward: you, the researcher, are the only one responsible. Every interpretation, claim, and conclusion that leaves your hands is one you must be fully committed to defending. AI can provide suggestions and identify patterns, but it is the researcher's duty to critically evaluate whether those outputs make sense with the data and meet the standards of scholarly rigor. Trusting an AI's output without deep, personal verification is an abdication of this core professional duty.
Critical Ethical Obligations and Risks
Researchers must navigate several key ethical domains when using AI. These are central to producing trustworthy and legitimate scholarship.
• Reliability and Bias: AI models are not neutral; they reproduce and often amplify the human biases present in their training data, including covert racial and dialect biases. A researcher using AI could unwittingly generate findings skewed by the model's biases related to gender, dialect, or social class. Mitigating this risk requires vigilant human examination of all AI-generated results against the original texts.
• Data Privacy and Security: Protecting sensitive data is a paramount ethical and legal obligation. Regulations such as the GDPR can make it unlawful to upload personal data, such as interview transcripts, to public AI services that transfer data across borders. Researchers must never use public-facing tools for confidential data and should explore secure, local models where possible.
• Transparency and Legitimacy: For AI-augmented research to be accepted, its methods must be transparent. The academic community, particularly journal editors and reviewers, plays a key role here. If AI use becomes associated with superficial, automated work, it risks delegitimizing the tool for everyone. We must collectively establish standards that distinguish rigorous augmentation from "quick and dirty" automation.
• Environmental Impact: Large language models consume significant amounts of energy and water. A single complex analysis on a large dataset can have a non-negligible environmental footprint. Researchers should be mindful of this impact and avoid running unnecessary or redundant analyses.
These responsibilities are not peripheral concerns; they are central to upholding the integrity of qualitative inquiry.
7.0 Conclusion: The Future of Augmented Qualitative Research
This guide has argued that AI presents the qualitative research community with a pivotal choice: between the superficial ease of automation and the demanding, insightful work of augmentation. While automation risks rendering our findings descriptive or duplicative, a strategically augmented approach holds the potential to deepen our engagement with data, sharpen our theoretical insights, and expand the frontiers of what we can learn from qualitative inquiry. The path we choose will have lasting consequences for our methods and the legitimacy of our craft.
Therefore, the qualitative research community—including individual researchers, journal editors, reviewers, and educators—must now deliberately develop, adopt, and enforce standards for augmented practice. By embracing AI as a sophisticated collaborator rather than a replacement for human intellect, we can harness its power to improve our productivity, generate more impactful theories, and secure the legitimacy of these new tools. By retaining the reflexive, creative human at the center of the analytical process, we ensure that technology serves scholarship, not the other way around.
The time to shape these practices is now. The early uses of a new technology often dictate its evolution and adoption. By thoughtfully integrating AI into our work, we can ensure that it becomes a powerful and enduring ally in the timeless human quest for understanding.