Dancing With the Black Box: Mapping Bias and Agency in Using Generative Artificial Intelligence in Qualitative Method Development
- Yuan Ren

GenAI Is More Than a Tool
Looking back from the mid-2020s, it is hard to think of any recent change in academia more disruptive than the rise of generative artificial intelligence (GenAI). As Deleuze and Guattari wrote in the 1980s, “Tools exist only in relation to the interminglings they make possible or that make them possible” (Deleuze & Guattari, 1988). Today, this observation feels especially relevant to new AI technologies, and particularly to GenAI. GenAI is not just a simple tool. It is a broad, complex, societally entangled apparatus, built through the intermingling of technologies, infrastructures, human labor, economy, culture, politics, power, ideologies, natural resources, fuel, histories, classifications, and meaning-making (Crawford, 2021; Lindgren, 2024).
As GenAI has developed at astonishing speed, it has promised unprecedented efficiency and created powerful imaginaries of revolutionary progress in how we work, communicate, and create. Academia has quickly joined this wave, adopting GenAI to support teaching, administration, and research (Burton et al., 2024; Perkins & Roe, 2024). Yet what AI eventually becomes politically, practically, socially, and within qualitative research is not predetermined by the technology itself. It is the result of sociopolitical processes in which scholars themselves participate (Lindgren, 2024). This is why researchers who integrate GenAI into qualitative research designs need critical reflection. They need to examine AI through broader questions of power, politics, and technology (Roberts & Bassett, 2023). At the moment, comprehensive legislative regulation of AI technologies is still taking shape in most jurisdictions, and it may easily be outpaced by rapid technological development. With or without such regulation, academia needs to stay highly alert to how GenAI is used and how it reshapes research.
One of the central problems is the “black box” nature of AI tools. For researchers, it is almost impossible to evaluate AI’s ideological underpinnings, the biases embedded in data sets and classifications, the labor conditions behind system development, or the environmental footprint created by developing and using these models. The meanings, knowledge, and prompts generated by GenAI models for qualitative research are never neutral. They are “rooted in society’s existing power structures and stereotypizations” (Lindgren, 2024). If used uncritically, these technologies can easily echo, and sometimes strengthen, harmful social dynamics.

From Passive Tool to Active Actant
To respond to this challenge, Minna Vigren and Luis Lozano Paredes, in Struggling With the Black Box: Mapping Bias and Agency in Using Generative Artificial Intelligence in Qualitative Method Development, propose using actor-network theory (ANT) as an analytical framework. They treat GenAI as an “actant” in knowledge production, that is, a human or nonhuman entity that acts within a network (Latour, 2007). In qualitative method development, GenAI is often mistakenly treated as a passive tool. But from the perspective of ANT, it becomes clear that GenAI is an active participant. It interacts with human researchers, institutional infrastructures, and societal relations. It mediates meaning-making, influences interpretative practices, and embeds its own epistemic logic into research settings, thereby shaping the research process itself. Vigren and Lozano Paredes also keep the idea of “AI-as-assemblage” in view, so that the heterogeneous elements behind AI do not disappear from analysis. This dual framing, combining Latour’s actor-network theory (Latour, 2007) with Law’s concept of method assemblage (Law, 1992), allows researchers to explore the realities AI helps compose while keeping its material-extractive underpinnings visible.
To make these questions more concrete, Vigren and Lozano Paredes use a workshop called “Images of the Future” as a case. In this workshop, young people used the text-to-image GenAI tool Wombo Dream to visualize desirable futures. The project grew out of the need to develop new methodological approaches that could strengthen people’s capacity to imagine alternative futures, and in doing so, challenge the assumption that present social conditions are inevitable (Galafassi et al., 2018; Yusoff & Gabrys, 2011). Imagination here is understood as a collective skill that can be trained, rather than something that should be left only to the few in power (Eskelinen et al., 2020; Galafassi et al., 2018; Salmenniemi et al., 2024).

What the “Images of the Future” Workshop Revealed
During the workshop, many participants felt that using AI to generate an image was easier than starting from a blank sheet of paper. In this sense, GenAI did help spark ideation. But when the participants’ own imaginaries collided with the assumptions encoded in Wombo Dream, bias started to show. For instance, when participants entered prompts such as “sustainable community,” the system repeatedly produced high-rise, technologically saturated landscapes: greened images of vertical urbanism that quietly pushed aside other, more communitarian imaginaries. In this moment, Wombo Dream worked as an epistemic filter. It encoded its own assumptions about what progress or sustainability should look like. These moments of friction echo a key point in ANT: networks remain stable only through ongoing negotiations between actants (Goodwin & Kuehn, 2021). Interestingly, by arguing back against the AI-generated image, participants were pushed to sharpen their own imagination. They had to refine or correct the visual output so that it better reflected the future they actually desired.
This shows how AI systems embody biases through their training data and classification choices (Akter et al., 2021; Crawford, 2021). From ideological and political leanings to representation gaps, AI systems mirror the social and institutional environments that shape them (Lindgren, 2024). These biases do not stay inside the technical system. As AI increasingly shapes public knowledge and perception, its social consequences become wider and deeper (Motoki et al., 2024; Rozado, 2024). Image generation models such as Wombo Dream have been shown to carry a comparatively conservative bias, with alignment with right-wing stereotypes increasing notably between 2023 and 2024. When AI is used to imagine futures, its outputs do not simply present an open field of possible scenarios. Instead, they subtly frame certain futures as “desirable” while pushing others to the margins.
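Wombo Dream exposes no public API, so this kind of bias is hard to audit programmatically in the tool itself. But the probe is straightforward to sketch against an open text-to-image model. The snippet below is a minimal illustration using the Hugging Face diffusers library and an open Stable Diffusion checkpoint as a stand-in: it generates a fixed-seed batch of images for a single prompt so the outputs can be qualitatively coded before the tool ever reaches participants. The checkpoint name, prompt, and sample size are illustrative assumptions, not details from the workshop.

```python
# Minimal bias-probe sketch: generate repeated samples for one prompt and
# save them for qualitative coding. Uses an open Stable Diffusion checkpoint
# via the `diffusers` library as a stand-in for Wombo Dream, which has no
# public API. All names and parameters here are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # assumption: any open checkpoint works
    torch_dtype=torch.float16,
).to("cuda")

prompt = "sustainable community"  # the workshop prompt discussed above
n_samples = 20                    # illustrative sample size

for seed in range(n_samples):
    # Fixed seeds make the probe reproducible across audit sessions.
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]
    # The saved batch can then be open-coded for recurring motifs (e.g.,
    # vertical urbanism vs. communal low-rise settings) before deployment.
    image.save(f"probe_{prompt.replace(' ', '_')}_{seed:02d}.png")
```

Even a crude probe like this makes a model’s default imaginary visible in advance, which is the spirit of the predeployment scrutiny the authors argue for.
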
What Hides Inside the Black Box
As Vigren and Lozano Paredes further unpack the black box of GenAI, they show that it is deeply embedded in what Crawford (2021) calls the “extractive system of computation.” Through the ANT concepts of “punctualization” and “depunctualization,” the black box can be partly reopened. When a system runs smoothly, it becomes punctualized: the whole network collapses into what appears to be one unified actant. Through depunctualization, researchers can reopen that single point and reveal the training data, ownership structures, and environmental costs that make the system work (Latour, 2007; Law, 1992). For qualitative researchers, this “zoom-in/zoom-out” perspective is crucial.
At the institutional level, Vigren and Lozano Paredes’ study reveals the structural friction between corporate AI and academic governance. The university hosting the research project focused mainly on data security rather than broader ethical questions or the opacity of GenAI systems. As a result, auditing these systems became the individual researcher’s responsibility, even though doing so was almost impossible. For example, AI models are often built through large-scale capture of digital material without creators’ attribution or consent, while training data and classification schemes remain extremely difficult for outsiders to audit (Crawford, 2021). On top of that, the data labeling required to train large language models is often outsourced to “ghost workers” in the Global South, who earn poverty-level wages, work under precarious conditions, and face heavy surveillance (Gray & Suri, 2019; Hurlburt, 2023; Regilme, 2024).
Environmental cost is another hidden layer inside the black box. Although AI is often promoted as a driver of sustainability transformations, this techno-solutionist rhetoric tends to overlook its enormous ecological consequences. The carbon footprint of AI includes not only the energy needed for model training and deployment, but also the operation of data centers and, in some assessments, the manufacturing of the underlying hardware infrastructure (de Vries, 2023; Strubell et al., 2019). Beyond energy use, AI also consumes large amounts of freshwater (Li et al., 2023; Ren & Wierman, 2024), and causes environmental harm through mineral extraction for hardware (Crawford, 2021). Data centers take up land, consume building materials, and often become obsolete within short time spans, adding to the growing problem of electronic waste (Baldé et al., 2024; Velkova, 2019).
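The scale of these costs can be made concrete with back-of-envelope arithmetic. The sketch below applies the standard accounting identities (energy = hardware power × hours × data-center overhead; carbon = energy × grid intensity; water = energy × water-use effectiveness). Every numeric input is an illustrative assumption for a hypothetical training run, not a measured figure from the studies cited above.

```python
# Back-of-envelope footprint arithmetic for a hypothetical training run.
# Formulas are standard accounting identities; ALL numbers below are
# illustrative assumptions, not figures from the sources cited above.

gpu_count = 64            # assumed number of accelerators
gpu_power_kw = 0.4        # assumed average draw per accelerator, kW
hours = 24 * 14           # assumed two-week training run
pue = 1.5                 # power usage effectiveness: data-center overhead
grid_kgco2_per_kwh = 0.4  # assumed grid carbon intensity, kg CO2e per kWh
wue_l_per_kwh = 1.8       # assumed water usage effectiveness, litres per kWh

energy_kwh = gpu_count * gpu_power_kw * hours * pue
carbon_kg = energy_kwh * grid_kgco2_per_kwh
water_l = energy_kwh * wue_l_per_kwh

print(f"Energy: {energy_kwh:,.0f} kWh")          # ~12,900 kWh
print(f"Carbon: {carbon_kg / 1000:,.1f} t CO2e")  # ~5.2 t
print(f"Water:  {water_l / 1000:,.1f} m^3")       # ~23.2 m^3
```

Even this toy calculation omits hardware manufacturing, end-of-life waste, and large-scale inference, the very layers the assemblage perspective insists on keeping in view.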

Researchers Are Also in the Network
Researchers, then, occupy a crucial position within this complex AI assemblage. When we use GenAI in research, we are also participating in the promotion and domestication of these technologies. We help normalize them within social and market frameworks, and may end up serving corporate interests. Wombo Dream is a telling example. Although the app presents itself as “building the happiest place on the Internet,” it later became involved in controversies around political content creation, such as its “Trumpify” photo frame, as well as legal disputes over the alleged mishandling of users’ biometric information. This makes one thing clear: AI tools are far from neutral. They are shaped by broader political and economic agendas, and they can reinforce profit-driven motives and centralized control.
Faced with these tangled challenges, Vigren and Lozano Paredes emphasize the importance of critical AI literacy. Although the workshop originally focused on image generation, it organically became a space for critical reflection. Participants were able to think about how AI, along with its ethical, social, and environmental complexities, is shaping the world and what kind of technological world we actually want to live in. The responsibility of researchers can be understood in three dimensions. First is epistemic responsibility: recognizing GenAI as an active mediator that shapes research outcomes, rather than a neutral instrument. Second is infrastructural responsibility: tracing the socio-material conditions that make GenAI possible, including labor and environmental costs. Third is pedagogical responsibility: fostering critical AI literacy when research involves human participants interacting with GenAI.
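Vigren and Lozano Paredes’ own protocol is more elaborate than anything that fits here, but the three responsibilities can be sketched as a simple predeployment checklist. In the illustration below, the three dimension names come from the article as summarized above, while the data structure and the individual questions are invented for this post.

```python
# Hypothetical predeployment checklist organized around the three
# responsibilities named above. The dimension names come from the text;
# the individual questions are invented here for illustration.
from dataclasses import dataclass, field

@dataclass
class AuditDimension:
    name: str
    questions: list[str]
    notes: dict[str, str] = field(default_factory=dict)  # question -> finding

    def unanswered(self) -> list[str]:
        # Questions with no recorded finding are still open.
        return [q for q in self.questions if q not in self.notes]

audit = [
    AuditDimension("epistemic", [
        "Which meanings does the tool foreground or suppress for our prompts?",
        "How will its outputs mediate participants' own interpretations?",
    ]),
    AuditDimension("infrastructural", [
        "Whose labor produced the training data and its labels?",
        "What energy, water, and hardware does our usage implicate?",
        "Who owns the model, and under what terms of service?",
    ]),
    AuditDimension("pedagogical", [
        "How will participants be briefed on the tool's biases and costs?",
        "What space exists for them to contest or correct its outputs?",
    ]),
]

for dim in audit:
    print(f"{dim.name}: {len(dim.unanswered())} question(s) still open")
```
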
Learning to Live with the Black Box
Through self-reflection on method development and actor-network analysis, Vigren and Lozano Paredes argue that the black box is not simply an obstacle to understanding. It is a recursive condition of contemporary AI systems. Their contribution lies in developing a systematic protocol that makes GenAI’s agency analytically tractable in the context of method development. This protocol gives researchers a predeployment auditing framework and shows that opacity is not a single barrier but a property that circulates through institutional, corporate, and infrastructural layers.
In qualitative research, GenAI should not be treated as a shortcut to efficiency. Instead, researchers should approach it as a collaborative actant and use ANT to turn its opacity into an object of critical inquiry. This approach encourages researchers to recognize GenAI as active and influential, rather than treating it as context-free or neutral. As Latour suggests, the value of ANT does not lie in stabilizing networks, but in making their fragilities, breakdowns, and potentials visible.
Looking ahead, we need to ask how researchers can resist the hegemonic narratives of the AI industry, and how our research interventions might take part in radically reimagining AI’s technological development and its role in society. When bringing GenAI into the methodological toolkit, qualitative researchers need rigorous ethical and methodological scrutiny. They must examine not only the tool’s biases, but also its governance implications, ideology and politics, and long-term epistemic consequences. Living with the black box does not simply mean accepting its unknowability. It means mapping, challenging, and negotiating its agency at every stage of the methodological process. This is a difficult and often uncomfortable struggle. But it is precisely this struggle that offers a path for protecting the integrity of academic work and using new technologies responsibly in an age of multiplying technological black boxes. In this way, qualitative research can do more than reveal social reality. Through its ongoing interactions with nonhuman actants, it can also help reconstruct a more reflexive methodological landscape.
References
Akter, S., McCarthy, G., Sajib, S., Michael, K., Dwivedi, Y. K., D’Ambra, J., & Shen, K. N. (2021). Algorithmic bias in data-driven innovation in the age of AI. International Journal of Information Management, 60, Article 102387. https://doi.org/10.1016/j.ijinfomgt.2021.102387
Baldé, C. P., Kuehr, R., Yamamoto, T., McDonald, R., D’Angelo, E., Althaf, S., Bel, G., Deubzer, O., Fernandez-Cubillo, E., Forti, V., Gray, V., Herat, S., Honda, S., Iattoni, G., Khetriwal, D. S., Luda di Cortemiglia, V., Lobuntsova, Y., Nnorom, I., Pralat, N., & Wagner, M. (2024). Global e-waste monitor 2024. International Telecommunication Union (ITU) and United Nations Institute for Training and Research (UNITAR). https://ewastemonitor.info/wp-content/uploads/2024/12/GEM_2024_EN_11_NOV-web.pdf
Burton, J. W., Lopez-Lopez, E., Hechtlinger, S., Rahwan, Z., Aeschbach, S., Bakker, M. A., Becker, J. A., Berditchevskaia, A., Berger, J., Brinkmann, L., Flek, L., Herzog, S. M., Huang, S., Kapoor, S., Narayanan, A., Nussberger, A.-M., Yasseri, T., Nickl, P., Almaatouq, A., ... Hertwig, R. (2024). How large language models can reshape collective intelligence. Nature Human Behaviour, 8(9), 1643–1655. https://doi.org/10.1038/s41562-024-01959-9
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
Deleuze, G., & Guattari, F. (1988). A thousand plateaus: Capitalism and schizophrenia. Bloomsbury Publishing.
de Vries, A. (2023). The growing energy footprint of artificial intelligence. Joule, 7(10), 2191–2194. https://doi.org/10.1016/j.joule.2023.09.004
Eskelinen, T., Lakkala, K., & Laakso, M. (2020). Introduction: Utopias and the revival of imagination. In T. Eskelinen (Ed.), The revival of political imagination: Utopias as methodology (pp. 3–19). Bloomsbury.
Galafassi, D., Tàbara, D. J., & Heras, M. (2018). Restoring our senses, restoring the Earth: Fostering imaginative capacities through the arts for envisioning climate transformations. Elementa: Science of the Anthropocene, 6(69), Article 330. https://doi.org/10.1525/elementa.330
Goodwin, S., & Kuehn, E. (2021). Latour’s hotel keys: An actor-network ontology. NASKO, 8, Article 15871. https://doi.org/10.7152/nasko.v8i1.15871
Gray, M. L., & Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. HarperCollins.
Hurlburt, G. (2023). What if ethics got in the way of generative AI? IT Professional, 25(2), 4–6. https://doi.org/10.1109/MITP.2023.3267140
Latour, B. (2007). Reassembling the social: An introduction to actor-network-theory. Oxford University Press.
Law, J. (1992). Notes on the theory of the actor-network: Ordering, strategy, and heterogeneity. Systems Practice, 5(4), 379–393. https://doi.org/10.1007/BF01059830
Li, P., Yang, J., Islam, M. A., & Ren, S. (2023). Making AI less “thirsty”: Uncovering and addressing the secret water footprint of AI models. arXiv. https://doi.org/10.48550/arXiv.2304.03271
Lindgren, S. (2024). Critical theory of AI. Polity Press.
Motoki, F., Pinho Neto, V., & Rodrigues, V. (2024). More human than human: Measuring ChatGPT political bias. Public Choice, 198(1), 3–23. https://doi.org/10.1007/s11127-023-01097-2
Perkins, M., & Roe, J. (2024). Generative AI tools in academic research: Applications and implications for qualitative and quantitative research methodologies. arXiv. https://doi.org/10.48550/arXiv.2408.06872
Regilme, S. S. F. (2024). Artificial intelligence colonialism: Environmental damage, labor exploitation, and human rights crises in the Global South. SAIS Review of International Affairs, 44(2), 75–92. https://doi.org/10.1353/sais.2024.a950958
Ren, S., & Wierman, A. (2024). The uneven distribution of AI’s environmental impacts. Harvard Business Review. https://hbr.org/2024/07/the-uneven-distribution-of-ais-environmental-impacts
Roberts, B., & Bassett, C. (2023). Automation anxiety: A critical history. In S. Lindgren (Ed.), The handbook of critical studies of artificial intelligence (pp. 79–93). Edward Elgar. https://doi.org/10.4337/9781803928562.00012
Rozado, D. (2024). The political preferences of LLMs. PLOS ONE, 19(7), Article e0306621. https://doi.org/10.1371/journal.pone.0306621
Salmenniemi, S., Porkola, P., & Ylöstalo, H. (2024). Political imagination and utopian pedagogy. Critical Arts, 38(4–5), 24–39. https://doi.org/10.1080/02560046.2023.2299450
Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 3645–3650). Association for Computational Linguistics. https://aclanthology.org/P19-1355/
Velkova, J. (2019). Data centers as impermanent infrastructures. Culture Machine. https://culturemachine.net/wp-content/uploads/2019/04/VELKOVA.pdf
Vigren, M., & Lozano Paredes, L. (2026). Struggling with the black box: Mapping bias and agency in using generative artificial intelligence in qualitative method development. Qualitative Inquiry. https://doi.org/10.1177/10778004261434487
Yusoff, K., & Gabrys, J. (2011). Climate change and the imagination. Wiley Interdisciplinary Reviews: Climate Change, 2(4), 516–534. https://doi.org/10.1002/wcc.117