
The Ethics of Generative AI in Social Science: Three Narratives from 17 Early-Career Scholars

Generative AI in Social Science

Generative Artificial Intelligence (generative AI) is profoundly transforming the way social science research is conducted. It can write, code, translate, assist with literature searches, and even generate materials for experiments (Berg, 2023; Dabić et al., 2023; Lin, 2023; Lund & Wang, 2023; Porsdam Mann, Earp, Møller, et al., 2023). Yet discussions about its ethical implications in academia largely remain at the “principles checklist” level—a list of agreed-upon “dos and don’ts” (Much to discuss in AI ethics, 2022; Schlagwein & Willcocks, 2023).

 

While this approach has value, it struggles to address the complex dilemmas researchers face in real academic contexts. To explore this further, the authors conducted interviews with 17 early-career social scientists, using their stories to examine what generative AI means in academia and to distill three typical narratives: the Equalizer Narrative, the Meritocracy Narrative, and the Community Narrative.

 

How was the research conducted?


The participants were recruited from computational social science workshops in South Korea, including doctoral students, master’s students, and postdoctoral researchers. They came from various disciplines and educational backgrounds, with most affiliated with well-resourced research universities. A common trait was that they were in the early stages of institutionalization in academia, learning to use AI while also adapting to the rule changes AI is bringing to their fields.


1. Equalizer Narrative


The central question here is whether generative AI can help reduce or further entrench existing inequalities in the social sciences. Many interviewees’ initial response was to view AI as a means of promoting academic equality. In particular, they emphasized its potential to bridge gaps caused by limited resources, such as the lack of supervisor guidance, language barriers and insufficient programming skills. For non-native English speakers, AI offers discreet and affordable forms of support, such as text polishing and translation, which can effectively lower cultural and academic barriers.

 

At the same time, several scholars expressed concern that AI could deepen hidden inequalities. Overreliance on AI by early-career researchers might reduce their motivation to develop fundamental skills, creating a divide between those who have mastered methodological foundations and those who depend heavily on AI. In the realm of language, native English speakers may be better positioned to judge the quality of AI-assisted editing, potentially widening the linguistic gap. Furthermore, the standardized English style generated by AI risks reinforcing academic cultural hegemony, leaving some researchers feeling alienated from their own scholarly identity.

 

Overall, interviewees were more concerned with how AI interacts with existing academic structures than with its technical functions in isolation.

Generative AI as an Academic Equalizer

 

2. Meritocracy Narrative


The key question in this narrative is whether researchers who can use AI effectively should be regarded as having greater academic competence. One perspective argues that proficiency with AI should be considered part of a researcher’s scholarly skill set. Effective use of AI requires sustained training and strategic practice and is far from effortless. Many participants likened generative AI to other established research tools, such as R, SPSS, or Google Search, arguing that it is simply another skill and that those who use it well deserve recognition.

 

On the other hand, some saw AI as a potential driver of an unequal distribution of benefits, since not all disciplines or methodological traditions stand to gain equally. For instance, quantitative research often benefits more, while qualitative research may be further marginalized. In addition, advanced AI services may become commercialized and costly, exacerbating economic disparities among scholars.

 

AI Skills and Academic Merit

3. Community Narrative


This narrative focuses on the long-term implications of generative AI for the academic community and its institutional systems. Scholars’ views here diverged into two camps—those highlighting positive contributions and those warning of risks. On the positive side, AI can improve research efficiency and rigor, for example by automating model testing and reducing redundant text. It can also refine the style of academic writing, making papers more concise, and could in the future assist with peer review, easing reviewer shortages. Finally, AI’s translation and information services can expand the readership of academic work, thereby enhancing scholars’ social responsibility and extending their influence.

 

Yet alongside these benefits, participants also pointed to significant risks. AI might reduce the quality of literature reviews, leading to homogenization and mechanization as well as undermining creativity in research. Its inconsistent and non-reproducible outputs also pose a challenge to the verifiability of science. Moreover, some interviewees feared that the surge in research output enabled by AI would outpace the capacity for peer review, potentially diminishing overall academic quality.

AI’s Impact on the Academic Community

 

Conclusion


Theoretically, drawing on the analytical frameworks of science and technology studies (STS) and the philosophy of technology, this paper introduces the perspective of ethics-in-practice, emphasizing that ethical judgments must be made in connection with specific contexts and institutional conditions. At the same time, it advocates for continuous ethical scrutiny throughout the entire AI technology lifecycle to ensure that its application upholds both social responsibility and academic integrity.

 

Overall, this study contends that the research ethics of generative artificial intelligence should not rely solely on universal and abstract technological regulations, but should instead be institutionally grounded in specific academic contexts. Effective ethical norms need to be formulated through democratic discussions and deliberations at multiple levels, including colleges, departments, and disciplines. The root of ethical issues often does not lie in technology itself as an isolated factor, but rather in its interaction with existing structural inequalities. Therefore, universities and academic communities should regard the emergence of generative AI as an opportunity to examine and improve their own structural problems, rather than merely focusing on preventing or rejecting its potential risks.








References:

Berg, C. (2023). The case for generative AI in scholarly practice. Available at SSRN 4407587.

Dabić, M., Maley, J. F., Švarc, J., & Poček, J. (2023). Future of digital work: Challenges for sustainable human resources management. Journal of Innovation & Knowledge, 8(2), 100353.

Much to discuss in AI ethics. (2022). Nature Machine Intelligence, 4, 1055–1056.

Jeon, J., Kim, L., & Park, J. (2025). The ethics of generative AI in social science research: A qualitative approach for institutionally grounded AI research ethics. Technology in Society, 81, 102836.

Lin, Z. (2023). Why and how to embrace AI such as ChatGPT in your academic life. Royal Society Open Science, 10(8), 230658.

Lund, B. D., & Wang, T. (2023). Chatting about ChatGPT: How may AI and GPT impact academia and libraries? Library Hi Tech News, 40(3), 26–29.

Porsdam Mann, S., Earp, B. D., Møller, N., Vynn, S., & Savulescu, J. (2023). AUTOGEN: A personalized large language model for academic enhancement—Ethics and proof of principle. The American Journal of Bioethics, 23(10), 28–41.

Schlagwein, D., & Willcocks, L. (2023). ‘ChatGPT et al.’: The ethics of using (generative) artificial intelligence in research and science. Journal of Information Technology, 38(3), 232–238.

 
 
 
