
The Pros and Cons of AI-Assisted Academic Research: From Pursuit of Efficiency to Rigorous Truth-Seeking

Artificial intelligence (AI) is transforming the landscape of academic research at an astonishing speed. From idea generation to data production, from writing to journal publishing, AI is infiltrating every link of the academic ecosystem (Van Quaquebeke et al., 2025). In this article published in The Leadership Quarterly, Van Quaquebeke, Tonidandel, and Banks point out that we are already in an era where AI deeply participates in the production of knowledge. The real question is no longer “Should we use AI?” but rather “How can we use it with principles and reflection?”


A Companion for Inspiration or a Driver of Homogenization? – How AI Reshapes Research Design


Generative AI is becoming an incubator for academic inspiration. Researchers use large language models (LLMs) for brainstorming, generating research questions, and even surfacing overlooked cross-disciplinary topics (Bianchini et al., 2022). One empirical study even found that AI-generated research ideas were rated as more novel than ideas proposed by more than 100 NLP experts (Si et al., 2024). Nor is it only beginners who benefit: experienced scholars can break out of fixed thinking patterns by counter-prompting the AI. AI can even help researchers check whether similar projects have already been published, thereby safeguarding originality (Skarlinski et al., 2025).


However, if many scholars rely on the same AI models, this convenience comes at a cost. AI tends to generate "average" answers, compressing the space for truly disruptive questions and producing what has been called an "algorithmic monoculture" (Holzner et al., 2025). At the same time, hot-topic prediction tools may induce researchers to select topics opportunistically, falling into the trap of Goodhart's law: when a measure becomes a target, it ceases to be a good measure.


Therefore, the authors suggest using AI to spark creativity while deliberately feeding it heterogeneous material and asking counterintuitive questions, such as "Which perspectives have been systematically overlooked?" or "What implicit assumptions exist in this field?" (Lee & Chung, 2024). They also propose a concrete workflow: first think individually, then interact with AI, and finally discuss the generated ideas as a team. This "offline-AI-team" iteration, sketched below, can effectively prevent collective convergence (Paulus & Kenworthy, 2019).
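To make the workflow concrete, the counter-prompting step might look like the following minimal sketch. It assumes the OpenAI Python SDK; the model name, draft ideas, and prompts are illustrative placeholders, not a protocol from the paper.

```python
# Minimal sketch of the "offline-AI-team" counter-prompting step.
# Assumes the OpenAI Python SDK; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1 (offline): ideas written down before consulting any AI.
own_ideas = [
    "Leader humility buffers team burnout under remote work",
    "Algorithmic feedback erodes follower trust over time",
]

# Step 2 (AI): counter-prompts that push against convergence instead of
# asking the model to merely extend the researcher's existing thinking.
counter_prompts = [
    "Which perspectives on this topic have been systematically overlooked?",
    "What implicit assumptions do these draft ideas share?",
]

for prompt in counter_prompts:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a critical sparring partner for research design."},
            {"role": "user",
             "content": f"My draft ideas: {own_ideas}. {prompt}"},
        ],
    )
    print(response.choices[0].message.content)

# Step 3 (team): the printed critiques feed a group discussion; the team,
# not the model, decides which ideas survive.
```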


A New Era of Literature Understanding: Coexisting Efficiency, Depth, and Bias


AI’s potential in literature review is equally remarkable. One study reports that within just two days, AI reproduced and updated the core findings of 12 Cochrane systematic reviews, the equivalent of roughly 12 years of manual work (Cao et al., 2025). Researchers can now use tools like AlphaXiv or NotebookLM to "converse" with articles or to generate visual maps that reveal hidden relationships among theoretical hypotheses. AI can also help address the "jingle-jangle" problem, the situation in which the same term refers to different concepts or different terms refer to the same concept, which is particularly acute in construct-heavy fields like leadership research (Banks et al., 2018; Cronin et al., 2022).
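As a toy illustration of how such construct overlap could be screened (not the paper's method): comparing construct definitions by TF-IDF cosine similarity flags "jangle" candidates, that is, different labels with near-identical content. The definitions and the 0.4 threshold below are invented for the example.

```python
# Toy screen for "jangle" (different labels, overlapping content) by
# comparing construct definitions with TF-IDF cosine similarity.
# Definitions and the 0.4 threshold are illustrative, not from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

constructs = {
    "ethical leadership": "demonstrating normatively appropriate conduct and promoting it to followers",
    "moral leadership": "modeling normatively appropriate conduct and encouraging followers to do the same",
    "abusive supervision": "sustained display of hostile verbal and nonverbal behavior toward subordinates",
}

labels = list(constructs)
matrix = TfidfVectorizer().fit_transform(constructs.values())
sims = cosine_similarity(matrix)

for i in range(len(labels)):
    for j in range(i + 1, len(labels)):
        if sims[i, j] > 0.4:  # arbitrary screening threshold
            print(f"possible jangle: {labels[i]} ~ {labels[j]} ({sims[i, j]:.2f})")
```

A flagged pair is only a candidate for human review; deciding whether two constructs are genuinely redundant remains a theoretical judgment, not a similarity score.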


However, AI-powered review systems come with major risks. Their training corpora often exclude literature behind paywalls, which leads to the possible omission of important voices. A large-scale analysis found that GPT-4o-generated references favor short titles, newly published articles, and high-impact journals, thereby amplifying the Matthew effect (Algaba et al., 2025). A deeper risk is that AI may integrate philosophically conflicting schools of thought into a “smooth” narrative, weakening theoretical tension in the research. The authors remind us that AI can serve as an excellent “assistant,” but it must not become the “arbiter” of review conclusions (Malik & Terzidis, 2025).

From inspiration to algorithmic convergence

Designing Research: AI Can Map the Way but Cannot Author the Journey


AI can assist not only with idea generation but also participate directly in research design, including variable manipulation, questionnaire development, and translation refinement. Compared to the traditional Brislin back-translation method, AI-assisted translation (e.g., DeepL) can better retain semantic detail while handling cultural differences (Brislin, 1970). AI can also draw on a large corpus of methodological literature to help plan statistical strategies or select control variables (Van Quaquebeke et al., 2025). AI's ability to simulate samples is becoming increasingly mature, allowing researchers to estimate potential effects before conducting fieldwork (Argyle et al., 2023; Ashokkumar et al., 2024). However, the authenticity of these "virtual samples" remains an open issue. Experiments designed by AI may be theoretically perfect yet overlook the complexity of real-world environments (Mei et al., 2024).
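A persona-conditioned pilot in the spirit of Argyle et al. (2023) might be sketched as follows. The personas, survey item, and model name are invented for illustration, and the output is "theoretical data" at best, not a substitute for fieldwork.

```python
# Sketch of a persona-conditioned "virtual sample", loosely following the
# persona-prompting idea in Argyle et al. (2023). Personas, item, and model
# are illustrative placeholders only.
from openai import OpenAI

client = OpenAI()

personas = [
    "a 29-year-old nurse working night shifts in a public hospital",
    "a 55-year-old middle manager in a manufacturing firm",
]
item = ("My supervisor's feedback helps me improve. "
        "(1 = strongly disagree, 5 = strongly agree)")

for persona in personas:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": f"Answer survey items as {persona}. Reply with a single number."},
            {"role": "user", "content": item},
        ],
    )
    print(persona, "->", response.choices[0].message.content)
```

Such a pilot can hint at effect direction or expose ambiguous item wording before data collection, but, as noted above, whether these simulated answers resemble real respondents is contested.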


The most fundamental risk is that by handing design decisions over to AI, researchers may lose control of the methodological process and drift away from a hands-on understanding of reality. The authors therefore emphasize that AI suggestions should serve only as starting points, not final decisions; judgment about research methods must remain with human researchers (Resnik & Hosseini, 2025).


Data Generation: Innovative Experiment Participants or Polluted Data Sources?


AI now not only generates virtual data but can also act as an experimental participant (an "AI confederate"), simulating complex social interactions. For instance, AI can play a "bad teammate" in experiments, responding flexibly to context rather than following a static script (Krafft et al., 2016). AI is also already capable of collecting interaction data from digital platforms such as Slack and Zoom, and of conducting large-scale digital ethnographies. Some have even deployed AI under fictional identities in Reddit community discussions, without participants' consent, to test persuasive effectiveness. All of these practices raise ethical alarms.


When participants suspect that their counterpart is not human, the authenticity of the research data comes into question. More seriously, participants themselves may use AI to assist their responses, "polluting" the data from the other direction. The authors therefore advocate that any AI intervention in an experiment be disclosed transparently, and that AI-generated data be treated only as "theoretical data," not as a substitute for empirical investigation (von Krogh et al., 2023).

AI: an assistant, not a judge

Data Analysis: Opening the Door to Literacy or Obscuring Understanding?


AI has lowered the threshold for advanced statistical analysis and is particularly effective for text analysis and data visualization (Feuerriegel et al., 2025). Users can even upload tables, images, or SPSS output and have AI detect outliers and explain what they mean. However, today's AI is overly "confident," which may lead beginners to ignore underlying assumptions. Sometimes AI makes "logically correct" errors: its pattern recognition may be technically accurate but contextually wrong (Wenzel & Van Quaquebeke, 2018). Analytical results therefore still require human researchers to cross-verify them against theoretical and methodological frameworks.
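Cross-verification can be as simple as re-checking AI-flagged points against a rule the researcher fully understands. Below is a hypothetical sketch using the conventional 1.5 × IQR criterion; the data and the AI-flagged indices are made up for illustration.

```python
# Cross-check outliers that an AI assistant flagged against a transparent
# rule the researcher controls (here, the conventional 1.5 * IQR criterion).
# Values and the flagged indices are invented for illustration.
import numpy as np

scores = np.array([3.1, 3.4, 2.9, 3.2, 3.3, 9.8, 3.0, 3.5, 0.4, 3.2])
ai_flagged = {5, 9}  # indices a (hypothetical) AI assistant called outliers

q1, q3 = np.percentile(scores, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
rule_flagged = {i for i, x in enumerate(scores) if x < lower or x > upper}

print("AI only:", ai_flagged - rule_flagged)    # AI claims needing scrutiny
print("Rule only:", rule_flagged - ai_flagged)  # points the AI missed
print("Agreement:", ai_flagged & rule_flagged)
```

On this toy data the rule flags indices 5 and 8, so the AI's claim about index 9 deserves scrutiny and its silence about index 8 is a miss; either way, the human keeps the final word.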


Writing and Expression: A Tool for Acceleration or a Risk of Losing Voice?


AI-assisted writing can effectively alleviate “blank page anxiety” and improve output quality. One study showed that with the help of ChatGPT, participants’ writing time was reduced by 40%, and quality increased by 18% (Noy & Zhang, 2023). AI can also help non-native English speakers polish grammar and adjust tone, making it easier to meet academic publication standards (Van Quaquebeke et al., 2025).


However, the issue is that AI-generated text tends to be stylistically homogenized and tonally "bland." More seriously, AI may cite false or fabricated references, which is particularly dangerous when writing literature reviews (Letrud & Hernes, 2019). In addition, while AI improves individual writing capability, it may reduce opportunities for collaborative writing, thereby weakening communication and critical exchange within the academic community (Hoffmann et al., 2024). In light of these issues, the authors present their own viewpoint: the essence of academic writing lies not in structural optimization but in preserving the author's voice, which is the core of scholarly tradition.


Publishing and Dissemination: From Closed Processes to “Living Papers”


AI is also reshaping the journal editorial process. From initial manuscript screening and reviewer matching to generating editorial comments and revision tracking, AI can participate efficiently (Checco et al., 2021). It can even assist authors in drafting response letters, integrating feedback from multiple review rounds to enhance communication efficiency (Van Quaquebeke & Gerpott, 2024).


However, at the same time, AI may replicate historical biases and suppress novel but non-mainstream submissions (Pataranutaporn et al., 2025). An AI-optimized “good article” may merely be overfitted to current metrics, rather than a true knowledge breakthrough. More dangerously, we may unknowingly begin “writing for AI” instead of writing for scholarship (Naddaf, 2025).

Think deeper, not faster

Conclusion: Use AI to Reduce Burden and Amplify Reflection; Use AI to Empower, Not Replace Judgment


In conclusion, the authors state that AI may come to handle nearly every academic process, but the real danger is not that it replaces humans; it is that scholars voluntarily conform to AI's logic and lose their imagination. We must therefore use the time AI saves to "think more deeply," not merely to "replicate more quickly" (Messeri & Crockett, 2024). Human researchers retain an irreplaceable mission: to pose valuable questions and to preserve a spirit of critique and an independent voice. AI can improve efficiency, but the depth and meaning of knowledge must still be created by us.




References:

Algaba, A., Holst, V., Tori, F., Mobini, M., Verbeken, B., Wenmackers, S., & Ginis, V. (2025). How deep do large language models internalize scientific literature and citation practices? https://doi.org/10.48550/arXiv.2504.02767

Argyle, L. P., Busby, E. C., Fulda, N., Gubler, J. R., Rytting, C., & Wingate, D. (2023). Out of one, many: Using language models to simulate human samples. Political Analysis, 31(3), 337–351. https://doi.org/10.1017/pan.2023.2

Ashokkumar, A., Hewitt, L., Ghezae, I., & Willer, R. (2024). Predicting results of social science experiments using large language models. https://docsend.com/view/ity6yf2dansesucf

Banks, G. C., Gooty, J., Ross, R. L., Williams, C. E., & Harrington, N. T. (2018). Construct redundancy in leader behaviors: A review and agenda for the future. The Leadership Quarterly, 29(1), 236–251. https://doi.org/10.1016/j.leaqua.2017.12.005

Bianchini, S., Müller, M., & Pelletier, P. (2022). Artificial intelligence in science: An emerging general method of invention. Research Policy, 51(10), 104604. https://doi.org/10.1016/j.respol.2022.104604

Brislin, R. W. (1970). Back‑translation for cross‑cultural research. Journal of Cross‑Cultural Psychology, 1(3), 185–216. https://doi.org/10.1177/135910457000100301

Cao, C., Arora, R., Cento, P., Manta, K., Farahani, E., Cecere, M., Selemon, A., Sang, J., Gong, L. X., Kloosterman, R., Jiang, S., Saleh, R., Margalik, D., Lin, J., Jomy, J., Xie, J., Chen, D., Gorla, J., Lee, S., & Bobrovitz, N. (2025). Automation of systematic reviews with large language models. https://doi.org/10.1101/2025.06.13.25329541

Checco, A., Bracciale, L., Loreti, P., Pinfield, S., & Bianchi, G. (2021). AI-assisted peer review. Humanities & Social Sciences Communications, 8(1), 1–11. https://doi.org/10.1057/s41599-020-00703-8

Cronin, M. A., Stouten, J., & van Knippenberg, D. (2022). Why Theory on “How Theory Fits Together” Benefits Management Scholarship. The Academy of Management Review, 47(2), 333–337. https://doi.org/10.5465/amr.2021.0517

Feuerriegel, S., Maarouf, A., Bär, D., Geissler, D., Schweisthal, J., Pröllochs, N., ... & Van Bavel, J. J. (2025). Using natural language processing to analyse text data in behavioural science. Nature Reviews Psychology, 4(2), 96–111.

Hoffmann, M., Boysel, S., Nagle, F., Peng, S., & Xu, K. (2024). Generative AI and the nature of work (CESifo Working Paper No. 11479). CESifo.

Hoffmann, S., Lasarov, W., & Dwivedi, Y. K. (2024). AI-empowered scale development: Testing the potential of ChatGPT. Technological Forecasting and Social Change, 205, 123488.

Holzner, N., Maier, S., & Feuerriegel, S. (2025). Generative AI and creativity: A systematic literature review and meta-analysis. arXiv. https://doi.org/10.48550/arXiv.2505.17241

Krafft, P. M., Macy, M., & Pentland, A. (2016). Bots as virtual confederates: Design and ethics. arXiv. https://doi.org/10.48550/arXiv.1611.00447

Lee, B. C., & Chung, J. (2024). An empirical investigation of the impact of ChatGPT on creativity. Nature Human Behaviour, 8(10), 1906–1914. https://doi.org/10.1038/s41562-024-01953-1

Letrud, K., & Hernes, S. (2019). Affirmative citation bias in scientific myth debunking: A three-in-one case study. PLoS One, 14(9), e0222213.

Malik, F. S., & Terzidis, O. (2025). A hybrid framework for creating artificial intelligence‑augmented systematic literature reviews. Management Review Quarterly. Advance online publication. https://doi.org/10.1007/s11301-025-00522-8

Mei, Q., Xie, Y., Yuan, W., & Jackson, M. O. (2024). A Turing test: Are AI chatbots behaviorally similar to humans? arXiv.

Messeri, L., & Crockett, M. J. (2024). Artificial intelligence and illusions of understanding in scientific research. Nature, 627(8002), 49–58. https://doi.org/10.1038/s41586-024-07146-0

Naddaf, M. (2025). AI is transforming peer review — and many scientists are worried. Nature, 639(8056), 852–854. https://doi.org/10.1038/d41586-025-00894-7

Noy, S., & Zhang, W. (2023). Experimental evidence on the productivity effects of generative artificial intelligence. Science, 381(6654), 187–192. https://doi.org/10.1126/science.adh2586

Pataranutaporn, P., Powdthavee, N., Achiwaranguprok, C., & Maes, P. (2025). Can AI Solve the Peer Review Crisis? A Large Scale Cross Model Experiment of LLMs’ Performance and Biases in Evaluating over 1000 Economics Papers (No. arXiv:2502.00070). arXiv. https://doi.org/10.48550/arXiv.2502.00070

Paulus, P. B., & Kenworthy, J. B. (2019). Effective brainstorming. In P. B. Paulus & B. A. Nijstad (Eds.), The Oxford Handbook of Group Creativity and Innovation. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190648077.013.17

Resnik, D. B., & Hosseini, M. (2025). The ethics of using artificial intelligence in scientific research: New guidance needed for a new tool. AI and Ethics, 5(2), 1499–1521. https://doi.org/10.1007/s43681-024-00493-8

Si, C., Yang, D., & Hashimoto, T. (2024). Can LLMs generate novel research ideas? A large-scale human study with 100+ NLP researchers. arXiv. https://doi.org/10.48550/arXiv.2409.04109

Skarlinski, M., Nadolski, T., Braza, J., Storni, R., Caldas, M., Mitchener, L., Hinks, M., White, A., & Rodriques, S. (2025). Superintelligent AI agents for scientific discovery. FutureHouse. https://www.futurehouse.org/research-announcements/launching-futurehouse-platform-ai-agents

Van Quaquebeke, N., & Gerpott, F. H. (2024). Artificial Intelligence (AI) and Workplace Communication: Promises, Perils, and Recommended Policy. Journal of Leadership & Organizational Studies, 31(4), 375–381. https://doi.org/10.1177/15480518241289644

Van Quaquebeke, N., Tonidandel, S., & Banks, G. C. (2025). Beyond efficiency: How artificial intelligence (AI) will reshape scientific inquiry and the publication process. The Leadership Quarterly, 36, 101895. https://doi.org/10.1016/j.leaqua.2025.101895

von Krogh, G., Roberson, Q., & Gruber, M. (2023). Recognizing and Utilizing Novel Research Opportunities with Artificial Intelligence. Academy of Management Journal, 66(2), 367–373. https://doi.org/10.5465/amj.2023.4002

Wenzel, R., & Van Quaquebeke, N. (2018). The Double-Edged Sword of Big Data in Organizational and Management Research: A Review of Opportunities and Risks. Organizational Research Methods, 21(3), 548–591. https://doi.org/10.1177/1094428117718627


 
 
 
