Research in the Age of AI: Amplifying Curiosity or Automating Junk?
AI As a Research Partner
Traditionally, research was a straight line from literature review and hypothesis generation to data analysis and publication; every step was human-powered. Today, AI transforms this into a recursive, interactive loop in which researchers act more as orchestrators, working with AI as an “uncredited co-author” that speeds up every stage.
Unlike humans, AI can digest thousands of papers in hours. It can quickly identify key trends, connect ideas across disciplines, and even spot niche gaps that point to new research directions.
In analysis, AI shines at detecting patterns in complex, high-dimensional data that humans can’t easily see. It automates routine statistics and can analyse multiple data types at once (e.g. text, images, audio), leading to more holistic insights. With AI, a small, underfunded research team can produce research output on par with a better-equipped lab.
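As a hedged illustration of the kind of pattern detection described above (an example added here, not taken from the original article), the following minimal Python sketch assumes scikit-learn and NumPy are available; the synthetic dataset stands in for real experimental measurements:

```python
# A minimal sketch of unsupervised pattern detection in high-dimensional data.
# Assumes scikit-learn and NumPy; the synthetic dataset is purely illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(seed=0)

# Simulate 500 samples with 200 features: two latent groups that differ
# only slightly on each feature, so no single variable reveals them.
group_a = rng.normal(loc=0.0, scale=1.0, size=(250, 200))
group_b = rng.normal(loc=0.3, scale=1.0, size=(250, 200))
X = np.vstack([group_a, group_b])

# Project onto the main axes of variation, then look for clusters.
X_reduced = PCA(n_components=10).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_reduced)

# A silhouette score near 0 suggests no real structure; closer to 1 means
# the recovered clusters are well separated.
print("silhouette:", silhouette_score(X_reduced, labels))
```

The specific algorithm is beside the point: any dimensionality-reduction-plus-clustering pipeline illustrates how structure invisible at the level of individual variables can be surfaced automatically.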
Bad Writing and Fake Authorship
It goes without saying that AI can also accelerate writing and reviewing. Drafting sections, summarising results, and screening for plagiarism can all be automated. But this creates dilemmas: who gets credit when AI does much of the heavy lifting? And how do journals manage a potential flood of AI-generated papers, most of which make no real scholarly contribution?
The problem of “bad science” and fraudulent research began well before the first mainstream AI chatbots appeared in 2022. A landmark 2021 study by David Bimler, one of the first scholars to systematically expose the problem, identified over 1,000 likely paper mill articles within the health and life sciences on PubMed alone. Even this was considered a conservative estimate, showing that the issue was already widespread.
A widely cited 2022 report by the International Association of Scientific, Technical and Medical Publishers (STM) estimated that 2% of all scientific papers published annually are likely from paper mills (producers of fraudulent research papers). Other experts, such as research integrity consultant Elisabeth Bik, have suggested the figure could be as high as 5-10% in some fields.
The primary impact of generative AI is the industrialisation and scaling of paper mill operations. Where a mill might once have produced dozens of papers per month manually, it can now generate hundreds or thousands with minimal human effort.
Early AI-generated papers were easy to spot because of hallucinations and odd phrasing. Newer models produce far more coherent and plausible text, and they are now flooding the system with higher-quality “junk”.
The BMJ's chief executive cited a 350% increase in submissions from "hijacked" author accounts in 2023, a common paper mill tactic. Frontiers noted a significant rise in "bogus" papers and had to pause special issue submissions due to the volume of low-quality, likely mill-related articles.
The paper mill industry plays a game of attrition: overwhelm the system until some papers slip through. The result is reviewer burnout, growing cynicism, and eroded trust in science itself.
Can AI Help With Peer Review?
Ironically, protecting the integrity of scientific research in the age of AI demands more human involvement, not less. The main gatekeeper, the peer review process, must become more rigorous, focusing on methodology and data rather than polished presentation. AI has cut the hours of human work needed to produce a scientific paper, but it demands more hours to review one.
To safeguard against fabricated results, data and code sharing must be made mandatory. Editors and reviewers can no longer take an author’s claims at face value. The scientific research community was once considered a high-trust society because believable fake data was hard to produce and easy to spot. Now, AI can generate convincing data from a single prompt.
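As one hedged illustration of the lightweight consistency checks that reviewers could automate (an example added here, not taken from the original article), the GRIM test of Brown and Heathers flags reported means that are arithmetically impossible for the stated sample size. A minimal Python sketch:

```python
# A minimal sketch of the GRIM test (Brown & Heathers, 2016): for integer-
# valued data (e.g. Likert responses), a reported mean is only achievable
# if mean * n is close to a whole number. Illustrative only, not a full tool.
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Return True if the reported mean is achievable with n integer values."""
    nearest_sum = round(reported_mean * n)           # closest achievable total
    implied_mean = round(nearest_sum / n, decimals)  # mean that total implies
    return implied_mean == round(reported_mean, decimals)

# A mean of 5.19 from 28 integer responses is arithmetically impossible,
# so the check flags it; 5.18 from the same sample is possible.
print(grim_consistent(5.19, 28))  # False -> inconsistent, worth a closer look
print(grim_consistent(5.18, 28))  # True  -> arithmetically possible
```

Checks like this do not prove fraud, but they cheaply flag results that deserve closer scrutiny once the underlying data is shared.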
Editors can also require authors to disclose AI use, clearly stating how it was applied at any stage of the research process. Ultimately, the responsibility for protecting against AI-powered bad science falls to publishers: journals need stronger editorial pre-screening and sanctions against repeat offenders.
We must remember that the wave of low-quality publications being produced now will become the training data for future AI models. It is bad enough that chatbots can hallucinate and invent citations; in the future, they could be citing real but fraudulent papers as well.
Ethical AI Usage
This article is not advocating a total ban on AI in the research process; such a ban would be hard to monitor and enforce anyway, and today’s AI text detectors are not very reliable. However, research institutions could formulate an ethical policy on how their researchers may use AI. The first step is workplace AI training that guards against over-reliance, which could deskill future researchers and leave them unable to critically evaluate AI outputs.
The policy should focus on how AI can augment discovery, while insisting that it must not replace the human role in interpreting research findings. It should also emphasise research objectivity, since many AI tools inherit and amplify human biases. Most chatbots avoid open disagreement and tend to mirror the opinions of the user.
Conclusion: The Researcher’s New Role
In this new landscape, researchers must adapt. They need to know how to “talk” to AI effectively, but their real value lies in interpretation, ethical judgment, and critical thinking. The role is shifting from executor to navigator.
AI doesn’t replace human curiosity; it amplifies it. However, without safeguards, it could just as easily overwhelm the scientific ecosystem with noise. The responsibility lies with researchers, publishers, and institutions to ensure that AI strengthens the pursuit of knowledge rather than erodes it. The future of discovery isn’t only about faster answers; it’s about making sure those answers are trustworthy and aligned with human values.
Associate Professor Dr Azam Che Idris
School of Computing and Artificial Intelligence (SCAI)
Email: @email