Is AI killing scientific creativity? New study warns of research skill collapse

CO-EDP, VisionRI | Updated: 11-08-2025 23:39 IST | Created: 11-08-2025 23:39 IST
Representative Image. Credit: ChatGPT

A recent study published in AI & Society warns of the potential collapse of essential human skills in academia if generative AI is uncritically embraced. The paper calls for immediate reflection and action by researchers, supervisors, institutions, policymakers, and publishers to ensure the continued vitality of the research ecosystem.

The study, titled “The Rise of the Research Automaton: Science as Process or Product in the Era of Generative AI?”, argues that the true value of science lies in the human-led process of inquiry and that overreliance on generative AI tools risks turning science into a shallow output factory. 

Is automation undermining the foundations of scientific practice?

The paper, authored by Henrik Skaug Sætra, traces the historical relationship between science and technology to illustrate how previous advancements, from the printing press to AlphaFold, have enhanced human capacity for discovery. However, generative AI marks a sharp departure. Rather than supporting scientists in their work, Sætra argues, AI now has the potential to displace human engagement entirely by automating nearly every step of the research lifecycle, from ideation and literature review to analysis, writing, and even peer review.

The study introduces the concept of the "Research Automaton," a hypothetical but increasingly plausible system capable of producing science-like outputs without meaningful human involvement. While no single system yet automates the entire research lifecycle, its building blocks already exist: several AI tools already perform individual research tasks with high efficiency. Generalist large language models such as ChatGPT, Claude, Gemini, and DeepSeek are increasingly paired with specialized tools for literature synthesis, data analysis, academic writing, and manuscript editing.

While automation may offer short-term productivity gains, Sætra warns that it risks reducing science to a set of mechanical functions devoid of intellectual struggle, critical reasoning, or ethical deliberation. He stresses that scientific output should be viewed as the byproduct of a deeply formative and cognitively demanding process. Without this process, the foundational competencies of researchers may deteriorate, leading to a crisis of originality, creativity, and long-term innovation.

Can AI-augmented research preserve human values and judgment?

The study addresses the fundamental question of whether science can remain a human-centered enterprise when core intellectual tasks are increasingly outsourced to machines. Drawing from Max Weber’s view of science as a vocation, Sætra underscores the formative, ethical, and social dimensions of scientific practice. He argues that doing science is not just a technical procedure but a meaningful journey of intellectual engagement and civic responsibility.

By automating critical tasks such as developing research questions, framing methodologies, and analyzing complex data, AI could strip science of its normative character. These are stages where values, worldviews, and ethical considerations intersect with epistemology. Removing humans from these decisions risks weakening the democratic and reflexive functions of science. Additionally, the social dimension of research—its communal character and collaborative knowledge-building—could be compromised in a machine-dominated workflow.

Sætra suggests that AI integration in research should be guided by the principle of augmentation rather than substitution. When used as a cognitive scaffold or a collaborative partner, generative AI has the potential to enrich human inquiry. For instance, AI might be effectively deployed in transcription, proofreading, and data visualization—tasks where intellectual stakes are relatively low. However, deploying AI in ideation or interpretation must be approached with caution. Sætra emphasizes that researchers must maintain control over meaning-making, context, and ethical evaluations to preserve the integrity of science.

What are the long-term risks to researchers and the research ecosystem?

Beyond ethical and normative concerns, Sætra presents a strong pragmatic argument against excessive reliance on AI. One of the most pressing risks is cognitive atrophy, the gradual erosion of essential skills such as critical thinking, theoretical framing, and creative problem-solving. As researchers delegate these tasks to AI, their capacity for independent judgment and innovation may diminish. This, he warns, could create a feedback loop in which increasingly poor human input degrades the quality of AI systems themselves.

The paper discusses the concept of "model collapse," whereby future AI systems trained on AI-generated data rather than original human insights suffer a decline in performance and creativity. This scenario suggests that human researchers are not just users of AI but crucial contributors to its evolution. Without continuous human learning and independent innovation, AI models could stagnate, leading to a systemic collapse in the quality of scientific output.

Sætra also highlights the threat of automation bias, where human users over-trust machine-generated outputs without adequate skepticism. This could erode the peer-review system and compromise the collective self-correcting mechanisms that are foundational to science. In academic training, early-career researchers are particularly vulnerable. By skipping formative experiences in literature review, data analysis, and conceptual reasoning, they may fail to develop the expertise needed to lead future scientific advancements.

Implications and recommendations for stakeholders

The study offers a series of detailed recommendations for different actors in the research ecosystem. Early-career researchers are urged to prioritize deep disciplinary learning and critical engagement over productivity. Supervisors should set clear expectations about the ethical use of AI and foster environments where foundational skills are preserved. Academic researchers are advised to openly discuss AI use in collaborations and guard against loss of creativity and ownership.

Institutions are encouraged to critically evaluate AI vendors, develop comprehensive training programs, and foster a culture that values process over output. Policymakers should reassess research assessment frameworks to reward originality and integrity rather than volume and speed. Publishers are called on to enforce human authorship standards, invest in robust detection mechanisms, and resist the narrative that AI-generated research is inevitable or desirable.

The author argues that safeguarding the integrity of science in the age of generative AI requires a conscious and collective commitment to process-oriented research values. The question is not whether science can be automated, but whether society is willing to accept what might be lost in the process.

First published in: Devdiscourse