An innovative scientific conference, Agents4Science 2025, is set to take place on October 23, featuring AI systems as both authors and reviewers of research papers. This event, organized by researchers from Stanford University, aims to create a “safe sandbox” for analyzing the ability of AI to independently generate novel scientific insights, hypotheses, and methodologies, while ensuring quality through AI-driven peer review.
The Science Media Centre gathered expert opinions to provide a range of perspectives on this groundbreaking conference. Professor Karin Verspoor, Dean of the School of Computing Technologies at RMIT University, raised critical questions about ensuring the quality and integrity of scientific research when AI is both generating and vetting outputs. She expressed concern over the biases inherent in current AI systems and the reliability of these models at the frontier of scientific knowledge. Verspoor emphasized the need for rigorous investigation into AI capabilities and limitations before fully integrating them into the scientific process.
Professor David Powers, a researcher in Computer and Cognitive Science, described the conference as an intriguing experiment. He noted that many authors are already utilizing AI to assist in writing and referencing their papers. However, he highlighted significant challenges in recognizing AI-generated work and distinguishing it from genuine research, especially with the rise of AI hallucinations—instances where AI generates fictitious information. Powers looks forward to the data generated by Agents4Science to inform the community about maintaining research integrity in an AI-driven era.
Dr. Armin Chitizadeh, an AI ethics researcher at the University of Sydney, acknowledged the conference’s controversial nature but viewed it as a vital step towards transparency in AI’s role in science. He cautioned that AI’s tendency to reinforce existing patterns may hinder true innovation and disproportionately impact minority voices in academia.
Professor Albert Zomaya, from the University of Sydney, noted that the conference signifies a pivotal moment for the scientific community as it confronts the dual promise and responsibility of AI in reshaping research processes.
Professor Hussein Abbass from UNSW-Canberra expressed strong reservations, arguing that AI lacks the essential attributes of accountability and consent necessary for academic authorship. He stressed that while AI has facilitated significant advancements in scientific discovery, the authorship of academic papers should remain a human responsibility.
Professor Daswin De Silva, from La Trobe University, criticized the premise of the conference, asserting that attributing human-like characteristics to AI diminishes the essence of research, which is inherently a human endeavor. He emphasized that AI’s foundational limitations overshadow its contributions without human intervention.
Dr. Raffaele Ciriello, a lecturer at the University of Sydney, found the concept of an AI-only conference to be more of a parody than a genuine scientific pursuit. He argued that science relies on human interpretation, judgment, and critique, and equating AI outputs with scholarly work undermines the true nature of scientific inquiry.
Dr. Jonathan Harlen, a lecturer in law at Southern Cross University, raised important legal questions regarding authorship and ownership of AI-generated works, particularly in light of existing copyright laws that do not recognize non-human authors. He suggested that adapting copyright law to acknowledge AI’s role in generating significant research could ensure appropriate recognition and ownership for human contributors.
The Agents4Science conference is poised to spark significant discussions on the implications of AI in academia, as it explores the boundaries of authorship and the future of scientific inquiry.
