A prominent vice-chancellor has called on universities and funding bodies to establish a unified set of principles for the use of artificial intelligence (AI) in research. The recommendation is intended to prevent what he described as a “race to the bottom”, a scenario in which harmful practices become widespread.
Urgency of Addressing AI in Research
Chris Day, the vice-chancellor of Newcastle University and chair of the UK’s Russell Group, presented this perspective at the World Academic Summit hosted by *Times Higher Education*. He pointed out that while significant focus has been placed on the application of generative AI tools in educational contexts—especially regarding assessments—the need to address the implications of AI in research is growing more urgent.
Risks of Inconsistent Strategies
Professor Day cautioned that if individual institutions and funding agencies pursue inconsistent approaches to regulating AI, the result could be confusion and the spread of practices that undermine ethical standards.
Challenges of AI Implementation
The use of AI in research carries notable risks, including fabricated results, the spread of misinformation, and plagiarism. Day emphasized that without a uniform framework of rules, researchers and institutions might resort to using AI for competitive gain, resulting in unethical practices.
Original source: Times Higher Education.