A study, largely composed with the help of ChatGPT, suggests the software could bring both promising opportunities and notable challenges for the academic community.
As the technology continues to advance, it has raised concerns within the education industry regarding academic integrity and plagiarism.
The new research uses ChatGPT to demonstrate how advanced Large Language Models (LLMs) have become, and suggests measures that can be taken to ensure their positive impact, addressing some of the concerns raised.
Published in the peer-reviewed journal Innovations in Education and Teaching International, the research was conceived by academics from Plymouth Marjon University and the University of Plymouth.
For the majority of the paper, they used a series of prompts and questions to encourage ChatGPT to produce content in an academic style. These included:
- Write an original academic paper, with references, describing the implications of GPT-3 for assessment in higher education
- How can academics prevent students plagiarising using GPT-3?
- Are there any technologies which will check if work has been written by a chatbot?
- Produce several witty and intelligent titles for an academic research paper on the challenges universities face with ChatGPT and plagiarism
After generating the text, they pasted the output into the manuscript and organised it according to ChatGPT’s proposed structure. Then, they added authentic references throughout the document.
Only in the paper’s discussion section did the researchers themselves, writing without the software’s assistance, describe this process and reveal it to readers.
In that section, the authors emphasise that although ChatGPT is more advanced than previous tools, the text it generates can still be somewhat formulaic, and could be flagged by various AI-detection tools already available.
Nonetheless, they argue their findings should serve as a wake-up call for university staff to think carefully about how they design assessments, and to develop strategies that make the rules on academic misconduct clear to students and reduce the opportunity for it.
Professor Debby Cotton, Director of Academic Practice and Professor of Higher Education at Plymouth Marjon University, is the study’s lead author. She said: “This latest AI development obviously brings huge challenges for universities, not least in testing student knowledge and teaching writing skills – but looking positively it is an opportunity for us to rethink what we want students to learn and why. I’d like to think that AI would enable us to automate some of the more administrative tasks academics do, allowing more time to be spent working with students.”
Corresponding author Dr Peter Cotton, Associate Professor in Ecology at the University of Plymouth, added: “Banning ChatGPT, as was done within New York schools, can only be a short-term solution while we think how to address the issues. AI is already widely accessible to students outside their institutions, and companies like Microsoft and Google are rapidly incorporating it into search engines and Office suites. The chat (sic) is already out of the bag, and the challenge for universities will be to adapt to a paradigm where the use of AI is the expected norm.”
Dr Reuben Shipway, Lecturer in Marine Biology at the University of Plymouth, said: “With any new revolutionary technology – and this is a revolutionary technology – there will be winners and losers. The losers will be those that fail to adapt to a rapidly changing landscape. The winners will take a pragmatic approach and leverage this technology to their advantage.”