From Hype to Reality: Separating Fact from Fiction in Media Coverage of ChatGPT and Higher Education
Written by Dr Sam Illingworth, Associate Professor in the Department for Learning and Teaching Enhancement at Edinburgh Napier University.

In recent weeks, there has been a lot of media attention around ChatGPT, a large language model created by OpenAI (an artificial intelligence company), and its potential applications in higher education. ChatGPT is a chatbot that can understand and generate text; it can be used to write news summaries, product descriptions, stories, essays, or emails, for example. However, the way in which it has been discussed and portrayed in the media has varied widely, ranging from accurate and informative to sensationalistic and misleading.
One common theme in media coverage of ChatGPT and higher education is the potential for the model to revolutionize the way we learn and teach. Some articles suggest that ChatGPT could replace traditional lectures and textbooks with personalised, interactive learning experiences that adapt to each student’s individual needs and interests. While there is certainly potential for ChatGPT to be used in this way, it is important to note that the technology is still in its early stages and there are many challenges that must be overcome before it can be widely adopted in higher education.
Another common theme in media coverage of ChatGPT and higher education is the potential for the model to perpetuate existing biases and inequalities in education. As others and I have warned, ChatGPT, like any technology, is only as unbiased as the data it is trained on; if that data contains biases or inequalities, these will be reflected in the model’s outputs. This is an important consideration, and it underscores the need for careful attention to the data used to train ChatGPT and similar tools in higher education and beyond.
However, not all media coverage of ChatGPT and higher education is accurate or informative. Some articles focus on sensationalistic headlines or clickbait, rather than providing a balanced and nuanced view of the research. For example, some articles have suggested that ChatGPT could eventually replace human teachers entirely, leading to mass unemployment in the education sector. While it is true that ChatGPT has the potential to automate some aspects of teaching and learning, it is unlikely to completely replace human teachers in the foreseeable future.
Similarly, some articles have suggested that ChatGPT could be used to cheat on exams or assignments, allowing students to pass without learning the material. While some students may try to use ChatGPT in this way, it is important to note that such attempts would likely be detected by plagiarism detection software or by human instructors. Moreover, many of these headlines assume that all students will use the technology to cheat, rather than trusting our students, or actually asking them how they might use it to enhance their studies.
To sum up, the media portrayal of ChatGPT and higher education can differ greatly, ranging from factual and educational to exaggerated and deceptive. While there is certainly potential for ChatGPT to be used in innovative and beneficial ways in higher education, it is important to approach the technology with a critical eye and to carefully consider its potential limitations and drawbacks. By doing so, we can ensure that ChatGPT and other AI models are used to enhance, rather than replace, human teaching and learning in higher education.
—
You can find out more about Sam’s work and research by visiting his website www.samillingworth.com and connect with him on Twitter @samillingworth or email s.illingworth@napier.ac.uk.