Artificial Intelligence – when should we be wary?
Artificial intelligence (AI) has rarely been out of the headlines in the last couple of years, and particularly since the OpenAI organisation released its ChatGPT service to the public for testing in 2022. Thousands of articles have been written since, ranging from claims of revolutionary change to apocalyptic warnings (most recently culminating in the UK-hosted November 2023 AI Safety Summit). But what actually is “AI”, and how much of this commentary is realistic?
The idea of AI, and some of its core technology, actually dates back to the 1950s, when computing was very young, and basic versions have been with us for a long time: Microsoft’s “paperclip” Office assistant, for example, or the predictive text feature on your phone. Scientific data analysis has also used AI concepts such as “neural networks” for several decades. These AI systems all share a common feature: they are driven by statistical analysis of data (your Word document or text message, say), coupled with some response when patterns are recognised in that data. AI systems have to be trained in this pattern-recognition by exposure to a large volume of example data, so that they know which responses are most likely; this training is why, in scientific applications, AIs are more often known as “machine learning” (ML) tools. The reason we are hearing so much about AI now is a combination of advances in computing power (in particular, the graphics hardware built for 3D computer games turned out to be ideal for ML training), new ideas for how to implement ML systems, and the explosion of training data available from the internet.
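To make the “predictive text” idea concrete, here is a toy sketch in Python (the miniature corpus and the word-pair approach are invented purely for illustration; real systems use vastly larger data and far more sophisticated statistical models). It “trains” by counting which word follows which in example sentences, then “predicts” by choosing the statistically most likely next word.

```python
from collections import Counter, defaultdict

# Tiny invented "training corpus": real systems learn from
# vastly larger collections of example text.
corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "the dog sat on the rug",
]

# Training: count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def predict_next(word):
    """Suggest the statistically most likely next word, much as a
    phone keyboard's predictive text does."""
    counts = next_word_counts.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # 'cat' (seen twice after 'the' above)
print(predict_next("sat"))  # 'on'
```

No understanding of cats or mats is involved: the “model” is nothing more than counts of what tended to follow what in the examples it was shown.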
These advances have made possible the building of unprecedentedly large-scale “generative” AI systems, trained on vast quantities of data and capable of interacting in a way convincingly similar to human beings. But at their core they are still essentially predictive text (and other media), just of an extremely advanced sort. Generative AIs create mash-ups of their input data and, however impressive the results, there is no true reasoning going on. For example, early versions of ChatGPT were notoriously incapable of basic arithmetic when questions were phrased in unfamiliar ways. Updates have added work-arounds for known pitfalls and appear more convincing, but the core remains statistical remixing of the training data. Generative AIs are therefore not good at innovation, but can be excellent at producing summaries or re-workings of existing formats, such as literature reviews, formulaic reports, forms, translations, and some computer code. This frames the real potential for AI technology to disrupt, and in places damage, our lives: as is often the case with advances in automation, generative AI may well accelerate or replace repetitive human tasks and jobs, but there is little scope for a “rise of the machines” Terminator-style AI revolt. Several creative sectors have already pushed back against drives for generative AI use, for example the Hollywood writers’ strike, objections from artists whose work has been used (usually without permission) as training input for image-generating AIs, and concerns in journalism.
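The “statistical remixing” point can also be shown directly. In the same toy spirit as the sketch above (invented corpus, word-pair counts standing in for a far more powerful statistical model), the code below generates new text by repeatedly sampling a likely next word: the output is a mash-up of the training sentences, with no reasoning involved.

```python
import random
from collections import Counter, defaultdict

# Another tiny invented corpus; the "model" is just word-pair counts.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

transitions = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        transitions[current][following] += 1

def generate(start, length=6):
    """Generate text by repeatedly sampling a statistically likely
    next word: a remix of the training data, with no reasoning."""
    word, output = start, [start]
    for _ in range(length):
        counts = transitions.get(word)
        if not counts:
            break
        choices, weights = zip(*counts.items())
        word = random.choices(choices, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. 'the cat chased the dog sat on' (varies per run)
```

The output can look fluent, or drift into nonsense, but either way it is only ever a recombination of what went in.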
The most high-profile warning of AI risks came in the form of a “Pause Giant AI Experiments” open letter from the Future of Life Institute in March 2023, famously signed by Elon Musk among thousands of others. However, this and OpenAI’s calls for risk-management of the current state-of-the-art systems, which it (and now the UK government) calls “frontier AI”, notably shy away from placing regulatory restrictions on industry, and may be driven as much by pre-empting regulation or buying catch-up time as by genuine concern about the dangers of sentient AI. More immediate risks of generative AI lie in public misinformation: AI-driven social media posting, “deep fake” videos and audio putting words in the mouths of public figures, and poor data inputs or review processes for AI-generated news articles. AI has also become a ubiquitous advertising buzzword: adding an ML component to an app or appliance is very easy and lets it be marketed as “AI powered”, without necessarily adding any value for users.
This is a rapidly evolving area and we can certainly expect to see social and technical changes driven by AI in the coming years, but — as always — don’t believe everything you read, either human- or AI-generated.
