‘OpenAI’s ChatGPT smashed records in January to become the fastest-growing consumer application of all time, achieving 100 million users in two months.’ Photograph: Dmitrii Melnikov/Alamy

Artificial intelligence holds huge promise – and peril. Let’s choose the right path

Michael Osborne

AI can fight the climate crisis and fuel a renewable-energy revolution. It could also kill countless jobs or incite nuclear war

The last few months have been by far the most exciting of my 17 years working on artificial intelligence. Among many other advances, OpenAI’s ChatGPT – a type of AI known as a large language model – smashed records in January to become the fastest-growing consumer application of all time, achieving 100 million users in two months.

No one knows for certain what’s going to happen next with AI. There’s too much going on, on too many fronts, behind too many closed doors. However, we do know that AI is now in the hands of the world, and, as a consequence, the world seems likely to be transformed.

Such transformational potential stems from the fact that AI is a general-purpose technology, both adaptive and autonomous, bottling some of the magic that has enabled humans to reshape the Earth.

AI is one of the few practical technologies that may allow us to re-engineer our economies wholesale to achieve net zero. For instance, collaborators and I have been using AI to help forecast the output of intermittent renewable energy sources (such as solar, tidal and wind power), to optimise the placement of electric vehicle chargers for equitable access, and to better manage and control batteries.

Even if AI leads to great economic gains, however, some may lose out. AI is currently being used to automate some of the work of copywriters, software engineers and even fashion models (an occupation that the economist Carl Frey and I estimated in 2013 to have a 98% probability of being automated).

A paper from OpenAI estimated that almost one in five US workers may see half of their tasks become automatable by large language models. Of course, AI is also likely to create jobs, but many workers may still see sustained precarity and wage cuts – for instance, taxi drivers in London experienced wage cuts of about 10% after the introduction of Uber.

AI also offers worrying new tools for propaganda. According to Amnesty International, Meta’s algorithms, by promoting hate speech, substantially contributed to the atrocities perpetrated by the Myanmar military against the Rohingya people in 2017. Can our democracies resist torrents of targeted disinformation?

Currently, AI is inscrutable, untrustworthy and difficult to steer – flaws that have led, and will continue to lead, to harm. AI has already led to wrongful arrests (such as that of Michael Williams, falsely implicated by an AI policing program, ShotSpotter), sexist hiring algorithms (as Amazon was forced to concede in 2018) and the ruining of many thousands of lives (the Dutch tax authority falsely accused thousands of people, often from ethnic minorities, of benefits fraud).

Perhaps most concerning, AI might threaten our survival as a species. In a 2022 survey (albeit one with likely selection bias), 48% of AI researchers thought AI has a significant (greater than 10%) chance of making humans extinct. For a start, the rapid, uncertain progress of AI might threaten the balance of global peace. For instance, AI-powered underwater drones that prove capable of locating nuclear submarines might lead a military power to believe it could launch a successful nuclear first strike.

If you think that AI could never be smart enough to take over the world, please note that the world was just taken over by a simple coronavirus. That is, sufficiently many people had their interests aligned just enough (eg “I need to go to work with this cough or else I won’t be able to feed my family”) with those of an obviously harmful pathogen that we have let Sars-CoV-2 kill 20 million people and disable many tens of millions more. Likewise, viewed as an invasive species, AI might immiserate or even eliminate humanity by initially working within existing institutions.

For instance, an AI takeover might begin with a multinational using its data and its AI to find loopholes in rules, exploit workers and cheat consumers, gaining political influence until the entire world seems to be under the sway of its bureaucratic, machine-like power.

What can we do about all these risks? Well, we need bold new governance strategies, both to address the risks and to maximise AI’s potential benefits – for example, we want to ensure that it is not only the largest firms that can bear a complex regulatory burden. Current efforts towards AI governance are either too lightweight (like the UK’s regulatory approach) or too slow (like the EU’s AI Act, already two years in the making – 12 times as long as it took ChatGPT to reach 100 million users).

We need mechanisms for international cooperation, to develop shared principles and standards and prevent a “race to the bottom”. We need to recognise that AI encompasses many different technologies and hence demands many different rules. Above all, while we may not know exactly what is going to happen next in AI, we must begin to take appropriate precautionary action now.

  • Michael Osborne is a professor of machine learning at the University of Oxford, and a co-founder of Mind Foundry
