The AI Summit was a promising start – but momentum must be maintained

The UK’s much-anticipated AI Safety Summit saw twenty-eight governments reach a consensus that the risks posed by systems at the ‘frontier’ of general-purpose AI need to be addressed.

Given the frenetic pace of AI development, and the huge resources behind it, this consensus is much-needed progress. But it is just the first step.  

Billions of dollars have been invested in creating AI systems such as OpenAI’s GPT-4, which can serve as a coding assistant, or draft emails and essays. Without appropriate safeguards, however, such a system can also tell you how to build a bomb from household materials.

Experts fear that future iterations might be capable of aiding bad actors in mounting large-scale cyberattacks or designing chemical weapons.

GPT-4 is perhaps not overly concerning at the moment. But if we consider the impressive leap in capability from its predecessor to the current model, and project that trajectory forward, things start to look alarming.

The techniques underlying AI have been shown to scale: more data and computing resources applied to bigger models yield ever more capable systems. With more money and better techniques, we will continue to see rapid advances.

However, these AI systems are often opaque and unpredictable. New iterations have unexpected abilities that are sometimes uncovered only months after release.

Companies like Google DeepMind and OpenAI are testing and designing safeguards for their models, but not every company is putting in the same degree of work, and it’s unclear if even the most safety-conscious actors are doing enough.

Just before the Summit, the UK government released an ambitious 42-point outline for best practice policies that frontier AI companies should be following. I was part of a team of researchers that conducted a rapid review of whether the six biggest AI companies met these standards.

While all six companies were committed to research on AI safety, none met all the standards, with Meta and Amazon receiving lower ‘safety grades’. There were several best practices that no company met, including prepared responses to worst-case scenarios, and external scrutiny of the datasets used to train AI.

With technology this powerful, we cannot rely on voluntary self-regulation. National bodies and frameworks will be vital, especially in the countries housing frontier AI developers.

Regulators need expertise and the power to monitor and intervene in AI – not just to approve systems for release, but to oversee each stage of development, as with new medicines.

International governance is equally important. AI is global: from world-spanning semiconductor supply chains and data needs, to transnational use of frontier models. Meaningful governance of these systems requires domestic and international regulators working in tandem.

Day two of the AI Summit at Bletchley Park with UK Prime Minister Rishi Sunak, President of the European Commission Ursula von der Leyen, and US Vice President Kamala Harris. Credit: UK Government.

At present, useful preliminary steps are taking place in the US, UK and EU. Last week’s Executive Order from President Joe Biden goes some way towards requiring companies to evaluate their models, including sharing the results of safety tests with the federal government.

In the UK, the Frontier AI Taskforce is building up capacity for assessing AI models. The EU’s AI Act will bring in accountability for developers whose models are subsequently misused for malicious purposes.

The next two AI Summits, expected to be in South Korea and France next year, should aim to forge a consensus around international regulation, and a shared understanding of which standards must be international, and which can be left to national discretion.

Chinese participation is crucial, and it was heartening to see the country represented at the Summit. AI is a strategic priority in China, where it is likely to advance rapidly. Yet China’s regulatory steps around generative AI have been some of the most robust in the world. It is essential that China be part of any international effort to manage AI advancement.

However, most of the world was absent from the Summit. Going forward, Global South nations will require representation. AI could increase global prosperity, or steepen the rise in inequality; decisions made in the coming months may influence which outcome we see.

The world cannot be divided into a club of wealthy countries doing AI, while everyone else has AI done to them.

Lastly, while the projected capabilities and extreme risks of frontier AI are attention-grabbing, they must not distract from existing challenges around its misuse: propagating societal bias, or spreading hate speech and misinformation during elections and international crises.

The Summit risks being a flashy spectacle for a government on the wane. Yet for many of us researching AI’s impacts, the hope is that it has laid a foundation for the safe and beneficial development of a technology with unprecedented transformative potential.

That hope will only be fulfilled if the momentum of the last few weeks is now maintained.

Dr Seán Ó hÉigeartaigh is the Director of the AI: Futures and Responsibility Programme at the University of Cambridge's Leverhulme Centre for the Future of Intelligence.

Top and middle images from the UK's AI Safety Summit. Credit: UK Government.