Today we begin a month-long focus on research related to artificial intelligence. Here, four researchers reflect on the power of a technology to impact nearly every aspect of modern life – and why we need to be ready.

What we’ve seen of AI so far is only the leading edge of the revolution to come.

Mateja Jamnik, Seán Ó hÉigeartaigh, Beth Singler and Adrian Weller

AI systems are now used in everything from the trading of stocks to the setting of house prices; from detecting fraud to translating between languages; from creating our weekly shopping lists to predicting which movies we might enjoy.

This is just the beginning. Soon, AI will be used to advance our understanding of human health through analysis of large datasets, help us discover new drugs and personalise treatments. Self-driving vehicles will transform transportation and allow new paradigms in urban planning. Machines will run our homes more efficiently, make businesses more productive and help predict risks to society.

While some AI systems will outperform human intelligence to augment human decision making, others will carry out repetitive, manual and dangerous tasks to augment human labour. Many of the greatest challenges we face, from understanding and mitigating climate change to quickly identifying and containing disease outbreaks, will be aided by the tools of AI.


Yet the idea of creating machines that think and learn like humans has been around since the 1950s. Why is AI such a hot topic now? And what does Cambridge have to offer?

Three major advances are enabling huge progress in AI research: the availability of masses of data generated by all of us all the time; the power and processing speeds of today’s supercomputers; and the advances that have been made in mathematics and computer science to create sophisticated algorithms that help machines learn.

Unlike in the past, when computers were programmed for specific tasks and domains, modern machine learning systems know nothing about the topic in question; they only know about learning. They use huge amounts of data about the world to learn from it and to make predictions about future behaviour, and they can make sense of complex datasets that are difficult to use or have missing data.
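To make this concrete, here is a deliberately simple sketch of the idea in Python: a program that encodes nothing about its subject matter, only a learning rule (here, ordinary least squares), and uses past observations to predict future behaviour. The data and the week-six prediction are invented for illustration; real machine learning systems use far richer models and far more data.

```python
# Toy illustration of "knowing only about learning": fit a straight
# line y = slope * x + intercept to past observations, then use it
# to predict a future value. The learning rule is generic; nothing
# in the code is specific to the topic of the data.

def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical past data: weekly sales figures for weeks 1-5.
weeks = [1, 2, 3, 4, 5]
sales = [10.0, 12.1, 13.9, 16.2, 18.0]

slope, intercept = fit_line(weeks, sales)
prediction = slope * 6 + intercept  # predicted sales for week 6
print(round(prediction, 1))  # → 20.1
```

The same few lines would "learn" house prices, temperatures, or anything else expressible as numbers: the model has no notion of what the data mean, only of the pattern in them.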

That these advances will provide tremendous benefits is becoming clear. One strand of the UK government’s Industrial Strategy is to put the UK at the forefront of the AI and data revolution. In 2017, a report by PricewaterhouseCoopers described AI as “the biggest commercial opportunity in today’s fast-changing economy”, predicting a 10% increase in the UK’s GDP by 2030 as a result of the applications of AI.

Cambridge University is helping to drive this revolution – and to prepare for it.

Our computer scientists are designing systems that are cybersecure, model human reasoning, interact with us in affective ways, uniquely identify us by our faces and give insights into our biological makeup.

Our engineers are building machines that make decisions under uncertainty, using probabilistic estimates of what they perceive to choose the best course of action. And they’re building robots that can carry out a series of actions in the physical world – whether it’s for self-driving cars or for picking lettuces.

Our researchers in a multitude of different disciplines are creating innovative applications of AI in areas as diverse as discovering new drugs, overcoming phobias, helping to make police custody decisions and forecasting extreme weather events.

Our philosophers and humanists are asking fundamental questions about the ethics, trust and humanity of AI system design, and the effect that the language of discussion has on the public perception of AI. Together with the work of our engineers and computer scientists, these efforts aim to create AI systems that are trustworthy and transparent in their workings – that do what we want them to do.

All of this is happening in a university research environment and wider ecosystem of start-ups and large companies that facilitates innovative breakthroughs in AI. The aim of this truly interdisciplinary approach to research at Cambridge is to invent transformative AI technology that will benefit society at large.

However, transformative advances may carry negative consequences if we do not plan for them carefully on a societal level.

The fundamental advances that underpin self-driving cars may allow dangerous new weapons on the battlefield. Technologies that automate work may result in livelihoods being eliminated. Algorithms trained on historical data may perpetuate, or even exacerbate, biases and inequalities such as sex- or race-based discrimination. Without careful planning, systems that depend on large amounts of personal data, such as those in healthcare, may undermine privacy.

Engaging with these challenges requires drawing on expertise not just from the sciences, but also from the arts, humanities and social sciences, and requires delving deeply into questions of policy and governance for AI. Cambridge has taken a leading position here too, with the recent establishment of the Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk, as well as being one of the founding partners of The Alan Turing Institute based in London.

In the longer term, it is not outside the bounds of possibility that we might develop systems able to match or surpass human intelligence in the broader sense. There are some who think that this would change humanity’s place in the world irrevocably, while others look forward to the world a superintelligence might be able to co-create with us.

As the University where the great mathematician Alan Turing was an undergraduate and fellow, it seems entirely fitting that Cambridge’s scholars are exploring questions of such significance to prepare us for the revolution to come. As Turing once said: “We can only see a short distance ahead, but we can see plenty there that needs to be done.”


Dr Mateja Jamnik (Department of Computer Science and Technology), Dr Seán Ó hÉigeartaigh (Centre for the Study of Existential Risk and the Leverhulme Centre for the Future of Intelligence, CFI), Dr Beth Singler (Faraday Institute for Science and Religion and CFI) and Dr Adrian Weller (Department of Engineering, CFI and The Alan Turing Institute).


The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.