Cambridge Festival 2024
AI and elections, deepfakes, the metaverse and more...
Will insidious use of AI influence elections this year? How can we empower teenagers growing up in a world of deepfakes? And how far off is the metaverse – the immersive virtual worlds that will allow humans to experience life differently? The rise of AI and its impact on our society will be explored during a series of fascinating events at the Cambridge Festival.
As billions of people in more than 80 countries – more than half the planet’s population – prepare to vote this year, headlines are sounding the alarm about the threat of AI disruption. Yet the extent to which disinformation might affect elections is still unclear.
How will AI affect the democratic process? (20 March) will hear from Cambridge experts Dr Ella McPherson, Associate Professor of the Sociology of New Media and Digital Technology, and Dr Jonnie Penn, best-selling author and historian of technology, who, along with Dr Melisa Basol of social impact business Moonshot and journalist and author Chris Stokel-Walker, will discuss the role AI will play and how we can mitigate any risks. Observer columnist Professor John Naughton, a senior research fellow at the Centre for Research in the Arts, Social Sciences and Humanities, will chair.
But just as important as looking at the technology itself is focusing on the hype around the issue, and the confusion it generates, says Dr McPherson.
“That’s not to say I don't think that understanding what AI could do and all the different ways it can inflect elections is important. But there’s also a threat from the panic and frenzy, and just the idea that AI is somehow messing with elections, even if we as individuals haven’t been tricked. It can mean we begin to lose our faith in elections, and that is serious.
“Audiences are actually very sophisticated and complex, despite the scaremongering that they will be duped. There will be some people who believe misinformation, but the bigger effect is that people will say ‘I don’t know what to believe’ and disengage. And then we move back to being much more focused on our own networks as sources of information, and that’s more of a democracy problem than just an elections problem.
“We should also be looking at what the hype cycle causes us to miss, and who benefits. AI influencing elections is a great media story, it’s fascinating, and as a society we’re often drawn to the idea of future dystopias. But this fascination has its consequences: when the public are so bewildered, they spend most of their time trying to figure out what’s going on with new technologies rather than being citizens. In an election year that means time lost that we could be using to question candidates and map out our priorities. What’s so frustrating is that all this bewilderment is just collateral damage from these tech companies’ pursuit of growth and profit.”
And the time emerging technology costs us in general is something that Dr McPherson says tech companies should be taking more responsibility for.
“Whenever these technologies and platforms are launched, it’s us who has to figure them out – we have to deal with the consequences while they watch and ‘learn from real world use’, as OpenAI recently put it. And that’s time that I’m sure we would rather be spending with our family or friends, or focusing on something meaningful to us. We shouldn’t be their laboratory or experimental subjects, and they shouldn’t think of us as a source of free labour to help them figure out the uses and abuses of their new technologies.”
Although Dr Penn admits a record-breaking election year isn’t the ideal time to be putting AI to the test, he is hopeful it will not have the immediate impact that some headlines have suggested.
“There is reason to worry about misinformation and disinformation and propaganda, but not to the point that human agency doesn’t matter anymore,” he says. “Donald Trump’s election in 2016 wasn’t just a product of ‘computers’ – digitalism doesn’t just change the rules overnight. But in terms of historical precedent, something like 49% of the globe is going into an election year. That won't happen again until the mid-2040s, so it’s a strange time to be testing out ‘in the wild’ a powerful but highly criticised set of new technologies. Sadly, it seems many leaders in Big Tech are just uneducated or callous, so they press on.”
However, Dr Penn has faith in humanity when imagining where civic interventions might take us in the future, and draws on examples from our technological past to suggest that alongside the well-documented threats there are also opportunities.
“The printing press is often held up as an example of a technology that changed human history. There was, however, a 400-year gap between its invention and mass literacy. One missing ingredient was free access to education. So when we think about ‘AI’ and what we want from ‘it,’ we need to think about the social contract we demand for our children and grandchildren. Climate stability? Equal pay? Sick pay? The Industrial Revolution brought the 40-hour working week and the weekend. Those were nice, but we don't have to stop there, do we?”
And alongside both the idealistic and doomsday scenarios, Dr Penn says we should consider AI futures that aren’t quite so sensational, but that still have consequences for democracy, such as the concept of a ‘boring dystopia’.
“There is a kind of folk wisdom around the origins of computing – that it emerged out of genius, when actually to a great extent it was a product of bureaucracy, to help empires and companies manage people and tabulate data. So if you know that it came from bureaucracy, where is it taking us? Modernity has brought along a staggering density of administration, such that the worst-case scenario with AI could be a kind of highly resilient top-down power, including here in the UK.”
Dr Penn is also exploring the concept of ‘decomputerization’ – the commonly felt but rarely named idea that there may be an upper limit to the digital saturation healthy societies can tolerate – pointing to findings such as mental health declines, burnout and dating app fatigue, and to glaring ironies such as half of ‘smart’ appliances never being connected to the internet.
“I trust that people are smart, if they have the chance to show it, and to engage meaningfully with the world. If you distrust Big Tech, tell your legislator. We need to be radically democratic now, and radically anti-monopolistic, and hold these companies to the standards of excellence that they say they embody. It’s an opportunity to design the world we want and not just shut doors and wag fingers.”
Verity Harding, one of Time’s 100 Most Influential People in AI, will discuss her new book – AI Needs You: How We Can Change AI’s Future and Save Our Own – with Professor Dame Diane Coyle, from the Bennett Institute for Public Policy, Cambridge, at AI Needs You: An evening with Verity Harding (14 March).
Verity, Director of the AI and Geopolitics Project at the Bennett Institute for Public Policy, will share inspiring lessons from her book on three 20th-century tech revolutions – the space race, in vitro fertilisation and the internet – to draw us into the conversation about AI and its possible futures.
She argues that it is critical for society to take the lead in ensuring that AI fulfils its promise to tackle some of the world’s most pressing challenges.
She said: “A necessary part of the process will be for ordinary citizens to make their voices heard. Because how governments and companies alike respond to AI, what they build and the rules under which they build it, will depend greatly on what they hear from their citizens and customers. Once mobilised, public pressure and democratic institutions will have a huge influence over what the vision and purpose of AI could be.”
Cambridge MPhil student Valena Reich, who is researching the ethics of AI, has developed a workshop with her team at the non-profit We and AI to raise teenagers’ AI literacy around deepfakes. The workshop will take place as part of the Festival, and she hopes it could be rolled out to UK schools in the future.
Workshop on deepfakes and AI-generated media (16 March) aims to support informed, safe and ethical interactions with online and AI-generated media.
Valena said: “This is the first generation to grow up in a world where they cannot simply trust audio and visual information anymore, where essentially what they see online might be completely fake. Up until now, they might have been told by their teachers that looking at Wikipedia for information isn’t enough, that they need to double check their sources. Now, they also need to check everything they see and hear online, and take a moment to step back and assess whether there might be an agenda behind it.”
As well as exploring the technology’s impact in areas including politics, advertising, the arts, and fraudulent crime, the workshop will look at examples of young people themselves being targeted in deepfakes.
Valena said: “We need to ensure they are equipped with critical thinking skills to be able to cope with this, to be able to navigate a world with deepfakes and AI-generated media. It’s certainly interesting to see how much thought the teenagers we work with put into this; they are actually quite worried and want to learn these tools and skills.
“It’s about independent thinking, questioning things, and also that ethical kind of thinking – because deepfakes have deeper consequences for society. It’s important to know what kind of values we have as a community to stand against misinformation, rather than just thinking ‘Oh well, this is how life is now and we should just accept it’. By raising awareness and training such skills, we want to empower the youth.”
How the metaverse could change our lives, and the importance of making it secure and inclusive, will be discussed by Cambridge researchers Shannon Pierson and Dr Matteo Zallio, and Michelle Lim of Cambridge Consultants, in The metaverse: pros and cons (22 March).
Phygital interaction – the seamless integration of the physical and digital realms – will play a pivotal role in the upcoming stages of the metaverse’s development, according to Dr Zallio, a Cambridge University researcher and founder of the Metavethics Institute.
While much of the discussion around the metaverse has centred on use cases, services and technologies, Dr Zallio emphasises how critically AI will influence the development of content in both physical and digital worlds. He stresses the need to understand the impact of spatial computing and how it will be used in the future, ensuring accessibility, inclusivity and safety for all. “It will be about interacting with the space and animated objects, not just with a keyboard and a touchscreen,” he says.
Drawing an analogy between the metaverse's evolution and the development of personal computers, Dr Zallio suggests that we are currently in a phase similar to the advent of the first graphical user interface in the early 1970s. However, he says recent advancements in AI have propelled the progress of spatial computing and the metaverse significantly.
“The democratisation of AI has been instrumental in enabling the creation of a multitude of virtual reality environments, and it underlines the need for robust processing and computational capabilities to craft hyper-realistic spaces and avatars that move beyond cartoon-like limitations.”
Other AI and technology Cambridge Festival events include:
- A Computer Science and Technology Open Day (16 March) offers the chance to make music like a DJ with computer code, find out how robot cars can navigate their way around humans, and learn about research taking place at the Department of Computer Science and Technology.
- Showing different angles of AI and emerging technologies (16 March) will share the latest information on AI, blockchain and the metaverse to show their power, and also to encourage future generations to develop an interest in this area of engineering.
- A talk, Artificial intelligence: With great power comes great responsibility (19 March), will look at the permeation of AI into all levels of our lives, including policing, justice, banking, online shopping and social networks – and how we can ensure an ethical and responsible digital society.
- Faust Shop: Discover your artificial double (16 March) is an augmented theatrical experience, embedded in a lab environment, exploring what technology offers us and what it could take away.
- Misinformation, statistics and lies (26 March) sees Kamal Ahmed, former BBC Economics editor and editor-in-chief of the News Movement, in conversation with Cambridge's Professor David Spiegelhalter and the BBC’s disinformation correspondent Marianna Spring, about the manipulation of statistics in an era of misinformation and how we can all be better at spotting false narratives.
The Cambridge Festival, which runs 13-28 March, is one of the largest of its kind in the country, featuring more than 300 mostly free events, and showcases cutting-edge research across the University of Cambridge and beyond.
Main image credit: Westend61
The text in this work is licensed under a Creative Commons Attribution 4.0 International License.