Speaker Spotlight

Natalia Domagala

Our next speaker spotlight features Natalia Domagala, a Policy Fellow at the Centre for Science and Policy. Drawing on insights from leading voices in AI and technology ethics, Natalia will examine the broader systems and power dynamics behind AI - from data extraction and global inequalities to its influence on culture, knowledge, the environment, and democracy.

Natalia led data ethics policy at the UK Cabinet Office’s Central Digital & Data Office and previously advised on open government and open data at the Department for Digital, Culture, Media and Sport. She co-edited Situating Open Data and recently edited Uncovering Algorithms: Conversations on the Impact of AI. She holds an MSc from the London School of Economics and a BA from Goldsmiths, University of London.

"One concern is the opacity surrounding the use of AI in many areas of our lives, and the fact that many people don’t realise how their everyday experiences are influenced by hundreds or thousands of algorithms daily."
Photo by Igor Omilaev on Unsplash

Is AI the greatest power grab of the 21st century, and did anyone vote for it?

A small number of companies now control much of the infrastructure, compute capacity and data that underpin the most influential AI systems. These systems increasingly shape how we produce and distribute information, and play a significant role in our everyday lives - from the media and entertainment we consume to our social interactions, political views, and public service provision. There was no explicit democratic mandate for this arrangement.

The rapid expansion of AI systems into every area of our lives has largely been driven by market dynamics and technological momentum rather than public debate. The most widely adopted AI models are now used by hundreds of millions of people, yet the values and safeguards these systems operate within are often defined by a very small group, concentrated in a handful of tech companies. As AI continues to shape public life, including public service provision in some cases, we need to ensure that these systems are governed in ways that reflect broader societal values rather than purely commercial priorities.

This brings me to citizens' assemblies and other deliberative methods as a missing democratic mechanism in AI governance, complementing regulation and greater oversight. We already use citizens' assemblies to deliberate on complex, value-laden issues such as climate policy, bioethics and constitutional reform. AI systems that increasingly mediate knowledge, access to information and economic power arguably warrant the same kind of public deliberation.

Of course, implementing this is far from straightforward when the most widely used models are built and governed by a small number of private companies operating at a global scale. But that tension is precisely why democratic input into AI governance deserves far more serious attention than it currently receives.

As governments rush to regulate AI, are they already too late to control who really holds the power?

Most governments are certainly behind, as they are responding after the technology has already scaled. There were earlier opportunities to shape the ethical foundations of AI more proactively and to regulate before AI-related harms occurred: greater scrutiny of large-scale data collection and management practices, stronger requirements around transparency in training data, or earlier interventions to address market concentration could all have established clearer safeguards. Instead, many governments adopted relatively light-touch approaches in the hope of encouraging innovation, allowing a small number of actors to accumulate substantial technical and economic advantages. As a result, regulation now has to address not only safety and accountability, but also questions of dependency, market power and public oversight, as well as harmful uses of AI such as deepfakes.

Big Tech calls AI ‘innovation’, but is it just digital extraction dressed up as progress?

AI can represent genuine innovation in certain areas - for example, in scientific discovery, accessibility tools, and some forms of medical research. At the same time, there are clear extractive elements in the way many systems are developed. Large models are frequently trained on vast quantities of data scraped from the internet and beyond, including copyrighted works, often without meaningful consent from the individuals who produced that material. Significant amounts of human labour also underpin these systems, including the gruelling work of content moderation and data labelling, which demands constant exposure to abusive and harmful content yet tends to remain invisible and poorly remunerated. From an ethical perspective, the central question is how the benefits and burdens of AI innovation are distributed.

Photo by and machines on Unsplash

From elections to education, how much of our culture is already being shaped by algorithms we can’t see or question?

Algorithmic systems already influence a wide range of cultural and social processes: recommendation systems shape which news and information people encounter online, which voices gain prominence and, as a result, which ideas circulate widely. At a societal level, automated systems increasingly influence decisions in areas such as recruitment, lending, education, and public service provision. Generative systems are also beginning to shape how information is framed and disseminated, and their extensive use has a profound impact on our ability to think critically.

One concern is the opacity surrounding the use of AI in many areas of our lives, and the fact that many people don’t realise how their everyday experiences are influenced by hundreds or thousands of algorithms daily. When systems play an influential role in public life yet remain difficult to scrutinise or challenge, it becomes harder for societies to debate the values and assumptions embedded within them.

AI is sold as clean and virtual, so why is its environmental footprint becoming a growing global concern?

Part of the issue lies in the language used to describe these technologies. Terms such as ‘AI’ or ‘the cloud’ create a sense of abstraction that distances them from their material foundations. In reality, AI systems depend on extensive physical infrastructure: large data centres require substantial amounts of electricity and water, particularly for cooling. The environmental footprint of AI, including its energy consumption and water usage, is only now beginning to receive sustained public attention.

As AI becomes more widely used and models grow more complex, there will be increasing pressure to construct larger and more numerous data centres. In many contexts, this places additional strain on local water supplies and energy systems. Communities are beginning to respond to these pressures. In Chile, for example, residents in the water-stressed municipality of Quilicura organised an initiative in which the town replaced AI with human intelligence for a day. People around the world were invited to submit everyday questions they would normally ask a machine, highlighting the otherwise hidden water footprint of AI.

If today’s AI headlines are only showing the tip of the iceberg, what’s the deeper story the public urgently needs to hear?

Public discussion often focuses on the capabilities of the latest models, such as what they can generate or automate. However, the deeper story concerns governance and accountability. Key questions include who controls the systems that increasingly shape information flows and decision-making; how responsibility is assigned when harms occur; how environmental and social costs are distributed; and whether democratic institutions can meaningfully oversee technologies developing at extraordinary speed. AI is reshaping economic structures, cultural influence and public governance, and these structural questions are only beginning to enter mainstream debate.

"As AI continues to shape public life, including public service provision in some cases, we need to ensure that these systems are governed in ways that reflect broader societal values rather than purely commercial priorities."

The Cambridge Festival is a mixture of online, on-demand and in-person events covering all aspects of the world-leading research happening at Cambridge. Meet some of the researchers and thought-leaders working in the pioneering fields that will impact us all.

Sign up to our mailing list here or keep up to date by following us on social media.

Instagram: Camunifestivals | Facebook: CambridgeFestival |
Bluesky: cambridgefestival.bsky.social | LinkedIn: cambridge-festival