AI and the future of work

As part of the University of Cambridge and KPMG Future of Work partnership, we bring together expert perspectives on some of the big issues facing organisations and the workforce.
There is no future of work without AI.
Whether it's automating tasks, collaborating on creativity, or redefining decision-making processes, AI is reshaping the way we work – and think about work.
So, how will it affect our roles and workplaces? And how can we, as individuals and organisations, prepare for this new reality?
Three leading experts at the University of Cambridge and KPMG UK’s Head of AI share their insights.
First, we need to keep in mind that “AI is a moving target because it is the frontier of what we’ve developed so far,” says Virginia Leavell, Assistant Professor of Organisational Theory and Information Systems.
And, as we imagine and plan for the future of AI, we're not just predicting it – we're actively creating it. If the dominant discourse is that AI is going to lead to existential threats to humanity and mass job loss, “this matters because people will make decisions now on the basis of that anticipated future,” says Leavell.
How, then, should we be thinking about AI and its role in the future of work?
AI and creativity
Some of the excitement people have for AI comes from the expectation that, by making administrative tasks more efficient, it frees up time to focus on more creative work. But what if AI can do the creative stuff too?
David Stillwell, Professor of Computational Social Science, explores this question and how it may impact organisations and workers.
“Our research shows that AI is as creative as the average human right now. It’s almost bang on the 50th percentile,” says Stillwell.
Stillwell believes the most likely future will involve collaboration between the two, whereby AI acts as a team member (you say some things, it says some things in response, and you go back and forth), an idea generator (you get it to list 50 ideas and then pick the best ones), or a sounding board (you come up with the ideas and it tells you what could go wrong and what to look out for). “We don’t know yet which is the most successful approach to use – or when.”
“Another key question we are researching is, who benefits from this collaboration? Is it people who are already creative that get even better, or does it help those who struggle with creativity?” Stillwell asks.
Along with creativity, Stillwell is looking into other OECD 21st century skills – skills considered critical for the future – like critical thinking and problem-solving:
“We’re doing research on teams of AI collaborating together. They are assigned different roles or expertise and you get them to talk to each other to come up with the best ideas.”
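As a rough illustration of the kind of set-up Stillwell describes – several role-assigned agents taking turns on a shared transcript – a minimal sketch might look like the following. The `call_model` function is a hypothetical stand-in for a real language-model API, stubbed here so the orchestration logic can run on its own:

```python
# Minimal sketch of a role-assigned "team of AIs" brainstorm loop.
# `call_model` is a placeholder: a real system would send the prompt
# to a language model conditioned on the given role.

def call_model(role, prompt):
    # Stubbed reply so the loop is runnable without any external API.
    return f"[{role}] idea responding to: {prompt}"

ROLES = ["engineer", "marketer", "ethicist"]

def brainstorm(question, rounds=2):
    transcript = [question]
    for _ in range(rounds):
        for role in ROLES:
            # Each agent reads the latest contribution and adds its own.
            reply = call_model(role, transcript[-1])
            transcript.append(reply)
    return transcript

transcript = brainstorm("How could we reduce meeting overload?")
print(len(transcript))  # 1 question + 2 rounds x 3 roles = 7 entries
```

In a real deployment, the interesting design questions are exactly the ones the research raises: which roles to assign, how much of the transcript each agent sees, and how the "best ideas" are selected at the end.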
Organisations that are embracing AI are already seeing productivity benefits.
Leanne Allen, Head of AI at KPMG UK, said: “Many organisations are already designing, building and implementing AI, enhancing productivity by augmenting human capabilities such as summarising and drafting documents, drafting emails and fast information retrieval over curated knowledge bases. This is the first wave of AI implementation.”
As AI moves into areas previously thought to be exclusively human and can collaborate not only with humans but with other AI, we may need to reimagine what skills will be most valuable for the future of work.
Allen added: “Many organisations have already entered the second wave of AI agents that are bringing greater effectiveness and accuracy in addition to efficiencies such as in fraud detection, tracking customer behaviour or medical imagery.
“As the effectiveness of AI grows, so will the controls for responsible use. Future waves will then see AI transform business models and operating structures, shifting the skills needed in today's roles towards value-added work and critical thinking.
“However, to be successful in these future waves, the focus needs to be on adjusting workloads: replacing mundane tasks with continuous critical thinking, problem-solving and decision-making is not sustainable and could lead to cognitive overload and burnout in employees.”
Making sure organisations are 'human-centric'
Given that AI can perform tasks ranging from administrative to creative, does this mean fewer jobs for people? It doesn’t need to, Stillwell argues.
He explains: “There’s usually more demand than there is ability to service that demand. So, organisations that integrate AI, while keeping the same number of employees, are going to have more efficiency, which means more clients or productivity, and therefore more money."
"For example, if AI can automate some animation, it doesn’t have to mean that we need fewer animators, it can also mean that more animation films are made. And then society also benefits because there's more art in the world.”
The integration of AI into work may be inevitable, but how organisations choose to integrate it is not. It’s not just about implementing technological change; it's about ensuring workplaces remain human-centred amidst it: using technology to enhance – not replace – humans.
Allen shared: "Technological change necessitates a learning curve, making it crucial to provide training alongside these new technologies, to both manage risk but also maximise employee capabilities and give them the tools to engage."
"At KPMG UK, we provide comprehensive learning and development opportunities around AI, including courses like our AI for Leaders and AI ethics programmes. We have also developed a set of bespoke KPMG tools to help employees with AI adoption as well as using third party solutions such as Microsoft Copilot."
"For the past two years, we've engaged colleagues across our business through our 'Summer of AI' initiative, providing colleagues with insights from industry experts on advances in AI, deep knowledge sharing sessions, hackathons and much more."
However, while human-centred organisations place individuals’ needs and well-being at the core of organisational decisions and practices, the idea of a human-centred organisation is “meaningless if it overlooks inequalities within the organisation,” says Eleanor Drage, Senior Research Fellow in AI Ethics.
As Drage puts it, “which humans are we centring around?”
Ethical considerations
Indeed, as AI becomes more embedded in people’s working lives, ethical considerations are critical.
“It is very important that AI can be trusted for it to be fully accepted across industries and society, and ethics must play a big part in that,” Allen explains.
“Organisations need to ensure they are designing, building and using AI in a responsible and ethical way. Aligning to ethical principles such as transparency, fairness, accountability, and security is critical for this.
"However, ethics aren’t clear-cut. It’s a grey area requiring diversity of thought on what is and isn’t ok, as well as identification and mitigation of risks, including potential bias and harms.”
One of the primary concerns is that AI can perpetuate bias in decision-making processes, such as hiring. AI systems are trained on data, and if that data reflects societal inequalities, the AI can reinforce these biases.
“AI is only as good as the data it is built upon, so if you have historically biased, incomplete or unrepresentative data sets, you are likely to get biased results,” Allen added.
In response, some companies propose training AI to debias recruitment processes by stripping out information such as gender and race.
However, "it’s a real problem if people think that AI can simply remove bias from the hiring process by deleting or ‘scraping’ away aspects of people's identities, like race and gender, as some of these AI hiring companies claim," Drage warns. This approach, she explains, is based on a fundamental misunderstanding of what diversity really means.
“Improving diversity is not about injecting more women or more people of colour at the beginning of the corporate journey. It’s about addressing the culture and inequalities entrenched within the organisations."
"This cannot be outsourced to AI. It's about doing the hard work, addressing things like equal pay and microaggressions. It's going to feel difficult. If it doesn’t, it probably won’t work.”
Drage does, however, advocate using AI in recruitment to improve job specifications. “AI is very good at using pattern recognition to point out, for example, the use of language in job specs that might be inadvertently appealing more to male or female candidates – so let’s use AI for that.”
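A crude, keyword-based version of that idea can be sketched as follows. Real tools rely on learned models and validated lexicons; the word lists below are purely illustrative examples of gender-coded terms, not an established resource:

```python
# Illustrative check for gender-coded wording in a job spec.
# The word lists are tiny examples, not a validated lexicon.
import re

MASCULINE_CODED = {"competitive", "dominant", "ninja", "rockstar", "aggressive"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "interpersonal"}

def coded_words(text):
    # Extract lowercase word tokens, then intersect with each word list.
    words = set(re.findall(r"[a-z]+", text.lower()))
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

spec = "We want a competitive, dominant self-starter to join our collaborative team."
print(coded_words(spec))
```

The point of even a toy like this is that the check is mechanical and auditable – flagged words can be reviewed and reworded by a human, which is a much narrower (and safer) use of AI than trying to automate hiring decisions themselves.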
What do we need to do now?
AI is reshaping roles, organisations, and industries. So, how can we, as workers and leaders, prepare for this new reality? Our experts have some recommendations.
Key actions for workers
Actively engage in technological change – when and where possible.
Stay informed about AI developments in your sector. This can be useful for identifying how you might like to upskill or reskill.
If you can, use AI tools to assist with your work. Get familiar with their capabilities and limitations.
Leavell suggests that you should see yourself as an agent of change. “The way the AI tools are right now, everything you do is being fed back into the model, so you are literally building it.”
Stillwell encourages people to remember the popularised phrase, “doctors won’t be replaced by AI, but doctors who use AI will replace those who don’t.”
AI tools will continue to become increasingly powerful, but they will still rely on human input and oversight.
Key actions for leaders
Allen believes: "Leaders need to take a strong and clear governance approach that looks at AI's alignment with the organisational strategy, risk elements such as ethics and regulatory compliance, and technical elements such as data and systems."
Organisations should set up systems for employees to experiment with AI tools. "This must be done in an environment where mistakes are seen as learning opportunities rather than failures, which is crucial for innovation," Leavell explains.
“And organisations need to incentivise employees to share what worked, so the organisation can roll it out,” Stillwell adds.
In sectors where workers may have less agency to experiment, leaders need to ensure that AI isn’t being implemented in a top-down way that alienates the very people it’s supposed to support.
For AI to be integrated effectively, and for organisations to be truly human-centric, workers need to be involved in how AI is used in their roles.
Leavell says: “There should be time spent on truly figuring out what employees want the technology to do, so that they can build the AI in a way that improves their work and their lives.”
Drage adds: “It’s important to integrate AI in a way that builds employee trust. Otherwise, you won't have the employee buy-in that you need.”
The integration of AI in workplaces is a cultural change just as much as it is a technological change.
“Think about the way you frame the use of AI in your organisation: it should be seen as an enabler and an opportunity, not a threat,” Allen suggests.
As Leavell summarises: "We need more people and more diversity in the room when talking about AI and the future of work. We need workers in the room, not just managers and developers."
"And we need to talk about it in a way that's radically different from the way it’s usually discussed – because what people think is possible and probable for the future shapes the decisions and actions we make today."

About the researchers
Professor David Stillwell is the Academic Director of the Psychometrics Centre at the University of Cambridge.
Dr Virginia Leavell is Assistant Professor in Organisational Theory and Information Systems at Cambridge Judge Business School.
Dr Eleanor Drage is Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge and previous AI Ethics advisor to the UN.
Dr Leanne Allen is a Partner and Head of AI at KPMG UK.

Published May 2025
The text in this work is licensed under a Creative Commons Attribution 4.0 International License
Images
Banner: Getty Images, credit: gorodenkoff
Data analytics team: Getty Images, credit: cofotoisme
