The UK’s fastest academic supercomputer, based at the University of Cambridge, will be made available to artificial intelligence (AI) technology companies from across the UK, in support of the government’s industrial strategy.
Powerful AI needs to be reliably aligned with human values. Does this mean that AI will eventually have to police those values? Cambridge philosophers Huw Price and Karina Vold consider the trade-off between safety and autonomy in the era of superintelligence.
Police at the “front line” of difficult risk-based judgements are trialling an AI system trained by University of Cambridge criminologists to offer guidance based on five years of criminal-history outcomes.
Fairness, trust and transparency are qualities we usually associate with organisations or individuals. Today, these attributes might also apply to algorithms. As machine learning systems become more complex and pervasive, Cambridge researchers believe it’s time for new thinking about new technology.
Twenty-six experts on the security implications of emerging technologies have jointly authored a ground-breaking report – sounding the alarm about the potential malicious use of artificial intelligence (AI) by rogue states, criminals, and terrorists.
Cambridge researchers are pioneering a form of machine learning that starts with only a little prior knowledge and continually learns from the world around it.
What makes a city as small as Cambridge a hotbed for AI and machine learning start-ups? A critical mass of clever people obviously helps. But there’s more to Cambridge’s success than that.
In the popular imagination, robots have been portrayed alternately as friendly companions or existential threats. But while robots are becoming commonplace in many industries, they are neither C-3PO nor the Terminator. Cambridge researchers are studying the interaction between robots and humans – and teaching robots how to do the very difficult things that we find easy.
Our lives are already enhanced by AI – or at least an AI in its infancy – with technologies using algorithms that help them to learn from our behaviour. As AI grows up and starts to think, not just to learn, we ask how human-like we want its intelligence to be and what impact machines will have on our jobs.
Today we begin a month-long focus on research related to artificial intelligence. Here, four researchers reflect on the power of a technology to impact nearly every aspect of modern life – and why we need to be ready.