The University of Cambridge will establish a DeepMind Chair of Machine Learning, thanks to a benefaction from the world-leading British AI company.
A new institute at the University of Cambridge aims to revolutionise cancer care by using cutting-edge analytics to maximise the use of big data sets collected from patients.
Will automation, AI and robotics mean a jobless future, or will their productivity free us to innovate and explore? Is the impact of new technologies to be feared, or a chance to rethink the structure of our working lives and ensure a fairer future for all?
Cambridge and Nokia Bell Labs establish new research centre to advance AI-supported multi-sensory personal devices (20 Jun 2018)
The long-sought dream of wearable and mobile devices that will interpret, replicate and influence people’s emotions and perceptions will soon be a reality thanks to a collaboration between the University and Nokia Bell Labs.
The UK’s fastest academic supercomputer, based at the University of Cambridge, will be made available to artificial intelligence (AI) technology companies from across the UK, in support of the government’s industrial strategy.
Powerful AI needs to be reliably aligned with human values. Does this mean that AI will eventually have to police those values? Cambridge philosophers Huw Price and Karina Vold consider the trade-off between safety and autonomy in the era of superintelligence.
Police at the “front line” of difficult risk-based judgements are trialling an AI system trained by University of Cambridge criminologists to give guidance using the outcomes of five years of criminal histories.
Fairness, trust and transparency are qualities we usually associate with organisations or individuals. Today, these attributes might also apply to algorithms. As machine learning systems become more complex and pervasive, Cambridge researchers believe it’s time for new thinking about new technology.
Twenty-six experts on the security implications of emerging technologies have jointly authored a ground-breaking report – sounding the alarm about the potential malicious use of artificial intelligence (AI) by rogue states, criminals, and terrorists.