So far in the series, we’ve heard that artificial intelligence is becoming ubiquitous and is already changing our lives in many ways: from how we search for and receive information, to how it is used to improve our health, to the nature of the work we do. We’ve already taken a step into the past to explore the history of AI, but now it’s time to look forward. Philosophers and writers have discussed for centuries the difficult ethical choices that arise in our lives. As we hand some of these choices over to machines, can we be confident they will reach conclusions we can accept? Can, or should, a human always be in control of an artificial intelligence? Can we train automated systems to avoid the catastrophic failures that humans might avoid instinctively? Could artificial intelligence pose an extreme, or even existential, threat to our future? Join our host, philosopher Peter Millican, as he explores these questions with Allan Dafoe, Director of the Centre for the Governance of AI at the Future of Humanity Institute; Mike Osborne, co-director of the Oxford Martin Programme on Technology and Employment, who joined us previously to discuss how AI might change how we work; and Jade Leung, Head of Partnerships and researcher at the Centre for the Governance of AI.