Critical Technology


The new TORCH network 'Crisis, Extremes and Apocalypse' is hosting a workshop entitled 'Critical Technology: Is technological risk the main threat to the survival of humanity? Or do we need to rely on technology to survive?'

It will feature speakers from the Future of Humanity Institute (FHI) and the Oxford Internet Institute (OII), including Anders Sandberg, Eric Drexler, Owain Evans, Joss Wright and Miles Brundage.

The workshop will consist of presentations by the speakers as well as time for questions, and should last around two and a half hours.

Tea, coffee and cakes will be provided.
 

* Please note the change of time *

Programme

# 4.30 – 5.00 pm  Anders Sandberg (FHI)

Apocalypse 2.0: existential risks, technology, and the problem of forethought

‘Many cultures have delighted in expecting the end of the world as they know it. But as technology advances, so does our own responsibility for both being the potential cause of the end and doing something about various risks. This talk will discuss what we know about the interplay between human ability and the prospects of averting bad futures.’

# 5.00 – 5.30 pm  Eric Drexler (FHI)

Structured Transparency: Surveillance, civil society, AI, and existential risk

‘To suppress existential risks from rogue actors may require strong transparency, but how might this be reconciled with privacy and institutional accountability? Novel, technology-enabled transparency structures could provide better solutions than have yet been considered.’

# 5.30 – 6.00 pm  Joss Wright (OII)

Urizen's Web: Transparency, Freedom, and Control

‘Information technologies form an indispensable part of our lives, social interactions, and means of assessing and processing the world around us. As we move through this increasingly information-focused world, however, we leave traces that are themselves analysed and employed to influence our choices and opportunities. This talk will consider freedom and growth in an inexorably transparent and algorithmic world, and question how we can reconcile this probable future with the essential inconsistency and chaos of humanity.’

# 6.00 – 6.20 pm  Coffee break

# 6.20 – 6.50 pm  Miles Brundage (FHI)

Developments in AI

‘Developments in artificial intelligence pose many risks as well as opportunities over the long term. What can be done now or in the future to ensure maximum benefits and minimum risks from the transition to an era of advanced AI? I will distinguish between short- and long-term AI policy considerations, and summarize some current lines of research at the Future of Humanity Institute aimed at laying the groundwork for long-term AI policy.’

# 6.50 – 7.20 pm  Owain Evans (FHI)

Discussion of the general risks and benefits of AI technologies

# 7.20 – 7.40 pm  General discussion with questions from the audience