Cambridge Conference on Catastrophic Risk


Pessimism, Huw Price says, is a necessary antidote to society’s optimism. Price is the Bertrand Russell Professor of Philosophy at the University of Cambridge and one of the founders of the Cambridge Centre for the Study of Existential Risk (CSER). He was introducing the first ever Cambridge Conference on Catastrophic Risk. We had come to Cambridge to take the pessimistic view seriously.

Scientists and philosophers have recently started to take seriously the idea that, for all its remarkable success, the human race might be at risk of extinction. The systematic study of the risk of catastrophes that could threaten the survival of the human race—what Oxford philosopher Nick Bostrom called “existential risks” (1)—has grown rapidly in recent years. Over the last 12 years a number of academic institutes devoted to the study of catastrophic risk have been founded, including the Oxford Future of Humanity Institute (FHI) in 2005, the Global Catastrophic Risk Institute (GCRI)—with which I’m affiliated—in 2011, CSER in 2012, and the Future of Life Institute (FLI) in 2014.

The greatest threats to our survival—as I have written—come from human activity. As our collective power to shape the world grows, so does our ability to harm ourselves. Natural catastrophes like an asteroid strike or a supervolcano eruption could certainly threaten the human race. But it has become clear since the development of nuclear weapons and the discovery that we are altering the global climate that we are much more likely to fall victim to a catastrophe of our own making.

Nuclear conflict may still be the most pressing danger we face. The Cold War may be over, but the US and Russia still deploy thousands of strategic nuclear weapons. An exchange of just 100 of these weapons—perhaps triggered by an accident or a false alarm—might be enough to cause a nuclear winter and kill billions of people. (2) Even a war between India and Pakistan could be a global catastrophe.

The Cambridge conference organizers chose to focus primarily on emerging, less well-studied risks. In particular, the conference considered the risks associated with

  • the degradation of Earth systems
  • biological engineering
  • artificial intelligence

Although it sounds like science fiction, many top researchers take seriously the idea that artificial intelligence—for all its incredible promise—could have dangerous unintended consequences. These concerns led attendees of the recent Beneficial AI conference to produce a set of Asilomar Principles to guide the development of artificial intelligence. As Viktoriya Krakovna, an AI safety researcher at DeepMind and one of the founders of FLI, explained in her talk at the Cambridge conference, the main concern is not that machine intelligence will turn out to be malevolent, but that a flaw in AI design could inadvertently cause it to act counter to our interests.

We don’t know that the worst will happen. Humanity may well avoid a catastrophe. But this kind of pessimism may nevertheless be necessary. We will have to take the danger of catastrophe seriously in order to avoid one.

(1) Nick Bostrom, “Existential Risks,” Journal of Evolution and Technology, Vol. 9 (March 2002)

(2) Owen B. Toon, Alan Robock, and Richard P. Turco, “Environmental Consequences of Nuclear War,” Physics Today (December 2008)

Videos of the Cambridge Conference on Catastrophic Risk 2016 keynote talks are available here.

Clare College, Cambridge image courtesy of Robert de Neufville.
