When two existential risks are better than one

James Daniel Miller, Smith College

Abstract

Purpose: The great filter and an unfriendly artificial general intelligence might both pose existential risks to humanity, but these two risks are anti-correlated. The purpose of this paper is to consider the implications of having evidence that mankind is at significant peril from both of these risks.
Design/methodology/approach: This paper constructs Bayesian models under which we might obtain evidence of being at risk from two perils even though we know that at most one of them can strike us.
Findings: Humanity should possibly be more optimistic about its long-term survival if we have convincing evidence that both of these risks are real than if we have equally convincing evidence that only one of the perils is likely to strike us.
Originality/value: This paper derives the implications of being greatly concerned about both an unfriendly artificial general intelligence and the great filter.
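To illustrate the kind of Bayesian reasoning at issue, consider a minimal toy sketch; the states, the reliability variable, and all numbers below are assumptions introduced here for exposition and are not the paper's actual models. Suppose the world is in exactly one of three mutually exclusive states: F (the great filter lies ahead and destroys us), A (an unfriendly artificial general intelligence destroys us), or N (we survive), with priors P(F) = P(A) = 0.25 and P(N) = 0.5. Let G denote the hypothesis that our risk-assessment methods are reliable, with g = P(G) = 0.5. If G holds, the warning signal E_F fires exactly when the state is F and E_A fires exactly when the state is A; if G fails, each signal fires independently with probability s = 0.5 regardless of the state. Observing both signals is impossible under G, so it certifies that our methods are unreliable and leaves the survival probability at its prior, whereas observing exactly one signal leaves open the possibility that the corresponding peril is real:

\[
P(N \mid E_F, E_A) \;=\; P(N) \;=\; 0.5,
\]
\[
P(N \mid E_F, \neg E_A)
\;=\; \frac{(1-g)\,s(1-s)\,P(N)}{g\,P(F) + (1-g)\,s(1-s)}
\;=\; \frac{0.5 \cdot 0.25 \cdot 0.5}{0.5 \cdot 0.25 + 0.5 \cdot 0.25}
\;=\; 0.25.
\]

Under these assumed numbers, convincing evidence for both perils leaves a 0.5 chance of survival, while equally convincing evidence for just one peril leaves only 0.25, showing how evidence for two mutually exclusive risks can be less alarming than evidence for one.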