Will Artificial Superintelligence Kill Us?

Document Type

Article

Publication Date

5-23-2023

Publication Title

LessWrong

Abstract

Through advances in machine learning, we are progressing toward the development of artificial superintelligence (ASI). Because machine learning often yields opaque models, we cannot reliably predict how an ASI will behave in novel situations. Alarmingly, some theories suggest that an ASI might eliminate humans to secure resources for its own use.

Even if an ASI does not intentionally harm us, our existence could be endangered if our fundamental necessities, such as clean air, water, and a stable climate, interfere with the ASI's optimal functioning. Additionally, intense conflicts among ASIs could render the Earth uninhabitable for humans.

Market forces drive companies to pursue ASI development even if they believe ASI could cause human extinction, mainly because each company understands that halting its own ASI research would hand an edge to its competitors. Stopping the global pursuit of ever more powerful AI seems unlikely, given the growing financial and military advantages attached to it. In addition, older political leaders, who stand to benefit from the potential life-extending effects of ASI, could push for rapid ASI development despite the substantial risk of human extinction.

I believe our best chance of survival rests on the possibility that even unaligned ASIs might see a practical benefit in preserving humanity and sharing a small portion of the universe's resources with us. Predicting our survival odds is difficult, but I cautiously estimate them to be between ten and forty percent.

This paper explores the key issues related to ASI risk, assesses where expert opinion may diverge on each, and identifies areas for future research that could improve our understanding of this complicated scenario.

Comments

For the Open Philanthropy AI Worldviews Contest
