July 24, 2024
In 1945, before the test of the first nuclear bomb in the New Mexico desert, Enrico Fermi, one of the physicists who had helped build it, offered his fellow scientists a wager. Would the heat of the blast ignite a nuclear conflagration in the atmosphere? If so, would the firestorm destroy only New Mexico? Or would the entire world be consumed? (The test was not quite as reckless as Fermi’s mischievous bet suggests: Hans Bethe, another physicist, had calculated that such an inferno was almost certainly impossible.)

These days, worries about “existential risks”—those that pose a threat to humanity as a species, rather than to individuals—are not confined to military scientists. Nuclear war; nuclear winter; plagues (whether natural, like covid-19, or engineered); asteroid strikes and more could all wipe out most or all of the human race. The newest doomsday threat is artificial intelligence (AI). In May a group of luminaries in the field signed a one-sentence open letter stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”