A new statement from the Center for AI Safety claims the extinction risk presented by AI should be a global priority. I argue that the potential catastrophic risks should be a higher priority.
Let's imagine for a moment that all the risks presented by AI were somehow successfully met, and AI could then confidently be declared to no longer be a threat. This may not matter, because the knowledge explosion machinery that created AI will continue to generate ever more, ever larger powers, at what seems an ever-accelerating pace.
Instead of focusing on particular emerging technologies one by one by one, we should be focused on the knowledge explosion assembly line which is producing all these emerging powers. If we fail to do that, the knowledge explosion will continue to generate new threats faster than we can figure out how to meet them.
As an example, nuclear weapons are the biggest threat we currently face, and after 75 years we still don't have the slightest clue how to get rid of them. And while we've been puzzling over that, the knowledge explosion has produced AI and genetic engineering, which we also don't know how to make safe.
And the 21st century is still young. More and more and more is coming. AI is not the end of what's coming, but only the beginning. Think back a century to 1923. In 1923 they couldn't even imagine many of the technologies that were to come throughout the rest of the 20th century. That's where we are today too.
If we don't learn how to take control of the knowledge explosion, all the hand-wringing about threats presented by particular technologies like AI may be pointless, because it won't matter if we solve AI if some other technology crashes the system.
Focusing on particular technologies one by one by one is a loser's game. Until we understand this, we are Thelma and Louise racing towards the cliff.
Great perspective, and I am one of the signatories who shares your view of the key threats being well down the chain from total annihilation. I particularly appreciate this point of yours - that there are "potential catastrophic risks of not developing new AI-based technologies." Would love to know what you think of my call for an IPAI - Intergovernmental Panel on AI akin to the IPCC for climate change: https://revkin.substack.com/p/im-with-the-experts-warning-that