There has been a call to pause AI experiments. However, many of those who signed the letter, such as Elon Musk, knew there was no realistic chance that a pause would be implemented. They merely wanted to be on the record as supporting one.
Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies remains the most comprehensive analysis of superintelligence scenarios, dangers and strategies. That analysis was written several years ago, and the AI risk situation has since simplified: we no longer need to consider Whole Brain Emulation or similar alternative scenarios.
The main risk scenario is that improving generative AI, combined with other types of artificial intelligence, could reach superhuman intelligence. It is clear that generative AI (ChatGPT, Bard, PaLM 2, GPT-4 and later large language models) is the source of broadly capable human and near-human level artificial intelligence. These systems can also achieve superhuman performance in various domains, and they are rapidly improving.
David Orban and Roman V. Yampolskiy go over some of the current facts:
* Historically, we have been bad at cybersecurity. All of our code libraries have undiscovered bugs and errors.
* We run on layers of programs that are flawed.
Yampolskiy makes the case, with arguments and proofs, that an AGI will be uncontrollable and unpredictable.
However, any superintelligence will have limits to its capabilities at any point in time and in various domains. If those limits are very high and vastly beyond humanity, then an uncontrollable and unpredictable superAGI would leave humanity at its mercy. We would have to hope that the superAGI chooses to be good.
If intelligent systems are limited and controllable by humanity, then we have to maximize the benefits of systems that are powerful but not existential risks.
I would argue that we need to work harder on the intermediate cases. We do not know whether we can make “superAGI,” and if we can, how powerful it will be. We need multiple paths. AI researchers still need to work on improving the control of AI and AGI. We should also work under the assumption that we can make things better by improving the robustness of human civilization.
Earthquakes and other disasters of the same scale (i.e., an 8.0 earthquake) kill more people in poorer and less developed countries. Those places have more poorly engineered buildings and lack good emergency response.
We need to engineer a civilization with more passive toughness and survivability.
We need to work on expanding the upper bound of systems that are controllable by humanity.
We need to make civilization less fragile and tougher.
I think we should proliferate useful narrow AI and software systems that improve computer security and human security as rapidly as possible.
AI has been far better than humans at chess for decades. Suppose that, instead of chess, there were an AI that acted as a general strategizer for something more important and useful, such as competing in business or some other highly valuable competition. We would want people to be enhanced with what are believed or known to be “safe” precursor AI. We would use narrow superAI tools to enhance the capabilities and security of each person. It would be like distributing the equivalent of rifles and body armor to the citizenry.
We would also try to harden key infrastructure and to distribute certain critical services.
On this civilization-robustness path, we need to work under the assumption that our best efforts will have time to improve the situation. We should triage the problems and weaknesses and select the solutions that can be created and deployed fastest.
Improving civilization robustness will be good even if near-term AI is relatively weak. “Relatively weak” means that AI development falls short of AGI, or that the AGI we get is not significantly better than humans.
