In the US, there is a demand to set up a Manhattan-style project to push the development of superhuman AI, also called artificial general intelligence (AGI).
To counter this demand, three prominent AI figures, Eric Schmidt (former Google CEO), Dan Hendrycks (Director, Center for AI Safety) and Alexandr Wang (CEO of Scale AI), have written a paper titled Superintelligence Strategy. They caution that an exclusive US push for superintelligent AI could provoke hostile reactions from rivals such as China, including retaliation in the form of cyberattacks.
The suggestion of a Manhattan-style project assumes setting up an AGI effort on the lines of the atomic bomb programme of the 1940s.
The joint paper challenges this suggestion, arguing that such a project could provoke a pre-emptive strike from an adversary. Though comparing nuclear weapons and AI may seem extreme, world leaders do consider AI a decisive military advantage.
Schmidt et al suggest that adversaries will not wait until AGI is weaponised. They can disable threatening AI projects through sabotage, a dynamic of deterrence the authors call Mutual Assured AI Malfunction (MAIM).
Instead, the US should develop methods that deter other countries from creating superintelligent AI.
The paper contrasts two camps: "doomers", who treat AI as catastrophic and want to slow its development, and "ostriches", who want to accelerate it regardless. The authors propose a third way, prioritising deterrence and defensive strategies, since it is at times wiser to take a defensive approach.