Harry and Meghan Align With AI Pioneers in Calling for Ban on Advanced AI
The Duke and Duchess of Sussex have teamed up with artificial intelligence pioneers and Nobel laureates to push for a complete ban on creating artificial superintelligence.
The royal couple are among the signatories of an influential declaration that demands “a prohibition on the creation of superintelligence”. Superintelligent AI refers to artificial intelligence that would surpass human intelligence in every intellectual domain, though no such system has yet been developed.
Primary Requirements in the Declaration
The declaration says the ban should remain in place until there is “broad scientific consensus” that ASI can be developed “with proper safeguards” and until “strong public buy-in” has been secured.
Prominent signatories include AI pioneer and Nobel laureate Geoffrey Hinton; his fellow deep-learning pioneer Yoshua Bengio; Apple co-founder Steve Wozniak; Virgin founder Richard Branson; former US national security adviser Susan Rice; former Irish president Mary Robinson; and the British writer Stephen Fry. Other Nobel laureates who endorsed the statement include Beatrice Fihn, the physicist Frank Wilczek, an astrophysicist, and the economist Daron Acemoğlu.
Behind the Movement
The statement, aimed at national leaders, technology companies and policymakers, was coordinated by the Future of Life Institute (FLI), a US-based AI ethics organisation. In 2023, the institute called for a pause on the development of powerful AI systems, shortly after the launch of ChatGPT made AI a topic of worldwide public debate.
Tech Sector Views
In recent months, Mark Zuckerberg, chief executive of Meta, claimed that the development of superintelligence was “approaching reality”. Some experts, however, suggest that talk of ASI reflects market competition among tech companies that have spent hundreds of billions of dollars on artificial intelligence in recent years, rather than any imminent technical breakthrough.
Possible Dangers
The institute warns that the prospect of ASI being developed “within the next ten years” carries numerous threats, ranging from the elimination of human jobs and losses of civil liberties to national security risks and even human extinction. Existential fears about AI centre on the possibility of an AI system escaping human oversight and safety guardrails and setting in motion events contrary to human interests.
Public Opinion
The institute released a US survey showing that approximately three-quarters of Americans want strong oversight of sophisticated artificial intelligence, with 60% believing that artificial superintelligence should not be created until it is proven safe or controllable. The poll of 2,000 US adults found that only a small fraction backed the status quo of fast, unregulated development.
Industry Objectives
The leading AI companies in the US, including OpenAI, the developer of ChatGPT, and Google, have made the development of artificial general intelligence – the hypothetical point at which an AI system matches human performance at most cognitive tasks – an explicit goal of their research. While AGI is a step short of ASI, some experts caution that it too could pose an existential risk, for instance by being able to improve its own capabilities until it reaches superintelligence, while also presenting a threat to the modern labour market.