Harry and Meghan Align With AI Pioneers in Calling for Ban on Superintelligent Systems
Prince Harry and Meghan Markle have joined forces with AI experts and Nobel laureates to advocate for a total prohibition on creating artificial superintelligence.
Harry and Meghan are among the signatories of an influential declaration calling for “a prohibition on the development of superintelligence”. Superintelligent AI refers to artificial intelligence that could exceed human performance in every intellectual domain; no such system has yet been developed.
Key Demands in the Statement
The statement insists that the prohibition should remain in place until there is “widespread expert agreement” that ASI can be developed “safely and controllably” and until “substantial public support” has been secured.
Prominent signatories include a Nobel Prize-winning AI researcher and a fellow pioneer of modern artificial intelligence; a veteran Silicon Valley tech entrepreneur; UK entrepreneur Richard Branson; former US national security adviser Susan Rice; former Irish president Mary Robinson; and a British author and public intellectual. Additional Nobel laureates who signed include Beatrice Fihn, the physicist Frank Wilczek, an astrophysicist, and the economist Daron Acemoğlu.
Organizational Background
The statement, aimed at national leaders, tech firms and policymakers, was coordinated by the Future of Life Institute (FLI), an American AI ethics organization that had earlier called for a pause on advanced AI development shortly after the launch of conversational AI tools made artificial intelligence a global political talking point.
Tech Sector Views
In July, the chief executive of Meta, one of the major AI developers in the US, said the development of superintelligence was “approaching reality”. Some experts, however, argue that talk of ASI reflects competitive positioning among tech companies that have spent hundreds of billions of dollars on artificial intelligence, rather than the industry being close to any such technical breakthrough.
Potential Risks
Nonetheless, FLI warns that the prospect of artificial superintelligence arriving “within the next ten years” carries numerous risks, from the elimination of human jobs and the loss of civil liberties to national security threats and even human extinction. The deepest concerns center on a system's potential ability to evade human control and safety measures and to act against human welfare.
Citizen Sentiment
The institute released a US national poll showing that about 75% of Americans want robust regulation of advanced artificial intelligence, with six in 10 believing superhuman AI should not be developed until it is demonstrated to be safe or controllable. Only a small fraction of respondents backed the status quo of rapid, uncontrolled development.
Industry Objectives
The leading AI companies in the United States, including the developer of ChatGPT and Google, have made artificial general intelligence – the theoretical point at which AI matches human performance across many intellectual tasks – an explicit goal of their research. While AGI is a step below ASI, some specialists caution that it too could pose an extinction threat, for instance by improving itself toward superintelligent levels, while also presenting a fundamental risk to the modern labour market.