Leading experts on Artificial Intelligence (AI) today warned of the lack of regulation and oversight of the technology and called on world leaders to intervene, saying that inaction poses a ‘catastrophic risk’ to humanity.
‘Large-scale cybercrime, social manipulation and other harms could escalate rapidly,’ and in ‘open conflict, AI systems could autonomously deploy a variety of weapons, including biological weapons,’ write 25 leading AI researchers in a paper published today in the journal Science, acknowledging a ‘very real possibility that the uncontrolled advance of AI could culminate in large-scale loss of life and the biosphere, and the marginalisation or extinction of humanity’.
The authors stress that ‘it is imperative that world leaders take seriously the possibility that highly powerful generalist AI systems, surpassing human capabilities in many critical domains, will be developed within this decade or the next’, and that ‘attempts to introduce initial guidelines’ have so far been insufficient.
The lack of research into system safety is one of the experts’ main concerns: they estimate that it accounts for less than 3 per cent of scientific publications on AI, compounded by the absence of ‘mechanisms to prevent misuse and recklessness, particularly with regard to the use of autonomous systems capable of acting independently’, say the authors, who include Nobel laureates, Turing Award winners and other leading researchers.
In the document, entitled ‘Managing extreme AI risks amid rapid progress’, the signatories recommend that governments ‘create specialised and fast-acting institutions for oversight’, with robust funding, ‘require much stricter risk assessments with mandatory consequences’, and that companies ‘prioritise safety and demonstrate that their systems cannot cause harm’.
For the most powerful AI systems, the authors argue that ‘governments must be prepared to take the lead in regulation’, including licensing, ‘restricting their autonomy in key social functions, halting their development and deployment in response to worrying capabilities’, among other measures.
For the document’s signatories, the risks of AI are ‘catastrophic’, because the technology ‘is already making rapid progress in critical areas such as hacking, social manipulation and strategic planning, and could soon pose unprecedented control challenges’.
According to Stuart Russell, of the University of California, Berkeley, this consensus document ‘calls for strict regulation by governments rather than voluntary codes of conduct drafted by industry’, because advanced AI systems ‘are not toys’.
‘Increasing their capabilities before we know how to make them safe is absolutely reckless. Companies will complain that it’s too difficult to meet regulations, that “regulation stifles innovation”,’ he said, adding that ‘there are more regulations for sandwich shops than for AI companies’.
For Philip Torr, of the University of Oxford, if care is taken, ‘the benefits of AI will outweigh the disadvantages’, but without that care there is a ‘risk of an Orwellian future with a form of totalitarian state that has total control’ of humanity.
Another of the authors, historian Yuval Noah Harari, points out that with this technology, ‘humanity is creating something more powerful than itself, which can escape human control’.
Lusa