At an event following the UK's AI Safety Summit, entrepreneur Elon Musk spoke with UK prime minister Rishi Sunak about future AIs almost certainly being "a force for good" and someday enabling a "future of abundance".
That utopian narrative about a future superhuman AI – one that Musk claims would eliminate the need for human work and even provide meaningful companionship – shaped much of the conversation between the pair. But their conversation's focus on an "age of abundance" glossed over the current negative impacts and controversies surrounding the tech industry's race to develop large AI models – and didn't get into specifics on how governments should regulate AI and handle real-world risks.
"I think we are seeing the most disruptive force in history here, where we will have for the first time something that is smarter than the smartest human," said Musk. "There will come a point where no job is needed – you can have a job if you want one for personal satisfaction, but the AI will be able to do everything."
Theoretical versus actual AI risks
Musk also restated his longstanding position of frequently warning about the existential risks that superhuman AI could pose to humanity in the future. In March 2023, he was among the signatories of an open letter that called for a six-month pause in training AI systems more powerful than OpenAI's GPT-4 large language model.
During his conversation with Sunak, he envisioned governments focusing their regulatory powers on powerful AIs that could pose a public safety risk, and once again raised the prospect of "digital superintelligence". Similarly, Sunak referred to government efforts to implement safety testing of the most powerful AI models being deployed by companies.
"My job in government is to say hang on, there is a potential risk here, not a definite risk but a potential risk of something that could be bad," said Sunak. "My job is to protect the country, and we can only do that if we develop that capability in our safety institute and then go in and make sure we can test the models before they are released."
That grand narrative about a superhuman AI – commonly known as artificial general intelligence, or AGI – that "will either deliver us to paradise or will destroy us" can often overshadow the actual negative impacts of current AI technologies, says Emile Torres at Case Western Reserve University in Ohio.
"All of this hype around existential threats associated with superintelligence ultimately just distracts from the many real-world harms that [AI] companies are already inflicting," says Torres.
Torres described such harms as including the environmental impacts of building energy-hungry data centres to support AI training and deployment, tech companies' exploitation of workers in the Global South to perform gruelling and sometimes traumatising data-labelling tasks that support AI development, and companies training their AI models on the original work of artists and writers such as book authors without having asked permission or paid compensation.
Elon Musk's record on AI development
Although Sunak described Musk as a "brilliant innovator and technologist" during their conversation, Musk's involvement in AI development efforts has been more that of a wealthy backer and businessperson.
Musk initially bankrolled OpenAI – the developer of AI models such as GPT-4 that power the popular AI chatbot ChatGPT – with $50 million when the organisation first launched as a nonprofit in 2015. But Musk stepped down from OpenAI's board of directors and stopped contributing funding in 2018 after his bid to lead the organisation was rejected by OpenAI co-founder Sam Altman.
Since his departure, Musk has criticised OpenAI's subsequent for-profit pivot and multibillion-dollar partnership with Microsoft, although he has not been shy about saying that OpenAI would not exist without him.
In July 2023, Musk announced that he was launching his own new AI company called xAI, with a dozen initial team members who had formerly worked at firms such as DeepMind, OpenAI, Google, Microsoft and Tesla. The xAI team appears to have Musk's approval to pursue ambitious and vague goals such as "to understand the true nature of the universe".