Rishi Sunak wants the UK to lead on AI safety. Photo: IAN VOGLER/POOL/AFP via Getty Images
It took just 22 words for the world to sit up and take notice. In May, the Center for AI Safety, an American non-profit organization, issued a one-sentence statement warning that the risk of extinction from AI should be treated as a global priority alongside pandemics and nuclear war.
Among those who signed the statement were Geoffrey Hinton, known as the godfather of artificial intelligence; Yoshua Bengio, whose work with Hinton won computing's coveted Turing Award; and Demis Hassabis, head of DeepMind, the British artificial intelligence laboratory owned by Google.
This announcement helped change the public perception of AI, transforming it from a handy office assistant into a potential threat typically seen only in dystopian science fiction.
The Center itself describes its mission as reducing “the societal-scale risks associated with AI”. It is now one of a handful of California-based organizations advising Rishi Sunak's government on how to deal with developments in the technology.
Observers have noticed an increasingly apocalyptic tone in Westminster in recent months. In March, the government unveiled a white paper promising not to “stifle innovation” in the field. Yet just two months later, Sunak was talking about putting up “guardrails” and pressing Joe Biden to back his plans for global AI rules.
Sunak's legacy moment
The AI safety summit, expected to take place at Bletchley Park in November, will focus almost entirely on existential risks and ways to mitigate them.
Despite his many political challenges, Sunak is said to be deeply engaged in the artificial intelligence debate. “He has focused on this as his legacy moment. It's his climate change,” says one former government adviser.
Rishi Sunak will host the AI safety summit at Bletchley Park in November. Photo: Simon Walker/No 10 Downing Street
Last year, Downing Street assembled a close-knit team of researchers to work on the risks of AI. Ian Hogarth, a tech investor and founder of the gig-finding app Songkick, was appointed head of the Foundation Model Taskforce after writing a viral article in the Financial Times warning of a “race to god-like artificial intelligence”.
This month, the body was renamed the Frontier AI Taskforce, a reference to the cutting-edge technology where experts see the greatest risk. Potential misuses include creating biological weapons or mounting massive disinformation campaigns.
Human-level artificial intelligence systems are “just a few years away”
Hogarth assembled an advisory board that includes Bengio, who has warned that human-level AI systems may be just a few years away and pose catastrophic risks, and Anne Keast-Butler, director of GCHQ. A small team is now probing high-profile artificial intelligence systems such as ChatGPT for weaknesses.
Hogarth recently told a House of Lords committee that the working group was looking at “fundamental issues of national security.”
“AI that can write software…can also be used to carry out cybercrimes or cyberattacks. AI capable of manipulating biology can be used to reduce the barriers to carrying out a particular biological attack,” he said.
Preparations for the AI summit are being led by Matt Clifford, the entrepreneur who chairs Aria, the government's blue-sky research agency, and Jonathan Black, a senior diplomat. The pair, dubbed Number 10's AI “sherpas”, were in Beijing last week to drum up support for the summit.
Meanwhile, the research organizations now working with the taskforce have raised eyebrows because of their ties to the effective altruism (EA) movement, a philosophy centered on using resources to do the greatest possible good.
The movement has become controversial because of its focus on long-term but ill-defined risks such as artificial intelligence (it holds that future lives are as valuable as present ones), and because of its close ties to FTX, the failed cryptocurrency exchange founded by alleged fraudster Sam Bankman-Fried.
Of the six research organizations working with the UK taskforce, three — the Collective Intelligence Project, the Alignment Research Center and Redwood Research — received grants from FTX, which gave millions to non-profits before going bankrupt. (The Collective Intelligence Project said it was unsure whether it could spend the money; the Alignment Research Center returned it; and Redwood never received it.)
One AI researcher defends the associations, saying that until this year effective altruists were virtually the only people thinking about the topic. “People now understand that this is a real risk, but there are people in EA who have been thinking about it for the last 10 years.”
There is no guarantee that increased regulation will produce results
People close to the taskforce are said to have brushed off a recent article in Politico, the Westminster-focused political website, which set out its strong links to EA. The piece dwelt on the controversial aspects of the movement, but, as one source close to the process puts it: “The joke is that they are neither effective nor altruistic.”
However, startups are raising concerns that the emphasis on existential risk could stifle innovation and hand control of AI to big tech companies. One lobbyist says that, ironically, this fixation on risk could concentrate power in the hands of large artificial intelligence labs such as OpenAI, the company behind ChatGPT, along with DeepMind and Anthropic (the heads of the three labs held a closed-door meeting with Sunak in May).
Rishi Sunak meets Demis Hassabis, CEO of DeepMind, Dario Amodei, CEO of Anthropic, and Sam Altman, CEO of OpenAI, at 10 Downing Street in May. Photo: Simon Walker/No 10 Downing Street
Hogarth has insisted these companies will not be allowed to mark their own homework, but if the government's safety work ends in something like a licensing regime for artificial intelligence models, they are likely to benefit. “We are witnessing regulatory capture happening in real time,” says the lobbyist.
Baroness Stowell, chair of the Lords Communications and Digital Committee, has written to the government demanding details of how Hogarth is managing potential conflicts of interest around his more than 50 artificial intelligence investments, including Anthropic and defense company Helsing.
There is no guarantee that current efforts to tighten regulation will bear fruit; past efforts have fallen by the wayside. Last week it emerged that the government had disbanded the advisory board of the Centre for Data Ethics and Innovation, set up five years ago to address issues such as AI bias.
However, those close to the current process believe Downing Street's focus has become sharper. And for researchers working to prevent the apocalypse, existential risks trump other considerations.
“This is a big opportunity for global Britain, and the UK can really lead this effort,” says Shabbir Merali, who worked on artificial intelligence strategy at the Foreign Office and is now at the Onward think tank. “It would be strange not to focus on existential risk, which is where you want nation-state capabilities to be.”