“Father of the atomic bomb” Robert Oppenheimer famously told Harry Truman that he had “blood on his hands” after the president dropped two atomic bombs on Japan to end World War II.
Today's computer scientists may come to feel similar regret for their creations, if current warnings about artificial intelligence (AI) are to be believed.
The closest thing to an Oppenheimer moment for AI came this week when Geoffrey Hinton, the British computer scientist known as the “Godfather of AI”, quit Google, warning of the dangers of what he helped create.
Hinton, who has been studying neural networks since the 1970s, said he hasn't been able to sleep for months and now regrets his life's work.
Geoffrey Hinton warns that the “frightening” advancement of artificial intelligence could mean internet users no longer “know what's true”. Photo: Julian Simmonds
Hinton warned that “these things can become smarter than people.”
“I thought it was a long way off. I thought it would be 30 to 50 years or even longer,” he told the New York Times. “Obviously, I no longer think that.”
He has said humanity is under threat “when smart things can outsmart us.”
The fear is that highly intelligent computer programs, once given a task, will pursue it relentlessly without concern for the wider impact.
Thought experiments involving an all-powerful AI destroying the human race are numerous.
Two decades ago, Nick Bostrom, a philosopher at Oxford University, imagined a machine designed to make paper clips that was so efficient that it turned all matter in the universe, including all of humanity, into them.
Stuart Russell, a British computer scientist at the University of California, Berkeley, imagined a robot tasked with cleaning up the oceans that inadvertently sucked all the oxygen out of the air.
“If you are building powerful general-purpose AI that has autonomy or the ability to improve itself, we should be concerned about that,” says Mustafa Suleyman, co-founder of the British AI pioneer DeepMind, who now leads the startup Inflection AI.
Just a few months ago, AI seemed like one more over-hyped technology, a buzzword used by startups to raise money.
However, the incredibly rapid adoption of the technology promises — or threatens — to completely change every aspect of our lives.
It's all down to the popularity of ChatGPT, which went from an internal software project at the San Francisco startup OpenAI to 100 million users within two months of its release last November, making it the fastest-growing technology in history.
This is more than just a novelty. Shares in online tutoring company Chegg plunged 50% on Tuesday after it was revealed students were turning to ChatGPT rather than its tutorials.
The chatbot's ability to confidently answer questions, write lyrics and converse in a disarmingly human way has led experts to radically revise their expectations for the advent of “artificial general intelligence” (AGI), the point at which AI surpasses human capabilities.
Hinton is far from the first in the field to warn of the looming dangers associated with the advancement of AI.
In March, tech leaders including Elon Musk signed an open letter calling for a six-month moratorium on the technology. It warned that companies were engaged in “an uncontrolled race to develop and deploy increasingly powerful digital minds that no one — not even their creators — can understand, predict or reliably control.”
Governments have begun to take note. Joe Biden urged AI companies to secure models before they are made public, and Vice President Kamala Harris summoned AI executives from companies including Google, Microsoft and OpenAI to the White House this week to discuss the administration's concerns.
Companies have agreed to a public evaluation of their systems, and the Biden administration has said it will issue guidance on the ethical use of AI by the government.
In the UK, the Competition and Markets Authority said it would begin a review of AI amid fears that development is dominated by a handful of large firms.
Unlike previous transformative technologies such as nuclear weapons and space travel, AI is being created by the private sector, not the state.
However, many believe that the fears are exaggerated.
“I don't lose sleep over it,” says Michael Wooldridge, director of foundational AI research at the Alan Turing Institute. “No one has yet told me a really believable story. Existential worries are, in a way, quite glamorous. But I'm very concerned that they distract us from what could hurt people in the very near future.”
Wooldridge said he declined to sign the recent open letter, arguing it should have focused on more immediate risks such as the “industrialisation of disinformation” rather than a Terminator-style machine uprising.
“What we're going to see is social media flooded with so many very plausible-sounding stories that it's going to be really hard to tell fact from fiction,” he says.
Last month, US Republicans released an AI-generated video attacking Joe Biden that showed chaos on the US border and crime-infested cities. Amnesty International recently published fake photographs of human rights violations in Colombia. In both cases, the images carried a disclaimer that they were AI-generated, but most machine creations carry no such label, and there is no guarantee viewers will read and understand the fine print when it is included.
Amnesty International has been criticised for using fake images of what it said were human rights violations in Colombia. Photo: Twitter
Image-generation tools mean that deepfakes, from politicians to pornography, can now be created instantly with a few typed commands.
Another immediate risk is an explosion of cybercrime as once-laborious tasks can be carried out en masse.
The most sophisticated hacks involve carefully grooming the victim, often over days or weeks, into handing over passwords and sensitive data. It is a time-consuming and often fruitless process. Bots that can talk to us, imitating the voices and verbal tics of relatives and partners, could perform this exhausting legwork with relative ease.
Telephone conversations may soon need to begin with secret code words to confirm that the voice on the other end of the line belongs to a human being.
In addition to its nefarious applications, AI is likely to have a huge impact on the economy and the world of work.
Sir Patrick Vallance, the government's former chief scientific adviser, told MPs this week that the impact on jobs could be as strong as the industrial revolution.
Governments around the world are waking up to what it could mean. Whether they can move fast enough to do something about it is another question.
“The world has changed in the last six months,” says Shabbir Merali, a former adviser to Liz Truss, who this week wrote a report on AI for the centre-right think tank Onward.
The report calls for drastic steps to address the spread of AI, such as taxing machines rather than workers to cope with a potential surge in unemployment. Merali and his colleague also call for a new AI regulator and a sovereign British chatbot to limit foreign influence.
The government has said it plans to lay out its AI policy within a year. Too late, Merali says.
“Things can change in 12 months,” he says. “These things aren't going anywhere, and the pace will only continue.”