Pope Francis in a papal down jacket fooled the Internet. Photo: Pablo Xavier
For a moment, the Internet was fooled. The image of Pope Francis in a shiny white papal down jacket spread like wildfire across the web.
However, the image of the unusually dapper 86-year-old head of the Vatican turned out to be a fake: the picture was created as a joke using artificial intelligence, yet was realistic enough to fool the untrained eye.
AI fakes are spreading rapidly on social media as machine-learning tools grow in popularity. Beyond fabricated images, chatbot-based artificial intelligence tools such as OpenAI's ChatGPT, Google's Bard, and Microsoft's Bing have been accused of opening a new avenue for disinformation and fake news to spread.
Trained on billions of pages of articles and millions of books, these bots can give convincing, human-sounding answers, but often make up facts, a phenomenon known in the industry as "hallucination". Some AI models have even learned to write code, opening up the possibility of their use in cyberattacks.
In addition to fears about fake news and "deepfake" images, a growing number of futurists are concerned that AI could become an existential threat to humanity.
Researchers at Microsoft last month went so far as to say that one model, GPT-4, shows "sparks of…human-level intelligence." Sam Altman, chief executive of OpenAI, the American start-up behind ChatGPT, admitted in a recent interview: "We are a little bit scared of this."
Now a backlash against so-called "generative AI" is brewing, as Silicon Valley heavyweights face off over the risks, and potentially boundless benefits, of this new technological wave.
Two weeks ago, more than 3,000 researchers, scientists and entrepreneurs, including Elon Musk, signed an open letter demanding a six-month "pause" in the development of the most advanced chatbot tools, so-called "large language models" or LLMs.
Elon Musk has joined 3,000 others in warning about the development of advanced AI tools. Photo: Taylor Hill/Getty Images

"We call on all AI labs to immediately pause training of AI systems more powerful than GPT-4 for at least six months," the authors wrote. "If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."
Musk and others fear the destructive potential of AI: that an all-powerful "general intelligence" could pose a profound danger to humanity.
But their demand for a six-month halt to the development of more advanced AI models was met with skepticism. Yann LeCun, a leading artificial intelligence expert at Meta, Facebook's parent company, likened the call to the Catholic Church's attempts to ban the printing press.
"Imagine what could happen if commoners had access to books," he said on Twitter. "They could read the Bible for themselves and society would be destroyed."
Others pointed out that some of the letter's signatories have agendas of their own. Musk, who has openly clashed with OpenAI, wants to develop his own competing project. At least two of the signatories were researchers at DeepMind, which is owned by Google and is working on its own AI bots. AI skeptics, meanwhile, warn that the letter feeds the hype around the latest technology with lines such as: "Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us?"
Many of the AI tools currently being developed are essentially "black boxes", with little public information about how they actually work. Despite this, they are already being deployed by hundreds of businesses. OpenAI's ChatGPT, for example, is used by the payments company Stripe, Morgan Stanley, and the "buy now, pay later" provider Klarna.
Richard Robinson, founder of the legal start-up RobinAI, which is working with the $4 billion AI company Anthropic, another OpenAI rival, says even the creators of large language models don't fully understand how they work. However, he adds: "I also think there is a real risk that regulators will overreact to these developments."
Government watchdogs are already gearing up for a battle with AI companies over privacy and data concerns. In Europe, Italy has threatened to ban ChatGPT over claims that it harvests information from the Internet with little regard for consumers.
The Italian privacy watchdog said the bot lacks an "age verification system" to block anyone under 18 from accessing it. Under European data rules, the regulator can impose fines of up to €20m (£18m) or 4% of OpenAI's global turnover unless the company changes its data practices.
In response, OpenAI said it had stopped offering ChatGPT in Italy. "We believe we offer ChatGPT in compliance with GDPR and other privacy laws," the company said.
France, Ireland, and Germany are exploring similar regulatory measures. In the UK, the Information Commissioner's Office said: "There really can be no excuse for getting the privacy implications of generative AI wrong."
However, despite privacy watchdogs raising red flags, the UK has not gone so far as to threaten a ban on ChatGPT. In a white paper on AI regulation published earlier this month, the government decided against creating a formal AI regulator. Edward Machin, an associate at the law firm Ropes & Gray, says: "The UK is going its own way; it's taking a much lighter-touch approach."
Several AI experts told The Telegraph that the real worries about ChatGPT, Bard and others have less to do with the long-term effects of some all-powerful killer AI than with the damage they could cause here and now.
Juan José López Murphy, head of artificial intelligence and data science at the tech company Globant, says the near-term challenge will be helping people spot deepfakes and misinformation generated by chatbots. "This technology already exists… it's about how we misuse it," he says.
"Training ChatGPT on the whole of the internet is potentially dangerous because of the internet's biases," says computer scientist Dame Wendy Hall. She suggests calls for a development moratorium are likely to prove ineffective while China races ahead with its own tools.
OpenAI, for its part, appears braced for the backlash. On Friday, the company said in a blog post: "We believe that powerful AI systems should be subject to rigorous safety evaluations. Regulation is needed to ensure that such practices are adopted."
Mark Warner, of a UK AI company that works with OpenAI in Europe, says regulators will still need to plan for the possibility that super-powerful AI is on the horizon.
"It looks like artificial general intelligence may arrive sooner than many expect," he says, urging labs to stop racing one another and cooperate on safety.
"We need to be aware of what might happen in the future, and we need to think about regulation now so that we don't create a monster," says Dame Wendy. "But there's no need to be so scared at the moment… I think that future is still a long way off."
Fake images of the Pope may seem a long way from world domination, but if experts are to be believed, the gap is starting to close.