OpenAI chief executive Sam Altman has warned of the existential risk of AI to humanity. Photo: Dustin Chambers/Bloomberg
Those concerned that AI is an existential risk to humanity worry that new systems are being developed before we have a handle on existing ones.
In any case, the release of GPT-5 is expected to be a major AI event of 2024.
Computer software development usually means tweaking previous versions to make minor improvements.
Creating new artificial intelligence systems, known as large language models, often means starting from scratch: an unprecedented amount of data is fed into an unprecedentedly powerful system of next-generation microchips, producing a model several times more powerful than the previous one.
GPT-1, the original model created in 2018, was trained on 117 million variables known as parameters. GPT-3 required more than a thousand times as many, 175 billion, and GPT-4 reportedly ten times more again, 1.7 trillion.
Demands on computing resources have also increased. GPT-4 reportedly required 16,000 high-end Nvidia A100 chips, versus 1,024 for the previous generation. Little is known about the next wave of models, but they will likely be trained on Nvidia's new H100 chips, a much more powerful successor that will be the first to be specifically designed for training AI models.
“The history of computer science and artificial intelligence suggests that scaling up leads to significant improvements,” says Oren Etzioni, former chief executive of the Allen Institute for Artificial Intelligence.
“The transition from GPT-3 to GPT-4 was so dramatic that you'd be a fool not to try it again.”
Google, which unveiled its new Gemini model in December, is preparing to release a more powerful Gemini Ultra in the new year. Anthropic, an artificial intelligence lab backed by Amazon, may also launch a new system.
However, scientists are divided on what exactly “more powerful” would mean. Today's large language models are approaching the upper limit of performance on certain tasks. Google's Gemini already outperforms humans on widely used language comprehension tests and programming exams.
Google's Gemini already outperforms people in a widely used language comprehension test. Photo: JASON ANDREW/NYTNS/Redux/eyevine
That doesn't make it any less subject to the common criticisms of today's AI models: that they lack creativity and merely regurgitate what they were fed, and that they have a poor grasp of truth, making them prone to “hallucinating” facts.
Nathan Benaich, founder of investment firm Air Street Capital and co-author of the annual State of AI report, says the next generation of systems will be “multimodal”, capable of understanding text, images, video and audio. This, he says, will bring them closer to understanding the world.
Demis Hassabis, head of Google's DeepMind lab, has said this could extend to senses such as touch, which could lead to systems being built into robots that understand the world around them.
Matt Clifford, a tech entrepreneur who led the government's work on the November AI Safety Summit, says the next wave of models could demonstrate abilities such as reasoning and planning, qualities we might associate with human intelligence.
AI that can switch from one task to another would be a step toward autonomous “agents”: systems that can perform tasks on behalf of people, such as booking vacations or reading and responding to emails.
The consequences of this could be profound. Although today's artificial intelligence systems threaten jobs in fields such as copywriting and design, they generally still need a person to guide the writing or illustrating process. Systems that can turn their words into action (such as a customer service bot that can book flights) would be more dangerous.
However, these predictions are largely guesswork. And even today's artificial intelligence models are too complex to be fully understood.
This is one reason why the next wave of models will face increased government scrutiny. Nine companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, Mistral, OpenAI and Elon Musk's xAI) have agreed to have their systems reviewed by the UK government's AI Safety Institute before releasing them.
The companies signed similar commitments to the White House. The most advanced version of Google's Gemini model is believed to be undergoing official review ahead of its upcoming release.
Equally, the next wave of artificial intelligence systems may disappoint. Skeptics believe that most of the easy wins have already been achieved, and that improvements from here will be marginal, no matter how much computing power is deployed.
But even if the capabilities of next year's models remain unknown, it seems clear that existing artificial intelligence technologies will be used more widely. One likely flashpoint is elections: more than two billion people will go to the polls in 2024 in countries including the US, India and the UK.
“I think we will see deepfakes and their influence under the microscope,” says Benaich. “We've already seen concerns about their use in Bangladesh, Slovakia and Turkey.
“The evidence on their effectiveness is mixed, but we should definitely expect US and European regulators to start taking note.”
Artificial intelligence may have captured people's imaginations in 2023, but its impact may not begin to be felt until 2024.