
    “The superbrain is ready.” The academician spoke about the future of intelligent systems



The rapid development of artificial intelligence (AI) technologies and their application across a wide variety of fields has been the main technological trend of the outgoing year. The starting point was the launch of the ChatGPT chatbot in November 2022, an event whose significance many compare to the advent of the internet and mobile phones. Igor Kalyaev, an expert in computing and control systems and Academician of the Russian Academy of Sciences, gave an interview during the First All-Russian School on Artificial Intelligence and Big Data for students and young scientists, held at the end of November at the National Center for Physics and Mathematics (NCFM) in Sarov. He explained why there is no point in building a technical model of the human brain, and why the transition to "strong" AI is possible only with supercomputers operating on qualitatively new principles. Interviewed by Vladislav Strekopytov.

— Igor Anatolyevich, you co-chair two scientific directions at the NCFM: the "National Center for Research of Supercomputer Architectures," within which a photonic computer is being created, and "Artificial Intelligence and Big Data in Technical, Industrial, Natural and Social Systems." How are these two directions connected?

    — Essentially, these are two sides of the same coin. Artificial intelligence cannot exist without supercomputers, as it requires processing huge amounts of data. Only supercomputers are able to cope with this work in an acceptable period of time.

There is now exponential growth in the scale of machine learning tasks, and this requires enormous computing power. Modern systems built on the classical silicon element base cannot keep up and are approaching their physical limit. Within the first direction, we want to create a supercomputer on a new element base, built on new physical principles. Photonic supercomputers will dramatically reduce the training time of complex neural networks.
In turn, within the second direction, we plan to use AI technologies to increase the efficiency of supercomputers, which themselves are becoming so complex that they can no longer be used optimally without AI. So the two directions are interconnected.

— Is an increase in computing performance alone enough for the transition from "weak" AI to "strong" AI?

— Artificial intelligence is the ability of a computer system to perform intellectual tasks, that is, tasks for which a human would require intelligence. It is not at all necessary for the machine itself to possess intelligence as such. Modern AI systems solve certain specific problems better than any human. A calculator, for example, calculates better than a human. Back in 1997, the supercomputer Deep Blue beat world champion Garry Kasparov* at chess. And it was not a matter of intelligence but of speed: it could calculate the development of the position on the board 21 moves ahead and choose the best option. And certainly no person can calculate the structure of a protein, but a machine can.
    But all these are examples of “weak” AI. Just tools, amplifiers of our mental activity. They act in accordance with the algorithms that humans have put into them.
It will be possible to speak of "strong" machine intelligence when AI systems, drawing on their existing skills, a priori knowledge and accumulated experience, are able to create algorithms themselves and develop skills for solving previously unknown problems. For this, simply increasing computing power is not enough. We need to look for fundamentally new ways of processing information and invent computing devices that work on principles inherent in the human brain.

— Is it possible to say that one AI system has stronger intelligence and another weaker? How can the strength of machine intelligence be evaluated?

— You need evaluation criteria and quantitative characteristics. In physics there is the concept of power: work done per unit of time. I propose to introduce the concept of intellectual power as the amount of intellectual work performed in a given period of time.
For systems with "strong" AI, the amount of intellectual work is determined by the increment in algorithmic complexity of the new skill (in other words, of the algorithm the system formed when solving a problem previously unknown to it) relative to the total algorithmic complexity of the skills it already had and used in creating it. But here a question arises: how effective will the new computational algorithm be? To answer it, you need to set an efficiency criterion. Even the simplest task of moving cargo from point A to point B can be optimized by at least two criteria: distance and time.
In my understanding, the power of AI equals the algorithmic complexity of the new algorithm it generated, multiplied by the coefficient of its efficiency relative to some criterion, and divided by the time during which that algorithm was generated.
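Written as a formula, the definition reads as follows (a minimal formalization of what was just said; the symbols are my shorthand, not the academician's notation):

    P_{\text{AI}} = \frac{\Delta K \cdot \eta}{t}

where \Delta K is the increment in algorithmic complexity of the newly generated algorithm, \eta is its efficiency coefficient with respect to the chosen criterion, and t is the time taken to generate it.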
— And in the case of neural networks?

— In principle, this approach can be applied to them as well. The computational complexity of a neural network can also be calculated: how many operations a computer must perform to get a result.
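To make "counting operations" concrete, here is a minimal sketch (mine, not from the interview) that estimates the floating-point operations in one forward pass of a fully connected network; the layer sizes are purely hypothetical:

    # Rough FLOP count for one forward pass of a fully connected network.
    # A dense layer with n_in inputs and n_out outputs costs roughly
    # 2 * n_in * n_out operations (one multiply and one add per weight).

    def dense_layer_flops(n_in: int, n_out: int) -> int:
        return 2 * n_in * n_out

    def network_flops(layer_sizes: list[int]) -> int:
        return sum(dense_layer_flops(n_in, n_out)
                   for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

    # Hypothetical network: 784 inputs, two hidden layers, 10 outputs.
    print(f"~{network_flops([784, 512, 256, 10]):,} FLOPs per forward pass")

Biases and activation functions add a little on top, but the weight multiplications dominate, so an order-of-magnitude count of this kind is the usual convention.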
The processes of creating AI on the basis of modern computers are developing in parallel along two main directions: logical and neuromorphic. The logical approach aims to create computer systems designed to solve one or more intellectual problems, that is, tasks that would require intelligence if solved by a person. The neuromorphic approach aims to create computer systems that imitate the functioning of the brain, and ultimately to create its artificial analogue.
So far, all AI achievements are directly tied to the growth of machine performance. The creation of a supercomputer with a performance of one teraflops in 1997 led to the emergence of the Deep Blue program, which beat Kasparov*. In 2004, the Blue Brain program, already running at 100 teraflops, simulated the work of ten thousand neurons. In 2008 came a supercomputer with a performance of one petaflops, and with it the SyNAPSE program, which simulated the work of one million neurons and ten trillion synapses, corresponding to roughly four percent of the human brain. In 2016, the AlphaGo program, using tens of petaflops of power, beat the world champion in Go, a game far more complex than chess in its rules and number of positions. The well-known ChatGPT program "lives" on the Azure AI supercomputer with a performance of 30 petaflops, and its training consumed as much electricity as the entire city of New York consumes in three weeks.
    At the same time, the machines themselves are not becoming smarter in the generally accepted sense of the word. They simply function faster, which allows them to analyze a larger number of options and process a larger volume of information in a shorter period of time. But they all work according to algorithms laid down by humans.

— Is it possible to create a machine prototype of the human brain?

— At the beginning of 2018, an experiment was conducted in China. Simulating one second of activity of one percent of the brain on the world's fastest supercomputer at the time, Sunway TaihuLight, took about four minutes. If we extrapolate this result, it turns out that simulating one hundred percent of the human brain in real time would require a supercomputer with a performance of 10^20-10^21 flops. Theoretically, it could appear by 2030, albeit the size of a 17-story building with a footprint of 300 by 300 meters. And it would consume 15 gigawatts of electricity, the equivalent of three Sayano-Shushenskaya hydroelectric power stations. So artificial intelligence is unlikely to rival the human brain.
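The extrapolation itself is simple arithmetic. A sketch of it, assuming TaihuLight's roughly 10^17 flops and linear scaling in both brain fraction and simulation speed (both assumptions are mine):

    # Back-of-the-envelope version of the extrapolation described above.
    taihulight_flops = 1e17    # approximate performance of Sunway TaihuLight
    slowdown = 240             # ~4 minutes of computing per 1 second simulated
    brain_fraction = 0.01      # one percent of the brain was simulated

    required = taihulight_flops * slowdown / brain_fraction
    print(f"Real-time whole-brain estimate: ~{required:.1e} flops")
    # -> ~2.4e+21 flops, i.e. on the order of 10^20-10^21, as quoted above.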
My personal opinion: we will never create a technical analogue of the human brain. This is a road to nowhere. But you can look at the question from a different angle. The human brain has about 80 billion neurons and approximately 150×10^12 synapses, and each synapse has about 1000 switches, analogues of the logical elements in a conventional microprocessor.
Already, tens of billions of computers and other gadgets are connected to the Internet, and in the near future this figure will reach the same 80 billion. So, in principle, the superbrain is already ready. Whether it will really work like a human one is a big question. But it is certain that it will have the property of emergence: new capabilities that none of its constituent elements possesses on its own. And this is the next level of AI development, which scientists call emergent intelligence.
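The numbers behind this comparison, put side by side (the figures are those quoted above; only the arithmetic is mine):

    # Element counts quoted in the interview, compared directly.
    neurons = 80e9                 # ~80 billion neurons in the brain
    synapses = 150e12              # ~150 * 10^12 synapses
    switches_per_synapse = 1000    # analogues of logic gates

    logic_elements = synapses * switches_per_synapse
    devices = 80e9                 # projected internet-connected devices

    print(f"Brain 'logic elements': ~{logic_elements:.1e}")          # ~1.5e+17
    print(f"Connected devices per neuron: {devices / neurons:.1f}")  # 1.0

So the projected device count matches the neuron count one for one, which is the sense in which the academician says the superbrain is already ready.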
    * A person performing the functions of a foreign agent.
