MOSCOW, January 2 — Efforts to regulate artificial intelligence are hampered by governments' lack of flexible policy and by the absence of the political consensus needed to respond quickly to ongoing technological change, experts told RIA Novosti.
In early December, the European Union agreed on the world's first law on artificial intelligence. The initiative aims to ensure that “AI systems sold and used in the EU are safe and respect the fundamental rights and values” of the bloc. The agreement comes amid global excitement and concern over recent progress in AI technologies, especially following the launch of OpenAI's ChatGPT language model in 2022. Many praise the model for its potential professional uses, while others wonder what impact such technologies could have on various aspects of human life.
“The Future of Life Institute, whose members include leading technologists, warns against the unfettered development of AI, which could lead to the spread of misinformation, significant job displacement, and the possibility of AI matching or even surpassing the capabilities of the human mind in the short term. It calls on governments to put security protocols in place to prevent potentially catastrophic consequences that no government could control. The latest developments in ChatGPT could exacerbate these negative consequences,” said Rosario Girasa, a professor at Pace University's Lubin School of Business.
At the same time, Daniel Linna, senior lecturer and director of law and technology initiatives at the Northwestern Pritzker School of Law and the McCormick School of Engineering, noted that policymakers and regulators must consider not only the risks associated with AI but also the potential benefits of new technologies.
“Organizations must test artificial intelligence systems for accuracy and impartiality, be transparent about and evaluate the impact of such systems, and identify and minimize the harm that artificial intelligence can cause. Policymakers must provide opportunities for workers, students, and the public to receive education and training in the field of artificial intelligence,” Linna said.
Vasilis Galanos, a lecturer and research fellow at the University of Edinburgh, noted that current efforts to regulate artificial intelligence are focused on “advanced AI,” that is, powerful general-purpose models that could potentially pose greater risks.
However, according to the expert, regulation should also cover systems classified as low-risk, since they too can have serious consequences for society.
“Many systems that are loosely regulated because they are perceived as less dangerous have the potential to cause a great deal of harm. It is critical that countries and international organizations prioritize developing regulatory frameworks that uphold ethical standards, protect privacy, and ensure that AI systems do not perpetuate discrimination or harm vulnerable communities,” Galanos said.
Leviathan and Artificial Intelligence
Despite growing concern among governments around the world about the potential risks of AI, rapid progress in the field is making it difficult for them to develop the necessary rules and guidelines.
“Given the rapid pace of AI development, there is a significant risk that regulatory efforts will fall behind, especially because today's governments lack the in-house expertise and the flexible governance structures needed to respond quickly,” Galanos said.
“It is important that global regulatory responses are both proactive and adaptive, constantly evolving with technological advances and involving a wider range of stakeholders in the regulatory process,” he added.
In a similar vein, Girasa highlighted the tendency of governments to move slowly, especially in times of sharp divisions between political parties. “More autocratic governments like China do have the ability to act more quickly to address current threats. In the US, there is a heightened risk of inaction because the country is almost evenly divided politically, making it very difficult to carry out the necessary regulatory processes,” Girasa continued.
Linna believes the problem is not governments' perceived lack of expertise, but “the failure of governments to prioritize optimally harnessing the benefits of AI and minimizing the risks associated with it.” He also emphasized the role of so-called “soft law”: non-binding guidelines, codes of conduct, and standards that authorities can draw on.
“Modern governments are well prepared to engage experts to help create policy and regulatory tools for new technologies. The work of scientists, specialists and a wide range of organizations has developed a strong body of soft law for the design, development, deployment and verification of artificial intelligence systems. Governments can take soft law principles and evaluation standards and turn them into concrete laws and regulations,” Linna explained.
Global governance
The signing of the Bletchley Declaration confirmed the desire of many states to cooperate on minimizing the potential risks associated with AI. However, authorities in different parts of the world are taking different approaches to managing those risks. The United States, for example, despite recent steps by the Biden administration such as the executive order on “Safe, Secure, and Trustworthy Artificial Intelligence,” lacks a comprehensive federal framework, while the European Union is pursuing a more coordinated approach through legislation.
“Different regulatory approaches in the US, the European Union, China, and India could hamper joint risk-management efforts in the Atlantic and Pacific regions, especially when it comes to identifying ‘low-risk’ AI systems and protecting vulnerable groups. Transatlantic and transpacific harmonization could be strengthened by establishing common principles and standards, overseen by independent international organizations, that take into account the nuances of different AI applications and their respective risk profiles,” Galanos said.
However, as Linna noted, countries will take different policy and regulatory approaches to artificial intelligence, even if they agree on global principles and rules.
“What the European Union, the US, and other countries have done so far will not significantly limit the scope for transatlantic or global cooperation on AI risk management. The more serious obstacles are the usual geopolitical issues and competition over AI. But when it comes to AI regulation and global agreements, we are at the very early stages of a historical development that will continue to unfold for many years,” Linna added.
Meanwhile, Girasa pointed out the difficulty of predicting what will happen next, since we cannot know what innovations will appear in the future.
“Who could have predicted the development of blockchain, artificial intelligence and its deep-learning capabilities, ChatGPT, and the many other advances that are now under way in laboratories around the world and have not yet been made public? Quantum computing will undoubtedly also have a huge impact in the next decade,” the expert concluded.