In July 2024, the international analytical agency IDC published a study on the main risks of generative artificial intelligence. Experts note that such technologies are used in marketing, supply chain management, retail, medical data analysis, software code generation and other areas. More than 45% of CEOs and 66% of IT directors believe that vendors do not fully understand the risks associated with generative AI. Particular attention should be paid to issues related to privacy, information protection and security, as well as the security of data sets used to train neural networks.
Maxim Buzinov, Head of the R&D Laboratory of the Cybersecurity Technologies Center of the Solar Group, told what requirements for transparency, aspects of responsibility and mitigation of risks associated with AI the company takes into account in the development of information security solutions.
First of all, this is the validity of data and the subjectivity factor. The second aspect is the synchronization of work with data between data scientists and developers, who initially focus on various characteristics of data sets: model accuracy for the former and performance and scalability metrics for IT specialists. Therefore, the company is developing a methodology for automatic testing of AI-based products in order to reduce the risks of vulnerabilities in ready-made solutions and ensure transparency and accountability of AI software modules.
The expert notes that high risks are also associated with the confidentiality of data sets and data that may contain malicious components. Therefore, within the R&D laboratory, a separate infrastructure is being formed to test information security solutions based on AI to check reliability and fault tolerance under various operating scenarios.
Equally important, according to Maxim Buzinov, are the explainability criteria and interpretability of AI-based solutions, especially in medical diagnostic solutions, as well as taking into account the vulnerabilities of large language models.
«For example, attackers can use model hints (promt) to implement jailbreaking injections. They can «confuse» the model and also be used to obtain sensitive information. If the LLM output is not checked for security, the attacker can insert his own special prompt so that LLM generates malicious code that leads to the loss of credentials,» the expert adds.
Along with current research in the field of AI, the R&D laboratory of Solar Group is also in dialogue with the largest vendors of the information security market and leading scientific communities on the main areas of AI regulation. The focus is on a risk-oriented approach, responsibility, safe work with data, no harm, voluntary certification and compliance with the AI Code of Ethics, etc. In addition, the company runs internship programs in machine learning and AI in the field of data analysis and mathematical modeling, including in partnership with Bauman Moscow State Technical University.
More details.
Свежие комментарии