The Times On Ru

Technology

Dangerous toys: The Pentagon began to improve killer machines based on artificial intelligence

The US military wants to protect AI robots from “hostile attacks”

The US Department of Defense is developing technology designed to keep artificial-intelligence-powered killer machines under control on the battlefield when visual "noise" misleads the robots. The Pentagon's innovation office wants to protect artificial intelligence systems from "adversarial attacks." The research examines how visual "noise" can lead to fatal errors in AI identification.

Pentagon officials have sounded the alarm about "unique classes of vulnerabilities of artificial intelligence or autonomous systems" that they hope the new research will address.

The Daily Mail reports that the program, dubbed Guaranteeing AI Robustness against Deception (GARD), has been running since 2022 and aims to determine how visual data or other electronic signals fed into an artificial intelligence system can be altered by calculated noise injection.

Computer scientists at one of GARD's defense contractors have been experimenting with kaleidoscopic patches designed to trick artificial intelligence systems into producing false identifications.

"Essentially, you can by adding noise to image or sensor, disrupt the machine learning algorithm,” a senior Pentagon official who led the research recently explained.

The news, the Daily Mail notes, comes amid fears that the Pentagon is "building killer robots in the basement," which is said to have led to stricter artificial intelligence rules for the US military, requiring all systems to be approved before deployment.

“Knowing this algorithm, you can also sometimes create physically feasible attacks,” added Matt Turek, deputy director of the Information Innovation Office at the Defense Advanced Research Projects Agency (DARPA).

It is technically possible to "cheat" an AI algorithm into making critical mistakes: patterned patches or stickers can cause the AI to identify a physical object that is not actually there, or to misidentify one that is.

For example, a bus full of civilians could be mistakenly identified by AI as a tank if it were tagged with the right "visual noise," as one national-security reporter for the website ClearanceJobs suggested.

In short, such cheap and lightweight "noise-making" tactics could cause vital military AI to mistake enemy fighters for allies, and vice versa, during a critical mission.
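The attack described above can be sketched with a toy example. The snippet below is a hypothetical illustration, not GARD code: it shows the classic fast-gradient-sign idea behind such "noise" attacks. For a linear classifier, shifting every input feature by a small amount in the direction of the model's weights is enough to flip its decision; the weights, inputs, and "bus"/"tank" labels here are all invented for illustration.

```python
import numpy as np

# Toy linear "image classifier" over 4 features: score > 0 -> "tank", else "bus".
# The weights and inputs are invented for illustration; no real model is used.
w = np.array([1.0, -2.0, 0.5, 1.5])

def predict(x):
    return "tank" if float(w @ x) > 0 else "bus"

# A clean input the model classifies as "bus" (score = -1.1).
x_clean = np.array([0.2, 0.9, 0.1, 0.3])

# FGSM-style perturbation: shift every feature by eps in the direction that
# raises the "tank" score. For a linear model that direction is sign(w).
eps = 0.6
x_adv = x_clean + eps * np.sign(w)

print(predict(x_clean))  # -> bus
print(predict(x_adv))    # -> tank (score = +1.9: the label flips)
```

In real systems the perturbation is computed from the gradient of the model's loss, and it can be confined to a small printed patch, which is why a sticker on a vehicle can, in principle, change what a detector reports.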

Researchers in the modestly budgeted GARD program have spent $51,000 studying visual-noise and signal-suppression tactics since 2022, Pentagon audits show.

Their work was published in a 2019-2020 study illustrating how visual noise that appears merely decorative or inconsequential to the human eye, like a 1990s Magic Eye poster, can be interpreted by artificial intelligence as a solid object.

Computer scientists at defense contractor MITRE Corporation managed to create visual noise that artificial intelligence mistook for apples on a grocery store shelf, a bag left on the street, and even people.

"Whether these are physically implemented attacks or noise patterns that are added to artificial intelligence systems, Turek said Wednesday, the GARD program has created state-of-the-art defenses against them."

"Some of these tools and capabilities were provided by the CDAO [the Pentagon's Chief Digital and Artificial Intelligence Office]," Turek said.

The Pentagon created the CDAO in 2022; it serves as a hub to facilitate faster adoption of artificial intelligence and related machine learning technologies across the military.

The Department of Defense recently updated its AI rules amid "much confusion" about how it plans to use machines that make autonomous decisions on the battlefield.

Horowitz explained at an event in January of this year that "the directive does not prohibit the development of any artificial intelligence systems," but will "clarify what is and is not permitted" and support a "commitment to responsible behavior" in developing lethal autonomous systems.

While the Pentagon believes the changes should reassure the public, some say they remain "unconvinced" by these efforts, the Daily Mail notes.

Mark Brakel, director of the advocacy group Future of Life Institute (FLI), told DailyMail.com in January this year: "These weapons carry a huge risk of inadvertent escalation."

He explained that AI-based weapons can misinterpret something, such as a ray of sunlight, and perceive it as a threat, attacking foreign powers without cause and without any deliberately hostile "visual noise."

Brakel said the result could be devastating because "without real human control, AI weapons are like the Norwegian rocket incident, close to nuclear Armageddon on steroids, and they could increase the risk of incidents in hotspots like the Taiwan Strait."

The US Department of Defense is pushing hard to modernize its arsenal with autonomous drones, tanks and other weapons that select and attack targets without human intervention.
