Thousands of child abuse images found in AI training tool

Artificial intelligence imaging tools were used to create a fake image of Donald Trump being arrested. Photo: Eliot Higgins/Twitter

Thousands of child sexual abuse images have been found in LAION-5B, a dataset used to train artificial intelligence image generators, according to a study by the Stanford Internet Observatory. Such tools are designed to be “trained” on millions of existing images and captions, allowing them to create new images.

Concerns have been raised before about illegal images being included in datasets, but the Stanford study is believed to provide the most comprehensive evidence yet of child abuse material.

David Thiel, chief technologist at the Stanford Internet Observatory, who led the study, said other image datasets could have similar problems, although they are highly protected and difficult to examine.

Parts of the LAION-5B dataset were used to train Stable Diffusion 1.5, a system released last year.

Stability AI, the London-based technology company that now runs Stable Diffusion, said the version in question was developed by a separate organization called RunwayML.

Stability AI said it has since applied much stricter rules to its datasets and banned users from creating explicit content in subsequent releases.

“Stability AI only hosts versions of Stable Diffusion that include filters in their API. These filters prevent unsafe content from reaching the models. By removing this content before it reaches the model, we can help prevent the model from generating unsafe content.

“This report focuses on the LAION-5B dataset as a whole. Stability AI's models were trained on a filtered subset of this dataset. In addition, we subsequently refined these models to mitigate residual behavior.

“Stability AI is committed to preventing the misuse of AI. We prohibit the use of our image models and services for illegal activities, including attempts to edit or create CSAM.”

A RunwayML spokesperson said: “Stable Diffusion 1.5 was released in collaboration with Stability AI and researchers from LMU Munich. This collaboration has been repeatedly acknowledged by Stability itself and by numerous media outlets.”

Stable Diffusion is released as free, editable software, meaning that earlier versions of it are still being downloaded and distributed online.

The Internet Watch Foundation, a UK hotline for reporting child abuse material, separately said it was working with LAION to remove links to abuse images.

Susie Hargreaves, chief executive of the IWF, said: “The IWF is working with the team behind the LAION dataset to help them filter and remove URLs known to link to child sexual abuse material.

“The IWF found that a relatively small number of links to illegal content were also included in the LAION dataset. Without strict moderation and content filtering, there is always a danger that criminal material from the open Internet will end up in these giant data sets.

“We are pleased that the LAION team wants to proactively address this issue, and we look forward to working with them on a sustainable solution.”

LAION is run by a team of volunteer researchers in Germany and aims to provide a free alternative to the huge image libraries created by private companies such as OpenAI.

Responding to the study, LAION stated: “LAION is a non-profit organization that provides datasets, tools and models to advance machine learning research. We are committed to open public education and sustainable resource use by reusing existing datasets and models.

“LAION's datasets (more than 5.85 billion records) are sourced from the freely available Common Crawl web index and only offer links to content on the public web, without images. We have developed and published our own stringent filters to detect and remove illegal content from LAION datasets before publishing them.

“We are collaborating with universities, researchers and non-governmental organizations to improve these filters and are currently working with the Internet Watch Foundation to identify and remove content believed to violate laws. We invite Stanford researchers to join LAION to improve our datasets and develop effective filters for detecting malicious content.

“LAION has a zero-tolerance policy for illegal content, and out of an abundance of caution, we are temporarily removing LAION datasets to ensure they are safe before republishing them.”

Google said it used a range of methods for filtering offensive and illegal material, and that only the first version of its Imagen system was trained using the LAION dataset.

“We have extensive experience combating online child sexual abuse and exploitation, and our approach to generative artificial intelligence is no different,” the company said.

“We do not allow the creation or distribution of child sexual abuse material (CSAM) on our platforms, and we have built safeguards into Google's artificial intelligence models and products to detect and prevent such output.

“We will continue to act responsibly, working closely with industry experts to ensure we evolve and strengthen our defenses to stay ahead of emerging abuse trends as they occur.”
