OpenAI boss Sam Altman warned that generative AI could pose an "existential risk" to humanity. Credit: JIM LO SCALZO/EPA-EFE/Shutterstock
A senior regulator told The Telegraph: "The reason they put these models out for all of us to download now is because they want our data. So was consent for the data that went into the models properly requested? It is a matter of regulation in the UK and across Europe."
The Information Commissioner can issue notices requiring companies to explain their activities, issue enforcement notices requiring companies to stop processing, or impose fines of up to £17 million under data protection laws.
John Edwards, the UK's Information Commissioner, said: "We will act where organizations are not complying with the law and considering the impact on individuals."
Ofcom also plans to introduce tougher rules for AI companies to ensure the technology is not misused.
The agency, which is the new online safety regulator for social media and technology companies, plans to require risk assessments of any new AI.
The crackdown comes after Rishi Sunak met with executives from three of the biggest artificial intelligence companies — OpenAI, Anthropic and Google-backed DeepMind — last week amid growing concerns about the technology's impact on society.
The Prime Minister said the technology needed the right "guardrails" in place and that he discussed the risk of disinformation as well as broader "existential" threats.
The competition watchdog has already launched an investigation into the AI market, including an examination of the technology's safety implications.
The privacy issue came to the fore in March when the Italian data protection authority temporarily blocked ChatGPT because "there were no legal grounds justifying the massive collection and storage of personal data."
OpenAI responded by rolling out controls across Europe that allow anyone to opt out of data processing via an online form. It also expanded its privacy policy to include the right to delete information that users believe is inaccurate, similar to the right to be forgotten in data protection laws.
Andrew Strait, associate director of the Ada Lovelace Institute, said: "There is a problem with consent as a basis for processing data at the scale of ChatGPT. It is very difficult to explain to ordinary people what is happening with their data.
"Does it disappear? Does it capture your information? Is it reused? Consent works best when you have a clear understanding of what you are agreeing to."
The Information Commissioner's spokesman said: "Organizations developing or using generative AI should consider their data protection obligations from the outset.
"Data protection law still applies even if the personal information being processed comes from publicly available sources. If you develop or use generative AI that processes personal data, you must do so lawfully, for example on the basis of consent or legitimate interests."
Lorna Woods, professor of internet law at the University of Essex, said: “Data protection rules apply whether or not you have made something public.”