Google's artificial intelligence tool incorrectly claimed that former US President Barack Obama was the country's first Muslim leader. Source: AP Photo/Alastair Grant
Google has restricted the search results its artificial intelligence (AI) systems return after they produced a series of errors, including the claim that Barack Obama was Muslim.
The company said it was adding new limits to its AI Overviews feature, which provides AI-generated summaries in response to search queries.
These include limiting the health-related queries the AI will answer, restricting the types of websites the system draws on, and curbing responses for queries where AI summaries were not considered useful.
Google unveiled the AI Overviews feature earlier this month and rolled it out widely in the US last week, ahead of a planned worldwide launch later this year. The feature reads and interprets information from other websites to provide concise answers to users' questions.
However, last week users discovered that it was giving incorrect answers to questions, often drawing on prank sites or posts on internet forums, or misinterpreting reliable sources.
These included recommending that people eat "one small rock a day", based on an article from the satirical news site The Onion, and suggesting they add glue to pizza, citing a comment on Reddit. It also claimed Mr Obama was the first Muslim US president after misreading an academic textbook.
However, Google said several screenshots that circulated widely on social media last week, such as ones appearing to condone smoking during pregnancy or claiming it is acceptable to leave dogs in hot cars, were fake.
Google said it has created "additional trigger refiners" for health queries, meaning AI Overviews are less likely to appear for medical questions. The company said it has also built better mechanisms to detect prank sites and has limited queries "where AI Overviews haven't proven as useful".
However, it is keeping the feature and insists users find it useful.
Google's Gemini chatbot was also restricted after it began generating historically inaccurate images, including black Nazi soldiers.
In recent months, Google has faced a number of challenges related to its AI services. The company stopped Gemini from generating images of people after users discovered it would produce historically inaccurate images, such as black Nazi soldiers.
The chatbot had also previously been found to express political opinions, such as arguing that Brexit was a bad idea.