GCHQ’s director has said artificial intelligence software could have a profound impact on the way it operates, from spotting otherwise missed clues to thwart terror plots to better identifying the sources of fake news and computer viruses.
Jeremy Fleming’s remarks came as the spy agency prepared to publish a rare paper on Thursday defending its use of machine-learning technology to placate critics concerned about its bulk surveillance activities.
“AI, like so many technologies, offers great promise for society, prosperity and security. Its impact on GCHQ is equally profound,” he said. “While this unprecedented technological evolution comes with great opportunity, it also poses significant ethical challenges for all of society, including GCHQ.”
AI is considered controversial because it relies on computer algorithms to make decisions based on patterns found in data. It is used alongside human analysts in investigations.
GCHQ will not formally say exactly how it uses AI software or which data it analyses, but it relies in part on monitoring people’s phone and messaging data and watching social media profiles.
But the agency, which is based in Cheltenham, recognises that it needs to engage better with the public after the Snowden disclosures highlighted the scale of its mass surveillance nearly a decade ago.
Key figures at GCHQ believe AI can point MI5 and counter-terrorism police to otherwise missed clues when identifying potentially deadly threats, as part of what insiders said would amount to a “step change” in its operations.
The agency also indicated that AI could, in theory, help to better identify sources of fake news and spot deepfake images, which typically come from Russia, and more quickly detect and trace malicious software, which often emerges from China or North Korea.
It could also help the National Crime Agency analyse billions of child abuse images, searching for hidden evidence of digital image manipulation and so minimising the amount of time human investigators have to spend examining shocking content.
Using algorithms to investigate and monitor individuals has generated controversy in some areas of law enforcement. There are concerns, for example, that tools such as facial recognition software exhibited racial bias in previous tests. It also emerged earlier this month that German police want to be able to create fake computer-generated images in sting operations against paedophiles.
Daragh Murray, a senior lecturer at the human rights centre and school of law at the University of Essex, said the problem for intelligence agencies such as GCHQ was that it was unclear what was ruled in and ruled out.
“The difficulty with ethical frameworks, and we have seen this consistently across the tech sector, is that they are notoriously imprecise. They do not establish concrete obligations,” he said.
One expert who has worked closely with GCHQ said AI was better suited to “augmenting the analytic process” by analysing large datasets and passing on the results to human investigators.
Alexander Babuta, a research fellow with the Royal United Services Institute, said GCHQ’s problem was making sense of the vast volumes of data that it collects.
“The challenge is filtering it and triaging it to a manageable level so human analysts can make sense of it. That is a time-consuming and resource-intensive process, but it could also be less intrusive because the data is being handled by machines, not people,” he said.
But there are limitations to the power of the technology. GCHQ does not believe it is yet possible to predict when someone has been radicalised to the point where they might commit a terrorism offence, a capability that carries echoes of the film Minority Report, in which people are arrested for “pre-crime” before any offence is committed.