Vaccines face a wave of anti-vax disinformation
Credit: Bloomberg
For Lyric Jain, the 25-year-old founder of anti-fake news start-up Logically, the struggle is personal. “We had a nasty incident in my family where my grandmother fell victim to some misinformation about a cancer diagnosis,” he says. She turned to alternative remedies and spurned scientific treatments.
At Cambridge University in 2016, he also saw the bitterly partisan divisions that misinformation was creating. Cambridge was one of the most pro-Remain areas in the country, while his home town of Stone in Staffordshire was staunchly pro-Leave.
These experiences led to the founding of Logically in 2017. Jain’s start-up, which has raised more than $12m, according to Crunchbase, offers a suite of products and apps designed to combat fake news head on.
Humans in the loop
Logically uses artificial intelligence tools to detect rising stories on social media and elsewhere online, applying natural language processing to analyse and score claims as they are shared. These can then be flagged to social networks, governments, police or election officials to counter the risk they pose.
“We need to keep humans in the loop,” Jain says. “We have worked with law enforcement on misinformation. Other cases we have forwarded to Government ministries, particularly around Covid and 5G, and on various Russian operations in Europe and the US.”
A hybrid of media start-up and cyber security firm, Logically also has its own browser extensions that check the trustworthiness of websites, and a news app that features debunked claims and exclusive investigations. It took to fact-checking the US debates in real time, using artificial intelligence to spot falsehoods for human fact-checkers to flag in its app.
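For readers curious how such a pipeline might hang together, the sketch below shows the general “AI flags, humans verify” pattern in simplified Python. The scoring heuristic, threshold and review queue are illustrative assumptions, not Logically’s actual system.

```python
# Minimal sketch of an "AI flags, humans verify" pipeline.
# The scoring heuristic, threshold and queue are illustrative assumptions,
# not Logically's actual architecture.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Claim:
    text: str
    source_url: str
    score: float = 0.0           # model's estimate that the claim is false
    verdict: str = "unreviewed"  # filled in by a human fact-checker


def score_claim(text: str) -> float:
    """Stand-in for an NLP model scoring how likely a claim is to be false.
    A real system would use a trained classifier; this is a toy heuristic."""
    suspect_phrases = ["miracle cure", "they don't want you to know", "5g causes"]
    hits = sum(phrase in text.lower() for phrase in suspect_phrases)
    return min(1.0, 0.3 * hits)


@dataclass
class ReviewQueue:
    threshold: float = 0.5
    pending: List[Claim] = field(default_factory=list)

    def triage(self, claim: Claim) -> None:
        claim.score = score_claim(claim.text)
        if claim.score >= self.threshold:
            # High-risk claims are escalated to a human, never auto-published.
            self.pending.append(claim)

    def human_review(self, claim: Claim, verdict: str) -> Claim:
        claim.verdict = verdict  # e.g. "false", "misleading", "true"
        return claim


queue = ReviewQueue()
queue.triage(Claim("5G causes the virus and a miracle cure exists", "example.com/post"))
print(len(queue.pending), "claim(s) awaiting human fact-checkers")
```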
There is a small but growing cohort of start-ups in the UK tackling the thorny, and often politically charged, subject of fake news.
Subversive conspiracies, such as QAnon, have spread from the US to the UK
Credit: AFP
It comes as governments step up their efforts to tackle online harms and misinformation. The Government is setting up an Online Harms unit within broadcast regulator Ofcom, which will include disinformation in its remit. It has also set up a central Counter Disinformation Unit, which deals with 70 news items per week.
Social media giants such as Facebook have spent years investing in thousands of fact-checkers and moderators to tackle harmful content online. Facebook alone has more than 15,000 content reviewers and 30,000 staff in total working on safety. Using artificial intelligence to remove harmful content is the end goal, although in 2017 Facebook boss Mark Zuckerberg admitted it would take “many years” for AI to understand news well enough to interpret it.
The platforms still face criticism: one report estimated that 95pc of vaccine fake news is not acted upon.
But technology is playing a growing role in the process.
One UK start-up close to the Government’s coronavirus response, and with a history of tackling disinformation, in particular terror propaganda online, is Faculty AI.
Founded by Marc Warner and Angie Ma, Faculty built one of its early projects for the Home Office: a model able to detect 99pc of ISIS terror propaganda videos as they were uploaded, in real time. Faculty creates bespoke AI projects for organisations, managing huge volumes of data with its technology.
As well as Government clients, there is a burgeoning market among corporates looking to stay ahead of the latest attack campaign, or to take down fake social media profiles and rumours.
Marc Warner's Faculty AI has been used to target terror propaganda online
Credit: Sam Frost
One British start-up taking on this kind of misinformation on the web is Digital Shadows. The company, which raised $10m last year, focuses on scouring the dark web and fringe forums for threats to businesses and phishing campaigns. Its main tool, SearchLight, is essentially a dashboard of threats from shadowy corners of the web.
It can also be used for takedowns, removing websites that violate copyright or pose as a real brand in an effort to scam customers. These tools can also be used to remove fake social media profiles of brands or important individuals.
Factmata is another UK start-up tackling online disinformation for brands. “Our mission is to reduce online misinformation and disinformation by building a technology that helps anyone understand and deal with harmful narratives,” says Dhruv Ghulati, chief executive of Factmata.
It does this by ranking any website against a list of metrics such as hyperpartisanship, clickbait, hate speech, sexism and subjectivity, analysing millions of URLs and flagging the most risky.
This is used to build a blacklist of sites considered toxic or simply peddling fake news, which advertisers can use to steer clear of them.
“We want to be a layer on the internet and able to flag the safety of content,” Ghulati says. Of course, this is not a simple task. As well as expert PhDs who can build the algorithms, the algorithms themselves have to scale to a level where they can monitor billions of websites and entries per day. “We have spent a long time training the AI to work at the scale you need it to work on the internet,” he adds.
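In outline, that kind of per-site scoring and advertiser blacklist might look like the sketch below. The metric names come from Factmata’s description above, but the weights, threshold and example sites are invented for illustration rather than taken from the company’s real model.

```python
# Illustrative sketch of per-site risk scoring and advertiser blacklisting.
# The weights, threshold and example data are assumptions, not Factmata's model.
from typing import Dict, List

# Relative importance of each signal (hypothetical weights summing to 1.0).
METRIC_WEIGHTS: Dict[str, float] = {
    "hyperpartisanship": 0.25,
    "clickbait": 0.15,
    "hate_speech": 0.30,
    "sexism": 0.15,
    "subjectivity": 0.15,
}


def risk_score(metric_scores: Dict[str, float]) -> float:
    """Weighted average of per-metric scores (each assumed to be 0-1)."""
    return sum(METRIC_WEIGHTS[m] * metric_scores.get(m, 0.0) for m in METRIC_WEIGHTS)


def build_blacklist(sites: Dict[str, Dict[str, float]], threshold: float = 0.6) -> List[str]:
    """Flag the riskiest sites so advertisers can exclude them."""
    return sorted(url for url, scores in sites.items() if risk_score(scores) >= threshold)


# Toy example data, not real measurements of any website.
sites = {
    "example-news.com": {"hyperpartisanship": 0.2, "clickbait": 0.3, "subjectivity": 0.4},
    "outrage-clicks.net": {"hyperpartisanship": 0.9, "clickbait": 0.9, "hate_speech": 0.7,
                           "sexism": 0.5, "subjectivity": 0.8},
}
print(build_blacklist(sites))  # -> ['outrage-clicks.net']
```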
Big Tech interest
The rise of start-ups tackling fake news has not escaped the gaze of Big Tech firms. In June last year, Twitter acquired London start-up Fabula AI, founded by Imperial academic Prof Michael Bronstein. Fabula used cutting-edge “geometric deep learning”, in essence looking at how patterns form in data sets so big that normal algorithms cannot begin to tackle them. Twitter picked up the technology to catch misinformation as it spreads. An MIT study, looking at a decade of tweets, found that fake stories spread more quickly than genuine news or ideas, which makes it possible to spot them by the way they travel.
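The underlying intuition can be shown with a far cruder heuristic than Fabula’s geometric deep learning: measure how fast a story’s share cascade grows and flag the outliers. The rate threshold and synthetic cascades below are assumptions made purely for the sake of the example.

```python
# Toy sketch of spread-speed features, motivated by the finding that false
# stories tend to spread faster. A simple heuristic, not Fabula's actual
# geometric deep learning approach.
from datetime import datetime, timedelta
from typing import Dict, List


def shares_per_hour(share_times: List[datetime]) -> float:
    """Average shares per hour over the lifetime of a cascade."""
    if len(share_times) < 2:
        return 0.0
    span_hours = (max(share_times) - min(share_times)).total_seconds() / 3600
    return len(share_times) / max(span_hours, 1e-6)


def flag_fast_spreaders(cascades: Dict[str, List[datetime]],
                        rate_threshold: float = 20.0) -> List[str]:
    """Return story IDs whose share rate exceeds the (assumed) threshold."""
    return [story for story, times in cascades.items()
            if shares_per_hour(times) >= rate_threshold]


# Synthetic example: one slow cascade, one burst of shares within the hour.
start = datetime(2020, 12, 1, 9, 0)
cascades = {
    "story_a": [start + timedelta(hours=i) for i in range(0, 48, 6)],
    "story_b": [start + timedelta(minutes=i) for i in range(0, 60, 2)],
}
print(flag_fast_spreaders(cascades))  # -> ['story_b']
```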
Hope everybody is watching @OANN right now. Other media afraid to show. People are coming forward like never before. Large truck carrying hundreds of thousands of fraudulent (FAKE) ballots to a voting center? TERRIBLE — SAVE AMERICA!
— Donald J. Trump (@realDonaldTrump) December 1, 2020
In fact, many of these start-ups have been working with the big social media networks for some time, albeit quietly.
“Our customers today are big social media platforms that we can’t mention by name,” says Crisp’s Hildreth.
For Hildreth, the key to targeting fake news remains a close collaboration between technology and expert human investigators, cyber experts and ex-intelligence officials, who can monitor campaigns and find their source.
With the need to vaccinate billions of people more urgent than ever, and anti-vax disinformation more organised than ever, these start-ups will only get busier.