Alphabet Inc’s Google has announced that it will implement additional measures to identify and remove terrorist and violent extremist content on its video-sharing platform YouTube.
In a blog post, the tech giant said it would take a tougher position on videos containing supremacist or inflammatory religious content, placing them behind a warning and withholding monetisation, recommendations and user endorsements, even if they do not clearly violate its policies.
“While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now,” said Google’s general counsel Kent Walker.
In the blog post, Google pledged to take four additional steps to counter ‘online terror’:
First, we are increasing our use of technology to help identify extremist and terrorism-related videos.
Second, because technology alone is not a silver bullet, we will greatly increase the number of independent experts in YouTube’s Trusted Flagger programme.
Third, we will be taking a tougher stance on videos that do not clearly violate our policies — for example, videos that contain inflammatory religious or supremacist content. In future these will appear behind an interstitial warning and they will not be monetised, recommended or eligible for comments or user endorsements.
Finally, YouTube will expand its role in counter-radicalisation efforts. Building on our successful Creators for Change programme promoting YouTube voices against hate and radicalisation, we are working with Jigsaw to implement the “Redirect Method” more broadly across Europe.
The company also plans to reach potential Islamic State recruits through targeted online advertising and redirect them towards anti-terrorist videos in a bid to change their minds about joining.
“We have also recently committed to working with industry colleagues—including Facebook, Microsoft, and Twitter—to establish an international forum to share and develop technology and support smaller companies and accelerate our joint efforts to tackle terrorism online,” said Walker in the blog post.
Social media companies have been pressed by countries such as Germany, France and Britain to do more to remove militant content and hate speech, as killings, bombings and shootings by Islamist militants have increased in recent years.
A recent report by Reuters highlighted that Facebook has offered additional insight into its efforts to remove terrorism content, a response to political pressure in Europe over militant groups using the social network for propaganda and recruiting.
Facebook has ramped up use of artificial intelligence such as image matching and language understanding to identify and remove content quickly, the report said.