Google shuts YouTube channel implicated in Kremlin political propaganda ops

A YouTube channel that had been implicated in Russian disinformation operations targeting the 2016 U.S. election has been taken down by Google.

Earlier this week The Daily Beast claimed the channel, run by two black video bloggers calling themselves Williams and Kalvin Johnson, was part of Russian disinformation operations — saying this had been confirmed to it by investigators examining how social media platforms had been utilized in a broad campaign by Russia to try to influence US politics.

The two vloggers apparently had accounts on multiple other social media platforms, and their content was pulled from Facebook back in August after being identified as Russian-backed propaganda, according to The Daily Beast's sources.

Videos posted to the YouTube channel, which was live until earlier this week, apparently focused on criticizing and abusing Hillary Clinton, accusing her of being a racist and spreading various conspiracy theories about the Clintons, alongside pro-Trump commentary.

The content appeared intended for an African American audience, although the videos did not gain significant traction on YouTube, according to The Daily Beast, which said they had only garnered “hundreds” of views prior to the channel being closed (vs the pair’s Facebook page having ~48,000 fans before it was closed, and videos uploaded there racking up “thousands” of views).

A Google spokesperson did not answer the specific questions we put to the company about the YouTube channel, sending only this generic statement: “All videos uploaded to YouTube must comply with our Community Guidelines and we routinely remove videos flagged by our community that violate those policies. We also terminate the accounts of users who repeatedly violate our Guidelines or Terms of Service.”

So while the company appears to be confirming it took the channel down, it is not providing a specific reason beyond TOS violations at this stage. (And the offensive nature of the content offers more than enough justification for Google to shutter the channel.)

However, earlier this week the Washington Post reported that Google had uncovered evidence that Russian operatives spent money buying ads on its platform in an attempt to interfere in the 2016 U.S. election, citing people familiar with the investigation.

The New York Times also reported that Google has found accounts believed to be associated with the Russian government — claiming Kremlin agents purchased $4,700 worth of search ads and more traditional display ads. It also said the company has found a separate $53,000 worth of politically themed ads that were purchased from Russian internet addresses, building addresses or with Russian currency — though the newspaper’s source said it is not clear whether the latter spend was definitively associated with the Russian government.

Google has yet to publicly confirm any of these reports, though it has not denied them either. Its statement so far has been: “We are taking a deeper look to investigate attempts to abuse our systems, working with researchers and other companies, and will provide assistance to ongoing inquiries.”

The company has been called to testify before the Senate Intelligence Committee on November 1, along with Facebook and Twitter. The committee is examining how social media platforms may have been used by foreign actors to influence the 2016 U.S. election.

Last month Facebook confirmed Russian agents had utilized its platform in an apparent attempt to sow social division across the U.S. — revealing it had found around $100,000 worth of targeted ad purchases, covering some 3,000+ ads.

Twitter has also confirmed finding some evidence of Russian interference in the 2016 US election on its platform.

The wider question for all these user-generated content platforms is how their stated preference for free speech (and hands-off moderation) can co-exist with weaponized disinformation campaigns conducted by hostile foreign entities with apparently unfettered access to their platforms — especially given the disinformation does not appear to be limited to adverts, with content itself also implicated (including, apparently, people being paid to create and post political disinformation).

User-generated content platforms have not historically sold themselves on the professional quality of the content they make available. Rather, their USP has been the authenticity of the voices they offer access to (though it is also fair to say they offer a conglomerate mix). But the question is what happens if social media users start to view that mix with increasing mistrust — as something that may be deliberately adulterated or infiltrated by malicious elements?

The tech platforms’ lack of a stated editorial agenda of their own could result in the perception that the content they surface is biased anyway — and in ways many people might equally view with mistrust. The risk is that the tech starts to look like a fake news toolkit for mass manipulation.
