‘Fiction is outperforming reality’: how YouTube’s algorithm distorts truth

It was one of January’s most viral videos. Logan Paul, a YouTube celebrity, stumbles across a dead man hanging from a tree. The 22-year-old, who is in a Japanese forest famous as a suicide spot, is visibly shocked, then amused. “Dude, his hands are purple,” he says, before turning to his friends and giggling. “You never stand next to a dead guy?”

Paul, who has 16 million mostly teen subscribers to his YouTube channel, removed the video from YouTube 24 hours later amid a furious backlash. That was still long enough for the footage to rack up 6m views and a spot on YouTube’s coveted list of trending videos.

The next day, I watched a copy of the video on YouTube. Then I clicked on the “Up next” thumbnails of recommended videos that YouTube showcases on the right-hand side of the video player. This conveyor belt of clips, which auto-play by default, is designed to seduce us into spending more time on Google’s video broadcasting platform. I was curious where they might lead.

The answer was a slew of videos of men mocking distraught teenage fans of Logan Paul, followed by CCTV footage of children stealing things and, a few clicks later, a video of children having their teeth pulled out with bizarre, homemade contraptions.

I had cleared my history, deleted my cookies, and opened a private browser to be sure YouTube was not personalising recommendations. This was the algorithm taking me on a journey of its own volition, and it culminated with a video of two boys, aged about five or six, punching and kicking one another.

“I’m going to post it on YouTube,” said a teenage girl, who sounded like she might be an older sibling. “Turn around and punch the heck out of that little boy.” They scuffled for several minutes until one had knocked the other’s tooth out.

There are 1.5 billion YouTube users in the world, which is more than the number of households that own televisions. What they watch is shaped by this algorithm, which skims and ranks billions of videos to identify 20 “up next” clips that are both relevant to a previous video and most likely, statistically speaking, to keep a person hooked on their screen.

Company insiders tell me the algorithm is the single most important engine of YouTube’s growth. In one of the few public explanations of how the formula works – a paper that sketches the algorithm’s deep neural networks, crunching a vast pool of data about videos and the people who watch them – YouTube engineers describe it as one of the “largest scale and most sophisticated industrial recommendation systems in existence”.
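In broad strokes, that paper describes a two-stage design: a candidate generator narrows billions of videos to a small pool of relevant clips, and a ranking model orders that pool by predicted engagement, surfacing the top 20 as “up next”. The sketch below illustrates only that shape; the function names and the use of predicted watch time as the ranking score are assumptions for illustration, not YouTube’s actual code.

```python
# Minimal sketch of a two-stage recommend-then-rank pipeline, assuming
# hypothetical helpers (candidate_generator, predict_watch_time). This is
# an illustration of the general design, not YouTube's implementation.
from typing import Callable, List


def recommend_up_next(
    current_video: str,
    candidate_generator: Callable[[str], List[str]],
    predict_watch_time: Callable[[str, str], float],
    n: int = 20,
) -> List[str]:
    """Return the n candidates predicted to keep the viewer watching longest."""
    # Stage 1: narrow a huge catalogue down to a few hundred relevant candidates.
    candidates = candidate_generator(current_video)

    # Stage 2: rank candidates by predicted engagement (here, expected watch time)
    # and keep the top n as the "up next" list shown beside the player.
    ranked = sorted(
        candidates,
        key=lambda vid: predict_watch_time(current_video, vid),
        reverse=True,
    )
    return ranked[:n]
```

The key point the sketch captures is that the ranking objective is engagement, not accuracy or safety, which is why a system like this can drift toward ever more gripping, and sometimes disturbing, material.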

Lately, it has also become one of the most controversial. The algorithm has been found to be promoting conspiracy theories about the Las Vegas mass shooting and incentivising, through recommendations, a thriving subculture that targets children with disturbing content such as cartoons in which the British children’s character Peppa Pig eats her father or drinks bleach.

Lewd and violent videos have been algorithmically served up to toddlers watching YouTube Kids, a dedicated app for children. One YouTube creator who was banned from making advertising revenues from his strange videos – which featured his children receiving flu shots, removing earwax, and crying over dead pets – told a reporter he had only been responding to the demands of Google’s algorithm. “That’s what got us out there and popular,” he said. “We learned to fuel it and do whatever it took to please the algorithm.”

Google has responded to these controversies in a process akin to Whac-A-Mole: expanding the army of human moderators, removing offensive YouTube videos identified by journalists and de-monetising the channels that create them. But none of those moves has diminished a growing concern that something has gone profoundly awry with the artificial intelligence powering YouTube.

Yet one stone has so far been left largely unturned. Much has been written about Facebook and Twitter’s impact on politics, but in recent months academics have speculated that YouTube’s algorithms may have been instrumental in fuelling disinformation during the 2016 presidential election. “YouTube is the most overlooked story of 2016,” Zeynep Tufekci, a widely respected sociologist and technology critic, tweeted back in October. “Its search and recommender algorithms are misinformation engines.”

If YouTube’s recommendation algorithm really has evolved to promote more disturbing content, how did that happen? And what is it doing to our politics?

read more here: www.theguardian.com

Facebook Pages that share false news won’t be able to buy ads

The company has already been working with outside fact-checkers like Snopes and the AP to flag inaccurate news stories. (These aren’t supposed to be stories that are disputed for reasons of opinion or partisanship, but rather outright hoaxes and lies.) It also says that when a story is marked as disputed, the link can no longer be promoted through Facebook ads.

The next step, which the company is announcing today, involves stopping Pages that regularly share these stories from buying any Facebook ads at all, regardless of whether or not the ad includes a disputed link.

Facebook was criticized last year for its role in helping to spread fake/false news. (The company is using the term “false news” for now — “fake news” has become heavily politicized and almost meaningless.) Product Director Rob Leathern said the company has been trying to fight back in three ways: ending the economic incentive to post false news stories, slowing the spread of those stories and helping people make more informed decisions when they see a false story.

In this case, Leathern said blocking ad-buying is meant to change the economic incentives. Facebook is concerned that “there are Pages posting this information that are using Facebook Ads to build audiences” to spread false news. By changing the ad policy, Facebook makes it harder for companies to attract that audience.
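Leathern did not spell out the mechanics, but the policy amounts to a simple rule: count how often a Page shares links that fact-checkers have flagged as disputed, and once that count passes some threshold, refuse the Page’s ad purchases altogether. The sketch below illustrates that rule only; the names, the strike counter and the threshold are hypothetical, since Facebook has not published the exact criteria.

```python
# Minimal sketch of the ad-blocking policy described above, with hypothetical
# names and a made-up threshold. Not Facebook's actual rule or code.
from typing import Dict, Set

DISPUTED_LINKS: Set[str] = set()   # links flagged as false by third-party fact-checkers
REPEAT_OFFENDER_THRESHOLD = 3      # hypothetical number of strikes before ads are blocked


def record_share(page_strikes: Dict[str, int], page_id: str, link: str) -> None:
    """Count a strike each time a Page shares a link flagged as disputed."""
    if link in DISPUTED_LINKS:
        page_strikes[page_id] = page_strikes.get(page_id, 0) + 1


def can_buy_ads(page_strikes: Dict[str, int], page_id: str) -> bool:
    """Pages that repeatedly share disputed links lose the ability to buy any ads,
    regardless of whether a given ad itself contains a disputed link."""
    return page_strikes.get(page_id, 0) < REPEAT_OFFENDER_THRESHOLD
```

The design choice worth noting is that the penalty attaches to the Page, not to individual ads, which is what removes the incentive to use ad spend to build an audience for false stories.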

read more here:
https://techcrunch.com/2017/08/28/facebook-fake-news-ads/?ncid=rss&utm_source=dlvr.it&utm_medium=twitter