More Than $1B in Fines Loom in Facebook Scandal

As Mark Zuckerberg prepared for his Tuesday Senate testimony and Wednesday House hearing, The Washington Post reported there’s a call for the Facebook CEO’s ouster and that record fines may loom for the embattled social media giant. (Zuckerberg’s Tuesday testimony was before a joint hearing of the Senate Commerce and Judiciary Committees.)

The Post reports that Scott Stringer, New York City’s comptroller and custodian of the city’s $193 billion pension fund, which holds $895 million in Facebook stock, wrote a letter March 27 pushing Facebook to add three new independent directors and replace Zuckerberg with an independent chairman.

“Part of my fiduciary role is to ask questions of this company as it relates to issues they’re facing,” said Stringer in an interview last week about his letter. Regarding the Cambridge Analytica revelations, he said, “There’s regulatory risk. There’s revenue risk. There’s reputational risk. And there’s also a genuine risk to our democracy.”

The paper also reports that three former Federal Trade Commission (FTC) officials said Facebook’s disclosure that its search tools were used to collect data on most of its 2.2 billion users could trigger record fines and create new legal vulnerability for failing to prevent risks to user data.

Those officials, all of whom were at the FTC during the privacy investigation that led to its 2011 consent decree with Facebook, said the company’s latest mishap may violate the decree’s provisions requiring the implementation of a privacy program.

But Facebook’s chief operating officer, Sheryl Sandberg, dismissed concerns about the fines in an interview with Bloomberg News. “I think we’re very confident that that was in compliance with the FTC consent decree,” she said.

Still, David Vladeck, who was head of the FTC’s bureau of consumer protection when the decree was drafted and signed by Facebook, told The Washington Post it is possible that this episode is a violation of the consent decree and that Facebook may face fines of $1 billion or more. (Civil penalties for violating an FTC order ran to roughly $40,000 per violation per day at the time, so fines compound quickly when millions of users are affected.)

“The agency will want to send a signal … that [it] takes its consent decrees seriously,” Vladeck said.

In another blow, Charter Communications Chairman and CEO Tom Rutledge blogged Monday that he hopes Congress will crack down on Facebook and tackle online privacy.

“Despite our reliance on websites and social media, the truth is, most people don’t know that when they engage in these activities online, many internet companies are collecting a significant amount of information about them and selling it to others for advertising, research and even voter persuasion purposes,” Rutledge writes. “So we are urging Congress to pass a uniform law that provides greater privacy and data security protections and applies the same standard to everybody in the Internet ecosystem, including us.”

Facebook continues efforts to regain trust. On Monday, it announced a new research initiative with seven nonprofits to study the effect of social media on elections. Under the new initiative, social science researchers will propose research projects for peer review based on a set of general research goals. If a proposal is approved, the researchers will receive the anonymized data from Facebook and accompanying funding from the foundations.

Crucially, Facebook “will not have any right to review or approve their research findings prior to publication,” although it may have influence over which projects are approved.

read more here: responsemagazine.com

Why I left Fox News

By Ralph Peters

You could measure the decline of Fox News by the drop in the quality of guests waiting in the green room. A year and a half ago, you might have heard George Will discussing policy with a senator while a former Cabinet member listened in. Today, you would meet a Republican commissar with a steakhouse waistline and an eager young woman wearing too little fabric and too much makeup, immersed in memorizing her talking points.

This wasn’t a case of the rats leaving a sinking ship. The best sailors were driven overboard by the rodents.

As I wrote in an internal Fox memo, leaked and widely disseminated, I declined to renew my contract as Fox News’s strategic analyst because of the network’s propagandizing for the Trump administration. Today’s Fox prime-time lineup preaches paranoia, attacking processes and institutions vital to our republic and challenging the rule of law.

Four decades ago, as a U.S. Army second lieutenant, I took an oath to “support and defend the Constitution.” In moral and ethical terms, that oath never expires. As Fox’s assault on our constitutional order intensified, spearheaded by its after-dinner demagogues, I had no choice but to leave.

My error was waiting so long to walk away. The chance to speak to millions of Americans is seductive, and, with the infinite human capacity for self-delusion, I rationalized that I could make a difference by remaining at Fox and speaking honestly.

I was wrong.

As early as the fall of 2016, and especially as doubts mounted about the new Trump administration’s national security vulnerabilities, I increasingly was blocked from speaking on the issues about which I could offer real expertise: Russian affairs and our intelligence community. I did not hide my views at Fox and, as word spread that I would not unswervingly support President Trump and, worse, that I believed an investigation into Russian interference was essential to our national security, I was excluded from segments that touched on Vladimir Putin’s possible influence on an American president, his campaign or his administration.

I was the one person on the Fox payroll who, trained in Russian studies and the Russian language, had been face to face with Russian intelligence officers in the Kremlin and in far-flung provinces. I have traveled widely in and written extensively about the region. Yet I could only rarely and briefly comment on the paramount security question of our time: whether Putin and his security services ensnared the man who would become our president. Trump’s behavior patterns and evident weaknesses (financial entanglements, lack of self-control and sense of sexual entitlement) would have made him an ideal blackmail target — and the Russian security apparatus plays a long game.

As indictments piled up, though, I could not even discuss the mechanics of how the Russians work on either Fox News or Fox Business. (Asked by a Washington Post editor for a comment, Fox’s public relations department sent this statement: “There is no truth to the notion that Ralph Peters was ‘blocked’ from appearing on the network to talk about the major headlines, including discussing Russia, North Korea and even gun control recently. In fact, he appeared across both networks multiple times in just the past three weeks.”)

All Americans, whatever their politics, should want to know, with certainty, whether a hostile power has our president and those close to him in thrall. This isn’t about party but about our security at the most profound level. Every so often, I could work in a comment on the air, but even the best-disposed hosts were wary of transgressing the party line.

Fox never tried to put words in my mouth, nor was I told explicitly that I was taboo on Trump-Putin matters. I simply was no longer called on for topics central to my expertise. I was relegated to Groundhog Day analysis of North Korea and the Middle East, or to Russia-related news that didn’t touch the administration. Listening to political hacks with no knowledge of things Russian tell the vast Fox audience that the special counsel’s investigation was a “witch hunt,” while I could not respond, became too much to bear. There is indeed a witch hunt, and it’s led by Fox against Robert Mueller.

read more here: washingtonpost.com

‘Fiction is outperforming reality’: how YouTube’s algorithm distorts truth

It was one of January’s most viral videos. Logan Paul, a YouTube celebrity, stumbles across a dead man hanging from a tree. The 22-year-old, who is in a Japanese forest famous as a suicide spot, is visibly shocked, then amused. “Dude, his hands are purple,” he says, before turning to his friends and giggling. “You never stand next to a dead guy?”

Paul, who has 16 million mostly teen subscribers to his YouTube channel, removed the video from YouTube 24 hours later amid a furious backlash. That was still long enough for the footage to receive 6 million views and a spot on YouTube’s coveted list of trending videos.

The next day, I watched a copy of the video on YouTube. Then I clicked on the “Up next” thumbnails of recommended videos that YouTube showcases on the right-hand side of the video player. This conveyor belt of clips, which auto-play by default, is designed to seduce us into spending more time on Google’s video broadcasting platform. I was curious where they might lead.

The answer was a slew of videos of men mocking distraught teenage fans of Logan Paul, followed by CCTV footage of children stealing things and, a few clicks later, a video of children having their teeth pulled out with bizarre, homemade contraptions.

I had cleared my history, deleted my cookies, and opened a private browser to be sure YouTube was not personalising recommendations. This was the algorithm taking me on a journey of its own volition, and it culminated with a video of two boys, aged about five or six, punching and kicking one another.

“I’m going to post it on YouTube,” said a teenage girl, who sounded like she might be an older sibling. “Turn around and punch the heck out of that little boy.” They scuffled for several minutes until one had knocked the other’s tooth out.

There are 1.5 billion YouTube users in the world, which is more than the number of households that own televisions. What they watch is shaped by this algorithm, which skims and ranks billions of videos to identify 20 “up next” clips that are both relevant to a previous video and most likely, statistically speaking, to keep a person hooked on their screen.

Company insiders tell me the algorithm is the single most important engine of YouTube’s growth. In one of the few public explanations of how the formula works – that sketches the algorithm’s deep neural networks, crunching a vast pool of data about videos and the people who watch them – YouTube engineers describe it as one of the “largest scale and most sophisticated industrial recommendation systems in existence”.
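To make the two-stage shape of such systems concrete, here is a minimal, hypothetical sketch in Python. It is not YouTube’s code: the embeddings, candidate counts and engagement scores below are invented stand-ins for the deep neural networks the engineers describe, but the candidate-generation-then-ranking structure mirrors their published account.

    import numpy as np

    # Hypothetical two-stage recommender. All data here is randomly
    # generated for illustration; real systems learn embeddings and
    # engagement predictions from user behaviour at vast scale.
    rng = np.random.default_rng(seed=0)

    N_VIDEOS, DIM, N_CANDIDATES, N_UP_NEXT = 10_000, 32, 200, 20

    # Stand-in for learned video embeddings (normalised to unit length).
    embeddings = rng.normal(size=(N_VIDEOS, DIM))
    embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

    def generate_candidates(watched_id):
        """Stage 1: narrow the full catalogue to a few hundred videos
        'relevant' to the one just watched (here: nearest neighbours)."""
        similarity = embeddings @ embeddings[watched_id]
        similarity[watched_id] = -np.inf  # never re-recommend the same video
        return np.argsort(similarity)[-N_CANDIDATES:]

    def rank_up_next(candidates):
        """Stage 2: re-rank candidates by predicted engagement (a proxy
        for expected watch time) and keep the top 20 'up next' slots."""
        predicted_watch_time = rng.gamma(shape=2.0, scale=60.0,
                                         size=len(candidates))
        return candidates[np.argsort(predicted_watch_time)[::-1]][:N_UP_NEXT]

    up_next = rank_up_next(generate_candidates(watched_id=42))
    print(up_next)  # 20 video indices, ordered by predicted engagement

The key design choice in this shape of system — optimising the final ranking for predicted engagement rather than accuracy or user wellbeing — is exactly what critics argue pushes the recommendations toward ever more gripping content.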

Lately, it has also become one of the most controversial. The algorithm has been found to be promoting conspiracy theories about the Las Vegas mass shooting and incentivising, through recommendations, a thriving subculture that targets children with disturbing content such as cartoons in which the British children’s character Peppa Pig eats her father or drinks bleach.

Lewd and violent videos have been algorithmically served up to toddlers watching YouTube Kids, a dedicated app for children. One YouTube creator who was banned from making advertising revenues from his strange videos – which featured his children receiving flu shots, removing earwax, and crying over dead pets – told a reporter he had only been responding to the demands of Google’s algorithm. “That’s what got us out there and popular,” he said. “We learned to fuel it and do whatever it took to please the algorithm.”

Google has responded to these controversies in a process akin to Whac-A-Mole: expanding the army of human moderators, removing offensive YouTube videos identified by journalists and de-monetising the channels that create them. But none of those moves has diminished a growing concern that something has gone profoundly awry with the artificial intelligence powering YouTube.

Yet one stone has so far been left largely unturned. Much has been written about Facebook and Twitter’s impact on politics, but in recent months academics have speculated that YouTube’s algorithms may have been instrumental in fuelling disinformation during the 2016 presidential election. “YouTube is the most overlooked story of 2016,” Zeynep Tufekci, a widely respected sociologist and technology critic, tweeted back in October. “Its search and recommender algorithms are misinformation engines.”

If YouTube’s recommendation algorithm really has evolved to promote more disturbing content, how did that happen? And what is it doing to our politics?

read more here: www.theguardian.com

Facebook Pages that share false news won’t be able to buy ads

The company has already been working with outside fact-checkers like Snopes and the AP to flag inaccurate news stories. (These aren’t supposed to be stories that are disputed for reasons of opinion or partisanship, but rather outright hoaxes and lies.) It also says that when a story is marked as disputed, the link can no longer be promoted through Facebook ads.

The next step, which the company is announcing today, involves stopping Pages that regularly share these stories from buying any Facebook ads at all, regardless of whether or not the ad includes a disputed link.
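Seen together, the two rules — a disputed link can’t be promoted, and a repeat-offender Page can’t buy ads at all — amount to a pair of gates on the ad-buying flow. Here is a hypothetical sketch of that logic; Facebook has not published its actual thresholds, so the strike count and field names below are invented for illustration.

    from dataclasses import dataclass

    REPEAT_OFFENDER_STRIKES = 3  # assumed threshold; Facebook's is unpublished

    @dataclass
    class Page:
        name: str
        disputed_story_strikes: int = 0  # times fact-checkers flagged its shares

    def can_promote_link(link_is_disputed: bool) -> bool:
        """Rule 1: a link marked disputed by fact-checkers cannot be
        promoted through ads, whoever is buying."""
        return not link_is_disputed

    def can_buy_ads(page: Page) -> bool:
        """Rule 2: a Page that regularly shares disputed stories loses
        the ability to buy any ads, disputed link or not."""
        return page.disputed_story_strikes < REPEAT_OFFENDER_STRIKES

    hoax_page = Page("Totally Real News", disputed_story_strikes=5)
    print(can_promote_link(link_is_disputed=True))  # False
    print(can_buy_ads(hoax_page))                   # False

Swap in real fact-checker signals and Facebook’s actual threshold, and the same shape of check would gate the ad-buying flow described above.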

Facebook was criticized last year for its role in helping to spread fake/false news. (The company is using the term “false news” for now — “fake news” has become heavily politicized and almost meaningless.) Product Director Rob Leathern said the company has been trying to fight back in three ways — ending the economic incentive to post false news stories, slowing the spread of those stories and helping people make more informed decisions when they see a false story.

In this case, Leathern said blocking ad-buying is meant to change the economic incentives. Facebook is concerned that “there are Pages posting this information that are using Facebook Ads to build audiences” to spread false news. By changing the ad policy, Facebook makes it harder for companies to attract that audience.

read more here: techcrunch.com/2017/08/28/facebook-fake-news-ads/