The Technology 202: Big Tech under pressure to limit spread of 8chan and other extremist content



Social media companies are facing greater pressure to limit the spread of content from 8chan and other sites known to foster violent extremism, after this weekend’s shooting in El Paso became the third this year to apparently begin with the gunman posting a hate-filled screed to the fringe site. 

As of this morning, 8chan is offline after security and hosting companies dropped the site. But it can still communicate with followers through a verified Twitter account. The #untwitter8chan hashtag began trending as users demanded that the platform remove the account; some people called on Twitter users to block major advertisers on the platform to send a message to the San Francisco social network.

Alex Stamos, the former Facebook chief security officer who now serves as the director of the Stanford Internet Observatory, tells me 8chan can use Twitter to broadcast where to find it in the darker corners of the internet now that its main site is offline.


The groundswell highlights how mainstream social networks, which have largely evaded the political glare in the wake of the shooting, can still play a role in limiting the spread of incendiary content from sites that might otherwise stay in the shadows. 

Stamos tells me he wants “responsible” tech companies such as Facebook, Google and Twitter to start treating content from 8chan and similar sites like spam or sites that spread malware. He’s calling for a social media blockade against the 8chan operators and links from fringe sites so that they aren’t amplified. 

“Creating a cliff between those sites and the first click on 8chan would be a small win,” Stamos said in a tweet this week.

Some key lawmakers agree that Big Tech needs to take more responsibility to limit violence that begins online. “Truthfully, this kind of radicalization isn’t just happening on fringe platforms like the one used by the El Paso shooter,” Sen. Mark Warner (D-Va.) warned in a statement this week. “Extremists in all forms can easily exploit the reach, scale, and openness of even the most popular social media platforms like Facebook and YouTube, using them as a tool to recruit other extremists and spread hate.

“Social media companies today have an incredible amount of power and an equal amount of responsibility,” he continued. “In the face of increasing online radicalization, social media platforms should make it a priority to adapt their approach to our ever-changing online threats, and update how they understand and define dangerous or hateful content.”

But right now, the companies’ content moderation policies are highly inconsistent, which complicates any coordinated response. 

Twitter wouldn’t comment on whether it would leave 8chan’s verified account online in the face of public pressure. Facebook also didn’t comment on whether it would consider blocking all 8chan links from its service — but the company does not allow content that praises or supports the El Paso shooting, including shares of the shooter’s 8chan manifesto. Facebook also blocks links to sites it has identified as hosting the manifesto, including places where it was shared on 8chan. Google did not respond to requests for comment about how it might take action against 8chan, but the company has previously kicked the website out of its search results.

However, Stamos noted that YouTube is currently hosting a video that 8chan owner Jim Watkins posted yesterday, in which he says 8chan is cooperating with law enforcement and criticizes security company Cloudflare’s decision to terminate service to the site. 

Stamos says there should be a new coordinating body, run by industry but perhaps convened by the U.S. government, that could decide on and maintain a list of websites the social networks should voluntarily ban. This would ensure companies present a more unified front against content from 8chan and other fringe sites that spread disinformation, an idea that already has some backing from members of Congress.

“You have to have consistency for this to be effective,” Stamos told me. 

But other security and privacy experts worry this could make matters worse. Cindy Cohn, the executive director of the Electronic Frontier Foundation, thinks having one body make content moderation decisions across several platforms could create bigger problems when mistakes are inevitably made — or when people try to game the system. 

“I wouldn’t want a central location where censorship decisions get made,” Cohn told me. 

Cohn also warned companies against reacting to headlines or trending hashtags when making content moderation decisions — and instead recommended they create clear guidelines about their approach and explain their decisions to remove or keep accounts. “These companies are not transparent enough about these decisions,” she said. 

Meanwhile, political scrutiny of the issue is mounting. My colleague Drew Harwell reported yesterday that the House Homeland Security Committee demanded in a letter that Watkins, an American Web entrepreneur living in the Philippines, provide answers on how 8chan had responded in the wake of the three mass shootings this year that were promoted and celebrated on the site. 

The same committee also announced that it would consider domestic terrorism legislation next month, which would include “a bipartisan commission of experts to come up with recommendations at the intersection of homeland security and social media to address emerging (non-cyber) threats.”

BITS, NIBBLES AND BYTES

BITS: Twitter is telling newcomers to 2020 congressional and gubernatorial races that they have to win their primaries before the company verifies their profiles, CNN’s Maegan Vazquez and Donie O’Sullivan report. But politicians say the policy could allow fake accounts posing as candidates to proliferate and is at odds with Twitter’s purported mission of combating political interference.

The concerns aren’t hypothetical: Iran used accounts posing as candidates for the U.S. House of Representatives to influence voters in the 2018 midterms, cybersecurity firm FireEye revealed in May. Russia used similar tactics in 2016, impersonating the Republican Party of Tennessee and amassing more than 100,000 followers before Twitter shut the account down.

The campaign that flagged Twitter’s new policy to CNN waited just 24 hours for page verification from Facebook and Instagram, Maegan and Donie report.

“They should know that they’re being used to disseminate misinformation and character assassination,” Ray Buckley, chairman of the New Hampshire Democratic Party, told CNN. Buckley claims the party has already seen impersonators on Twitter. (Impersonation violates the platform’s rules.)

NIBBLES: Leaders on the House Energy and Commerce Committee are asking the White House to strike language that gives Internet companies a powerful legal shield from liability for user content from a trade agreement with Mexico and Canada awaiting congressional approval. Over the past year, both Democrats and Republicans have ramped up scrutiny of Section 230 of the Communications Decency Act, the law that grants Internet companies immunity for content posted by their users.

“While we take no view on that debate in this letter, we find it inappropriate for the United States to export language mirroring Section 230 while such serious policy discussions are ongoing,” Rep. Frank Pallone Jr. (D-N.J.) and Rep. Greg Walden (R-Ore.) wrote in a letter to U.S. Trade Representative Robert E. Lighthizer.

The two parties have expressed differing concerns with the statute. Sen. Josh Hawley (R-Mo.) introduced a bill in June that would revoke the legal shield from tech companies that couldn’t prove they were “politically neutral,” but he hasn’t found any Democratic support for the legislation. Democrats have expressed concern that Section 230 lets companies off the hook for political interference and online hate. House Speaker Nancy Pelosi (D-Calif.) warned tech companies in April that the law could be “in jeopardy.” Tech companies, on the other hand, have long argued that Section 230 is crucial to their business models.

BYTES: Google and Amazon have been profiting off sales of firearms and gun accessories — despite explicit company policies banning the promotion of the items, my colleague Greg Bensinger reports. Both companies offered rifle magazines for sale as recently as Monday, within just days of three mass shootings in the U.S., Greg found. 

“The availability of the goods speaks to the limitations of the company’s algorithms to keep even prohibited items from making their way to the websites,” Greg explains. For instance, Google Shopping surfaced a listing of shotgun rounds that could “place all projectiles on a man-sized target at seven yards,” even though the company “bans the promotion of products that cause damage, harm or injury.” By contrast, a search for “bump stock,” which Google banned after the device was used in the 2017 Las Vegas massacre, produces no results.

Amazon also relies on independent sellers, but because the company is involved in the fulfillment process, some violating items may be shipped directly from Amazon’s warehouses. Both Google and Amazon removed some items once The Post notified them of the listings, though a paid advertisement for a rifle magazine remained on Amazon. (Amazon CEO Jeff Bezos also owns The Washington Post.)

PRIVATE CLOUD

— News from the private sector:

Amazon Mail-Order Pharmacy Faces Pushback

Amazon’s PillPack mail-order pharmacy is accused by a health-technology company of improperly obtaining patient prescription data, in the latest sign that Amazon’s foray into healthcare is running up against industry incumbents.

Wall Street Journal

PUBLIC CLOUD

— Democratic Sens. Edward J. Markey (Mass.) and Richard Blumenthal (Conn.) want Facebook CEO Mark Zuckerberg to answer questions about how long flaws that allowed users of Facebook’s Messenger Kids app to speak with unapproved contacts went undetected and whether all affected users have been notified. The inquiry revives concerns that Messenger Kids may violate federal law.

“Children’s privacy and safety online should be Messenger Kids’ top priority,” the senators wrote in a letter to Zuckerberg. Facebook began alerting parents to the design flaw in July, Russell Brandom at The Verge first reported, but the company did not disclose how long the flaw existed.

The pair of lawmakers also want to know if Facebook considers itself released from liability for any violations of the Children’s Online Privacy Protection Act, the federal law that requires companies to protect the privacy of users under 13, given the wide-ranging immunity granted to the company by the Federal Trade Commission for violations that occurred before a June 24 settlement.

The Campaign for a Commercial-Free Childhood, which claimed the recent settlement “sold out kids,” asked the FTC last October to investigate allegations that Facebook collected children’s personal information without their parents’ explicit consent. As part of the new inquiry, Markey and Blumenthal are asking Facebook to provide information about what communications with the FTC, if any, resulted from that request.


#TRENDING

—  Tech news generating buzz around the Web:

Tourists Are Fueling a Boom in Personal Translation Devices

While smartphone apps remain a popular — and common — translation tool, Pocketalk has carved out its own niche. Dedicated to a single purpose, the gadget has a sensitive microphone and accesses machine translation and voice-recognition software from Google, Baidu and others, improving accuracy. More than 500,000 Pocketalk units have been sold since the device debuted in 2017.

Bloomberg
