Social Networks

'How Lies on Social Media Are Inflaming the Israeli-Palestinian Conflict' (msn.com) 181

The New York Times reports on misinformation that's further inflaming the Israeli-Palestinian conflict: In a 28-second video, which was posted to Twitter this week by a spokesman for Prime Minister Benjamin Netanyahu of Israel, Palestinian militants in the Gaza Strip appeared to launch rocket attacks at Israelis from densely populated civilian areas.

At least that is what Mr. Netanyahu's spokesman, Ofir Gendelman, said the video portrayed. But his tweet with the footage, which was shared hundreds of times as the conflict between Palestinians and Israelis escalated, was not from Gaza. It was not even from this week. Instead, the video that he shared, which can be found on many YouTube channels and other video-hosting sites, was from 2018. And according to captions on older versions of the video, it showed militants firing rockets not from Gaza but from Syria or Libya.

The video was just one piece of misinformation that has circulated on Twitter, TikTok, Facebook, WhatsApp and other social media this week about the rising violence between Israelis and Palestinians, as Israeli military ground forces attacked Gaza early on Friday. The false information has included videos, photos and clips of text purported to be from government officials in the region, with posts baselessly claiming early this week that Israeli soldiers had invaded Gaza, or that Palestinian mobs were about to rampage through sleepy Israeli suburbs. The lies have been amplified as they have been shared thousands of times on Twitter and Facebook, spreading to WhatsApp and Telegram groups that have thousands of members, according to an analysis by The New York Times.

The effect of the misinformation is potentially deadly, disinformation experts said, inflaming tensions between Israelis and Palestinians when suspicions and distrust have already run high.

Facebook

Facebook Loses Challenge To Irish Watchdog's Data Curbs (bloomberg.com) 16

Facebook lost a court fight over an initial order from a European Union privacy watchdog threatening its transfers of users' data across the Atlantic. From a report: An Irish court on Friday rejected the social network's challenge, saying it didn't establish "any basis" for calling into question the Irish Data Protection Commission's decision. The dispute is part of the fallout from July's shock decision at the EU's Court of Justice, which toppled the so-called Privacy Shield, an EU-approved trans-Atlantic transfer tool, over fears citizens' data isn't safe once shipped to the U.S. That EU court ruling was quickly followed by a preliminary order from the Irish authority telling Facebook it could no longer use an alternative tool, known as standard contractual clauses, to satisfy privacy rules when shipping data to the U.S.
Facebook

Facebook-Backed Diem Abandons Swiss License Application, Will Move To the US (cnbc.com) 34

Facebook-backed digital currency project Diem -- formerly known as Libra -- said Wednesday it has withdrawn its application for a Swiss payment license and will instead shift its operations to the United States. From a report: The Diem Association, which oversees development of the Diem digital currency, had been pursuing a payment system license with Switzerland's FINMA watchdog. Diem has now dropped plans to secure Swiss regulatory approval, while its U.S. subsidiary has partnered with Silvergate, a California state-chartered bank, to issue the token. "While our plans take the project fully within the US regulatory perimeter and no longer require a license from FINMA, the project has benefited greatly from the intensive licensing process in Switzerland and the constructive feedback from FINMA and more than two dozen other regulatory authorities from around the world convened by FINMA to consider the project," Stuart Levey, Diem's CEO, said in a statement. Diem said it plans to move its operational headquarters from Geneva to Washington, D.C., where its U.S. unit is based.
Social Networks

UK To Require Social Media To Protect 'Democratically Important' Content (theguardian.com) 53

Long-awaited proposals in the UK to regulate social media are a "recipe for censorship," campaigners have said, which fly in the face of the government's attempts to strengthen free speech elsewhere in Britain. From a report: The online safety bill, which was introduced to parliament on Wednesday, hands Ofcom the power to punish social networks which fail to remove "lawful but harmful" content. The proposals were welcomed by children's safety campaigners, but they have come under fire from civil liberties organisations. "Applying a health and safety approach to everybody's online speech combined with the threat of massive fines against the platforms is a recipe for censorship and removal of legal content," said Jim Killock, the director of the Open Rights Group. "Facebook does not operate prisons and is not the police. Trying to make platforms do the job of law enforcement through technical means is a recipe for failure."

The centre-right CPS thinktank was similarly critical. "It is for parliament to determine what is sufficiently harmful that it should not be allowed, not for Ofcom or individual platforms to guess," it said. "If something is legal to say, it should be legal to type," CPS's director, Robert Colvile, added. In its update to the bill from the white paper first drafted by Theresa May's government in 2019, the Department for Digital, Culture, Media and Sport added sections intended to prevent harm to free expression. Social networks will now need to perform and publish "assessments of their impact on freedom of expression."

China

Army of Fake Fans Boosts China's Messaging on Twitter (apnews.com) 69

China's ruling Communist Party has opened a new front in its long, ambitious war to shape global public opinion: Western social media. From a report: Liu Xiaoming, who recently stepped down as China's ambassador to the United Kingdom, is one of the party's most successful foot soldiers on this evolving online battlefield. He joined Twitter in October 2019, as scores of Chinese diplomats surged onto Twitter and Facebook, which are both banned in China. Since then, Liu has deftly elevated his public profile, gaining a following of more than 119,000 as he transformed himself into an exemplar of China's new sharp-edged "wolf warrior" diplomacy, a term borrowed from the title of a top-grossing Chinese action movie. "As I see it, there are so-called 'wolf warriors' because there are 'wolfs' in the world and you need warriors to fight them," Liu, who is now China's Special Representative on Korean Peninsula Affairs, tweeted in February. His stream of posts -- principled and gutsy ripostes to Western anti-Chinese bias to his fans, aggressive bombast to his detractors -- was retweeted more than 43,000 times from June through February alone. But much of the popular support Liu and many of his colleagues seem to enjoy on Twitter has, in fact, been manufactured.

A seven-month investigation by the Associated Press and the Oxford Internet Institute, a department at Oxford University, found that China's rise on Twitter has been powered by an army of fake accounts that have retweeted Chinese diplomats and state media tens of thousands of times, covertly amplifying propaganda that can reach hundreds of millions of people -- often without disclosing the fact that the content is government-sponsored. More than half the retweets Liu got from June through January came from accounts that Twitter has suspended for violating the platform's rules, which prohibit manipulation. Overall, more than one in ten of the retweets 189 Chinese diplomats got in that time frame came from accounts that Twitter had suspended by Mar. 1. But Twitter's suspensions did not stop the pro-China amplification machine. An additional cluster of fake accounts, many of them impersonating U.K. citizens, continued to push Chinese government content, racking up over 16,000 retweets and replies before Twitter kicked them off late last month and early this month, in response to the AP and Oxford Internet Institute's investigation.

Facebook

Facebook Ordered To Stop Collecting German WhatsApp Data (bloomberg.com) 32

Facebook was ordered to stop collecting German users' data from its WhatsApp unit, after a regulator in the nation said the company's attempt to make users agree to the practice in its updated terms isn't legal. From a report: Johannes Caspar, who heads Hamburg's privacy authority, issued a three-month emergency ban, prohibiting Facebook from continuing with the data collection. He also asked a panel of European Union data regulators to take action and issue a ruling across the 27-nation bloc. The new WhatsApp terms enabling the data scoop are invalid because they are opaque, inconsistent, and overly broad, he said. "The order aims to secure the rights and freedoms of millions of users which are agreeing to the terms Germany-wide," Caspar said in a statement on Tuesday. "We need to prevent damage and disadvantages linked to such a black-box-procedure." The order strikes at the heart of Facebook's business model and advertising strategy. It echoes a similar and contested step by Germany's antitrust office attacking the network's habit of collecting data about what users do online and merging the information with their Facebook profiles. That trove of information allows ads to be tailored to individual users -- creating a cash cow for Facebook.
Facebook

Facebook Is Testing Pop-Up Messages Telling People To Read a Link Before They Share It (techcrunch.com) 61

Following Twitter's lead, Facebook is trying out a new feature designed to encourage users to read a link before sharing it. TechCrunch reports: The test will reach 6% of Facebook's Android users globally in a gradual rollout that aims to encourage "informed sharing" of news stories on the platform. Users can still easily click through to share a given story, but the idea is that by adding friction to the experience, people might rethink their original impulses to share the kind of inflammatory content that currently dominates on the platform.

The strategy demonstrates Facebook's preference for a passive strategy of nudging people away from misinformation and toward its own verified resources on hot-button issues like COVID-19 and the 2020 election. While the jury is still out on how much of an impact this kind of gentle behavioral shaping can make on the misinformation epidemic, both Twitter and Facebook have also explored prompts that discourage users from posting abusive comments.

United States

DHS Launches Warning System To Find Domestic Terrorism Threats On Public Social Media (nbcnews.com) 70

An anonymous reader quotes a report from NBC News: The Department of Homeland Security has begun implementing a strategy to gather and analyze intelligence about security threats from public social media posts, DHS officials said. The goal is to build a warning system to detect the sort of posts that appeared to predict an attack on the U.S. Capitol on Jan. 6 but were missed or ignored by law enforcement and intelligence agencies, the officials said. The focus is not on the identity of the posters but rather on gleaning insights about potential security threats based on emerging narratives and grievances. So far, DHS is using human beings, not computer algorithms, to make sense of the data, the officials said. "We're not looking at who are the individual posters," said a senior official involved in the effort. "We are looking at what narratives are resonating and spreading across platforms. From there you may be able to determine what are the potential targets you need to protect."

The officials didn't describe what criteria or methods the analysts would use to parse the data. They said DHS officials have been consulting with social media companies, private companies and nonprofit groups that analyze open-source social media data. Law enforcement officers and intelligence analysts are legally entitled to examine -- without warrants -- what people say openly on Twitter, Facebook and other public social media forums, just as they can take in information from reading newspapers. But civil liberties groups generally oppose government monitoring of social media, arguing that it doesn't produce much intelligence and risks chilling free speech.

Facebook

Facebook Should Halt Instagram Kids Plan, Attorneys General Say (bloomberg.com) 41

Forty-four attorneys general sent a letter to Mark Zuckerberg asking him to abandon plans to create a version of Instagram for children under 13. From a report: "Facebook has historically failed to protect the welfare of children on its platforms," according to the letter, signed by attorneys general from New York and Massachusetts, among others. "The attorneys general have an interest in protecting our youngest citizens, and Facebook's plans to create a platform where kids under the age of 13 are encouraged to share content online is contrary to that interest."
Social Networks

Twitter and TikTok are Losing the War Against COVID Disinformation (usatoday.com) 146

America's leading social media companies "pledged to put warning labels on COVID-19 and COVID vaccines posts to stop the spread of falsehoods, conspiracy theories and hoaxes that are fueling vaccine hesitancy in the USA," reports USA Today.

"With the exception of Facebook, nearly all of them are losing the war against COVID disinformation." That's the conclusion of a new report shared exclusively with USA TODAY. As the pace of the nation's immunizations slows and public health agencies struggle to get shots in arms, Advance Democracy found that debunked claims sowing unfounded fears about the vaccines are circulating largely unfettered on Twitter and TikTok, including posts and videos that falsely allege the federal government is covering up deaths caused by the vaccines or that it is safer to get COVID-19 than to get the vaccine.

Twitter began labeling tweets that include misleading or false information about COVID-19 vaccines in March. It also started using a "strike system" to eventually remove accounts that repeatedly violate its rules. Yet none of the top tweets on Twitter using popular anti-vaccine hashtags like #vaccineskill, #novaccine, #depopulation and #plandemic had labels as of May 3, according to Advance Democracy, a research organization that studies disinformation and extremism. What's more, when USA TODAY searched these hashtags on Twitter, unlabeled posts were served up along with advertisements for major consumer brands including Cheetos, Volvo, CVS, even Star Wars...

After coming under fire for its slow response to COVID-19 misinformation, Facebook has made significant progress in labeling COVID-19 posts, according to Daniel Jones, president of Advance Democracy... As of May 3, all of the top 10 posts discussing COVID-19 vaccines that used the #vaccineskill hashtag were labeled, compared to only two of the top 10 on March 28, Advance Democracy found... Facebook told USA TODAY it has removed more than 16 million pieces of content on Facebook and Instagram for violating its COVID and vaccine policies since the beginning of the pandemic....

As of May 3, TikTok failed to consistently apply labels to anti-vaccination hashtags used in videos with millions of views, the report said. Nine of the top 10 videos related to COVID-19 vaccines using the hashtag #NoVaccine did not have a label. Videos with the #NoVaccine label racked up 20.5 million views...

The Advance Democracy research did not look at vaccine-related content on Facebook-owned Instagram or Google's YouTube.

"Promises to address public health misinformation online are only consequential if there is action and follow through..." Jones told USA Today.

"This pandemic is not over, and with the rate of vaccinations on the decline, directing users to reliable information on vaccines is more important than ever," Jones said.
Facebook

Facebook Criticized For 'Arbitrary' Suspension of Trump -- by Its Own Oversight Board (npr.org) 183

"It never occurred to me that a Facebook-appointed panel could avoid a clear decision about Donald Trump's heinous online behavior," writes a New York Times technology reporter. "But that is what it's done..."

The reporter calls the board's decision "kind of perfect, actually, since it forces everyone's hand — from the Facebook chief executive Mark Zuckerberg to our limp legislators in Congress..."

The editor of the conservative National Review adds: If Facebook had set out to demonstrate that it has awesome power over speech in the United States, including speech at the core of the nation's political debate, and is wielding that power arbitrarily, indeed has no idea what its own rules truly are or should be, it wouldn't have handled the question any differently... The oversight board underlines the astonishing fact that in reaching its most momentous free-speech decision ever in this country, in determining whether a former president of the United States can use its platform or not, Facebook made it up on the fly. "In applying this penalty," the board writes of the suspension, "Facebook did not follow a clear, published procedure." This is like the U.S. Supreme Court handing down decisions in the absence of a written Constitution, or a home-plate umpire calling balls and strikes without an agreed-upon strike zone...
John Samples, a member of the Oversight Board, has even said explicitly that their decision was not about former president Trump — but about Facebook itself. The Washington Post reports: Samples said the board found that Facebook enforced a rule that didn't exist at the time. Trump was suspended indefinitely, rather than permanently or for a specific period of time, as defined by the company's own rules. "In a sense we were being tough with them," Samples said.

Other members said the board's call should reassure anyone concerned that Facebook wields too much control over online speech. "Anyone who's concerned about Mark Zuckerberg's power and his company's power over our speech online should actually praise this decision," Julie Owono, executive director of Internet Sans Frontières, said at a virtual event hosted by the Stanford Cyber Policy Center. "The board refused to support an arbitrary suspension..."

The flurry of media appearances marked a critical moment in the board's existence, as it tries to prove its legitimacy, define its powers and establish its relationship with Facebook.

NPR notes that former Danish Prime Minister Helle Thorning-Schmidt, a board co-chair, even called Facebook "a bit lazy" for failing to set a specific penalty in the first place... "What we are telling Facebook is that they can't invent penalties as they go along. They have to stick to their own rules," Thorning-Schmidt said in an interview with Axios. The board's criticism didn't stop at Facebook's imposing what it called a "vague, standardless penalty." It slammed the company for trying to outsource its final verdict on Trump. "Facebook has a responsibility to its users and to its community and to the broader public to make its own decisions," Jamal Greene, another board co-chair and constitutional law professor at Columbia, said Thursday during an Aspen Institute event. "The board's job is to make sure that Facebook is doing its job," he said.

Tensions between the board's view of the scope of its role and Facebook's were also evident in the board's revelation that the company wouldn't answer seven of the 46 questions it asked about the Trump case. The questions Facebook refused to answer included how its own design and algorithms might have amplified the reach of Trump's posts and contributed to the Capitol assault. "The ones that the company refused to answer to are precisely related to what happened before Jan. 6," Julie Owono, an oversight board member and executive director of the digital rights group Internet Sans Frontières, said at the Aspen Institute event.

"Our decision says that you cannot make such an important decision, such a serious decision for freedom of expression, freedom of speech, without the adequate context."

Facebook

Months-long Twitter Backlash Had Zero Impact on WhatsApp's User Base (techcrunch.com) 47

An anonymous reader shares a report: It's safe to say WhatsApp didn't have the ideal start to 2021. Less than a week into the new year, the Facebook-owned instant messaging app had already annoyed hundreds of thousands of users with its scarily worded notification about a planned policy update. The backlash grew fast and millions of people, including several high-profile figures, started to explore rival apps Signal and Telegram.

Even governments, including India's -- WhatsApp's biggest market by users -- expressed concerns. (In the case of India, also an antitrust probe.) The backlash prompted WhatsApp to offer a series of clarifications and assurances to users, and it also postponed the deadline for enforcing the planned update by three months. Now with the May 15 deadline just a week away, we are able to quantify the real-world impact the aforementioned backlash had on WhatsApp's user base: Nada. The vast majority of users that WhatsApp has notified about the planned update in recent months have accepted the update, a WhatsApp spokesperson told TechCrunch. And the app continues to grow, added the spokesperson without sharing the exact figures.

Twitter

Twitter Begins To Show Prompts Before People Send 'Mean' Replies (nbcnews.com) 93

Nasty replies on Twitter will require a little more thought to send. From a report: The tech company said it is releasing a feature that automatically detects "mean" replies on its service and prompts people to review the replies before sending them. "Want to review this before Tweeting?" the prompt asks in a sample provided by the San Francisco-based company. Twitter users will have three options in response: tweet as is, edit or delete. The prompts are part of wider efforts at Twitter and other social media companies to rethink how their products are designed and what incentives they may have built in to encourage anger, harassment, jealousy or other bad behavior. Facebook-owned Instagram is testing ways to hide like counts on its service.
Education

American Schools' Phone Apps Send Children's Info To Ad Networks, Analytics Firms (theregister.com) 43

LeeLynx shares a report from The Register: The majority of Android and iOS apps created for US public and private schools send student data to assorted third parties, researchers have found, calling into question privacy commitments from Apple and Google as app store stewards. The Me2B Alliance, a non-profit technology policy group, examined a random sample of 73 mobile applications used in 38 different schools across 14 US states and found 60 percent were transmitting student data. The apps in question send data using software development kits or SDKs, which consist of modular code libraries that can be used to implement utility functions, analytics, or advertising without the hassle of creating these capabilities from scratch. Examples include: Google's AdMob, Firebase, and Sign-in SDKs, Square's OK HTTP and Okio SDKs, and Facebook's Bolts SDK, among others.

The data that concerns Me2B includes: identifiers (IDFA, MAID, etc), Calendar, Contacts, Photos/Media Files, Location, Network Data (IP address), permissions related to Camera, Microphone, Device ID, and Calls. About 49 percent of the apps reviewed sent student data to Google and about 14 percent communicated with Facebook, with the balance routing info to advertising and analytics firms, many among them characterized as high risk by the Me2B researchers. Among the public school apps, 67 percent sent data to third parties; private school apps proved less likely to send data to third parties (57 percent).

Interestingly, the research group found a significant difference across mobile platforms. According to The Register, "91 percent of student Android apps sent data to high-risk third parties while only 26 percent of iOS apps did so, and 20 percent of Android apps piped data to very high-risk third parties while only 2.6 percent of iOS did so."

The report adds: "Nonetheless, the researchers expressed concern that 95 percent of third-party data channels in the surveyed student apps are active even when the user is not signed in and that these apps send data as soon as the app is loaded."
Facebook

Signal Tried To Use Instagram Ads To Display the Data Facebook Collects and Sells. Facebook Banned Signal's Account. (mashable.com) 55

Privacy-oriented messaging app Signal tried to run a very candid ad campaign on Facebook-owned Instagram, but it wasn't meant to be. From a report: Signal explained how it went down in a blog post Tuesday. The idea was to post ads on Instagram which use the data an online advertiser may have collected about users, and basically show the user what that data might be for them. "You got this ad because you're a teacher, but more importantly you're a Leo (and single). This ad used your location to see you're in Moscow. You like to support sketch comedy, and this ad thinks you do drag," one of the ads said. According to Signal, the ad "would simply display some of the information collected about the viewer which the advertising platform uses."

The fact that Facebook and similar companies collect your data isn't a secret. According to Signal, however "the full picture is hazy to most -- dimly concealed within complex, opaquely-rendered systems and fine print designed to be scrolled past." In other words, you may have consented to this because you didn't bother to investigate the details, but you may feel differently if you knew exactly what online advertisers know about you. However, Facebook wasn't having it, and shut down both the campaign and Signal's ad account.

Facebook

Trump's Facebook Ban Should Not Be Lifted, Network's Oversight Board Rules (theguardian.com) 328

Donald Trump's Facebook account should not be reinstated, the social media giant's oversight board said on Wednesday, barring an imminent return to the platform. From a report: However, the board has punted the final decision over Trump's account back to Facebook itself, suggesting the platform make a decision in six months regarding what to do with Trump's account and whether it will be permanently deleted. Facebook suspended Trump's account after the Capitol attack of 6 January, when a mob of Trump supporters stormed Congress in an attempt to overturn the former president's defeat by Joe Biden in the 2020 presidential election. Trump was initially suspended from Facebook and Instagram for 24 hours, as a result of two posts shared to the platform in which he appeared to praise the actions of the rioters. The company then extended the president's ban "at least until the end of his time in office." His account was suspended indefinitely pending the decision of the oversight board, a group of appointed academics and former politicians meant to operate independently of Facebook's corporate leadership.
Facebook

New Emails Show Steve Jobs Referred To Facebook As 'Fecebook' Amid App Store Conflict (9to5mac.com) 59

The Apple vs. Epic legal battle has brought new documents to light, revealing the strained relationship between Apple and Facebook that dates as far back as 2011. 9to5Mac reports: Around this time, Facebook had not yet released a dedicated app for the iPad, which debuted in 2010. Apple's Scott Forstall, then serving as the company's software chief, sent an email to Phil Schiller and Steve Jobs regarding a meeting he had with Mark Zuckerberg about bringing Facebook to the iPad. At the heart of Facebook's concerns was that Apple would not allow the Facebook for iPad application to include "embedded apps." Forstall wrote: "I just discussed with Mark how they should not include embedded apps in the Facebook iPad app -- neither in an embedded web view or as a directory of links that would redirect to Safari. Not surprisingly, he wasn't happy with this as he considers these apps part of the 'whole Facebook experience' and isn't sure they should do an iPad app without them. Everything works in Safari, so he is hesitant to push people to a native app with less functionality, even if the native app is better for non-third party app features."

Zuckerberg suggested a few compromises to Forstall: Do not include a directory of apps in the Facebook app, links, or otherwise; Do not have third-party apps run in the embedded web view; Allow user posts in the news feed related to apps; and Tapping on one of these app-related links would (1) fast switch to a native app if one exists and the user has it installed, (2) take the user to the App Store if a native app exists and the user has not installed it, (3) link out to Safari otherwise.
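The three-branch link-handling rule in Zuckerberg's last compromise can be sketched as a simple decision function (the function and return values here are illustrative, not taken from the emails):

```python
def route_app_link(native_app_exists: bool, app_installed: bool) -> str:
    """Route a tapped app-related link per the 2011 proposal:
    (1) fast-switch to the native app if one exists and is installed,
    (2) send the user to the App Store if a native app exists but isn't installed,
    (3) link out to Safari otherwise.
    """
    if native_app_exists and app_installed:
        return "open native app"      # branch 1: fast app switch
    if native_app_exists:
        return "open App Store page"  # branch 2: prompt the user to install
    return "open in Safari"           # branch 3: web fallback
```

Branch 3 is the one Forstall flagged as problematic, since it routes users out of the native-app ecosystem entirely.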

"I think this is all reasonable, with the possible exception of #3," Forstall wrote in the email. Steve Jobs responded and wrote, "I agree -- if we eliminate Fecebooks third proposal it sounds reasonable." Note Jobs's spelling of Facebook there. A few days later, Forstall followed up and said that Zuckerberg did not like Apple's counterproposal. [...] CNBC adds: "When Facebook's iPad app eventually launched, it said that it would not support its own Credits currency on iOS for apps like Farmville -- a compromise along the lines of what Apple's executives discussed."

Yahoo!

Verizon Sells Internet Trailblazers Yahoo and AOL for $5 Billion (apnews.com) 64

AOL and Yahoo are being sold again, this time to a private equity firm. From a report: Wireless company Verizon will sell Verizon Media, which consists of the once-pioneering tech platforms, to Apollo Global Management in a $5 billion deal. Verizon said Monday that it will keep a 10% stake in the new company, which will be called Yahoo. Yahoo at the end of the last century was the face of the internet, preceding the behemoth tech platforms to follow, such as Google and Facebook. And AOL was the portal, bringing almost everyone who logged on during the internet's earliest days. Verizon spent about $9 billion buying AOL and Yahoo over two years starting in 2015, hoping to jump-start a digital media business that would compete with Google and Facebook.
Canada

Canadian Government Accused of Trying to Introduce Internet Censorship (vancouversun.com) 293

"After more than 25 years of Canadian governments pursuing a hands-off approach to the online world, the government of Justin Trudeau is now pushing Bill C-10, a law that would see Canadians subjected to the most regulated internet in the free world," argues the Vancouver Sun (in an article shared by long-time Slashdot reader theshowmecanuck): Although pitched as a way to expand Canadian content provisions to the online sphere, the powers of Bill C-10 have expanded considerably in committee, including a provision introduced last week that could conceivably allow the federal government to order the deletion of any Facebook, YouTube, Instagram or Twitter upload made by a Canadian. In comments this week, NDP leader Jagmeet Singh indicated his party was open to providing the votes needed to pass C-10, seeing the bill as a means to combat online hate...

The users themselves may not necessarily be subject to direct CRTC regulation, but social media providers would have to answer to every post on their platforms as if it were a TV show or radio program. This might be a good time to mention that members of the current Liberal cabinet have openly flirted with empowering the federal government to control social media. In a September tweet, Infrastructure Minister Catherine McKenna said that if social media companies "can't regulate yourselves, governments will." Heritage Minister Steven Guilbeault, the prime champion of Bill C-10, has spoken openly of a federal regulator that could order takedowns of any social media post that it deems to be hateful or propagandistic...

Basically, if your Canadian website isn't a text-only GeoCities blog from 1996, Bill C-10 thinks it's a program deserving of CRTC regulation. This covers news sites, podcasts, blogs, the websites of political parties or activist groups and even foreign websites that might be seen in Canada...

The penalties prescribed by Bill C-10 are substantial. For corporations, a first offence can yield penalties of up to $10 million, while subsequent offences could be up to $15 million apiece. If TikTok, Twitter, Facebook and YouTube are suddenly put in a situation where their millions of users must follow the same rules as a Canadian cable channel or radio station, it's not unreasonable to assume they may just follow Facebook's example [in Australia] and take the nuclear option.

The Internet

Investigation Finds Links Between Seamy Slander Sites and Reputation-Management Services (nytimes.com) 51

This week the New York Times published its online investigation into the seamy world of the professional slander industry. (Alternate URL.)
At first glance, the websites appear amateurish. They have names like BadGirlReports.date, BustedCheaters.com and WorstHomeWrecker.com. Photos are badly cropped. Grammar and spelling are afterthoughts. They are clunky and text-heavy, as if they're intended to be read by machines, not humans. But do not underestimate their power...

One woman in Ohio was the subject of so many negative posts that Bing declared in bold at the top of her search results that she "is a liar and a cheater" — the same way it states that Barack Obama was the 44th president of the United States. For roughly 500 of the 6,000 people we searched for, Google suggested adding the phrase "cheater" to a search of their names. The unverified claims are on obscure, ridiculous-looking sites, but search engines give them a veneer of credibility. Posts from Cheaterboard.com appear in Google results alongside Facebook pages and LinkedIn profiles....

That would be bad enough for people whose reputations have been savaged. But the problem is all the worse because it's so hard to fix. And that is largely because of the secret, symbiotic relationship between those facilitating slander and those getting paid to remove it.

Who, exactly? The Times spoke to:
  • Cyrus Sullivan, the Portland-based owner of one site who also runs a reputation-management service "to help people get 'undesirable information' about themselves removed from their search engine results. The 'gold package' cost $699.99. For those customers, Mr. Sullivan would alter the computer code underlying the offending posts, instructing search engines to ignore them...."
  • 247Removal's owner Heidi Glosser, who "charges $750 or more per post removal, which adds up to thousands of dollars for most of her clients. To get posts removed, she said, she often pays an 'administrative fee' to the gripe site's webmaster. We asked her whether this was extortion. 'I can't really give you a direct answer,' she said." She appeared to have links to...
  • Web developer Vikram Parmar, who seemed to be running several sites that produced slander while also simultaneously running sites that made money by removing that slander.
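
The Times doesn't spell out what Mr. Sullivan's "gold package" code change was, but the standard mechanism for instructing search engines to ignore a page is the robots noindex directive, set either in the page markup or as an HTTP response header (shown here as a generic example, not his actual code):

```html
<!-- In the page's <head>: tells compliant crawlers not to index this page -->
<meta name="robots" content="noindex">
<!-- Equivalent HTTP response header: X-Robots-Tag: noindex -->
```

Notably, a noindex directive only hides the post from search results; the defamatory page itself stays online.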

But finally, the Times reminded their readers that "in certain circumstances, Google will remove harmful content from individuals' search results, including links to 'sites with exploitative removal practices.' If a site charges to remove posts, you can ask Google not to list it.

"Google didn't advertise this policy widely, and few victims of online slander seem aware that it's an option. That's in part because when you Google ways to clean up your search results, Google's solution is buried under ads for reputation-management services..."
