Dhaka, Jul 3 (AP/UNB) - Aleksandr Kogan, the data scientist at the center of Facebook's Cambridge Analytica privacy scandal, said he is dropping a defamation lawsuit against the social network rather than engage in an expensive, drawn-out legal battle.
Kogan, 33, sued the social giant in March, claiming it scapegoated him to deflect attention from its own misdeeds, thwarting his academic career in the process. The suit sought unspecified monetary damages and a retraction and correction of what Kogan said were "false and defamatory statements."
"We thought there was a one percent chance they would do the right thing," Kogan told The Associated Press. Facebook is "brilliant and ruthless," he added. "And if you get in their way they will destroy you."
A Facebook spokesperson said the company had "no comment to share concerning this development."
The former Cambridge University psychology researcher created an online personality test app in 2014 that vacuumed up the personal data of as many as 87 million Facebook users. The vast majority of those were unwitting online friends of the roughly 200,000 people Kogan says were paid about $4 to participate in his "This Is Your Digital Life" quiz.
Cambridge Analytica, a political data-mining firm founded by conservative power brokers including billionaire Robert Mercer and former White House aide Steve Bannon, paid Kogan $800,000 to conduct his research and to provide the firm with a copy of the data. The project's aim was to create voter profiles based on Facebook users' online behavior to help tailor political ads, according to Christopher Wylie, a former data scientist at the firm.
In March 2018, when the scandal broke, Facebook executives charged that Kogan had lied to them about how the data he harvested would be used. Facebook deputy general counsel Paul Grewal claimed at the time in a statement to The New York Times that Kogan perpetrated "a scam — and a fraud." CEO Mark Zuckerberg accused Kogan of violating Facebook rules "to gather a bunch of information, sell it or share it in some sketchy way."
Kogan said such accusations were "either unfair or untrue." Facebook shut down Kogan's app in late 2015 after it was exposed in press accounts, and he said he then destroyed his copy of the rogue data at the company's request. But Facebook didn't ban him from the platform until the Cambridge Analytica scandal broke last year.
Evidence presented to a U.K. parliamentary committee indicated that Cambridge Analytica had not deleted the Kogan-acquired dataset on 30 million Facebook users by February 2016. Britain's Information Commissioner's Office said Cambridge Analytica used some of that data "to target voters during the 2016 U.S. presidential campaign process." Data collected included age, gender, posts, email addresses and pages users "liked," depending on their privacy settings, the regulator said.
Cambridge Analytica worked for the eventual 2016 GOP presidential nominee, Donald Trump. Had Trump not won the election, "my life (would be) very different," Kogan said.
Kogan and other developers say Facebook allowed such wholesale gathering of friend data at the time, although access was later throttled back for all but select partners.
"They created these great tools for developers to collect the data and made it very easy. This is not a hack. This was 'Here's the door. It's open. We're giving away the groceries. Please collect them," Kogan told CBS News' 60 Minutes last year.
Other developers tell similar tales of Facebook's lax attitude toward user data and their own naïve complicity. If those accounts are true, Facebook was in direct violation of a 2011 consent order with the Federal Trade Commission by allowing third-party apps like Kogan's to collect data on users without their knowledge or consent.
Kogan's university appointment ended in September, his company has gone bust and he has been doing freelance programming, he said. "I think it would be damn near impossible to get an academic job," Kogan said by phone from Buffalo, New York, where he currently lives with his wife.
Facebook's privacy transgressions are also the subject of investigations in Europe and by a number of U.S. state attorneys general. Canada has sued the company over its alleged failure to protect user data, as has the attorney general of the District of Columbia. A federal judge in northern California last month also allowed a class-action lawsuit over Facebook's privacy practices to move forward.
Kogan told the AP he now regrets invading so many people's privacy. "In hindsight it was clearly a really bad idea to do that project."
Dhaka, July 2 (UNB) - Over the past few months, Mark Zuckerberg has spoken at length about his grand plan for fixing Facebook, reports BBC.
In short, it involves “pivoting” - as they say - to a more private social network: one that focuses on closed spaces, like groups or messaging, rather than the public News Feed.
He unveiled this plan in March, a year after the Cambridge Analytica scandal hit.
At the time, I noted that critics were concerned that the shift would mean Facebook was abdicating some of its responsibilities. Making Facebook more private would arguably not remove the problems of abuse - though it would make it harder for outsiders to find instances of Facebook’s failures.
Recent stories have demonstrated that concern was perhaps justified.
On Monday, ProPublica revealed the existence of a private Facebook group which contained disturbing jokes allegedly posted by US Border Patrol agents.
The investigative site said comments included mockery of migrants who had died in custody, as well as aggressive, sexist remarks about prominent female politicians. The group has existed for more than three years and has almost 10,000 members.
A Facebook spokesperson told the BBC: "We want everyone using Facebook to feel safe. Our community standards apply across Facebook, including in secret groups. We're co-operating with federal authorities in their investigation."
Separately, a report last month from California-based investigative group Reveal exposed groups in which police officers from more than 50 departments across the country shared racist memes, Islamophobic content and conspiracy theories.
And the Washington Post detailed a flurry of groups offering bogus cancer treatment “advice”, such as to “use baking soda or frankincense” instead of chemotherapy. These groups are allowed to flourish - the Post reported at least two with more than 100,000 members.
Facebook said it provides related news stories alongside posts that might contain misinformation, but it has not shared statistics on how effective this measure is.
(Facebook has, however, banned some women who had shared mastectomy scars as an act of solidarity and encouragement with others facing their own battle with cancer.)
Hidden from view
What makes these examples of abuse more significant than what we’ve seen in the past? They show how Facebook’s strategy can push its problems into the shadows.
ProPublica was only able to observe the Border Patrol group thanks to someone sending them screenshots - otherwise it was entirely hidden from view.
Reveal had to use specially written software that cross-referenced members of hate groups against users who were signed up to legitimate pages about police work.
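Reveal has not published its tooling, so the following is only an illustrative sketch of that cross-referencing step, with invented user IDs standing in for the scraped membership lists:

```python
# Hypothetical sketch of the matching step Reveal describes: intersect
# the membership list of extremist groups with the membership list of
# legitimate police pages. All user IDs below are invented.

hate_group_members = {"user_a", "user_b", "user_c"}
police_page_members = {"user_b", "user_c", "user_d"}

# Accounts appearing in both sets are leads for manual verification,
# not proof of wrongdoing on their own.
overlap = hate_group_members & police_page_members
for user in sorted(overlap):
    print(user)
```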
The Washington Post reporter was able to access some groups, but was swiftly banned and blocked when it became clear who she was.
Even Facebook finds it more difficult to hold itself accountable when it comes to groups.
The site has said its ability to use algorithms and AI to detect hate speech and misinformation still falls short, and therefore it still relies heavily on users reporting inappropriate content.
In groups, this of course becomes far less likely: the inappropriate content is the reason people joined the group in the first place. And Facebook has shown limited willingness to proactively look for this kind of abuse itself.
Groups have, of course, been a feature on Facebook since the early days. But never before have they had such prominence.
Facebook, as directed by its leader, is aggressively pushing users to use groups more often. There’s an advertising campaign - which includes hand-painted murals - and a new button placed front and centre in its mobile app. Private is the new public.
“This vision could backfire terribly,” warned French journalism professor Frederic Filloux in 2018. “An increase in the weight of 'groups' means reinforcement of Facebook’s worst features - cognitive bubbles - where users are kept in silos fueled by a torrent of fake news and extremism.”
Make no mistake: few, if any, of the problems Facebook is “working hard” on at the moment would have come to light were it not for external pressure from journalists, lawmakers, academics and civil rights groups.
The examples I’ve raised here pose a question: is Facebook fixing itself, or merely making it harder for us to see that it’s broken?
Menlo Park, Jul 2 (AP/UNB) — A Facebook mail facility near company headquarters was evacuated Monday after a routine check found mail possibly containing the nerve agent sarin.
Authorities put the site under quarantine as they conducted additional testing. Four buildings were evacuated and three have been cleared for people to come back in, said Facebook spokesman Anthony Harrison in a statement. The suspicious package was delivered around 11 a.m. to one of the company's mail rooms, he said.
"Authorities have not yet identified the substance found," Harrison wrote.
There were no reports of injuries, Menlo Park Fire Marshal Jon Johnston said. Incoming mail undergoing routine processing by machine tested positive for sarin, but it could have been a false positive, Johnston said.
"Right now we don't have anybody that has any symptoms," he said. "We're just doing verification."
The FBI is assisting in the investigation, as is common in incidents such as this one.
The federal Centers for Disease Control and Prevention says sarin is a chemical warfare agent that is a clear, colorless, odorless and tasteless liquid. It can evaporate into the environment, prompting symptoms within seconds.
A drop of sarin on skin can cause sweating and muscle twitching, and exposure to large doses can result in paralysis and respiratory failure leading to death.
The CDC says people who are mildly exposed usually recover completely.
Boston, Jul 1 (AP/UNB) — Facebook says it will make advertisements for jobs, loans and credit card offers searchable for all U.S. users following a legal settlement designed to eliminate discrimination on its platform.
The plan, disclosed in an audit report released Sunday, voluntarily expands on a commitment the social media giant made in March, when it agreed to make its U.S. housing ads searchable by location and advertiser.
Previously, such ads could be delivered selectively to Facebook users based on data such as what they earn, their education level and where they shop.
The audit's leader, former American Civil Liberties Union executive Laura Murphy, was hired by Facebook in May 2018 to assess its performance on vital social issues.
Murphy has consulted with dozens of civil rights groups on the subject as part of her yearlong audit, assisted by lawyers from the firm Relman, Dane & Colfax. Sunday's 26-page report, which also deals with content moderation and enforcement and efforts to prevent meddling in the 2020 U.S. elections and census, was her second update.
The searchable housing ads database will roll out by the end of 2019, Facebook says, and Murphy said she expects the employment and financial product offerings databases to be available within the next year.
Murphy said she's "very excited" about the move, which she believes will improve the social mobility of millions of people in the United States.
Targeted ads tailored to individuals are Facebook's bread and butter, accounting for all but a sliver of the company's more than $50 billion in revenue last year. Making the ads searchable is unlikely to have a significant effect on Facebook's business. Analysts have cautioned, however, that any restrictions on Facebook's ability to target ads could scare off advertisers.
The move is likely part of Facebook's strategy to show regulators that it is doing a good job policing its own service, putting it in compliance with existing anti-discrimination law, and doesn't need a heavy-handed approach from lawmakers. It comes as the company faces increasing regulatory pressure.
As part of the settlement with plaintiffs including the ACLU and the National Fair Housing Alliance, Facebook agreed in March to stop targeting people based on age, gender and zip code and to also eliminate such categories as national origin and sexual orientation.
The groups had sued, claiming Facebook violated anti-discrimination laws by preventing audiences including single mothers and people with disabilities from seeing many housing ads, while some job ads were not reaching women and older workers.
Galen Sherwin, senior staff attorney at the ACLU and the group's lead attorney in the case, said making the three Facebook databases searchable by anyone "definitely creates greater access to information about economic opportunities."
Civil rights groups are concerned that the secretive, proprietary algorithms that govern how the company steers ads, even when not consciously targeting specific groups, could still be discriminatory.
"I wish we could see into the black box," Sherwin said.
Facebook still faces a U.S. Department of Housing and Urban Development complaint over housing ad-targeting and delivery. Murphy, the auditor, said she thinks the company understands it's "going to have to look at the algorithms" behind them.
The company also faces privacy and antitrust investigations in the U.S. and Europe over its invasive data collection practices, and struggles to police hate speech globally, sometimes with lethal repercussions.
Facebook is currently in talks to create an external oversight board to monitor such issues and its level of independence is one subject of debate.
Sunday's audit update also addresses Facebook's efforts to shed "harmful content," including a new U.S. pilot program in which dedicated monitors will focus on hate speech alone. A few dozen are involved so far, all drawn from the more than 20,000 outsourced content moderators who screen the 2.3 billion-user platform, the company said.
Audit team recommendations include ending the humor exception in Facebook's hate speech policy and devising better mechanisms for blocking harassment, which can be especially overwhelming when automated.
Simply defining actionable hate speech — which can vary by nation, region, language and cultural context — is a tall order.
The report says Facebook is committed to stepping up efforts to fight voter suppression in the 2020 elections and plans to have policies ready by fall to counter attempts to interfere with the census.
San Francisco, Jun 28 (AP/UNB) — Presidents and other world leaders and political figures who use Twitter to threaten or abuse others could find their tweets slapped with warning labels.
The new policy, announced by the company on Thursday, comes amid complaints from activists and others that President Donald Trump has gotten a free pass from Twitter to post hateful messages and attack his enemies in ways they say could lead to violence.
From now on, a tweet that Twitter deems to involve matters of public interest, but which violates the service's rules, will be obscured by a warning explaining the violation.
Users will have to tap through the warning to see the underlying message, but the tweet won't be removed, as Twitter might do with a regular person's posts.
Twitter said the policy applies to all government officials, candidates and similar public figures with more than 100,000 followers. In addition to applying the label, Twitter won't use its algorithms to "elevate" or otherwise promote such tweets.
"It's a step in the right direction," said Keegan Hankes, research analyst for the Southern Poverty Law Center's Intelligence Project, who focuses on far-right extremist propaganda online. But, he added, Twitter is essentially arguing "that hate speech can be in the public interest. I am arguing that hate speech is never in the public interest."
Twitter refused to comment on whether any of Trump's past tweets violated its rules and would not say what role, if any, his Twitter activity played in the creation of the new warning-label policy.
The new stance could fuel additional Trumpian ire toward social media. The president routinely complains, without evidence, that social media sites are biased against him and other conservatives.
Twitter's rules prohibit threatening violence against a person or group, engaging in "targeted harassment of someone," or inciting others to do so, such as by wishing harm on a person. The rules also ban hate speech against a group based on race, ethnicity, gender or other categories.
Up to now, the company has exempted prominent leaders from many of those rules, contending that publishing controversial tweets from politicians helps hold them accountable and encourages discussion.
But there have been longstanding calls to remove Trump from the service over what some have called abusive and threatening behavior.
Some activists complained this week after the president threatened Iran with "obliteration" in some areas if it attacks the U.S. Trump has also tweeted a video of himself beating up a man with a CNN logo in place of his head and retweeted seemingly faked anti-Muslim videos.
"Donald Trump has changed political discourse on Twitter and everywhere else, given the level of toxic statements he has made about vulnerable communities in America," Hankes said.
Other politicians could likewise become subject to warning labels.
In 2018, French prosecutors filed preliminary charges against far-right politician Marine Le Pen for tweeting brutal images of Islamic State violence. Twitter prohibits material that is "excessively gory."
And in March, Brazilian President Jair Bolsonaro stirred outrage by sharing a video on Twitter of a man urinating on the head of another man during a Carnival party.
Insults and mockery fall into a gray area. Calling someone a "lowlife," a "dog" or a "stone cold LOSER," as Trump has done, may not in itself be a violation. But repeated insults against someone might amount to prohibited harassment.
Jennifer Grygiel, a social media expert and professor at Syracuse University, said Twitter "obviously" enacted the new policy because of Trump's Twitter activity.
But Grygiel said the new rule doesn't go far enough. Because of the president's outsize ability to start wars, move stock markets or influence other world events, Twitter should instead review leaders' tweets before they are sent out and block them if necessary, Grygiel said.
Twitter's new policy doesn't apply to past tweets.
Twitter said it is still possible for a government official or other figure to tweet something so egregious that it warrants removal. A direct threat of violence against an individual, for instance, would qualify.
The company said warning-label decisions will be made by a group that includes members of its trust and safety, legal and public policy teams, as well as employees in the regions where particular tweets originate.