London, Feb 18 (AP/UNB) — British lawmakers issued a scathing report Monday that accused Facebook of intentionally violating privacy and anti-competition laws in the U.K., and called for greater oversight of social media companies.
The report on fake news and disinformation on social media sites followed an 18-month investigation by Parliament's influential media committee. The committee recommended that social media sites be required to follow a mandatory code of ethics, overseen by an independent regulator, to better control harmful or illegal content.
The report called out Facebook in particular, saying that the site's structure seems to be designed to "conceal knowledge of and responsibility for specific decisions."
"It is evident that Facebook intentionally and knowingly violated both data privacy and anti-competition laws," the report states. It also accuses CEO Mark Zuckerberg of showing contempt for the U.K. Parliament by declining numerous invitations to appear before the committee.
"Companies like Facebook should not be allowed to behave like 'digital gangsters' in the online world, considering themselves to be ahead of and beyond the law," the report added.
Parliamentary committee reports are intended to influence government policy, but are not binding.
Facebook said it shared "the committee's concerns about false news and election integrity" and was open to "meaningful regulation."
"While we still have more to do, we are not the same company we were a year ago," said Facebook's U.K. public policy manager, Karim Palant.
"We have tripled the size of the team working to detect and protect users from bad content to 30,000 people and invested heavily in machine learning, artificial intelligence and computer vision technology to help prevent this type of abuse."
Facebook and other internet companies have been facing increased scrutiny over how they handle user data and have come under fire for not doing enough to stop misuse of their platforms by groups trying to sway elections.
The report echoes and expands upon an interim report with similar findings issued by the committee in July. And in December, a cache of documents released by the committee offered evidence that the social network had used its enormous trove of user data as a competitive weapon, often in ways designed to keep its users in the dark.
Facebook faced its biggest privacy scandal last year when Cambridge Analytica, a now-defunct British political data-mining firm that worked for the 2016 Donald Trump campaign, accessed the private information of up to 87 million users.
New York, Feb 15 (AP/UNB) — A report says Facebook and the Federal Trade Commission are negotiating a "multibillion dollar" fine for the social network's privacy lapses.
The Washington Post said Thursday that the fine would be the largest ever imposed on a tech company. Citing unnamed sources, it also said the two sides have not yet agreed on an exact amount.
Facebook has had several high-profile privacy lapses in the past couple of years. The FTC has been looking into the Cambridge Analytica scandal since last March. The data mining firm accessed the data of some 87 million Facebook users without their consent.
At issue is whether Facebook is in violation of a 2011 agreement with the FTC promising to protect user privacy. Facebook and the FTC declined to comment.
London, Feb 9 (AP/UNB) — Instagram has agreed to ban graphic images of self-harm after objections were raised in Britain following the suicide of a teen whose father said the photo-sharing platform had contributed to her decision to take her own life.
Instagram chief Adam Mosseri said Thursday evening the platform is making a series of changes to its content rules.
He said: "We are not where we need to be on self-harm and suicide, and we need to do more to protect the most vulnerable in our community."
Mosseri said further changes will be made.
"I have a responsibility to get this right," he said. "We will get better and we are committed to finding and removing this content at scale, and working with experts and the wider industry to find ways to support people when they're most in need."
The call for changes was backed by the British government after the family of 14-year-old Molly Russell found material related to depression and suicide on her Instagram account after her death in 2017.
Her father, Ian Russell, said he believes the content Molly viewed on Instagram played a contributing role in her death, a charge that received wide attention in the British press.
The changes were announced after Instagram and other tech firms, including Facebook, Snapchat and Twitter, met with British Health Secretary Matt Hancock and representatives from the Samaritans, a mental health charity that works to prevent suicide.
Instagram is also removing non-graphic images of self-harm from searches.
Facebook, which owns Instagram, said in a statement that independent experts advise that Facebook should "allow people to share admissions of self-harm and suicidal thoughts but should not allow people to share content promoting it."
New York, Feb 01 (AP/UNB) — Facebook says Apple is restoring its access to a key development tool that the iPhone maker disabled Wednesday.
Late Tuesday, TechCrunch reported that Facebook paid teens and other users who agreed to download an app called Facebook Research. That app could extensively track their phone and web use. Apple said Facebook was abusing the tool, known as a developer enterprise certificate, to distribute the app on iPhones in a way that allowed the social network to sidestep Apple restrictions on data collection.
By revoking the certificate for the iOS software that powers the iPhone and iPad, Apple closed off Facebook's efforts to sidestep Apple's app store and its tighter rules on privacy.
Apple did not immediately respond to a message seeking comment Thursday afternoon. Facebook did not say whether it agreed to any conditions for the certificate restoration.
In an internal memo sent on Wednesday, Facebook told employees it is "working closely" with Apple to reinstate access. It also told workers to install the public versions of apps from the app store. Apps that it said "may not work" included internal versions of Facebook, Workplace, Instagram and the Ride app, which helps workers with transportation. WhatsApp was not affected.
While Facebook engineers could still write code and work on iPhone apps during the shutoff, their ability to test them in the field was limited.
In a statement, Facebook said it is "in the process of getting our internal apps up and running." The company noted that the issue had no impact on its consumer services.
During the shutoff, Facebook also lost the ability to create and push out iPhone apps such as internal tools and apps to its own employees. That's a big deal since Facebook publishes tools and future products to its own team to test before providing them to the public, said Marty Puranik, CEO and founder of cloud hosting company Atlantic.Net.
Puranik, who regularly works with developers, said the certificate revocation also meant developers lost the ability to publish their iPhone apps without vetting by Apple. Those in the program can skip Apple's compliance and user safety checks, which leads to faster updates.
Still, the shutoff didn't seem to cripple Facebook's operations. Its developers work on code on Facebook's internal systems. And version 206.0 of the Facebook app for iPhones was sent out on Thursday morning, while the shutoff was still in effect.
Google said Thursday that Apple has also revoked its enterprise certificate, blocking Google employees from testing new app features on iPhones.
But the company seemed confident it would quickly regain its access. "We're working with Apple to fix a temporary disruption to some of our corporate iOS apps, which we expect will be resolved soon," the company said in a statement.
Google declined to say why it lost the certificate, but the revocation came a day after the company voluntarily withdrew a market-research app called "Screenwise Meter" that had been distributed to consumers, although not to teens.
Google and Apple have a lucrative business relationship worth billions of dollars a year, since Google pays a commission on the ads it sells as the built-in search engine on iPhones.
New York, Feb 01 (AP/UNB) — Facebook says it has removed 783 Iran-linked pages, accounts and groups from its service for what it calls "coordinated inauthentic behavior." That's the social network's term for fake accounts run with the intent of disrupting politics and elections.
Facebook has been disclosing such purges more regularly in recent months, including ones linked to groups in Myanmar, Bangladesh and Russia.
The accounts on Facebook and Instagram typically misrepresented themselves as locals in more than two dozen countries, including Afghanistan, Germany, India, Saudi Arabia and the U.S.
Facebook said Thursday the accounts spent about $30,000 on advertisements, paid for in U.S. dollars, British pounds, Canadian dollars and euros.
The company said Twitter helped its investigation by sharing information about suspicious activity it found on its own service. The companies, along with others in the tech industry, have been cooperating more when it comes to such account takedowns by sharing information.
Such cooperation can help the companies avoid regulatory scrutiny by showing critics and lawmakers that they can set aside differences when it comes to battling outside threats that affect their users.
The latest removed accounts, Facebook said, typically posed as locals in various countries and posted news stories on current events, including stories from Iranian state media about conflicts in Syria and Yemen.