Instagram head faces senators amid anger over possible harms
The head of Instagram on Wednesday met with deep skepticism on Capitol Hill over new measures the social media platform is adopting to protect young users.
Adam Mosseri appeared before a Senate panel and faced off with lawmakers angry over revelations of how the photo-sharing platform can harm some young users. Senators are also demanding that the company commit to making changes and increase its transparency.
Sen. Richard Blumenthal, D-Conn., who heads the Senate Commerce subcommittee on consumer protection, dismissed as “a public relations tactic” some safety measures announced by the popular photo-sharing platform.
“I believe that the time for self-policing and self-regulation is over,” Blumenthal said. “Self-policing depends on trust. Trust is over.”
Under sharp questioning by senators of both parties, Mosseri defended the company’s conduct and the efficacy of its new safety measures. He challenged the assertion that Instagram has been shown by research to be addictive for young people. Instagram, which along with Facebook is part of Meta Platforms Inc., has an estimated 1 billion users of all ages.
On Tuesday, Instagram introduced a previously announced feature that urges teenagers to take breaks from the platform. The company also announced other tools, including parental controls due to come out early next year, that it says are aimed at protecting young users from harmful content.
Senators of both parties were united in condemnation of the social network giant and Instagram, the photo-sharing juggernaut valued at some $100 billion that Facebook acquired for $1 billion in 2012.
The hearing grew more confrontational and emotionally charged as it went on.
“Sir, I have to tell you, you did sound callous,” Sen. Marsha Blackburn of Tennessee, the panel’s senior Republican, told Mosseri near the end of the hearing.
Senators repeatedly tried to win commitments from Mosseri for Instagram to provide full results of its internal research and its computer formulas for ranking content to independent monitors and Congress. They also tried to enlist his support for legislation that would curb the ways in which Big Tech deploys social media geared toward young people.
Mosseri responded mostly with general endorsements of openness and accountability, insisting that Instagram is an industry leader in transparency.
The issue is becoming increasingly urgent. An alarming advisory issued Tuesday by U.S. Surgeon General Vivek Murthy warned about a mental health crisis among children and young adults that has been worsened by the coronavirus pandemic. He said tech companies must design social media platforms that strengthen, rather than harm, young people’s mental health.
Meta, which is based in Menlo Park, California, has been roiled by public and political outrage over the disclosures by former Facebook employee Frances Haugen. She has made the case before lawmakers in the U.S., Britain and Europe that the company's systems amplify online hate and extremism and that the company elevates profits over the safety of users.
Haugen, a data scientist who had worked in Facebook’s civic integrity unit, buttressed her assertions with a trove of internal company documents she secretly copied and provided to federal securities regulators and Congress.
The Senate panel has examined Facebook’s use of information from its own researchers that could indicate potential harm for some of its young users, especially girls, while it publicly downplayed the negative impacts. For some Instagram-devoted teens, peer pressure generated by the visually focused app led to mental-health and body-image problems, and in some cases, eating disorders and suicidal thoughts, the research detailed in the Facebook documents showed.
The revelations in a report by The Wall Street Journal, based on the documents leaked by Haugen, set off a wave of recriminations from lawmakers, critics of Big Tech, child-development experts and parents.
“As head of Instagram, I am especially focused on the safety of the youngest people who use our services,” Mosseri testified. “This work includes keeping underage users off our platform, designing age-appropriate experiences for people ages 13 to 18, and building parental controls. Instagram is built for people 13 and older. If a child is under the age of 13, they are not permitted on Instagram.”
Mosseri outlined the suite of measures he said Instagram has taken to protect young people on the platform. They include keeping kids under 13 off it, restricting direct messaging between kids and adults, and prohibiting posts that encourage suicide and self-harm.
But, as researchers both internal and external to Meta have documented, the reality is different. Kids under 13 often sign up for Instagram with or without their parents’ knowledge by lying about their age. And posts about suicide and self-harm still reach children and teens, sometimes with disastrous effects.
Ohio retirement fund sues Facebook over investment loss
Ohio’s largest public employee pension fund has sued Facebook — now known as Meta — alleging that it broke federal securities law by purposely misleading the public about the negative effects of its social platforms and the algorithms that run them.
The lawsuit by the Ohio Public Employees Retirement System specifically claims that Facebook buried inconvenient findings about how the company has managed those algorithms as well as the steps it said it was taking to protect the public.
The suit also contends that Facebook knew its platform facilitated dissension, illegal activity and violent extremism but refused to correct it. The Associated Press and a coalition of other news organizations have reported extensively on Facebook's actions, on internal dissent warning of these problems, and on related issues around the world, based on internal company documents, now known as the Facebook Papers, leaked by the data scientist and former Facebook employee Frances Haugen.
“Facebook said it was looking out for our children and weeding out online trolls, but in reality was creating misery and divisiveness for profit,” Ohio Attorney General Dave Yost said in a statement. “We are not people to Mark Zuckerberg, we are the product and we are being used against each other out of greed.”
The lawsuit, filed last week in federal court in California, says market losses resulting from publicity over Facebook’s actions caused investors — including OPERS — to lose more than $100 billion. A Facebook spokesperson said the lawsuit is without merit and that the company would fight it.
Grameenphone launches Text-Only Facebook, Discover
Grameenphone, in partnership with Meta, has launched text-only Facebook and Discover to enable Grameenphone customers to stay connected more consistently, even when they run out of data.
Text-only Facebook gives the telecom operator's customers access to a text-only version of Facebook and Messenger when they run out of data, keeping them connected until they can top up their data balance again.
Discover, a mobile web and Android app, allows Grameenphone customers to browse the internet using a free daily balance of 15MB. When using that free data, the app supports only low-bandwidth features such as text and icons.
Post and Telecommunication Minister Mustafa Jabbar inaugurated text-only Facebook and Discover on Tuesday at the Bangladesh Telecommunication Regulatory Commission (BTRC).
"Allowing the use of Facebook without the internet is a great initiative. This shall help reduce the digital divide by ensuring information sharing and connectivity of marginalised people," he said.
"The government has been emphasising bringing maximum people under the umbrella of digital connectivity. But to turn it into a reality, we need the private sectors, especially the mobile network operators to step forward," BTRC Chairman Shyam Sunder Sikder said. "It is a good move by Grameenphone to improve access to social media and other important resources on the internet."
"Today's launch is a testimony of co-creation with Meta and Regulator to best use digital solutions for ensuring access to vital information in need for one of the largest Facebook user bases in the world," Grameenphone CEO Yasir Azman said.
"Helping people stay connected and ensuring they have consistent access to important resources on the internet such as education and health resources is critical. We are grateful to support these programmes to enable better connectivity and access for people in Bangladesh," Paul Kim, director of International Business Development and Operator Partnerships, APAC at Meta, said.
Europe bolsters pioneering tech rules with help from Haugen
European lawmakers have pioneered efforts to rein in big technology companies and are working to strengthen those rules, putting them ahead of the United States and other parts of the world that have been slower to regulate Facebook and other social media giants, which face increasing blowback over misinformation and other harmful content that can proliferate on their platforms.
While Europe shares Western democratic values with the U.S., none of the big tech companies — Facebook, Twitter, Google — that dominate online life are based on the continent, which some say allowed European officials to make a more clear-eyed assessment of the risks posed by tech companies largely headquartered in Silicon Valley or elsewhere in the U.S.
But that’s only part of the explanation, said Jan Penfrat, senior policy adviser at digital rights group EDRi.
The question, Penfrat said, should also be: “Why is the U.S. so much lagging behind? And that may be because of the immense pressure from the homegrown companies” arguing to officials in Washington that stricter rules would hobble them as they compete with, for example, Chinese rivals.
Drawing up a new package of digital rules for the 27-nation European Union is getting a boost from Facebook whistleblower Frances Haugen, who answered questions Monday in Brussels from a European Parliament committee. It's the latest sign of interest in her revelations that Facebook prioritized profits over safety after the former data scientist testified last month to the U.S. Senate and released internal documents.
If the EU rules are done right, “you can create a game-changer for the world, you can force platforms to price in societal risk to their business operations so the decisions about what products to build and how to build them is not purely based on profit maximization," Haugen told lawmakers. “And you can show the world how transparency, oversight and enforcement should work.”
Since Haugen left Facebook, the company has renamed itself Meta as it focuses its business on a virtual reality world called the metaverse.
“I’m shocked they picked this name,” she said. In the book that inspired the term, “the metaverse is a dystopian thing, that people’s lives are so unpleasant that they need to hide in the system for half of their day.”
Haugen has been on a European tour, meeting lawmakers and regulators in the EU and United Kingdom who are seeking her input as they work on stricter rules for online companies amid concerns that social media can do everything from magnify depression in teens to incite political violence. A wider global movement to crack down on digital giants is taking cues from Europe and gaining momentum in the U.S. and Australia.
Europe has been a trailblazer in applying more scrutiny for big tech companies, most famously by slapping Google with multibillion-dollar fines in three antitrust cases. Now, the European Union is working on a sweeping update of its digital rulebook, including requiring companies to be more transparent with users on how algorithms make recommendations for what shows up on their feeds and forcing them to swiftly take down illegal content such as hate speech.
The rules are aimed at preventing bad behavior, rather than punishing past actions, as the EU has largely done so far.
France and Germany also are bringing in legislation requiring social media platforms to take down illegal content quicker, though these rules would be superseded by the EU ones, which are expected to take effect no earlier than 2023.
Meanwhile, the U.S. has only recently started cracking down on big tech companies, with regulators fining Facebook and YouTube over allegations of privacy violations and the government suing over their huge share of the market in the last couple of years. American lawmakers have proposed measures to protect kids online and get at the algorithms used to determine what shows up on feeds, but they all face a long road to passing.
While Haugen’s testimony and the documents she has provided have shed light on how Facebook’s systems work and spurred efforts in the U.S., European lawmakers may not be that surprised by what she has to say.
“The fact that Facebook is disseminating polarizing content more than other kinds of content is something that people like me have been saying for years,” said Alexandra Geese, a European Union lawmaker with the Green party. “But we didn’t have any evidence to prove it.”
European lawmakers have been interested in digging into algorithms as they work on requiring platforms to be more transparent with users about how artificial intelligence makes recommendations on what content people see.
“It’s rather about looking under the hood and regulating the kind of mechanisms that a company, a platform established to disseminate content or to direct people down rabbit holes into extremist groups,” Geese said. What Haugen is doing is “shifting the focus, and I think this is something that many other people before didn’t see.”
In the U.K., which left the European Union last year, the government also is working on a raft of digital regulations, including an online safety bill that calls for a regulator to ensure tech companies comply with rules requiring them to remove dangerous or harmful content or face big financial penalties.
For the European Union, there’s still a lot of wrangling over the final details of the rules, two packages known as the Digital Services Act and the Digital Markets Act, which the EU Commission hopes to get approved next year.
Free speech campaigners and digital rights activists worry that EU rules requiring platforms to swiftly remove harmful content will lead to overzealous deletion of material that isn't illegal. In a bid to balance free speech requirements, users will be given the chance to complain about what content is removed.
In London, there's been a similar debate over how to define content that is harmful but not illegal.
Both the EU and U.K. rules call for hefty fines worth up to 10% of a company's annual global turnover, which for the biggest tech companies could amount to billions of dollars.
Plenty of pitfalls await Zuckerberg’s ‘metaverse’ plan
When Mark Zuckerberg announced ambitious plans to build the “metaverse” — a virtual reality construct intended to supplant the internet, merge virtual life with real life and create endless new playgrounds for everyone — he promised that “you’re going to be able to do almost anything you can imagine.”
That might not be such a great idea.
Zuckerberg, CEO of the company formerly known as Facebook, even renamed it Meta to underscore the significance of the effort. During his late October presentation, he effused about going to virtual concerts with your friends, fencing with holograms of Olympic athletes and — best of all — joining mixed-reality business meetings where some participants are physically present while others beam in from the metaverse as cartoony avatars.
But it’s just as easy to imagine dystopian downsides. Suppose the metaverse also enables a vastly larger, yet more personal version of the harassment and hate that Facebook has been slow to deal with on today’s internet? Or ends up with the same big tech companies that have tried to control the current internet serving as gatekeepers to its virtual-reality edition? Or evolves into a vast collection of virtual gated communities where every visitor is constantly monitored, analyzed and barraged with advertisements? Or foregoes any attempt to curtail user freedom, allowing scammers, human traffickers and cybergangs to commit crimes with impunity?
Picture an online troll campaign — but one in which the barrage of nasty words you might see on social media is instead a group of angry avatars yelling at you, with your only escape being to switch off the machine, said Amie Stepanovich, executive director of Silicon Flatirons at the University of Colorado.
“We approach that differently — having somebody scream at us than having somebody type at us,” she said. “There is a potential for that harm to be really ramped up.”
That’s one reason Meta might not be the best institution to lead us into the metaverse, said Philip Rosedale, founder of the virtual escape Second Life, which was an internet craze 15 years ago and still attracts hundreds of thousands of online inhabitants.
The danger is creating online public spaces that appeal only to a “polarized, homogenous group of people,” said Rosedale, describing Meta’s flagship VR product, Horizon, as filled with “presumptively male participants” and a bullying tone. In a safety tutorial, Meta has advised Horizon users to treat fellow avatars kindly and offers tips for blocking, muting or reporting those who don’t, but Rosedale said it’s going to take more than a “schoolyard monitor” approach to avoid a situation that rewards the loudest shouters.
“Nobody’s going to come to that party, thank goodness,” he said. “We’re not going to move the human creative engine into that sphere.”
A better goal, he said, would be to create systems that are welcoming and flexible enough to allow people who don’t know each other to get along as well as they might in a real place like New York’s Central Park. Part of that could rely on systems that help someone build a good reputation and network of trusted acquaintances they can carry across different worlds, he said. In the current web environment, such reputation systems have had a mixed record in curbing toxic behavior.
It’s not clear how long it will take Meta, or anyone else investing in the metaverse, to consider such issues. So far, tech giants from Microsoft and Apple to video game makers are still largely focused on debating the metaverse’s plumbing.
To make the metaverse work, some developers say they are going to have to form a set of industry standards similar to those that coalesced around HTML, the open “markup language” that’s been used to structure websites since the 1990s.
“You don’t think about that when you go to a website. You just click on the link,” said Richard Kerris, who leads the Omniverse platform for graphics chipmaker Nvidia. “We’re going to get to the same point in the metaverse where going from one world to another world and experiencing things, you won’t have to think about, ‘Do I have the right setup?’”
Nvidia’s vision for an open standard involves a structure for 3D worlds built by movie-making studio Pixar, which is also used by Apple. Among the basic questions being resolved are how physics will work in the metaverse — will virtual gravity cause someone’s glass to smash into pieces if they drop it? Will those rules change as you move from place to place?
Bigger disagreements will center on questions of privacy and identity, said Timoni West, vice president of augmented and virtual reality at Unity Technologies, which builds an engine for video game worlds.
“Being able to share some things but not share other things” is important when you’re showing off art in a virtual home but don’t want to share the details of your calendar, she said. “There’s a whole set of permission layers for digital spaces that the internet could avoid but you really need to have to make this whole thing work.”
Some metaverse enthusiasts who’ve been working on the concept for years welcome the spotlight that could attract curious newcomers, but they also want to make sure Meta doesn’t ruin their vision for how this new internet gets built.
“The open metaverse is created and owned by all of us,” said Ryan Gill, founder and CEO of metaverse-focused startup Crucible. “The metaverse that Mark Zuckerberg and his company want is created by everybody but owned by them.”
Gill said Meta’s big splash is a reaction to ideas circulating in grassroots developer communities centered around “decentralized” technologies like blockchain and non-fungible tokens, or NFTs, that can help people establish and protect their online identity and credentials.
Central to this tech movement, nicknamed Web 3, for a third wave of internet innovation, is that what people create in these online communities belongs to them, a shift away from the Big Tech model of “accumulating energy and attention and optimizing it for buying behavior,” Gill said.
Evan Greer, an activist with Fight for the Future, said it’s easy to see Facebook’s Meta announcement as a cynical attempt to distance itself from all the scandals the company is facing. But she says Meta’s push is actually even scarier.
“This is Mark Zuckerberg revealing his end game, which is not just to dominate the internet of today but to control and define the internet that we leave to our children and our children’s children,” she said.
The company recently abandoned its use of facial recognition on its Facebook app, but metaverse gadgetry relies on new forms of tracking people’s gaits, body movements and expressions to animate their avatars with real-world emotions. And with both Facebook and Microsoft pitching metaverse apps as important work tools, there’s a potential for even more invasive workplace monitoring and exhaustion.
Activists are calling for the U.S. to pass a national digital privacy act that would apply not just to today’s platforms like Facebook but also those that might exist in the metaverse. Outside of a few such laws in states such as California and Illinois, though, actual online privacy laws remain rare in the U.S.
Facebook to shut down face-recognition system, delete data
Facebook said it will shut down its face-recognition system and delete the faceprints of more than 1 billion people amid growing concerns about the technology and its misuse by governments, police and others.
“This change will represent one of the largest shifts in facial recognition usage in the technology’s history,” Jerome Pesenti, vice president of artificial intelligence for Facebook’s new parent company, Meta, wrote in a blog post on Tuesday.
He said the company was trying to weigh the positive use cases for the technology “against growing societal concerns, especially as regulators have yet to provide clear rules.” The company in the coming weeks will delete “more than a billion people’s individual facial recognition templates,” he said.
Facebook’s about-face follows a busy few weeks. On Thursday it announced Meta as the new name for the parent company, though not for the social network. The change, it said, will help it focus on building technology for what it envisions as the next iteration of the internet -- the “metaverse.”
The company is also facing perhaps its biggest public relations crisis to date after leaked documents from whistleblower Frances Haugen showed that it has known about the harms its products cause and often did little or nothing to mitigate them.
Facebook didn’t immediately respond to questions about how people could verify that their image data was deleted, or what it would be doing with the underlying technology.
More than a third of Facebook’s daily active users have opted in to have their faces recognized by the social network’s system. That’s about 640 million people. Facebook introduced facial recognition more than a decade ago but gradually made it easier to opt out of the feature as it faced scrutiny from courts and regulators.
Facebook in 2019 stopped automatically recognizing people in photos and suggesting people “tag” them, and instead of making that the default, asked users to choose if they wanted to use its facial recognition feature.
Facebook’s decision to shut down its system “is a good example of trying to make product decisions that are good for the user and the company,” said Kristen Martin, a professor of technology ethics at the University of Notre Dame. She added that the move also demonstrates the power of public and regulatory pressure, since the face recognition system has been the subject of harsh criticism for over a decade.
Meta Platforms Inc., Facebook’s parent company, appears to be looking at new forms of identifying people. Pesenti said Tuesday’s announcement involves a “company-wide move away from this kind of broad identification, and toward narrower forms of personal authentication.”
“Facial recognition can be particularly valuable when the technology operates privately on a person’s own devices,” he wrote. “This method of on-device facial recognition, requiring no communication of face data with an external server, is most commonly deployed today in the systems used to unlock smartphones.”
Apple uses this kind of technology to power its Face ID system for unlocking iPhones.
Researchers and privacy activists have spent years raising questions about the tech industry’s use of face-scanning software, citing studies that found it worked unevenly across boundaries of race, gender or age. One concern has been that the technology can incorrectly identify people with darker skin.
Another problem with face recognition is that in order to use it, companies have had to create unique faceprints of huge numbers of people – often without their consent and in ways that can be used to fuel systems that track people, said Nathan Wessler of the American Civil Liberties Union, which has fought Facebook and other companies over their use of the technology.
“This is a tremendously significant recognition that this technology is inherently dangerous,” he said.
Facebook found itself on the other end of the debate last year when it demanded that facial recognition startup Clearview AI, which works with police, stop harvesting Facebook and Instagram user images to identify the people in them.
Concerns also have grown because of increasing awareness of the Chinese government’s extensive video surveillance system, especially as it’s been employed in a region home to one of China’s largely Muslim ethnic minority populations.
Facebook’s huge repository of images shared by users helped make it a powerhouse for improvements in computer vision, a branch of artificial intelligence. Now many of those research teams have been refocused on Meta’s ambitions for augmented reality technology, in which the company envisions future users strapping on goggles to experience a blend of virtual and physical worlds. Those technologies, in turn, could pose new concerns about how people’s biometric data is collected and tracked.
Meta’s newly wary approach to facial recognition follows decisions by other U.S. tech giants such as Amazon, Microsoft and IBM last year to end or pause their sales of facial recognition software to police, citing concerns about false identifications and amid a broader U.S. reckoning over policing and racial injustice.
At least seven U.S. states and nearly two dozen cities have limited government use of the technology amid fears over civil rights violations, racial bias and invasion of privacy.
President Joe Biden’s science and technology office in October launched a fact-finding mission to look at facial recognition and other biometric tools used to identify people or assess their emotional or mental states and character. European regulators and lawmakers have also taken steps toward blocking law enforcement from scanning facial features in public spaces.
Facebook’s face-scanning practices also contributed to the $5 billion fine and privacy restrictions the Federal Trade Commission imposed on the company in 2019. Facebook’s settlement with the FTC included a promise to require “clear and conspicuous” notice before people’s photos and videos were subjected to facial recognition technology.
And the company earlier this year agreed to pay $650 million to settle a 2015 lawsuit alleging it violated an Illinois privacy law when it used photo-tagging without users’ permission.
“It is a big deal, it’s a big shift but it’s also far, far too late,” said John Davisson, senior counsel at the Electronic Privacy Information Center. EPIC filed its first complaint with the FTC against Facebook’s facial recognition service in 2011, the year after it was rolled out.
Just what are 'The Facebook Papers,' anyway?
The Facebook Papers project represents a unique collaboration among 17 American news organizations, including The Associated Press. Journalists from a variety of newsrooms, large and small, worked together to gain access to thousands of pages of internal company documents obtained by Frances Haugen, the former Facebook product manager-turned-whistleblower.
A separate consortium of European news outlets had access to the same set of documents, and members of both groups began publishing content related to their analysis of the materials at 7 a.m. EDT on Monday, Oct. 25. That date and time was set by the partner news organizations to give everyone in the consortium an opportunity to fully analyze the documents, report out relevant details, and to give Facebook’s public relations staff ample time to respond to questions and inquiries raised by that reporting.
Each member of the consortium pursued its own independent reporting on the document contents and their significance. Every member also had the opportunity to attend group briefings to gain information and context about the documents.
The launch of The Facebook Papers project follows similar reporting by The Wall Street Journal, sourced from the same documents, as well as Haugen’s appearance on the CBS television show “60 Minutes” and her Oct. 5 Capitol Hill testimony before a U.S. Senate subcommittee.
The papers themselves are redacted versions of disclosures that Haugen has made over several months to the Securities and Exchange Commission, alleging Facebook was prioritizing profits over safety and hiding its own research from investors and the public.
These complaints cover a range of topics, from Facebook's efforts to continue growing its audience, to how its platforms might harm children, to its alleged role in inciting political violence. The same redacted versions of those filings are being provided to members of Congress as part of its investigation. That process continues as Haugen's legal team redacts the SEC filings, removing the names of Facebook users and lower-level employees, before turning them over to Congress.
The Facebook Papers consortium will continue to report on these documents as more become available in the coming days and weeks.
In the middle of a crisis, Facebook Inc. renames itself Meta
Like many companies in trouble before it, Facebook is changing its name and logo.
Facebook Inc. is now called Meta Platforms Inc., or Meta for short, to reflect what CEO Mark Zuckerberg said Thursday is its commitment to developing the new surround-yourself technology known as the “metaverse.” But the social network itself will still be called Facebook.
Also unchanged, at least for now, are its chief executive and senior leadership, its corporate structure and the crisis that has enveloped the company.
Skeptics immediately accused the company of trying to change the subject from the Facebook Papers, the trove of leaked documents that have plunged it into the biggest crisis since it was founded in Zuckerberg's Harvard dorm room 17 years ago. The documents portray Facebook as putting profits ahead of ridding its platform of hate, political strife and misinformation around the world.
The move reminded marketing consultant Laura Ries of when energy company BP rebranded itself to “Beyond Petroleum” to escape criticism that the oil giant harmed the environment.
“Facebook is the world’s social media platform, and they are being accused of creating something that is harmful to people and society,” she said. “They can’t walk away from the social network with a new corporate name and talk of a future metaverse.”
Facebook the app is not changing its name. Nor are Instagram, WhatsApp and Messenger. The company’s corporate structure also won’t change. But on Dec. 1, its stock will start trading under a new ticker symbol, MVRS.
The metaverse is sort of the internet brought to life, or at least rendered in 3D. Zuckerberg has described it as a “virtual environment” you can go inside of, instead of just looking at on a screen. People can meet, work and play, using virtual reality headsets, augmented reality glasses, smartphone apps or other devices.
It also will incorporate other aspects of online life such as shopping and social media, according to Victoria Petrock, an analyst who follows emerging technologies.
Zuckerberg’s foray into virtual reality has drawn comparisons to fellow tech billionaires’ outer space adventures, along with jokes that perhaps it's understandable he would want to escape his current reality amid calls for his resignation and increasing scrutiny of the company.
On Monday, Zuckerberg announced a new segment for Facebook that will begin reporting its financial results separately from the company’s Family of Apps segment starting in the final quarter of this year. The entity, Reality Labs, will reduce Facebook’s overall operating profit by about $10 billion this year, the company said.
Other tech companies such as Microsoft, chipmaker Nvidia and Fortnite maker Epic Games have all been outlining their own visions of how the metaverse will work.
Zuckerberg said that he expects the metaverse to reach a billion people within the next decade and that he hopes the new technology will create millions of jobs for creators.
The announcement comes amid heightened legislative and regulatory scrutiny of Facebook in many parts of the world because of the Facebook Papers. A corporate rebranding isn't likely to solve the myriad problems revealed by the internal documents or quiet the alarms that critics have been raising for years about the harm the company's products are causing to society.
Zuckerberg, for his part, has largely dismissed the furor triggered by the Facebook Papers as unfair.
In an interesting twist, the Chan Zuckerberg Initiative, the philanthropic organization run by Zuckerberg and his wife, Priscilla Chan, bought a Canadian scientific literature analysis company called Meta in 2017.
By Thursday afternoon, though, its website Meta.org announced that it will “sunset” at the end of March. The Meta.com domain, meanwhile, redirected to the former Facebook's rebranded corporate site.
At headquarters in Menlo Park, California, the iconic thumbs up sign that has long been outside was repainted to a blue, pretzel-shape logo resembling an infinity symbol.
Some of Facebook’s biggest critics seemed unimpressed by the name change. The Real Facebook Oversight Board, a watchdog group focused on the company, announced that it will keep its name.
“Changing their name doesn’t change reality: Facebook is destroying our democracy and is the world’s leading peddler of disinformation and hate,” the group said in a statement. “Their meaningless name change should not distract from the investigation, regulation and real, independent oversight needed to hold Facebook accountable.”
In explaining the rebrand, Zuckerberg said the name Facebook no longer encompasses everything the company does. In addition to the social network, that now includes Instagram, Messenger, its Quest VR headset, its Horizon VR platform and more.
“Today we are seen as a social media company,” Zuckerberg said. “But in our DNA we are a company that builds technology to connect people.”
Facebook froze as anti-vaccine comments swarmed users
In March, as claims about the dangers and ineffectiveness of coronavirus vaccines spun across social media and undermined attempts to stop the spread of the virus, some Facebook employees thought they had found a way to help.
By altering how posts about vaccines are ranked in people’s newsfeeds, researchers at the company realized they could curtail the misleading information individuals saw about COVID-19 vaccines and offer users posts from legitimate sources like the World Health Organization.
“Given these results, I’m assuming we’re hoping to launch ASAP,” one Facebook employee wrote, responding to the internal memo about the study.
Instead, Facebook shelved some suggestions from the study. Other changes weren't made until April.
When another Facebook researcher suggested disabling some comments on vaccine posts in March until the platform could do a better job of tackling anti-vaccine messages lurking in them, that proposal was ignored at the time.
Critics say the reason Facebook was slow to take action on the ideas is simple: The tech giant worried it might impact the company’s profits.
“Why would you not remove comments? Because engagement is the only thing that matters,” said Imran Ahmed, the CEO of the Center for Countering Digital Hate, an internet watchdog group. “It drives attention and attention equals eyeballs and eyeballs equal ad revenue.”
In an emailed statement, Facebook said it has made “considerable progress” this year with downgrading vaccine misinformation in users' feeds.
Facebook’s internal discussions were revealed in disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by former Facebook employee-turned-whistleblower Frances Haugen’s legal counsel. The redacted versions received by Congress were obtained by a consortium of news organizations, including The Associated Press.
The trove of documents shows that in the midst of the COVID-19 pandemic, Facebook carefully investigated how its platforms spread misinformation about life-saving vaccines. They also reveal rank-and-file employees regularly suggested solutions for countering anti-vaccine content on the site, to no avail. The Wall Street Journal reported on some of Facebook's efforts to deal with anti-vaccine comments last month.
Facebook's response raises questions about whether the company prioritized controversy and division over the health of its users.
“These people are selling fear and outrage,” said Roger McNamee, a Silicon Valley venture capitalist and early investor in Facebook who is now a vocal critic. “It is not a fluke. It is a business model.”
Typically, Facebook ranks posts by engagement — the total number of likes, dislikes, comments, and reshares. That ranking scheme may work well for innocuous subjects like recipes, dog photos, or the latest viral singalong. But Facebook’s own documents show that when it comes to divisive public health issues like vaccines, engagement-based ranking only emphasizes polarization, disagreement, and doubt.
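To make the contrast concrete, here is a minimal, purely illustrative Python sketch of the two ranking approaches described above and in the study below. The scoring weights, field names and trust values are hypothetical assumptions for illustration, not Facebook's actual formulas.

# Illustrative sketch only: a toy feed ranker contrasting engagement-based
# scoring with ranking by source trustworthiness. Weights and trust values
# are hypothetical assumptions, not Facebook's actual system.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    reshares: int
    source_trust: float  # 0.0-1.0; higher for authoritative health sources (assumed scale)

def engagement_score(post: Post) -> float:
    # Rank purely by interaction volume, regardless of who posted it.
    return post.likes + 2 * post.comments + 3 * post.reshares

def trust_score(post: Post) -> float:
    # Rank primarily by how trustworthy the source is.
    return post.source_trust

def rank_feed(posts, by_trust=False):
    key = trust_score if by_trust else engagement_score
    return sorted(posts, key=key, reverse=True)

posts = [
    Post("Vaccines are a hoax!!", likes=900, comments=4000, reshares=1200, source_trust=0.05),
    Post("WHO guidance on COVID-19 vaccines", likes=300, comments=50, reshares=80, source_trust=0.95),
]

print([p.text for p in rank_feed(posts)])                 # engagement ranking surfaces the inflammatory post
print([p.text for p in rank_feed(posts, by_trust=True)])  # trust-based ranking surfaces the WHO post

Under engagement ranking, the post drawing the most comments and reshares rises to the top even if it is misleading; ranking by trustworthiness, as in the experiment described below, demotes it.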
To study ways to reduce vaccine misinformation, Facebook researchers changed how posts are ranked for more than 6,000 users in the U.S., Mexico, Brazil, and the Philippines. Instead of seeing posts about vaccines that were chosen based on their popularity, these users saw posts selected for their trustworthiness.
The results were striking: a nearly 12% decrease in content that made claims debunked by fact-checkers and an 8% increase in content from authoritative public health organizations such as the WHO or U.S. Centers for Disease Control. Those users also had a 7% decrease in negative interactions on the site.
Employees at the company reacted to the study with exuberance, according to internal exchanges included in the whistleblower’s documents.
“Is there any reason we wouldn’t do this?” one Facebook employee wrote in response to an internal memo outlining how the platform could rein in anti-vaccine content.
Facebook said it did implement many of the study’s findings — but not for another month, a delay that came at a pivotal stage of the global vaccine rollout.
In a statement, company spokeswoman Dani Lever said the internal documents “don’t represent the considerable progress we have made since that time in promoting reliable information about COVID-19 and expanding our policies to remove more harmful COVID and vaccine misinformation.”
The company also said it took time to consider and implement the changes.
Yet the need to act urgently couldn't have been clearer: At that time, states across the U.S. were rolling out vaccines to their most vulnerable — the elderly and sick. And public health officials were worried. Only 10% of the population had received their first dose of a COVID-19 vaccine. And a third of Americans were thinking about skipping the shot entirely, according to a poll from The Associated Press-NORC Center for Public Affairs Research.
Despite this, Facebook employees acknowledged they had “no idea” just how bad anti-vaccine sentiment was in the comments sections on Facebook posts. But company research in February found that as much as 60% of the comments on vaccine posts were anti-vaccine or vaccine reluctant.
“That’s a huge problem and we need to fix it,” the presentation on March 9 read.
Even worse, company employees admitted they didn’t have a handle on catching those comments. And if they did, Facebook didn’t have a policy in place to take the comments down. The free-for-all was allowing users to swarm vaccine posts from news outlets or humanitarian organizations with negative comments about vaccines.
“Our ability to detect (vaccine hesitancy) in comments is bad in English — and basically non-existent elsewhere,” another internal memo posted on March 2 said.
Los Angeles resident Derek Beres, an author and fitness instructor, sees anti-vaccine content thrive in the comments every time he promotes immunizations on his accounts on Instagram, which is owned by Facebook. Last year, Beres began hosting a podcast with friends after they noticed conspiracy theories about COVID-19 and vaccines were swirling on the social media feeds of popular health and wellness influencers.
Earlier this year, when Beres posted a picture of himself receiving the COVID-19 shot, some on social media told him he would likely drop dead in six months’ time.
"The comments section is a dumpster fire for so many people,” Beres said.
Anti-vaccine comments on Facebook grew so bad that even as prominent public health agencies like UNICEF and the World Health Organization were urging people to take the vaccine, the organizations refused to use free advertising that Facebook had given them to promote inoculation, according to the documents.
Some Facebook employees had an idea. While the company worked to hammer out a plan to curb all the anti-vaccine sentiment in the comments, why not disable commenting on posts altogether?
“Very interested in your proposal to remove ALL in-line comments for vaccine posts as a stopgap solution until we can sufficiently detect vaccine hesitancy in comments to refine our removal,” one Facebook employee wrote on March 2.
The suggestion went nowhere until mid-April, when Lever said the company stopped showing previews of popular comments on vaccine posts.
Instead, Facebook CEO Mark Zuckerberg announced on March 15 that the company would start labeling posts about vaccines that described them as safe.
The move allowed Facebook to continue to get high engagement — and ultimately profit — off anti-vaccine comments, said Ahmed of the Center for Countering Digital Hate.
“They were trying to find ways to not reduce engagement but at the same time make it look like they were trying to make some moves toward cleaning up the problems that they caused,” he said.
It’s unrealistic to expect a multi-billion-dollar company like Facebook to voluntarily change a system that has proven to be so lucrative, said Dan Brahmy, CEO of Cyabra, an Israeli tech firm that analyzes social media networks and disinformation. Brahmy said government regulations may be the only thing that could force Facebook to act.
“The reason they didn’t do it is because they didn’t have to,” Brahmy said. “If it hurts the bottom line, it’s undoable.”
Bipartisan legislation in the U.S. Senate would require social media platforms to give users the option of turning off algorithms tech companies use to organize individuals' newsfeeds.
Sen. John Thune, R-South Dakota, a sponsor of the bill, asked Facebook whistleblower Haugen to describe the dangers of engagement-based ranking during her testimony before Congress earlier this month.
She said there are other ways of ranking content — for instance, by the quality of the source, or chronologically — that would serve users better. The reason Facebook won’t consider them, she said, is that they would reduce engagement.
“Facebook knows that when they pick out the content ... we spend more time on their platform, they make more money,” Haugen said.
Haugen’s leaked documents also reveal that a relatively small number of Facebook’s anti-vaccine users are rewarded with big pageviews under the tech platform’s current ranking system.
Internal Facebook research presented on March 24 warned that most of the “problematic vaccine content” was coming from a handful of areas on the platform. In Facebook communities where vaccine distrust was highest, the report pegged 50% of anti-vaccine pageviews on just 111 — or 0.016% — of Facebook accounts.
“Top producers are mostly users serially posting (vaccine hesitancy) content to feed,” the research found.
On that same day, the Center for Countering Digital Hate published an analysis of social media posts that estimated just a dozen Facebook users were responsible for 73% of anti-vaccine posts on the site between February and March. It was a study that Facebook’s leaders in August told the public was “faulty,” despite the internal research published months before that confirmed a small number of accounts drive anti-vaccine sentiment.
Earlier this month, an AP-NORC poll found that most Americans blame social media companies, like Facebook, and their users for misinformation.
But Ahmed said Facebook shouldn't just shoulder blame for that problem.
“Facebook has taken decisions which have led to people receiving misinformation which caused them to die,” Ahmed said. “At this point, there should be a murder investigation.”
Australia wants Facebook to seek parental consent for kids
Australia plans to crack down on online advertisers targeting children by making social media platforms seek parental consent for users younger than 16 years old to join or face fines of 10 million Australian dollars ($7.5 million) under a draft law released Monday.
The landmark legislation would protect Australians online and ensure that Australia’s privacy laws are appropriate in the digital age, a government statement said.
Social media platforms would be required to take all reasonable steps to verify their users’ ages under a binding code for social media services, data brokers and other large online platforms operating in Australia.
The platforms would also have to give primary consideration to the best interests of children when handling their personal information, the draft legislation states.
The code would also require platforms to obtain parental consent for users under the age of 16.
The proposed legal changes come after former Facebook product manager Frances Haugen this month asserted that whenever there was a conflict between the public good and what benefited the company, the social media giant would choose its own interests.
Assistant Minister to the Prime Minister for Mental Health and Suicide Prevention David Coleman said the new code would lead the world in protecting children from social media companies.
“In Australia, even before the COVID-19 pandemic, there was a consistent increase in signs of distress and mental ill health among young people. While the reasons for this are varied and complex, we know that social media is part of the problem,” Coleman said in a statement.
Facebook regional director of public policy Mia Garlick said her platform had been calling for Australia’s privacy laws to evolve with new technology.
“We have supported the development of international codes around young people’s data, like the U.K. Age Appropriate Design Code,” Garlick said in a statement, referring to British legislation introduced this year that requires platforms to verify users’ ages if content risks the moral, physical or mental well-being of children.
“We’re reviewing the draft bill and discussion paper released today, and look forward to working with the Australian government on this further,” she added.
Australia has been a prominent voice in calling for international regulation of the internet.
It passed laws this year that oblige Google and Facebook to pay for journalism. Australia also defied the tech companies by creating a law that could imprison social media executives if their platforms stream violent images.