Facebook says it's working to limit the spread of misinformation and potentially harmful content about the coronavirus as bogus claims about the ongoing outbreak circulate online.
Kang-Xing Jin, Facebook's head of health, announced that the social media platform will begin removing posts that include false claims or conspiracy theories about the virus that have been flagged by health authorities. The company said it will focus on posts that discourage people from getting medical treatment, or that make potentially dangerous claims about cures.
The company will also limit the spread of posts debunked by its third-party fact-checkers and notify users who shared them.
Users who search for information on the virus on Facebook, or who click on certain related hashtags on Instagram, will receive a pop-up providing authoritative information on the virus. Information about the outbreak will also appear at the top of Facebook users' news feeds, based on guidance from the World Health Organization.
"We will also block or restrict hashtags used to spread misinformation on Instagram, and are conducting proactive sweeps to find and remove as much of this content as we can," Jin wrote in a post. "Not all of these steps are fully in place. It will take some time to roll them out across our platforms."
Since the outbreak began, a number of misleading claims and hoaxes about the virus have circulated online. They include false conspiracy theories that the virus was created in a lab and that vaccines have already been manufactured, wild exaggerations of the number of sick and dead, and potentially harmful claims about bogus cures.
The coronavirus has now infected more than 9,800 people around the world, based on numbers released Friday. Some 213 deaths have been reported in China, with most of the deaths in the central province of Hubei. The number of cases grew in Japan, Thailand, Singapore, Taiwan and Germany on Friday, while Russia, Italy and England reported their first cases.
The first person-to-person transmission of the virus in the U.S. was announced Thursday in Chicago. The U.S. declared a public health emergency on Friday, as the nation's seventh case was identified.
Other internet companies have announced their own efforts to stem the flow of misinformation about the disease.
Twitter users who search for information about coronavirus are now given a link to the Centers for Disease Control and Prevention website on coronavirus. YouTube and Google, meanwhile, say they're promoting authoritative information about the virus to the top of search results.
Google also announced that users who search for information on the virus will see an "SOS Alert" at the top of their screen giving them links to the World Health Organization's references on the outbreak.
Facebook had a strong fourth quarter, making more money on advertising and adding more users despite challenges around regulation, privacy and efforts to fight election interference.
Its profit and revenue both handily surpassed Wall Street's expectations.
The company also said it settled a lawsuit filed in 2015 over its facial recognition practices and will pay $550 million as a result. The suit alleged Facebook violated Illinois privacy regulations with a feature that suggested to users other people to tag in their photos. Facebook replaced the tag suggestion tool with a broader facial recognition setting last year.
Facebook said that about 2.89 billion people use at least one of its services — Facebook, WhatsApp, Instagram or Messenger — each month. About 2.26 billion people use at least one every day. The Menlo Park, California, company said its main service had 2.5 billion monthly users at the end of the year, up 8% from a year earlier.
"This is a company that has shown that it can withstand ongoing criticism of its practices and yet still pull out gains in both revenue and users," said eMarketer analyst Debra Aho Williamson.
Facebook is under growing regulatory scrutiny around the world. In the U.S., it faces several government investigations for alleged anti-competitive behavior. Last August, it was fined $5 billion by the Federal Trade Commission for privacy violations, the largest FTC fine ever for a tech company.
Amid ongoing criticism about how Facebook handles the private data of its users, CEO Mark Zuckerberg has announced that the company is shifting course toward a more "privacy-focused" future. This includes emphasizing small-group and private communication, though details are still scant.
It's not clear if this privacy focus will mean anything for how ads on Facebook are targeted, which has always been among the chief concerns for privacy advocates.
And Facebook continues to face challenges over election interference. After Russian actors used social media platforms like Facebook to interfere in the 2016 U.S. elections, the companies have tried to clamp down on fake accounts, misinformation and other forms of misuse. This Election Day will be a test of whether they've done enough.
In reporting fourth-quarter results Wednesday, Facebook said it earned $7.35 billion, or $2.56 per share, up 7% from $6.88 billion, or $2.38 per share, a year earlier. Revenue rose 25% to $21.1 billion from $16.9 billion, the bulk of that from ads. Analysts were expecting earnings of $2.52 per share and revenue of $20.9 billion, according to FactSet.
Facebook's stock dropped more than 6% in after-hours trading after the results came out. Some investors may be concerned about the company's growing expenses, while others could simply be cashing out following a record high for the stock earlier in the day.
China says its diplomats and government officials will fully exploit foreign social media platforms such as Facebook and Twitter that are blocked off to its own citizens.
Foreign ministry spokesman Geng Shuang on Monday likened the government to "diplomatic agencies and diplomats of other countries" in embracing such platforms to provide "better communication with the people outside and to better introduce China's situation and policies."
Facebook, Twitter and other social media platforms have tried for years without success to be allowed into the lucrative Chinese market, where Beijing has helped create politically reliable analogues such as WeChat and Weibo. Their content is carefully monitored by the companies and by government censors.
Despite that, Geng said China is "willing to strengthen communication with the outside world through social media such as Twitter to enhance mutual understanding." He also insisted that the Chinese internet remained open and said the country has the largest number of users of any nation, adding, "we have always managed the internet in accordance with laws and regulations."
The canny use of social media by pro-democracy protesters in Hong Kong has further deepened China's concern over the use of such platforms, prompting further crackdowns on the mainland, including on the use of virtual private networks.
Facebook Inc. said Thursday that it will continue to allow political ads on its platforms, including Instagram, despite possible false information in ads run by politicians.
Facebook Director of Product Management Rob Leathern reasserted the policy, saying that "people should be able to hear from those who wish to lead them, warts and all, and that what they say should be scrutinized and debated in public."
Leathern said Facebook does not intend to follow the lead of Twitter, which bans political ads outright, or Google, which limits the targeting of political ads.
Facebook acknowledged that its policy on political ads has drawn considerable criticism, but it argued that decisions on such matters should not be made by private companies like itself.
Leathern said Facebook will give users more control over the political ads they see, including an option to stop seeing ads run by political figures.
He noted that the U.S. tech giant will add more features in its Ad Library, a unique tool launched in May 2018 to allow Facebook users to access all political ads run by politicians and their campaigns on its platform.
Facebook has faced widespread scrutiny over its role in politics since the 2016 U.S. elections, and it has been criticized for giving politicians too much freedom to post misinformation in advertisements, even when those ads appear to violate its own community standards.
Facebook says it is banning "deepfake" videos, the false but realistic clips created with artificial intelligence and sophisticated tools, as it steps up efforts to fight online manipulation.
The social network said late Monday that it's beefing up its policies to remove videos edited or synthesized in ways that aren't apparent to the average person, and which could dupe someone into thinking the video's subject said something he or she didn't actually say.
Created with artificial intelligence or machine learning, deepfakes combine or replace content to produce footage that can be almost impossible to distinguish from the authentic original.
"While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases," Facebook's vice president of global policy management, Monika Bickert, said in a blog post.
However, she said the new rules won't apply to parody or satire, or to clips edited merely to change the order of words. The exceptions underscore the balancing act Facebook and other social media services face in their struggle to stop the spread of online misinformation and "fake news" while also respecting free speech and fending off allegations of censorship.
The U.S. tech company has been grappling with how to handle the rise of deepfakes after facing criticism last year for refusing to remove a doctored video of House Speaker Nancy Pelosi slurring her words, which was viewed more than 3 million times. Experts said the crudely edited clip was more of a "cheap fake" than a deepfake.
Then, a pair of artists posted fake footage of Facebook CEO Mark Zuckerberg showing him gloating over his one-man domination of the world. Facebook also left that clip online. The company said at the time that neither video violated its policies.
The problem of altered videos is taking on increasing urgency as experts and lawmakers try to figure out how to prevent deepfakes from being used to interfere with U.S. presidential elections in November.
Facebook said any videos that don't meet existing standards for removal can still be reviewed by independent third-party fact-checkers. Those deemed false will be flagged as such to anyone trying to share or view them, which Bickert said was a better approach than just taking them down.
"If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem," Bickert said. "By leaving them up and labelling them as false, we're providing people with important information and context."