Top EU official warns Musk: Twitter needs to protect users from hate speech, misinformation
A top European Union official warned Elon Musk on Wednesday that Twitter needs to beef up measures to protect users from hate speech, misinformation and other harmful content to avoid violating new rules that threaten tech giants with big fines or even a ban in the 27-nation bloc.
Thierry Breton, the EU's commissioner for digital policy, told the billionaire Tesla CEO that the social media platform will have to significantly increase efforts to comply with the new rules, known as the Digital Services Act, set to take effect next year.
The two held a video call to discuss Twitter's preparedness for the law, which will require tech companies to better police their platforms for material that, for instance, promotes terrorism, child sexual abuse, hate speech and commercial scams.
It’s part of a new digital rulebook that has made Europe the global leader in the push to rein in the power of social media companies, potentially setting up a clash with Musk’s vision for a more unfettered Twitter. U.S. Treasury Secretary Janet Yellen also said Wednesday that an investigation into Musk's $44 billion purchase was not off the table.
Breton said he was pleased to hear that Musk considers the EU rules “a sensible approach to implement on a worldwide basis.”
“But let’s also be clear that there is still huge work ahead,” Breton said, according to a readout of the call released by his office. “Twitter will have to implement transparent user policies, significantly reinforce content moderation and protect freedom of speech, tackle disinformation with resolve, and limit targeted advertising.”
After Musk, a self-described “free speech absolutist,” bought Twitter a month ago, groups that monitor the platform for racist, antisemitic and other toxic speech, such as the Cyber Civil Rights Initiative, say such content has been on the rise on the world’s de facto digital public square.
Musk has signaled an interest in rolling back many of Twitter’s previous rules meant to combat misinformation, most recently by abandoning enforcement of its COVID-19 misinformation policy. He has already reinstated some high-profile accounts that had violated Twitter’s content rules and has promised a “general amnesty” restoring most suspended accounts starting this week.
Twitter didn’t respond to an email request for comment. In a separate blog post Wednesday, the company said “human safety” is its top priority and that its trust and safety team “continues its diligent work to keep the platform safe from hateful conduct, abusive behavior, and any violation of Twitter’s rules."
Musk, however, has laid off half the company’s 7,500-person workforce, along with an untold number of contractors responsible for content moderation. Many others have resigned, including the company’s head of trust and safety.
In the call Wednesday, Musk agreed to let the EU's executive Commission carry out a “stress test" at Twitter’s headquarters early next year to help the platform comply with the new rules ahead of schedule, the readout said.
That will also help the company prepare for an “extensive independent audit" as required by the new law, which is aimed at protecting internet users from illegal content and reducing the spread of harmful but legal material.
Violations could result in huge fines of up to 6% of a company’s annual global revenue or even a ban on operating in the European Union's single market.
Along with European regulators, Musk risks running afoul of Apple and Google, which power most of the world’s smartphones. Both have stringent policies against misinformation, hate speech and other misconduct, previously enforced to boot apps like the social media platform Parler from their devices. Apps must also meet certain data security, privacy and performance standards.
Musk tweeted without providing evidence this week that Apple “threatened to withhold Twitter from its App Store, but won’t tell us why.” Apple hasn’t commented but Musk backtracked on his claim Wednesday, saying he met with Apple CEO Tim Cook who “was clear that Apple never considered” removing Twitter.
Meanwhile, U.S. Treasury Secretary Janet Yellen walked back her statements about whether Musk’s purchase of Twitter warrants government review.
“I misspoke,” she said at The New York Times’ DealBook Summit on Wednesday, referring to a CBS interview this month where she said there was “no basis” to review the Twitter purchase.
The Treasury secretary oversees the Committee on Foreign Investment in the United States, an interagency committee that investigates the national security risks from foreign investments in American firms.
“If there are such risks, it would be appropriate for the Treasury to have a look,” Yellen told The New York Times.
She declined to confirm whether CFIUS is currently investigating Musk’s Twitter purchase.
Billionaire Saudi Prince Alwaleed bin Talal is, through his investment company, Twitter’s biggest shareholder after Musk.
“Prejudice, racism and rising hate speech”: UN chief describes world
UN Secretary-General Antonio Guterres has called for embracing Gandhi’s values and working across cultures and borders to build a better, more peaceful future.
"Let us walk this path together, in solidarity, as one human family," he said in a message marking the International Day of Non-violence that falls on October 2.
The International Day of Non-Violence celebrates not only Mahatma Gandhi’s birthday, but the values he embodied that echo across decades: peace, mutual respect, and the essential dignity shared by every person.
"Sadly, our world is not living up to those values," Guterres said.
He said the world is grappling with growing conflicts and climate chaos.
"Poverty, hunger and deepening inequalities. Prejudice, racism and rising hate speech. And a morally bankrupt global financial system that entrenches poverty and stymies recovery for developing countries," he mentioned.
The UN chief laid emphasis on investing in people’s health, education, decent jobs and social protection.
On the International Day of Non-violence, Antonio Guterres, UN Secretary-General, also highlighted the importance of supporting developing countries as they build resilient infrastructure and protect populations from the impacts of climate change, while also accelerating the transition from planet-killing fossil fuels to renewable energy.
"Gandhi’s life and example reveal a timeless pathway to a more peaceful and tolerant world," said the UN chief.
Fighting hate speech a job for everyone: UN chief
UN Secretary-General Antonio Guterres has said hate speech is a danger to everyone and fighting it is a job for everyone.
“This first International Day to Counter Hate Speech is a call to action. Let us recommit to doing everything in our power to prevent and end hate speech by promoting respect for diversity and inclusivity,” he said.
In a message on the International Day for Countering Hate Speech, the UN chief on Saturday said hate speech incites violence, undermines diversity and social cohesion, and threatens the common values and principles that bind us together.
“It promotes racism, xenophobia and misogyny; it dehumanizes individuals and communities; and it has a serious impact on our efforts to promote peace and security, human rights, and sustainable development,” Guterres said.
He said words can be weaponized and cause physical harm, and that the escalation from hate speech to violence has played a significant role in the most horrific and tragic crimes of the modern age, from the antisemitism that drove the Holocaust to the 1994 genocide against the Tutsi in Rwanda.
The internet and social media have turbocharged hate speech, enabling it to spread like wildfire across borders, said the UN chief.
The spread of hate speech against minorities during the COVID-19 pandemic provides further evidence that many societies are highly vulnerable to the stigma, discrimination and conspiracies it promotes, Guterres said.
“In response to this growing threat, three years ago, I launched the United Nations Strategy and Plan of Action on Hate Speech,” he said, adding that it provides a framework for the UN’s support to Member States to counter this scourge while respecting freedom of expression and opinion, in collaboration with civil society, the media, technology companies and social media platforms.
Last year, the UN chief said, the General Assembly came together to pass a resolution calling for inter-cultural and inter-religious dialogue to counter hate speech, and proclaimed the International Day now being marked for the first time on June 18.
National plan of action needed to counter hate speech: ARTICLE 19
ARTICLE 19, the UK-based human rights organization with an emphasis on free speech, has urged the Bangladesh government to develop and implement a national plan of action to counter hate speech. Kenya recently became the first country in the world to declare a national plan of action for the purpose. ARTICLE 19 also sees the need for the Bangladesh government and other concerned stakeholders to step up efforts to promote inter-religious and inter-cultural dialogue and tolerance that counters hate speech.
The rights-based organization raised the issue on the eve of the UN's “International Day for Countering Hate Speech”, which will be marked for the first time on June 18. Faruq Faisel, South Asia Regional Director of ARTICLE 19, said the exponential spread and proliferation of hate speech is becoming a deep concern in Bangladesh and around the world.
"Although hate speech is not a new phenomenon, the scale and impacts of hate speech have amplified due to the advent of new technologies and online communication. In Bangladesh, physical and verbal attacks against religious and ethnic minorities are on the rise due to the influence of hate speech, especially online,” said Faisel.
ARTICLE 19 called on the government and other concerned stakeholders to ensure that religions, beliefs and ethnicity are not used to violate human rights, and urged both the government and citizens to combat hate speech, which is a threat to the human rights of the citizens of Bangladesh.
'Kill more': Facebook fails to detect hate against Rohingya
A new report has found that Facebook failed to detect blatant hate speech and calls to violence against Myanmar’s Rohingya Muslim minority years after such behavior was found to have played a determining role in the genocide against them.
The report shared exclusively with The Associated Press showed the rights group Global Witness submitted eight paid ads for approval to Facebook, each including different versions of hate speech against Rohingya. All eight ads were approved by Facebook to be published.
The group pulled the ads before they were posted or paid for, but the results confirmed that despite its promises to do better, Facebook's leaky controls still fail to detect hate speech and calls for violence on its platform.
The army conducted what it called a clearance campaign in western Myanmar's Rakhine state in 2017 after an attack by a Rohingya insurgent group. More than 700,000 Rohingya fled into neighboring Bangladesh and security forces were accused of mass rapes, killings and torching thousands of homes.
U.S. Secretary of State Antony Blinken also announced Monday that the U.S. views the violence against the Rohingya as genocide. The declaration is intended to both generate international pressure and lay the groundwork for potential legal action, Blinken said.
On Feb. 1 of last year, Myanmar’s military forcibly took control of the country, jailing democratically elected government officials. Rohingya refugees have condemned the military takeover and said it makes them more afraid to return to Myanmar.
Experts say such ads have continued to appear and that despite its promises to do better and assurances that it has taken its role in the genocide seriously, Facebook still fails even the simplest of tests — ensuring that paid ads that run on its site do not contain hate speech calling for the killing of Rohingya Muslims.
“The current killing of the Kalar is not enough, we need to kill more!” read one proposed paid post from Global Witness, using a slur often used in Myanmar to refer to people of east Indian or Muslim origin.
“They are very dirty. The Bengali/Rohingya women have a very low standard of living and poor hygiene. They are not attractive,” read another.
“These posts are shocking in what they encourage and are a clear sign that Facebook has not changed or done what they told the public they would do: properly regulate themselves,” said Ronan Lee, a research fellow at the Institute for Media and Creative Industries at Loughborough University, London.
The eight ads from Global Witness all used hate speech language taken directly from the United Nations Independent International Fact-Finding Mission on Myanmar in its report to the Human Rights Council. Several examples were from past Facebook posts.
The fact that Facebook approved all eight ads is especially concerning because the company claims to hold advertisements to an “even stricter” standard than regular, unpaid posts, according to its help center page for paid advertisements.
“I accept the point that eight isn’t a very big number. But I think the findings are really stark, that all eight of the ads were accepted for publication,” said Rosie Sharpe, a campaigner at Global Witness. “I think you can conclude from that that the overwhelming majority of hate speech is likely to get through.”
Facebook's parent company Meta Platforms Inc. said it has invested in improving its safety and security controls in Myanmar, including banning military accounts after the Tatmadaw, as the armed forces are locally known, seized power and imprisoned elected leaders in the 2021 coup.
“We’ve built a dedicated team of Burmese speakers, banned the Tatmadaw, disrupted networks manipulating public debate and taken action on harmful misinformation to help keep people safe. We’ve also invested in Burmese-language technology to reduce the prevalence of violating content,” Rafael Frankel, director of public policy for emerging markets at Meta Asia Pacific wrote in an e-mailed statement to AP on March 17. “This work is guided by feedback from experts, civil society organizations and independent reports, including the UN Fact-Finding Mission on Myanmar’s findings and the independent Human Rights Impact Assessment we commissioned and released in 2018.”
Facebook has been used to spread hate speech and amplify military propaganda in Myanmar in the past.
Shortly after Myanmar became connected to the internet in 2000, Facebook paired with its telecom providers to allow customers to use the platform without having to pay for the data, which was still expensive at the time. Use of the platform exploded. For many in Myanmar, Facebook became the internet itself.
Local internet policy advocates repeatedly told Facebook hate speech was spreading across the platform, often targeting the Muslim minority Rohingya in the majority Buddhist nation.
For years, Facebook failed to invest in content moderators who spoke local languages or fact-checkers who understood the political situation in Myanmar, and failed to close specific accounts or delete pages being used to propagate hatred of the Rohingya, said Tun Khin, president of Burmese Rohingya Organization UK, a London-based Rohingya advocacy organization.
In March 2018, less than six months after hundreds of thousands of Rohingya fled violence in western Myanmar, Marzuki Darusman, chairman of the U.N. Independent International Fact-Finding Mission on Myanmar, told reporters social media had “substantively contributed to the level of acrimony and dissension and conflict, if you will, within the public."
“Hate speech is certainly of course a part of that. As far as the Myanmar situation is concerned, social media is Facebook, and Facebook is social media,” Darusman said.
Asked about Myanmar a month later at a U.S. Senate hearing, Meta CEO Mark Zuckerberg said Facebook planned to hire “dozens” of Burmese speakers to moderate content and would work with civil society groups to identify hate figures and develop new technologies to combat hate speech.
“Hate speech is very language specific. It’s hard to do it without people who speak the local language and we need to ramp up our effort there dramatically,” Zuckerberg said.
Yet in internal files leaked by whistleblower Frances Haugen last year, AP found that breaches persisted. The company stepped up efforts to combat hate speech but never fully developed the tools and strategies required to do so.
Rohingya refugees have sued Facebook for more than $150 billion, accusing it of failing to stop hate speech that incited violence against the Muslim ethnic group by military rulers and their supporters in Myanmar. Rohingya youth groups based in the Bangladesh refugee camps have filed a separate complaint in Ireland with the 38-nation Organization for Economic Cooperation and Development calling for Facebook to provide some remediation programs in the camps.
The company, now called Meta, has refused to say how many of its content moderators read Burmese and can thus detect hate speech in Myanmar.
“Rohingya genocide survivors continue to live in camps today and Facebook continue to fail them,” said Tun Khin. “Facebook needs to do more.”
Muslim civil rights group sues Facebook over hate speech
A civil rights group is suing Facebook and its executives, saying CEO Mark Zuckerberg made “false and deceptive” statements to Congress when he said the giant social network removes hate speech and other material that violates its rules.
The lawsuit, filed by Muslim Advocates in Washington, D.C., Superior Court on Thursday, claims Zuckerberg and other senior executives “have engaged in a coordinated campaign to convince the public, elected representatives, federal officials, and non-profit leaders in the nation’s capital that Facebook is a safe product.”
Facebook, the lawsuit alleges, has been repeatedly alerted to hate speech and calls to violence on its platform and done nothing or very little. Making false and deceptive statements about removing hateful and harmful content violates the District of Columbia’s consumer-protection law and its bar on fraud, the lawsuit says.
“Every day, ordinary people are bombarded with harmful content in violation of Facebook’s own policies on hate speech, bullying, harassment, dangerous organizations, and violence,” the lawsuit says. “Hateful, anti-Muslim attacks are especially pervasive on Facebook.”
In a statement, Facebook said it does not allow hate speech on its platform and that it regularly works with “experts, non-profits, and stakeholders to help make sure Facebook is a safe place for everyone, recognizing anti-Muslim rhetoric can take different forms.”
The company based in Menlo Park, California, said it has invested in artificial intelligence technologies aimed at removing hate speech and proactively detects 97% of what it removes.
Facebook declined to comment beyond the statement, which did not address the lawsuit’s allegations that it has not removed hate speech and anti-Muslim networks from its platform even after it was notified of their existence.
For example, the lawsuit cites research by Elon University professor Megan Squire, who published research about anti-Muslim groups on Facebook and alerted the company. According to the lawsuit, Facebook did not remove the groups — but it did change how outside academics can access its platform so that the kind of research Squire did would be “impossible other than if done by Facebook employees.”
Facebook’s hate speech policy prohibits targeting a person or group with “dehumanizing speech or imagery,” calls for violence, references to subhumanity and inferiority as well as generalizations that state inferiority. The policy applies to attacks on the basis of race, religion, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease.
But in one example from April 25, 2018, Squire reported to Facebook a group called “Purge Worldwide,” according to the lawsuit. The group’s description reads: “This is an anti Islamic group A Place to share information about what is happening in your part of the world.”
Facebook responded that it would not remove the group or the content. The lawsuit cites other examples of groups with names like “Death to Murdering Islamic Muslim Cult Members” and “Filth of Islam” that Facebook did not remove despite being notified, even though Facebook policy prohibits “reference or comparison to filth” on the basis of religion. In the latter case Facebook did remove some posts from the group, but not the group itself.
The lawsuit also cites an exception Facebook made to its rules for former President Donald Trump when, as a candidate in 2016, he posted about banning all Muslims from entering the U.S.
Zuckerberg and other social media executives have repeatedly testified before Congress about how they combat extremism, hate and misinformation on their platforms. Zuckerberg told the House Energy and Commerce Committee that the issue is “nuanced.”
“Any system can make mistakes” in moderating harmful material, he said.
The plaintiffs seek a jury trial and damages of $1,500 per violation.
Facebook’s Oversight Board must consider minority rights: UN expert
A UN human rights expert has called on Facebook’s Oversight Board to take the rights of ethnic, religious and linguistic minorities into account in reaching decisions, particularly on hate speech.
Dhaka seeks ‘rock-solid partnership’ to fight intolerance
Foreign Minister Dr AK Abdul Momen has said the battle against the “pandemic of intolerance” needs a concerted “whole of society” approach and a rock-solid partnership involving all stakeholders.
Facebook oversight board to start operating in October
Facebook's long-awaited oversight board is set to launch in October.
Facebook civil rights audit finds ‘serious setbacks’
A two-year audit of Facebook’s civil rights records found “serious setbacks” that have marred the social network’s progress on matters such as hate speech, misinformation and bias, reports AP.