Meta CEO Zuckerberg considered spinning off Instagram in 2018 over antitrust worries, email says
Meta CEO Mark Zuckerberg once considered separating Instagram from its parent company due to worries about antitrust litigation, according to an email shown Tuesday on the second day of an antitrust trial alleging Meta illegally monopolized the social media market.
In the 2018 email, Zuckerberg wrote that he was beginning to wonder if “spinning Instagram out” would be the only way to accomplish important goals, as big-tech companies grow. He also noted “there is a non-trivial chance” Meta could be forced to spin out Instagram and perhaps WhatsApp in five to 10 years anyway.
He wrote that while most companies resist breakups, “the corporate history is that most companies actually perform better after they've been split up.”
Asked Tuesday by attorney Daniel Matheson, who is leading the antitrust case for the Federal Trade Commission, which instances in corporate history he had in mind, Zuckerberg responded: “I'm not sure what I had in mind then.”
Zuckerberg, who was the first witness, testified for more than seven hours over two days in the trial that could force Meta to break off Instagram and WhatsApp, startups the tech giant bought more than a decade ago that have since grown into social media powerhouses.
While questioning Zuckerberg on Tuesday morning, Matheson noted that he had referred to Instagram as a “rapidly growing, threatening network.” The attorney also pointed to Zuckerberg's own words about trying to neutralize a competitor by buying the company.
But Zuckerberg said while Matheson was able to show documents in court that indicated his concern about Instagram's growth, he also had many conversations about how excited his company was to acquire Instagram to make a better product.
Zuckerberg also said Facebook was in the process of building a camera app for sharing on mobile phones, and he thought Instagram was better at that, “so I wanted to buy them.”
Zuckerberg also pushed back against Matheson's contention that the reason for buying the company was to neutralize a threat.
“I think that that mischaracterizes what the email was," Zuckerberg said.
In his questioning of Zuckerberg, Matheson repeatedly brought up emails — many of them more than a decade old — written by Zuckerberg and his associates before and after the acquisition of Instagram.
While acknowledging the documents, Zuckerberg has often sought to downplay the contents, saying he wrote them in the early stages of considering the acquisition and that what he wrote at the time didn't capture the full scope of his interest in the company.
Matheson also brought up a February 2012 message in which Zuckerberg wrote to the former chief financial officer of Facebook that Instagram and Path, a social networking app, already had created meaningful networks that could be “very disruptive to us.”
Zuckerberg testified that the message was written in the context of a broad discussion about whether they should buy companies to accelerate their own developments.
Zuckerberg also testified that buying the company, taking it off the market and building their own version of it was “a reasonable thing to do.”
Later Tuesday, Mark Hansen, an attorney for Meta, began his questioning of Zuckerberg. Hansen, in his opening statements Monday, emphasized that Meta's services are free and that the company, far from holding a monopoly, actually has a lot of competition. He made a point of bringing up those issues in just over an hour of questioning Zuckerberg, with more expected to come Wednesday.
“It's very competitive,” Zuckerberg said, noting that charging for using services like Facebook would likely drive users away, since similar services are widely available elsewhere.
The trial is one of the first big tests of the FTC’s ability under President Donald Trump to challenge Big Tech. The lawsuit was filed against Meta — then called Facebook — in 2020, during Trump’s first term. It claims the company bought Instagram and WhatsApp to squash competition and establish an illegal monopoly in the social media market.
Facebook bought Instagram — which was a photo-sharing app with no ads — for $1 billion in 2012.
Instagram was the first company Facebook bought and kept running as a separate app. Until then, Facebook was known for smaller “acqui-hires” — a popular Silicon Valley deal in which a company purchases a startup as a way to hire its talented workers, then shuts the acquired company down. Two years later, it did it again with the messaging app WhatsApp, which it purchased for $22 billion.
WhatsApp and Instagram helped Facebook move its business from desktop computers to mobile devices, and to remain popular with younger generations as rivals like Snapchat (which it also tried, but failed, to buy) and TikTok emerged.
However, the FTC has a narrow definition of Meta’s competitive market, excluding companies like TikTok, YouTube and Apple’s messaging service from being considered rivals to Instagram and WhatsApp.
U.S. District Judge James Boasberg is presiding over the case. Late last year, he denied Meta’s request for a summary judgment and ruled that the case must go to trial.
7 Warning Signs Social Media Is Affecting Your Child’s Mental Health
In today’s hyper-connected world, children are growing up with screens as constant companions—scrolling, sharing, and seeking approval online. While social media offers avenues for connection and creativity, its darker effects often go unnoticed. Minor shifts in behaviour, mood, and daily habits may indicate underlying emotional distress. Recognising these early warning signs is crucial to safeguarding kids’ mental health and overall well-being. Let’s look closely at the red flags that children addicted to social media may reveal—signs that go beyond mere screen fatigue.
7 Red Flags That Signal Social Media Affects Your Child’s Mental Wellbeing
Irritability, Anger, Anxiety, and Depression
Emotional turbulence is often one of the first signs that social networks are impacting a child’s mental well-being. A child who once handled challenges with calm may suddenly snap over minor inconveniences—like being asked to pause their screen time. This shift is more than a passing phase.
Excessive exposure to digital platforms can condition a child’s brain to expect instant gratification. Consequently, it becomes difficult to tolerate delays or engage in slower-paced activities like reading or studying. The flood of fast, dopamine-triggering content rewires emotional responses, often replacing patience with frustration. As a result, parents might find their child increasingly restless, easily angered, and emotionally unbalanced even away from the screen.
Losing Track of Time
When children spend long hours online, it’s easy for them to lose a sense of time. What often begins as a quick scroll can spiral into hours of passive consumption, especially on apps designed to encourage endless engagement. This disconnection from time awareness can quietly lead to neglect of daily responsibilities such as homework, family interactions, or personal hygiene.
The 2025 report from Common Sense Media reveals that children under 8 now spend an average of 2 hours and 27 minutes each day engaging with screen-based media. TikTok dominates their screen time with nearly two hours a day, making it the top platform among this age group. These numbers point to a growing trend where time management skills erode as children become immersed in the virtual world.
Social Withdrawal
As children spend more time scrolling through digital feeds, their connection with real-world interactions often begins to fade.
Social psychologist Jonathan Haidt, in his book The Anxious Generation (2024), likens social media to a firehose of addictive content. It displaces physical activity and in-person play—fundamental elements of healthy childhood development.
Children using online media for three or more hours a day often avoid eye contact and struggle to express emotions clearly, frequently speaking in incomplete sentences during face-to-face interactions.
For instance, a child who once eagerly engaged in family dinners might now retreat to their room, avoiding conversation entirely. This pattern of withdrawal isn’t shyness—it’s discomfort, shaped by a digital world that rarely demands verbal or emotional expression.
Misguided Self-esteem
Virtual communities often act as distorted mirrors, shaping how children perceive their worth. Constantly exposed to highlight reels of peers’ lives, many begin to question their own value.
According to ElectroIQ's Social Media Mental Health Statistics, 52% of users report feeling worse about their lives after seeing friends’ posts, and 43% of teenagers admit feeling pressure to post content in the hope of gaining likes or comments.
This chase for validation can have serious consequences. Children may develop body image issues or body dissatisfaction, comparing themselves to edited or filtered content. To gain approval online, they might resort to risky behaviour. For example, a teen might post provocative or reckless videos for attention and digital praise.
Losing Attention in Offline Tasks
Children are increasingly struggling to stay focused on tasks that require sustained concentration, like reading, studying, or completing chores. SambaRecovery's report puts children’s average attention span at just 29.61 seconds, a figure that declined by a significant 27.41% over the course of a continuous performance test.
This trend mirrors parental concerns: 79% of parents, as cited by Common Sense Media 2025, fear that heavy screen exposure is eroding their child's ability to concentrate.
This erosion is often visible in daily life. Constant notifications, videos, and scrolling content condition young minds to crave quick bursts of stimulation. It makes slow, offline tasks feel dull and unrewarding. Over time, this affects not just academics but also a child’s overall cognitive stamina and productivity.
Fear Of Missing Out (FOMO)
FOMO is a powerful psychological driver that affects emotional health and can be especially damaging. The feeling stems from the perception that others are enjoying experiences, events, or interactions without them, and it is amplified by the constant visibility of others’ lives online.
For example, a kid might see classmates hanging out without him/her, sparking feelings of exclusion, sadness, or even jealousy. These emotions, although silently endured, can create deep emotional turbulence. FOMO intensifies anxiety and self-doubt, fuelling compulsive social network checking as children try to stay “in the loop” at all times.
Increased Secrecy and Refusal to Go Outside
When children begin to maintain excessive secrecy, it’s often a red flag that something deeper is affecting their well-being. If your child has previously been open but suddenly becomes reluctant to share details about their day or their online activities, it could signal emotional distress. Secrecy often indicates that they are hiding something troubling, like exposure to cyberbullying or other online dangers.
According to social media mental health statistics, 87% of teens report being cyberbullied. Notably, 36.4% of girls report being affected by online harassment, compared to 31.4% of boys.
This constant exposure to negativity can cause children to avoid going outside, preferring the perceived safety of digital spaces. Over time, this behaviour can lead to a loss of trust and emotional isolation, as children avoid engaging in conversations.
Wrapping Up
These 7 warning signs reflect social media's negative impact on children's mental and emotional health. Excessive screen time can cause them to lose track of time and decrease their attention span, neglecting important tasks and responsibilities. Over time, this often results in social withdrawal. The constant comparison to others online fosters misguided self-esteem and worsens their mental well-being. Furthermore, children may struggle with FOMO, which heightens their feelings of inadequacy. As they struggle with these emotions, many develop increased secrecy, distancing themselves from the real world. All of these factors contribute to heightened emotional distress, often manifesting as irritability, anger, anxiety, and depression.
Can technology help more sexual assault survivors in South Sudan?
After being gang-raped while collecting firewood, a 28-year-old woman in South Sudan struggled to find medical assistance. Some clinics were closed, others turned her away, and she lacked the money for hospital care.
Five months later, she lay on a mat in a displacement camp in Juba, rubbing her swollen belly. “I felt like no one listened … and now I’m pregnant,” she said. The Associated Press does not identify survivors of sexual assault.
Sexual violence remains a persistent threat for women in South Sudan. Now, an aid group is using technology to locate and support survivors faster. However, low internet access, high illiteracy rates, and concerns over data privacy pose challenges in a country still grappling with instability.
Using Chatbots to Bridge the Gap
Five months ago, IsraAID, an Israeli humanitarian organization, introduced a chatbot on WhatsApp in South Sudan. The system enables staff to document survivors’ accounts anonymously, triggering immediate alerts to social workers who can provide aid within hours.
Rodah Nyaduel, a psychologist with IsraAID, said the technology enhances case management, reducing the risk of misplaced paperwork. “As soon as an incident is recorded, I get a notification with the case details,” she said.
While experts agree that technology can minimize human error, concerns remain about how such data is handled.
“Who has access to this information? Is it shared with law enforcement? Could it cross borders?” asked Gerardo Rodriguez Phillip, a UK-based AI and technology consultant.
IsraAID insists its system is encrypted, anonymized, and automatically deletes records from staff devices. During the chatbot’s first three months in late 2024, it processed reports of 135 cases.
Barriers to Accessing Help
For the 28-year-old survivor, timely intervention could have changed everything. She knew she had just a few days to take medication to prevent pregnancy and disease, but when she approached an aid group, her details were hastily written on paper, and she was told to return later. When she did, staff were too busy to help. After 72 hours, she gave up. Weeks later, she realized she was pregnant.
IsraAID eventually located her through door-to-door outreach. Initially hesitant about having her information recorded on a phone, she agreed after learning the devices were not personal and that she could hold the organization accountable if issues arose.
She is among thousands still living in displacement camps in Juba, years after a 2018 peace deal ended the country’s civil war. Many fear leaving or have no homes to return to.
Women who venture out for necessities like firewood continue to face the risk of assault. Several women in the camps told the AP they had been raped but lacked access to services, as humanitarian aid has declined and government investment in health remains minimal. Many cannot afford transportation to hospitals.
The Impact of Funding Cuts
The situation has worsened following U.S. President Donald Trump’s recent executive order pausing USAID funding for a 90-day review period. The freeze has forced aid organizations to shut down critical services, including psychological support for sexual violence survivors, affecting tens of thousands.
Can More Tech Solutions Work?
Most humanitarian groups tackling gender-based violence in South Sudan have yet to widely adopt technology. Some organizations believe an ideal app would allow survivors to seek help remotely.
However, stigma surrounding sexual violence makes it difficult for survivors—especially young girls—to seek assistance. Many need permission to leave home, said Mercy Lwambi, gender-based violence lead at the International Rescue Committee.
“They want to talk to someone quickly, without waiting for a face-to-face meeting,” she said.
Yet, South Sudan has one of the world’s lowest mobile and internet penetration rates—less than 25%, according to GSMA, a global network of mobile operators. Even those with phones often lack internet access, and many people are illiterate.
“You have to ask: Will this work in a low-tech environment? Are people literate? Do they have the right devices? Will they trust it?” said Kirsten Pontalti, a senior associate at the Proteknon Foundation for Innovation and Learning.
Pontalti, who has tested chatbots for sexual health education and child protection, said such tools should include audio features for those with low literacy and remain as simple as possible.
A Desire to Be Heard
Some survivors just want acknowledgment—whether in person or through technology.
A 45-year-old father of 11 waited years before seeking help after being sexually assaulted by his wife, who forced him into sex despite his refusal and concerns about providing for more children.
It took multiple visits by aid workers to his displacement camp before he finally opened up.
“Organizations need to engage more with the community,” he said. “If they hadn’t come, I wouldn’t have spoken out.”
Source: With input from agency
Elon Musk reacts to 'Bengali' Signboard at London Station
Rupert Lowe, the Member of Parliament (MP) for Great Yarmouth, shared an image on his official X account of a bilingual sign at Whitechapel Station, which has sparked debate. The sign, written in both English and Bengali, has been criticized by some, including Lowe, who believes that signs at London stations should be in English alone.
In his post, Lowe, a Reform UK MP, expressed his opinion that "This is London – the station name should be in English, and English only," which quickly went viral. Elon Musk, the billionaire owner of X, responded with a simple "Yes."
Musk, who has been supportive of U.S. President Donald Trump, recently called for Nigel Farage's removal as the leader of Reform UK, while also seemingly endorsing Lowe's views. Some users supported the MP's stance, while others argued that having signs in multiple languages was not an issue.
The Bengali signage was installed at Whitechapel Tube station in 2022 to honor the contributions of the Bangladeshi community in East London. The Tower Hamlets council funded the dual-language signs as part of broader station improvements. Whitechapel is home to the largest Bangladeshi community in the UK.
West Bengal Chief Minister Mamata Banerjee praised the initiative, expressing pride that Bengali had been accepted as a language for signage at the station. She highlighted the global significance of the Bengali language, calling the move a "victory of our culture and heritage" and underscoring the importance of diaspora unity.
Source: With inputs from agency
Understanding Zero-Click Hacks: The Growing Cyber Threat to WhatsApp Users
In an era where digital security is paramount, cyber threats are evolving at an alarming pace.
Among the latest and most concerning hacking techniques is the Zero-Click Hack, a sophisticated cyberattack that allows hackers to infiltrate a user's device without any interaction from the victim.
Recent reports indicate that nearly 90 WhatsApp users across more than two dozen countries have fallen victim to this silent yet dangerous hacking method.
What is a Zero-Click Hack?
As the name suggests, a zero-click hack is a form of cyberattack that does not require the user to click on a malicious link, download a file, or take any action.
Unlike traditional phishing attempts that rely on social engineering, these attacks exploit software vulnerabilities to gain unauthorised access.
Hackers typically exploit weaknesses in messaging applications, email clients, or multimedia processing functions, sending malicious electronic documents that compromise devices without requiring any user interaction.
In the case of WhatsApp, the attackers took advantage of vulnerabilities in the messaging app, allowing them to gain access to sensitive information.
How Do Zero-Click Attacks Work?
Zero-click attacks work by sending malicious files to targeted individuals. These files are processed by the operating system or application without the user's knowledge, granting hackers access to vital data such as messages, call logs, photos, and even the device’s microphone and camera.
This type of cyberattack is particularly dangerous because it is difficult to detect and prevent. Since there is no need for user interaction, conventional security awareness—such as avoiding suspicious links—does not provide protection against such threats.
The WhatsApp Security Breach
WhatsApp recently disclosed that nearly 90 users had been targeted by hackers using spyware developed by the Israeli company Paragon Solutions. This spyware enabled attackers to infiltrate victims' devices without requiring them to take any action.
Among those affected were journalists and members of civil society. In response, WhatsApp has sent a cease-and-desist letter to Paragon Solutions and has reassured users of its commitment to maintaining privacy and security.
How to Stay Safe from Zero-Click Attacks
While zero-click attacks are highly sophisticated and challenging to prevent, users can take certain precautions to minimise the risk:
Keep Apps Updated: Always update your applications to the latest versions. Updates often include security patches that fix vulnerabilities exploited by hackers.
Enable Automatic Updates: This ensures that your device installs security updates as soon as they become available, reducing the window of opportunity for hackers to exploit vulnerabilities.
Monitor Device Behaviour: Unusual signs, such as sudden battery drainage, unexpected app behaviour, or strange messages from unknown contacts, may indicate a compromise.
Report Suspicious Activity: If you suspect your device has been compromised, report it to your local cybercrime unit immediately.
The Fight Against Cyber Threats
Despite the increasing sophistication of cyberattacks, companies like WhatsApp continue to implement security measures to protect user data. However, digital safety remains a shared responsibility. Users must stay informed about emerging threats and adopt best practices to safeguard their digital presence.
As technology advances, so do the tactics employed by cybercriminals. Zero-click hacks serve as a stark reminder that cybersecurity vigilance is more critical than ever.
WhatsApp accuses Israeli spyware firm of targeting journalists, activists
WhatsApp has accused Israeli spyware company Paragon Solutions of targeting nearly 100 journalists and civil society members using its sophisticated spyware, Graphite.
The attacks, reportedly carried out using zero-click methods, have raised fresh concerns about the misuse of commercial surveillance tools and the lack of accountability within the industry.
According to a report by The Guardian, WhatsApp has "high confidence" that around 90 users, including journalists and activists, were targeted and possibly compromised.
The company did not disclose the locations of the affected individuals but confirmed that they had been notified of the potential breach. WhatsApp has also sent a cease-and-desist letter to Paragon and is considering legal action against the firm.
Zero-Click Attack and Full Device Access
Graphite, Paragon’s spyware, is reportedly capable of infiltrating a device without requiring any interaction from the victim, making it a particularly dangerous tool for surveillance. Once installed, the software provides complete access to the infected phone, including the ability to read messages sent through encrypted apps such as WhatsApp and Signal.
While the identity of those behind the attacks remains unknown, Paragon Solutions is known to sell its software to government clients. A source close to the company claimed that it has 35 government customers, all of which are democratic nations.
The source further stated that Paragon avoids doing business with countries that have previously been accused of spyware abuse, such as Greece, Poland, Hungary, Mexico, and India.
Growing Scrutiny of Spyware Industry
The incident has intensified scrutiny of the commercial spyware industry. Natalia Krapiva, a senior tech legal counsel at Access Now, commented on the matter, stating, "This is not just a question of some bad apples — these types of abuses are a feature of the commercial spyware industry."
While Paragon had been perceived as a relatively less controversial spyware provider, WhatsApp’s revelations have called that perception into question.
This development follows a recent legal victory for WhatsApp against NSO Group, another Israeli spyware maker. In December, a California judge ruled that NSO was liable for hacking 1,400 WhatsApp users in 2019, violating US hacking laws and the platform’s terms of service.
In 2021, NSO Group was also added to the US commerce department’s blacklist due to activities deemed contrary to US national security interests.
WhatsApp’s Response and Future Security Measures
WhatsApp has not disclosed how long the targeted users may have been under surveillance but confirmed that the alleged attacks were disrupted in December. The company is now working to support affected users and reinforce its security measures to prevent future breaches.
As concerns over spyware misuse continue to grow, this latest revelation underscores the need for stricter regulations and international cooperation to curb the abuse of surveillance technologies.
Elon Musk's DOGE commission gains access to sensitive Treasury payment systems: AP sources
The Department of Government Efficiency, run by President Donald Trump's billionaire adviser and Tesla CEO Elon Musk, has gained access to sensitive Treasury data including Social Security and Medicare customer payment systems, according to two people familiar with the situation.
The move by DOGE, a Trump administration task force assigned to find ways to fire federal workers, cut programs and slash federal regulations, means it could have wide leeway to access important taxpayer data, among other things.
The New York Times first reported the news of the group's access of the massive federal payment system. The two people who spoke to The Associated Press spoke on condition of anonymity because they were not authorized to speak publicly.
The highest-ranking Democrat on the Senate Finance Committee, Ron Wyden of Oregon, on Friday sent a letter to Trump's Treasury Secretary Scott Bessent expressing concern that “officials associated with Musk may have intended to access these payment systems to illegally withhold payments to any number of programs.”
“To put it bluntly, these payment systems simply cannot fail, and any politically motivated meddling in them risks severe damage to our country and the economy," Wyden said.
The news also comes after Treasury's acting Deputy Secretary David Lebryk resigned from his position at Treasury after more than 30 years of service. The Washington Post on Friday reported that Lebryk resigned his position after Musk and his DOGE organization requested access to sensitive Treasury data.
“The Fiscal Service performs some of the most vital functions in government," Lebryk said in a letter to Treasury employees sent out Friday. “Our work may be unknown to most of the public, but that doesn’t mean it isn’t exceptionally important. I am grateful for having been able to work alongside some of the nation’s best and most talented operations staff.”
The letter did not mention a DOGE request to access Treasury payments.
Musk on Saturday responded to a post on his social media platform X about the departure of Lebryk: “The @DOGE team discovered, among other things, that payment approval officers at Treasury were instructed always to approve payments, even to known fraudulent or terrorist groups. They literally never denied a payment in their entire career. Not even once."
He did not provide proof of this claim.
DOGE was originally headed by Musk and former Republican presidential candidate Vivek Ramaswamy, who jointly vowed to cut billions from the federal budget and usher in “mass headcount reductions across the federal bureaucracy.”
Ramaswamy has since left DOGE as he mulls a run for governor of Ohio.
Families sue TikTok in France over teen suicides they say are linked to harmful content
Stephanie Mistre’s world shattered three years ago, when she found her 15-year-old daughter, Marie, lifeless in the bedroom where the teenager died by suicide.
“I went from light to darkness in a fraction of a second,” Mistre said, describing the day in September 2021 that marked the start of her fight against TikTok, the Chinese-owned video app she blames for pushing her daughter toward despair.
Delving into her daughter’s phone after her death, Mistre discovered videos promoting suicide methods, tutorials and comments encouraging users to go beyond “mere suicide attempts.” She said TikTok’s algorithm had repeatedly pushed such content to her daughter.
“It was brainwashing,” said Mistre, who lives in Cassis, near Marseille, in the south of France. “They normalized depression and self-harm, turning it into a twisted sense of belonging.”
Now Mistre and six other families are suing TikTok France, accusing the platform of failing to moderate harmful content and exposing children to life-threatening material. Out of the seven families, two experienced the loss of a child.
Asked about the lawsuit, TikTok said its guidelines forbid any promotion of suicide and that it employs 40,000 trust and safety professionals worldwide — hundreds of whom are French-speaking moderators — to remove dangerous posts. The company also said it refers users who search for suicide-related videos to mental health services.
Before killing herself, Marie Le Tiec made several videos to explain her decision, citing various difficulties in her life, and quoted a song by the Louisiana-based emo rap group Suicideboys, who are popular on TikTok.
Her mother also claims that her daughter was repeatedly bullied and harassed at school and online. In addition to the lawsuit, the 51-year-old mother and her husband have filed a complaint against five of Marie’s classmates and her previous high school.
Above all, Mistre blames TikTok, saying that putting the app “in the hands of an empathetic and sensitive teenager who does not know what is real from what is not is like a ticking bomb.”
Scientists have not established a clear link between social media and mental health problems or psychological harm, said Grégoire Borst, a professor of psychology and cognitive neuroscience at Paris-Cité University.
“It’s very difficult to show clear cause and effect in this area,” Borst said, citing a leading peer-reviewed study that found only 0.4% of the differences in teenagers’ well-being could be attributed to social media use.
Read: TikTok-loaded phones listed online for thousands amid app ban
Additionally, Borst pointed out that no current studies suggest TikTok is any more harmful than rival apps such as Snapchat, X, Facebook or Instagram.
While most teens use social media without significant harm, the real risks, Borst said, lie with those already facing challenges such as bullying or family instability.
“When teenagers already feel bad about themselves and spend time exposed to distorted images or harmful social comparisons,” it can worsen their mental state, Borst said.
Lawyer Laure Boutron-Marmion, who represents the seven families suing TikTok, said their case is based on “extensive evidence.” The company “can no longer hide behind the claim that it’s not their responsibility because they don’t create the content,” Boutron-Marmion said.
The lawsuit alleges that TikTok’s algorithm is designed to trap vulnerable users in cycles of despair for profit and seeks reparations for the families.
“Their strategy is insidious,” Mistre said. “They hook children into depressive content to keep them on the platform, turning them into lucrative re-engagement products.”
Boutron-Marmion noted that TikTok’s Chinese version, Douyin, features much stricter content controls for young users. It includes a “youth mode” mandatory for users under 14 that restricts screen time to 40 minutes a day and offers only approved content.
“It proves they can moderate content when they choose to,” Boutron-Marmion said. “The absence of these safeguards here is telling.”
A report titled “Children and Screens,” commissioned by French President Emmanuel Macron in April and to which Borst contributed, concluded that certain algorithmic features should be considered addictive and banned from any app in France. The report also called for restricting social media access for minors under 15 in France. Neither measure has been adopted.
TikTok, which faced being shut down in the U.S. until President Donald Trump suspended a ban on it, has also come under scrutiny globally.
The U.S. has seen similar legal efforts by parents. One lawsuit in Los Angeles County accuses Meta and its platforms Instagram and Facebook, as well as Snapchat and TikTok, of designing defective products that cause serious injuries. The lawsuit lists three teens who died by suicide. In another complaint, two tribal nations accuse major social media companies, including YouTube owner Alphabet, of contributing to high rates of suicide among Native youths.
Meta CEO Mark Zuckerberg apologized to parents who had lost children while testifying last year in the U.S. Senate.
In December, Australia enacted a groundbreaking law banning social media accounts for children under 16.
Read more: Trump pauses US TikTok ban with executive order
In France, Boutron-Marmion expects TikTok Limited Technologies, the European Union subsidiary of ByteDance — the Chinese company that owns TikTok — to answer the allegations in the first quarter of 2025. Authorities will later decide whether and when a trial would take place.
When contacted by The Associated Press, TikTok said it had not been notified about the French lawsuit, which was filed in November. It could take months for the French justice system to process the complaint and for authorities in Ireland — home to TikTok’s European headquarters — to formally notify the company, Boutron-Marmion said.
Instead, a TikTok spokesperson highlighted company guidelines that prohibit content promoting suicide or self-harm.
Critics argue that TikTok’s claims of robust moderation fall short.
Imran Ahmed, the CEO of the Center for Countering Digital Hate, dismissed TikTok’s assertion that over 98.8% of harmful videos had been flagged and removed between April and June.
When asked about the blind spots of their moderation efforts, social media platforms claim that users are able to bypass detection by using ambiguous language or allusions that algorithms struggle to flag, Ahmed said.
The term “algospeak” has been coined to describe techniques such as using zebra or armadillo emojis to talk about cutting yourself, or the Swiss flag emoji as an allusion to suicide.
Such code words “aren’t particularly sophisticated,” Ahmed said. “The only reason TikTok can’t find them when independent researchers, journalists and others can is because they’re not looking hard enough.”
Ahmed’s organization conducted a study in 2022 simulating the experience of a 13-year-old girl on TikTok.
“Within 2.5 minutes, the accounts were served self-harm content,” Ahmed said. “By eight minutes, they saw eating disorder content. On average, every 39 seconds, the algorithm pushed harmful material.”
The algorithm “knows that eating disorder and self-harm content is especially addictive” for young girls, Ahmed said.
For Mistre, the fight is deeply personal. Sitting in her daughter’s room, where she has kept the decor untouched for the last three years, she said parents must know about the dangers of social media.
Had she known about the content being sent to her daughter, she never would have allowed her on TikTok, she said. Her voice breaks as she describes Marie as a “sunny, funny” teenager who dreamed of becoming a lawyer.
“In memory of Marie, I will fight as long as I have the strength,” she said. “Parents need to know the truth. We must confront these platforms and demand accountability.”
2 months ago
All social media platforms including Facebook to be unblocked within 2 hours today, Palak says
All social media platforms including Facebook will be unblocked within two hours on Wednesday.
State Minister for Posts, Telecommunications, and Information Technology Zunaid Ahmed Palak confirmed the development.
Palak shared the update following a virtual meeting with representatives from Facebook, TikTok, and YouTube, joining from the Bangladesh Telecommunication Regulatory Commission (BTRC) building in Dhaka's Agargaon this morning.
Earlier, on July 18, internet services were disrupted and access to social media platforms was blocked.
Read more: Only YouTube gets back to Palak; Facebook, others have till Wed morning
8 months ago
TikTok to start labeling AI-generated content as technology becomes more universal
TikTok will begin labeling content created using artificial intelligence when it's uploaded from certain platforms.
TikTok says the effort is an attempt to combat the spread of misinformation on its social media platform.
The announcement came on ABC's “Good Morning America” on Thursday.
“Our users and our creators are so excited about AI and what it can do for their creativity and their ability to connect with audiences,” Adam Presser, TikTok’s head of operations and trust and safety, told ABC News. “And at the same time, we want to make sure that people have that ability to understand what fact is and what is fiction.”
TikTok's policy in the past has been to encourage users to label content that has been generated or significantly edited by AI. It also requires users to label all AI-generated content that contains realistic images, audio or video.
11 months ago