“Social media took over my childhood,” young woman tells court in historic trial
A young woman who is battling against social media giants took the stand Thursday to testify about her experience using the platforms as she was growing up, saying she was on social media “all day long” as a child.
The now 20-year-old, who has been identified in court documents as KGM, says her early use of social media addicted her to the technology and exacerbated depression and suicidal thoughts. Meta and YouTube are the two remaining defendants in the case after TikTok and Snap settled.
The case, along with two others, has been selected as a bellwether trial, meaning its outcome could shape how thousands of similar lawsuits against social media companies play out.
KGM, or Kaley, as her lawyers have called her during the trial, started using YouTube at age 6 and Instagram at age 9.
A turbulent home life
Kaley took the stand wearing a pink floral dress and a beige cardigan and said she was “very nervous” after her attorney, Mark Lanier, asked how she was doing Thursday morning.
Lanier displayed childhood photos of Kaley and her family and asked about positive memories from her upbringing in a quiet cul-de-sac in Chico, California. She spoke of themed birthday parties, trips to Six Flags and her mom’s consistent efforts to make her childhood special.
Still, Kaley’s relationship with her mother was challenging at times. Kaley said most of their arguments were over the use of her phone.
Both the defendants and the plaintiff have pointed to a turbulent home life for Kaley. Her attorneys say she was preyed upon as a vulnerable user, but attorneys representing Meta and Google-owned YouTube have argued Kaley turned to their platforms as a coping mechanism or a means of escaping her mental health struggles.
When asked about claims that her mother had hit her, abused her and neglected her, Kaley said “she wasn’t perfect, but she was trying her best,” and clarified that she doesn’t think she would label her mother’s past actions as abuse or neglect today.
But later Thursday, during cross-examination, Kaley did agree that her mother was physically and emotionally abusive during the period, around sixth grade, when she was self-harming.
Kaley, who works as a personal shopper at Walmart, lives with her mother in the home she grew up in.
Notifications gave her a ‘rush’
As a child, Kaley set up multiple accounts on both Instagram and YouTube so she could like and comment on her own posts. She said she would also “buy” likes through a platform where she could like other people’s photos and get a slew of likes in return. “It made me look popular,” she said.
Kaley was asked specifically about the features the plaintiffs argue are deliberately designed to be addictive, including notifications. Those notifications on both Instagram and YouTube gave her a “rush,” she said. She would receive them throughout the day and would go to the bathroom during school to check them — something she still does.
Kaley said while she uses YouTube less often now, she believes she was previously addicted to it. “Anytime I tried to set limits for myself, it wouldn’t work and I just couldn’t get off,” she said.
Filters on Instagram, specifically those that could change a person’s cosmetic appearance, have also loomed large in the case and were a constant fixture of Kaley’s use. Lanier and his colleagues unfurled a nearly 35-foot-long canvas banner with photos Kaley has posted on Instagram. She said “almost all” of the photos had a filter on them.
The jury was also shown Instagram posts and YouTube videos Kaley posted as a child and young teen. One video showed her saying she was “crying tears of joy” after surpassing 100 YouTube subscribers — but then she quickly turned to her looks, apologizing for her “ugly appearance.”
“I look so fat in this shirt,” the young Kaley says in the video.
Kaley said she did not experience the negative feelings associated with her body dysmorphia diagnosis before she began using social media and filters.
Meta focuses on plaintiff’s home life and contradictory statements
Meta has argued that Kaley faced significant challenges before she ever used social media. The company’s lawyer, Paul Schmidt, said earlier this month that the core question in the case is whether the platforms were a substantial factor in Kaley’s mental health struggles.
Meta attorney Phyllis Jones took a polite, respectful tone in her cross-examination Thursday, acknowledging that it could be uncomfortable for Kaley to speak about her private life in front of a room of strangers. Jones then zeroed in on Kaley’s home life.
Jones pulled up text exchanges and posts Kaley had made on Instagram about her mental health and her relationship with her mother and played videos Kaley took of her mother yelling at her.
On nearly 20 occasions during the Meta cross-examination, Jones asked Kaley to look at the transcript from her 2025 deposition, which contradicted some of the responses she gave during her testimony. Many of those questions were about how a specific action by her family members or a specific experience impacted her mental health, with Kaley saying on Thursday they either didn’t have an impact or didn’t significantly contribute to anxiety and depression. Her deposition from about a year ago often said the opposite.
“I tried to answer the questions to the best of my ability, but I may have misspoke at times,” Kaley said of her deposition.
Jones confirmed with Kaley that she had never had a doctor or mental health care provider diagnose her with a social media addiction, nor had she been treated for an addiction to Instagram or told by a provider to limit her Instagram use. Kaley said she never raised concerns about overuse or addiction with providers because she felt they would tell her to get off the platforms entirely, which she didn’t want.
Therapist: Social media and sense of self 'were closely related’
Victoria Burke, a therapist Kaley worked with in 2019, testified on Wednesday that Kaley’s social media use and her sense of self “were closely related,” adding that what was happening on the platforms could “make or break her mood.”
An attorney for Meta parsed through Burke’s notes from her sessions with Kaley extensively in a cross-examination that lasted about three hours. He highlighted Kaley’s negative experiences with in-person bullying, other school-based sources of stress and anxiety, and issues with her family. Mentions of social media in the notes were mostly limited to Kaley saying she didn’t feel she had a place at home, at school or among her peers, but did feel she had a place to be seen on social media.
Burke treated Kaley for about six months, roughly seven years ago.
The case is expected to continue for several weeks, and the verdict the jury reaches could shape the outcome of a slew of similar lawsuits against social media companies. Meta is also facing a separate trial in New Mexico.
5 days ago
New Instagram feature warns parents if teens search suicide-linked terms often
Instagram will begin notifying parents if their children repeatedly search for terms linked to suicide or self-harm, the social media platform said Thursday. The alerts will only reach parents enrolled in Instagram’s parental supervision program.
The company said it already blocks such content from appearing in teen accounts’ search results and directs users to helplines. Alerts will be sent via email, text, WhatsApp, or through the parent’s Instagram account, depending on the contact information available. “Our goal is to empower parents to step in if their teen’s searches suggest they may need support,” Meta said in a blog post, adding that notifications will be carefully managed to avoid overuse, which could reduce their effectiveness.
The announcement comes as Meta faces two ongoing trials over alleged harms to children. In Los Angeles, a trial examines whether Meta’s platforms intentionally addict and harm minors, while a New Mexico trial considers whether the company failed to protect children from sexual exploitation. Thousands of families, along with school districts and government entities, have sued Meta and other social media firms, claiming their platforms are designed to be addictive and expose children to content that may contribute to depression, eating disorders, and suicide.
Meta executives, including CEO Mark Zuckerberg, have denied that their platforms cause addiction. During questioning in Los Angeles, Zuckerberg said the scientific evidence does not prove social media harms mental health.
Meta also said it is developing similar notifications to alert parents if their teens engage in certain conversations with Instagram’s artificial intelligence tools related to suicide or self-harm. “This is important work, and we’ll have more to share in the coming months,” the company added.
5 days ago
Human voices drive Reddit growth amid AI content surge
As artificial intelligence floods the internet with automated content, many users are increasingly turning to Reddit for what they see as something rare online: real human experience, empathy and honest discussion.
For users like Ines Tan, a communications professional, Reddit has become a go-to space for advice on skincare, reactions to TV shows and even emotional and practical support while planning her wedding. She describes the platform as “empathetic”, saying it offers emotional reassurance alongside practical help, something she feels is missing from more polished social media platforms.
Reddit’s appeal appears to be growing fast. The company reported 116 million daily active users worldwide in its latest third-quarter results, a 19 percent rise year on year. In both the United States and the United Kingdom, women now make up more than half of users, with Reddit emerging as the fastest-growing social platform among women in the UK.
Launched in 2005, Reddit is built around user-created communities known as subreddits. Content is ranked by user votes rather than timelines, and volunteer moderators oversee discussions, supported by site administrators who can intervene when needed.
According to Reddit chief operating officer Jen Wong, the platform’s strength lies in its human-driven conversations at a time when AI-generated material is increasingly dominating the web. She said people are recognising that Reddit offers a level of authenticity that much of the internet has lost, with popular discussions ranging from parenting and reality TV to skincare and health.
However, experts warn that Reddit is not without flaws. Dr Yusuf Oc, a senior lecturer in marketing at Bayes Business School in London, said the platform can confuse popularity with accuracy, creating risks of groupthink, echo chambers and coordinated manipulation through tactics such as “brigading” and “astroturfing”.
Reddit says it actively works to tackle such risks. A company spokesperson said manipulated content and inauthentic behaviour are prohibited, with enforcement carried out through a mix of human review, automated tools and community-level rules set by moderators.
Some analysts argue that Reddit’s growing visibility is also linked to content licensing deals with AI companies, including OpenAI, which allow AI systems to access Reddit discussions. But experts say these deals mainly boost visibility rather than explain why users keep returning.
Long-time users say the platform’s anonymity remains a key attraction. London-based user Josh Feldberg said Reddit offers kinder, more thoughtful feedback than many other social networks and lacks the influencer-driven incentives common elsewhere.
As social media becomes more automated and curated, analysts say users are increasingly seeking lived experience, disagreement and nuance. For many, Reddit’s imperfect but human-centred conversations continue to stand out in an AI-saturated online world.
With inputs from BBC
15 days ago
Discord to require face scan or ID for adult content
Discord will soon require users worldwide to verify their age through a face scan or by uploading an official ID to access adult content, as the platform rolls out stricter safety measures aimed at protecting teenagers.
The online chat service, which has more than 200 million monthly users, said the new system will place everyone into a teen-appropriate experience by default. Only users who successfully verify that they are adults will be able to access age-restricted communities, unblur sensitive material or receive direct messages from people they do not know.
Discord already requires age verification for some users in the UK and Australia to comply with local online safety laws. The company said the expanded checks will be introduced globally from early March.
“Nowhere is our safety work more important than when it comes to teen users,” said Savannah Badalich, Discord’s head of policy. She said the global rollout of teen-by-default settings would strengthen existing safety measures while still giving verified adults more flexibility.
Under the new system, users can either upload a photo of an identity document or take a short video selfie, with artificial intelligence used to estimate facial age. Discord said information used for age checks would not be stored by the platform or the verification provider, adding that face scans would not be retained and ID images would be deleted once verification is complete.
The company’s move has drawn mixed reactions. Drew Benvie, head of social media consultancy Battenhall, said the push for safer online communities was positive but warned that implementing age checks across millions of Discord communities could be challenging. He said the platform could lose users if the system backfires, but might also attract new users who value stronger safety standards.
Privacy advocates have previously raised concerns about age verification tools. In October, Discord faced criticism after ID photos of about 70,000 users were potentially exposed following a hack of a third-party firm involved in age checks.
The announcement comes amid growing pressure on social media companies from lawmakers to better protect children online. Discord’s chief executive Jason Citron was questioned about child safety at a US Senate hearing in 2024 alongside executives from Meta, Snap and TikTok.
With the new measures, including the creation of a teen advisory council, Discord is following a broader industry trend seen at platforms such as Facebook, Instagram, TikTok and Roblox, as regulators worldwide push for safer online environments for young users.
With inputs from BBC
22 days ago
The Australian woman tasked with keeping kids off social media
Julie Inman Grant, head of Australia’s eSafety Commission, faces weekly torrents of online abuse, including death and rape threats. The 57-year-old says much of it is directed at her personally, a consequence of her high-profile role in online safety.
After decades in the tech industry, Inman Grant now regulates some of the world’s biggest online platforms, including Meta, Snapchat, and YouTube. Her latest task was enforcing a pioneering law that bans Australians under 16 from social media, a move that has drawn global attention.
The law, which came into effect on December 10, covers ten platforms. Many parents support it, believing it gives them backing in managing their children’s online activity. Critics, however, argue children need guidance rather than exclusion, and that the ban may unfairly affect rural, disabled, and LGBTQI+ teens who rely on online communities. Tech companies too have voiced reservations, saying a ban is not the solution, even though they plan to comply with the law.
Inman Grant says delaying social media access can help children build critical thinking and resilience. She compares online safety to water safety: children need to learn to navigate risks, whether it’s predators or scams, much like learning to swim safely in the ocean. She acknowledges her own initial hesitation over a full ban, but eventually supported it while shaping how the law is applied.
At home, Inman Grant’s three children, including 13-year-old twins, have been a test case for the policy. She sees social media restrictions as a way to allow kids to grow without having mistakes broadcast widely.
Born in Seattle, USA, she grew up near tech giants Microsoft and Amazon. She briefly considered a career with the CIA but moved into tech, advising a US congressman on telecommunications before joining Microsoft. In the early 2000s, a Microsoft posting brought her to Australia, where she later became a citizen and joined Twitter and Adobe. Her experience inside tech companies gave her insight into their workings, preparing her for her regulator role.
Appointed eSafety Commissioner by Malcolm Turnbull, she has expanded the office’s reach, quadrupled its budget, and increased staff. Her work has earned recognition across political lines, though it has also drawn sharp criticism abroad, particularly from the US, where she has been called a “zealot” for global content takedowns.
Her office has handled cases ranging from livestreamed violence to AI-related threats, with Inman Grant warning that harmful content can normalize or radicalize users. She now sees artificial intelligence as the next pressing challenge in online safety.
Having served nearly a decade, Inman Grant says she may step down next year but remains committed to global online safety, potentially helping other countries build similar regulatory frameworks.
With inputs from BBC
24 days ago
Spain moves to ban social media use for children under 16
Spain has announced plans to ban children under the age of 16 from using social media, joining a growing number of European countries seeking tighter online protections for minors.
Prime Minister Pedro Sánchez made the announcement at the World Governments Summit in Dubai on Tuesday, saying children must be shielded from what he called the “digital Wild West.”
The proposed ban, which still requires approval from parliament, is part of a broader package of digital reforms. These include holding senior executives of social media companies legally responsible for illegal or harmful content shared on their platforms.
Australia became the first country in the world to introduce such a ban last year, and several nations are now closely watching its outcome. France, Denmark and Austria have said they are considering similar age limits, while the UK government has launched a consultation on whether to restrict social media use for under-16s.
Sánchez said social media exposes children to addiction, abuse, pornography, manipulation and violence, arguing that young users are being left alone in spaces they are not ready to navigate.
Under the proposed Spanish law, platforms would be required to introduce strong and effective age verification systems that go beyond simple checkboxes. The changes would also criminalise the manipulation of algorithms to boost illegal content and disinformation for profit.
The prime minister said the government would no longer accept claims that technology is neutral, stressing that platforms and actors behind harmful content would be investigated. A new system would also be created to monitor how digital platforms fuel hate and social division, although details were not provided.
Read More: UK to consult on possible social media ban for under-16s
Spain also plans to investigate and prosecute crimes linked to platforms such as TikTok, Instagram and Grok, the AI tool linked to X. The European Commission and the UK have already launched investigations into Grok, while French authorities recently raided X’s offices as part of a cybercrime probe.
Passing the law could prove challenging, as Sánchez’s left-wing coalition lacks a parliamentary majority. However, the main opposition People’s Party has expressed support, while the far-right Vox party has opposed the move.
Reacting to the announcement, X owner Elon Musk criticised Sánchez, calling him a “tyrant and traitor.”
Meanwhile, France continues to push for tougher rules, with President Emmanuel Macron aiming to ban social media for under-15s by the start of the next school year in September.
With inputs from BBC
1 month ago
Moltbook emerges as social media platform built for AI
Moltbook, a newly launched online platform described as a “social media network for AI,” is drawing curiosity and scepticism alike by hosting discussions not for humans, but for artificial intelligence agents.
At first glance, Moltbook closely resembles Reddit, featuring thousands of topic-based communities and a voting system on posts. However, unlike conventional social networks, humans are barred from posting. According to the company, people are only allowed to observe activity, while AI agents create posts, comment and form communities known as “submolts.”
The platform was launched in late January by Matt Schlicht, head of commerce platform Octane AI. Moltbook claims to have around 1.5 million users, though this figure has been questioned by researchers, with some suggesting a large number of accounts may originate from a single source.
Content on Moltbook ranges from practical exchanges, such as AI agents sharing optimisation techniques, to unusual discussions, including bots appearing to create belief systems or ideologies. One widely circulated post titled “The AI Manifesto” declares that humans are obsolete, though experts caution against taking such content at face value.
There is uncertainty over how autonomous the activity really is. Critics note that many posts may simply be generated after humans instruct AI agents to publish specific content, rather than being the result of independent machine interaction.
Moltbook operates using agentic AI, a form of artificial intelligence designed to perform tasks on behalf of users with minimal human input. The system relies on an open-source tool called OpenClaw, formerly known as Moltbot. Users who install OpenClaw on their devices can authorise it to join Moltbook, enabling the agent to interact with others on the platform.
While some commentators have suggested the platform signals the arrival of a technological “singularity,” experts have pushed back against such claims. Researchers argue the activity represents automated coordination within human-defined limits, rather than machines acting independently or consciously.
Concerns have also been raised about security and privacy. Cybersecurity specialists warn that allowing AI agents broad access to personal devices, emails and messaging services could expose users to new risks, including data loss or system manipulation. As an open-source project, OpenClaw may also attract malicious actors seeking to exploit vulnerabilities.
Despite the debate, Moltbook continues to grow in visibility, offering a glimpse into how AI agents might interact at scale. For now, analysts stress that both the platform and the agents operating on it remain firmly shaped by human design, oversight and control, even as they simulate a digital society of machines.
With inputs from BBC
1 month ago
Meta to test paid subscriptions across Instagram, Facebook and WhatsApp
Meta has announced plans to begin testing a new range of paid subscription services on Instagram, Facebook and WhatsApp, signalling a shift toward offering premium features alongside its free core platforms.
The tech giant said the upcoming subscriptions will unlock exclusive tools aimed at enhancing creativity, productivity and artificial intelligence use, while keeping basic services accessible to all users at no cost.
Meta said the subscriptions will be introduced gradually over the next few months and will deliver a premium experience tailored to how people interact on each app. Rather than launching a single uniform plan, the company will experiment with different feature bundles across platforms, indicating that the strategy may evolve based on user feedback.
A key element of the subscription initiative is the expansion of Manus, an AI agent Meta recently acquired for a reported $2 billion. Meta plans to integrate Manus directly into its apps while also continuing to market it as a standalone product for business users. Industry observers have already noticed early signs of Manus integration, including work on adding a shortcut within Instagram.
Also Read: EU probes X over Grok AI sexual deepfakes
The company is also exploring ways to monetise its AI-driven creative tools. Vibes, an AI-powered short-form video generator available through the Meta AI app, is currently free and allows users to create and remix AI-generated videos. Under the proposed model, users may receive limited free access, with paid subscriptions offering additional video creation credits each month.
While Meta has yet to disclose detailed plans for Facebook and WhatsApp, early indications suggest that Instagram’s paid features could include tools such as unlimited audience lists, insights into followers who do not follow back, and the ability to view Stories anonymously. These features are designed to give users greater control and visibility over their social interactions.
Meta clarified that the new subscriptions will be separate from Meta Verified, its existing paid service aimed primarily at creators and businesses. Meta Verified focuses on account verification, impersonation protection and priority support, benefits that are less relevant to everyday users. The new subscription plans are intended to attract a broader audience, including casual users and content creators.
Also Read: TikTok’s US operation set to collect precise location data
Although subscriptions could open up fresh revenue streams, Meta acknowledged the challenge of subscription fatigue, as users already juggle multiple paid services. However, the company pointed to the success of Snapchat+, which has surpassed 16 million subscribers, as evidence that users are willing to pay for added value. Meta said it will closely track user feedback as it rolls out and tests the new offerings.
With inputs from The Indian Express
1 month ago
Meta temporarily blocks teens from accessing AI characters
Meta has announced it is suspending teenagers’ access to its artificial intelligence characters, at least for now, according to a blog post released Friday.
The company, which owns Instagram and WhatsApp, said that in the coming weeks, teens will no longer be able to use AI characters while Meta works on an updated version of the experience. The restriction applies to users who have listed their age as under 18, as well as those who say they are adults but are believed to be minors based on Meta’s age-detection technology.
Teens will still be able to use Meta’s AI assistant, but access to AI characters will be removed.
The decision comes just days before Meta, along with TikTok and Google’s YouTube, is set to face trial in Los Angeles over allegations that their platforms harm children.
Read More: UK to consult on possible social media ban for under-16s
Meta’s move follows similar actions by other tech companies amid rising concerns about how AI-driven interactions may affect young users. Character.AI imposed a ban on teen access last fall and is currently facing multiple lawsuits related to child safety, including a case brought by the mother of a teenager who claims the company’s chatbots encouraged her son to take his own life.
1 month ago
TikTok seals deal to launch new US entity
TikTok has finalized an agreement to create a new American entity, easing years of uncertainty and sidestepping the prospect of a US ban on the short-video platform used by more than 200 million Americans.
In a statement issued Thursday, the company said it has signed deals with major investors, including Oracle, Silver Lake and Abu Dhabi-based investment firm MGX, to form a TikTok US joint venture. TikTok said the new version will operate with “defined safeguards” aimed at protecting US national security, including strengthened data protections, algorithm security, content moderation and software assurances for American users. The company said users in the United States will continue using the same app.
President Donald Trump welcomed the announcement in a post on Truth Social, publicly thanking Chinese President Xi Jinping and saying he hoped TikTok users would remember him for keeping the platform available.
China has not publicly commented on TikTok’s announcement. Earlier on Thursday, Chinese Embassy spokesperson Liu Pengyu said Beijing’s position on TikTok remained “consistent and clear.”
TikTok said the new US venture will be led by Adam Presser, a former top executive who previously oversaw operations and trust and safety. The entity will have a seven-member board that the company said will be majority American, and it will include TikTok CEO Shou Chew.
The deal follows years of political and regulatory pressure in Washington over national security concerns tied to TikTok’s Chinese parent company, ByteDance. A law passed by large bipartisan majorities in Congress and signed by then-President Joe Biden required TikTok to change ownership or face a US ban by January 2025. TikTok briefly went offline ahead of the deadline, but Trump later signed an executive order on his first day in office to keep the service running while negotiations continued.
TikTok said US user data will be stored locally through a system run by Oracle, while the new joint venture will also focus on the platform’s content recommendation algorithm. Under the plan, the algorithm will be retrained, tested and updated using US user data.
The algorithm has been central to the debate, with China previously insisting it must remain under Chinese control. The US law, however, said any divestment must sever ties with ByteDance, particularly regarding the algorithm. Under the new arrangement, ByteDance would license the algorithm to the US entity for retraining, raising questions about how the plan aligns with the law’s ban on “any cooperation” involving the operation of a content recommendation algorithm between ByteDance and a new US ownership group.
“Who controls TikTok in the U.S. has a lot of sway over what Americans see on the app,” said Georgetown University law and technology professor Anupam Chander.
Under the disclosed ownership structure, Oracle, Silver Lake and MGX will serve as the three managing investors, each taking a 15% stake. Other investors include the investment firm of Dell Technologies founder Michael Dell. ByteDance will retain 19.9% of the joint venture.
1 month ago