Social Media
American YouTuber jailed in South Korea for offensive stunts
A US YouTuber who caused widespread anger in South Korea with a series of offensive online stunts has been sentenced to six months in prison.
The Seoul Western District Court on Wednesday found Ramsey Khalid Ismael, known online as Johnny Somali, guilty of several charges, including disrupting businesses and sharing fake sexually explicit content.
Prosecutors had asked for a three-year jail term. Ismael was also accused of harassing people at an amusement park, creating disturbances at a convenience store by playing loud music and throwing noodles on a table, and causing similar disruptions on public transport. He was also charged with distributing deepfake videos without consent.
The court said the 25-year-old showed serious disregard for South Korean laws and hurt many people through his livestreamed actions aimed at earning money on YouTube. He was taken into custody immediately after the verdict, as the court considered him a flight risk.
In October 2024, Ismael triggered public outrage after posting a video of himself kissing and performing a lap dance on a statue honoring victims of Japan’s wartime sexual slavery. He later apologized, saying he did not understand the importance of the monument.
Ismael, who had been banned from leaving South Korea during the trial, earlier told reporters he regretted his actions and wanted to apologize to the public.
3 days ago
AI ‘Lego-style’ videos push pro-Iran narrative, raise propaganda concerns
Viral AI-generated videos styled like Lego animations are being used to spread pro-Iran narratives during the ongoing conflict, with experts warning they represent a powerful new form of propaganda.
At first glance, the fast-paced and vividly coloured clips resemble scenes from animated films. However, the content often includes images of war, injured children, fighter jets and US President Donald Trump, presenting Iran as resisting what it portrays as a dominant global power, the United States.
In a recent BBC podcast, a representative of Explosive Media, one of the main creators of such videos, acknowledged that the Iranian government is a “customer” of the outlet, despite earlier claims of being fully independent.
The individual, who identified himself as Mr Explosive, said his small team uses the Lego-style format because it is easily understood across cultures. The videos are widely shared by Iranian and Russian state-linked social media accounts, reaching millions of viewers.
Experts say the content is highly effective. Propaganda specialist Dr Emma Briant described the videos as “highly sophisticated,” noting that AI tools trained on Western data help create culturally familiar messages for global audiences. She said the clips have collectively drawn hundreds of millions of views.
The videos often mix political messaging with controversial or unverified claims. Some include references to conspiracy theories, such as alleged links between US figures and the Epstein files, for which there is no credible evidence.
In one widely circulated clip, a downed US pilot is shown being captured by Iranian forces. However, US officials said the pilot was rescued by special forces on April 4 and is receiving treatment in Kuwait. The producer rejected that account, offering an alternative version without evidence.
Analysts say such content can shape perceptions by rapidly spreading misleading narratives. Some social media influencers have echoed the claims made in the videos, further amplifying their reach among English-speaking audiences.
The clips have become more detailed in recent months, depicting specific locations in the Gulf region being destroyed by Iranian strikes. In reality, reports suggest damage in many cases has been limited.
The videos are often released shortly after major developments in the conflict, sometimes even before official announcements, indicating a coordinated and fast-moving content strategy.
Explosive Media’s representative defended working with the Iranian government, calling it an “honourable” role, and dismissed criticism over misinformation and alleged bias.
Researchers say this type of AI-driven messaging signals a shift in how countries communicate during conflicts, bypassing traditional media channels and directly targeting global audiences.
While social media platforms have removed some accounts sharing the videos, similar content continues to reappear, highlighting the challenges of controlling such rapidly evolving digital campaigns.
Source: BBC
7 days ago
Advocacy groups urge YouTube to protect kids from 'AI slop' videos
Apr 1 (AP/UNB)--Advocacy groups and experts condemned YouTube for serving up low-quality artificial intelligence-generated videos to its most vulnerable audience: children.
In a letter to YouTube CEO Neal Mohan and Sundar Pichai, the CEO of YouTube’s parent company Google, children’s advocacy group Fairplay expresses “serious concern” about the spread of AI-generated videos on both YouTube and YouTube Kids. The letter, which was sent on Wednesday morning, was signed by more than 200 organizations and individual experts such as child psychiatrists and educators.
“This ‘AI slop’ harms children’s development by distorting their sense of reality, overwhelming their learning processes and hijacking their attention, thereby extending time online and displacing offline activities necessary for their healthy development,” the letter reads. “These harms are particularly acute for young children.” The letter calls on YouTube to clearly label all AI-generated content and ban any AI-generated content on YouTube Kids. The groups also propose barring AI-generated videos from being recommended to users under 18 and implementing an option for parents to turn off AI-generated content even if their child searches for it.
The letter is signed by 135 organizations including the American Federation of Teachers and the American Counseling Association, and around 100 individual experts like “The Anxious Generation” author Jonathan Haidt. The letter is part of a larger campaign from Fairplay that also includes a petition.
Much of this AI-generated content is fast-paced with bright colors, lively music and clickbait titles that work to grab the attention of young viewers, the letter outlines. There has been a growing movement online against AI-generated content, particularly when it looks or feels low quality or leans into the meaninglessness of “brainrot.”
YouTube spokesperson Boot Bullwinkle said in a statement that the platform has “high standards for the content in YouTube Kids, including limiting AI-generated content in the app to a small set of high-quality channels.”
“We also provide parents the option to block channels. Across YouTube, we prioritize transparency when it comes to AI content, labeling content from our own AI tools, and requiring creators to disclose realistic AI content,” Bullwinkle said. “We’re always evolving our approach to stay current as the ecosystem evolves.”
YouTube's current policy regarding AI-generated content requires creators to disclose when content that's “realistic” is made with altered or synthetic media, including generative AI. Creators are not required to disclose when generative AI is used to create content that is clearly unrealistic, including animated videos and those with special effects.
YouTube said it is actively working on developing labels for YouTube Kids.
In its letter, Fairplay argues that the voluntary disclosure policy and what it sees as an “extremely limited” definition of altered and synthetic content mean kids still see a flood of AI-generated videos that are not labeled as such. The group also argues that many children who watch YouTube videos are not yet able to read or to comprehend something like an AI disclosure. That leaves children “to fend for themselves or their parents to play whack-a-mole,” the letter reads.
Fairplay's campaign comes shortly after Google’s AI Futures Fund invested $1 million into Animaj, an AI animation studio that makes videos for kids and draws in staggeringly high viewership numbers, according to Bloomberg.
The campaign follows a landmark verdict in a social media addiction trial in which a California jury found that YouTube designed its platform to hook young users without concern for their well-being. Meta was also found liable on the same counts as YouTube in the same case.
“Pushing AI slop onto young children is just another testament to how YouTube and YouTube Kids are designed to maximize children’s time online — including babies. AI slop hypnotizes young children, making it hard for them to get off their screens and move onto essential activities like play, sleep and social interaction,” said Rachel Franz, the director of Fairplay’s Young Children Thrive Offline program, in a statement. “What’s more, YouTube’s algorithm makes it impossible for kids to avoid AI slop.”
Earlier this year, YouTube head Mohan listed “managing AI slop” as one of the company's priorities for 2026. In a January blog post, he wrote that the company was “actively building on our established systems that have been very successful in combatting spam and clickbait, and reducing the spread of low quality, repetitive content.”
17 days ago
Pokémon criticises White House for using its imagery in political meme
The Pokémon Company International has criticised the White House for using its imagery, including the popular character Pikachu, in a political meme posted online with the slogan “Make America Great Again”.
Pokémon spokeswoman Sravanthi Dev said the company had no role in the meme and had not authorised the use of its intellectual property: “We were not involved in its creation or distribution, and no permission was granted for the use of our intellectual property.”
She added that the company’s mission is to bring people together and that it is not linked to any political viewpoint or agenda.
This is not the first time the company has objected to the Trump administration’s use of its content. In September, Pokémon also criticised a video that used its theme song and the slogan “Gotta catch ’em all” while showing arrests made by US border patrol and immigration agents as part of the administration’s deportation campaign.
The latest meme appears to use an image from the recently released game Pokopia for Nintendo. The slogan was written in a font similar to the game’s style, with a small version of Pikachu appearing behind the letter “e” in the word “make”.
When asked about the criticism, the White House referred the BBC to a post on X by spokesman Kaelan Dorr. In the post, Dorr shared a 10-year-old Wall Street Journal article about former Democratic presidential candidate Hillary Clinton, who once referenced the mobile game Pokémon Go during the 2016 election campaign, saying she was trying to get supporters to “have Pokémon go to the polls”.
“Hey Mr Pikachu, big fan. Question for you – why no response to articles like this?” Dorr wrote on X, suggesting the company might have a political bias.
The Pokémon Company did not say whether it plans to take legal action over the use of its content.
During Donald Trump’s second term, the White House has frequently used popular internet memes on official social media accounts to promote its policies.
White House spokeswoman Abigail Jackson earlier defended the approach, saying the administration was using engaging posts and memes to communicate the president’s agenda.
Recently, the White House also posted a video combining images from the war with Iran and scenes from the video game series Call of Duty.
Several artists and public figures have criticised the administration for using their content without permission. Comedian and podcaster Theo Von last year objected after the Department of Homeland Security used a clip of him in a video highlighting deportation numbers.
Von responded on X saying he did not approve the use of the clip and asked the agency to remove it.
Source: BBC
1 month ago
AI-generated misinformation about Iran war spreads widely online as creators profit from new technology
An extraordinary surge of AI-generated misinformation linked to the US-Israel war with Iran is being exploited by online content creators who are using advanced generative AI tools to generate revenue, experts have told BBC Verify.
Analysis by BBC Verify uncovered numerous instances of AI-created videos and manipulated satellite images being circulated online to support false or misleading claims about the conflict. Collectively, such content has drawn hundreds of millions of views across social media platforms.
“The scale is deeply concerning and the current war has brought the issue into sharp focus,” said Timothy Graham, a digital media specialist at Queensland University of Technology.
“What previously required professional video production teams can now be produced within minutes using AI tools. The barrier to creating convincing synthetic footage of conflict has effectively disappeared,” he added.
The United States and Israel began launching military strikes on Iran on February 28. In response, Iran has carried out drone and missile attacks targeting Israel as well as several Gulf countries and US military assets across the region.
As the conflict escalated rapidly over the past week, many people turned to social media platforms to follow developments, seek updates and share information about the unfolding situation.
Social media platform X announced this week that it will temporarily remove creators from its monetisation programme if they share AI-generated videos of armed conflicts without clearly labelling them.
Under the programme, eligible users receive payments when their posts attract large numbers of views, likes, shares and comments.
Mahsa Alimardani, a researcher on Iran at the Oxford Internet Institute, said the decision signals that the platform recognises the scale of the problem.
“It’s a significant indication that they understand this is a major issue,” she said.
BBC Verify contacted TikTok and Meta, the parent company of Facebook and Instagram, to ask whether they plan to introduce similar measures. Neither company responded to requests for comment.
One example of misleading AI-generated content identified by BBC Verify appears to show missiles hitting the Israeli city of Tel Aviv while explosions can be heard in the background.
The clip has appeared in more than 300 separate posts and has been shared tens of thousands of times across multiple social media platforms.
Some users on X asked the platform’s AI chatbot Grok to verify whether the footage was authentic. However, BBC Verify found that in several cases the chatbot incorrectly claimed the AI-generated footage was real.
Another fabricated video, which has been viewed tens of millions of times, purports to show the Burj Khalifa skyscraper in Dubai engulfed in flames while crowds appear to run toward the building.
The AI-generated clip circulated widely online during a period of heightened anxiety among residents and tourists following reports of drone and missile strikes targeting the city.
According to Alimardani, such fabricated content damages public confidence in reliable information.
“Videos like these undermine trust in verified information available online and make it far more difficult to document genuine evidence,” she said.
BBC Verify also identified a new element emerging in the conflict: the spread of AI-generated satellite images.
On the first day of the war, BBC Verify confirmed several authentic videos showing Iranian drones and missiles striking the headquarters of the US Navy’s Fifth Fleet in Bahrain.
However, a manipulated satellite image shared on X by the state-linked newspaper The Tehran Times began circulating the following day, claiming to show severe destruction at the military facility.
The fabricated image appears to have been derived from a real satellite photo of a US naval base in Bahrain taken in February 2025, which is publicly available online.
Google’s SynthID watermark detection system indicates that the altered image was generated or modified using a Google AI tool.
Further examination shows that three vehicles parked outside the base appear in exactly the same positions in both the genuine satellite photo and the manipulated AI image, even though the pictures supposedly represent scenes captured a year apart.
Google’s AI products, including the video-generation tool Veo, are among a growing number of widely used AI platforms. Others include OpenAI’s Sora model, the Chinese AI application Seedance, and Grok, which is integrated into X.
Henry Ajder, a specialist in generative AI, said the range and accessibility of such tools has grown dramatically.
“The number of tools now available to create highly realistic AI manipulations across different formats is unprecedented,” he said.
“We have never seen these technologies so accessible, so simple to use and so inexpensive,” Ajder added.
Victoire Rio, executive director of the technology policy non-profit What To Fix, said this has contributed to a sharp rise in AI-generated material online because the process of producing and distributing such content can now be largely automated.
Meanwhile, X’s head of product said on Tuesday that about 99 percent of accounts sharing AI-generated war footage were attempting to “game monetisation” by posting content designed to attract high engagement and earn payments through the platform’s Creator Revenue Sharing programme.
X does not disclose how many accounts participate in the programme or the amount of money creators can earn from it.
However, Graham estimates that X may pay between $8 and $12 for every one million verified user impressions.
To qualify for the programme, creators must generate at least five million organic impressions within three months and maintain an X Premium subscription, he said.
“Once creators qualify, viral AI-generated content effectively becomes a money-making machine,” Graham added. “It has created the ultimate misinformation enterprise.”
X did not respond to BBC Verify’s requests for comment or questions about the Creator Revenue Sharing programme.
Experts told BBC Verify that although social media companies say they are attempting to improve moderation and detection systems to manage the rapid spread of AI-generated content, addressing the issue remains complex.
“The deeper problem is that monetisation driven by engagement and the distribution of accurate information are fundamentally at odds,” Graham said. “No platform has fully solved that conflict, and perhaps none ever will.”
1 month ago
Social media took over my childhood, young woman tells court in historic trial
A young woman who is battling against social media giants took the stand Thursday to testify about her experience using the platforms as she was growing up, saying she was on social media “all day long” as a child.
The now 20-year-old, identified in court documents as KGM, says her early use of social media addicted her to the technology and exacerbated her depression and suicidal thoughts. Meta and YouTube are the two remaining defendants in the case; TikTok and Snap have already settled.
The case, along with two others, has been selected as a bellwether trial, meaning its outcome could impact how thousands of similar lawsuits against social media companies are likely to play out.
KGM, or Kaley, as her lawyers have called her during the trial, started using YouTube at age 6 and Instagram at age 9.
A turbulent home life
Kaley took the stand wearing a pink floral dress and a beige cardigan and said she was “very nervous” after her attorney, Mark Lanier, asked how she was doing Thursday morning.
Lanier displayed childhood photos of Kaley and her family and asked about positive memories from her upbringing in a quiet cul-de-sac in Chico, California. She spoke of themed birthday parties, trips to Six Flags and her mom’s consistent efforts to make her childhood special.
Still, Kaley’s relationship with her mother was challenging at times. Kaley said most of their arguments were over the use of her phone.
Both the defendants and the plaintiff have pointed to a turbulent home life for Kaley. Her attorneys say she was preyed upon as a vulnerable user, but attorneys representing Meta and Google-owned YouTube have argued Kaley turned to their platforms as a coping mechanism or a means of escaping her mental health struggles.
When asked about claims that her mother had hit her, abused her and neglected her, Kaley said “she wasn’t perfect, but she was trying her best,” and clarified that she doesn’t think she would label her mother’s past actions as abuse or neglect today.
But later Thursday, during cross-examination, Kaley did agree that her mother had been physically and emotionally abusive during the period when she was self-harming, around sixth grade.
Kaley, who works as a personal shopper at Walmart, lives with her mother in the home she grew up in.
Notifications gave her a ‘rush’
As a child, Kaley set up multiple accounts on both Instagram and YouTube so she could like and comment on her posts. She said she would also “buy” likes through a platform where she could like other people’s photos and get a slew of likes in return. “It made me look popular,” she said.
Kaley was asked specifically about the features the plaintiffs argue are deliberately designed to be addictive, including notifications. Those notifications on both Instagram and YouTube gave her a “rush,” she said. She would receive them throughout the day and would go to the bathroom during school to check them — something she still does.
Kaley said while she uses YouTube less often now, she believes she was previously addicted to it. “Anytime I tried to set limits for myself, it wouldn’t work and I just couldn’t get off,” she said.
Filters on Instagram, specifically those that could change a person’s cosmetic appearance, have also loomed large in the case and were a constant fixture of Kaley’s use. Lanier and his colleagues unfurled a nearly 35-foot-long canvas banner with photos Kaley has posted on Instagram. She said “almost all” of the photos had a filter on them.
The jury was also shown Instagram posts and YouTube videos Kaley posted as a child and young teen. One video showed her saying she was “crying tears of joy” after surpassing 100 YouTube subscribers — but then she quickly turned to her looks, apologizing for her “ugly appearance.”
“I look so fat in this shirt,” the young Kaley says in the video.
Kaley said she did not experience the negative feelings associated with her body dysmorphia diagnosis before she began using social media and filters.
Meta focuses on plaintiff's home life, contradictory statements
Meta has argued that Kaley faced significant challenges before she ever used social media. The company's lawyer, Paul Schmidt, said earlier this month that the core question in the case is whether the platforms were a substantial factor in Kaley's mental health struggles.
Meta attorney Phyllis Jones took a polite, respectful tone in her cross-examination Thursday, acknowledging that it could be uncomfortable for Kaley to speak about her private life in front of a room of strangers. Jones then zeroed in on Kaley’s home life.
Jones pulled up text exchanges and posts Kaley had made on Instagram about her mental health and her relationship with her mother and played videos Kaley took of her mother yelling at her.
On nearly 20 occasions during the Meta cross-examination, Jones asked Kaley to look at the transcript from her 2025 deposition, which contradicted some of the responses she gave during her testimony. Many of those questions were about how a specific action by her family members or a specific experience impacted her mental health, with Kaley saying on Thursday they either didn’t have an impact or didn’t significantly contribute to anxiety and depression. Her deposition from about a year ago often said the opposite.
“I tried to answer the questions to the best of my ability, but I may have misspoke at times,” Kaley said of her deposition.
Jones confirmed with Kaley that she had never had a doctor or mental health care provider diagnose her with a social media addiction, nor had she been treated for an addiction to Instagram or told by a provider to limit her Instagram use. Kaley said she never raised concerns about overuse or addiction with providers because she felt they would tell her to get off the platforms entirely, which she didn’t want.
Therapist: Social media and sense of self 'were closely related’
Victoria Burke, a therapist who worked with Kaley in 2019, testified on Wednesday that Kaley’s social media use and her sense of self “were closely related,” adding that what was happening on the platforms could “make or break her mood.”
An attorney for Meta parsed through Burke's notes from her sessions with Kaley extensively in a cross-examination that lasted about three hours. He highlighted Kaley's negative experiences with in-person bullying, other school-based sources of stress and anxiety, and issues with her family. Mentions of social media in the notes were mostly limited to Kaley saying she didn't feel she had a place at home, at school or among her peers, but did feel she had a place to be seen on social media.
Burke treated Kaley for about six months, roughly seven years ago.
The case is expected to continue for several weeks, and the jury's verdict could shape how a slew of similar lawsuits against social media companies play out. Meta is also facing a separate trial in New Mexico.
1 month ago
New Instagram feature warns parents if teens search suicide-linked terms often
Instagram will begin notifying parents if their children repeatedly search for terms linked to suicide or self-harm, the social media platform said Thursday. The alerts will only reach parents enrolled in Instagram’s parental supervision program.
The company said it already blocks such content from appearing in teen accounts’ search results and directs users to helplines. Alerts will be sent via email, text, WhatsApp, or through the parent’s Instagram account, depending on the contact information available. “Our goal is to empower parents to step in if their teen’s searches suggest they may need support,” Meta said in a blog post, adding that notifications will be carefully managed to avoid overuse, which could reduce their effectiveness.
The announcement comes as Meta faces two ongoing trials over alleged harms to children. In Los Angeles, a trial examines whether Meta’s platforms intentionally addict and harm minors, while a New Mexico trial considers whether the company failed to protect children from sexual exploitation. Thousands of families, along with school districts and government entities, have sued Meta and other social media firms, claiming their platforms are designed to be addictive and expose children to content that may contribute to depression, eating disorders, and suicide.
Meta executives, including CEO Mark Zuckerberg, have denied that their platforms cause addiction. During questioning in Los Angeles, Zuckerberg said the scientific evidence does not prove social media harms mental health.
Meta also said it is developing similar notifications to alert parents if their teens engage in certain conversations with Instagram’s artificial intelligence tools related to suicide or self-harm. “This is important work, and we’ll have more to share in the coming months,” the company added.
1 month ago
Human voices drive Reddit growth amid AI content surge
As artificial intelligence floods the internet with automated content, many users are increasingly turning to Reddit for what they see as something rare online: real human experience, empathy and honest discussion.
For users like Ines Tan, a communications professional, Reddit has become a go-to space for advice on skincare, reactions to TV shows and even emotional and practical support while planning her wedding. She describes the platform as “empathetic”, saying it offers emotional reassurance alongside practical help, something she feels is missing from more polished social media platforms.
Reddit’s appeal appears to be growing fast. The company reported 116 million daily active users worldwide in its latest third-quarter results, a 19 percent rise year on year. In both the United States and the United Kingdom, women now make up more than half of users, with Reddit emerging as the fastest-growing social platform among women in the UK.
Launched in 2005, Reddit is built around user-created communities known as subreddits. Content is ranked by user votes rather than timelines, and volunteer moderators oversee discussions, supported by site administrators who can intervene when needed.
According to Reddit chief operating officer Jen Wong, the platform’s strength lies in its human-driven conversations at a time when AI-generated material is increasingly dominating the web. She said people are recognising that Reddit offers a level of authenticity that much of the internet has lost, with popular discussions ranging from parenting and reality TV to skincare and health.
However, experts warn that Reddit is not without flaws. Dr Yusuf Oc, a senior lecturer in marketing at Bayes Business School in London, said the platform can confuse popularity with accuracy, creating risks of groupthink, echo chambers and coordinated manipulation through tactics such as “brigading” and “astroturfing”.
Reddit says it actively works to tackle such risks. A company spokesperson said manipulated content and inauthentic behaviour are prohibited, with enforcement carried out through a mix of human review, automated tools and community-level rules set by moderators.
Some analysts argue that Reddit’s growing visibility is also linked to content licensing deals with AI companies, including OpenAI, which allow AI systems to access Reddit discussions. But experts say these deals mainly boost visibility rather than explain why users keep returning.
Long-time users say the platform’s anonymity remains a key attraction. London-based user Josh Feldberg said Reddit offers kinder, more thoughtful feedback than many other social networks and lacks the influencer-driven incentives common elsewhere.
As social media becomes more automated and curated, analysts say users are increasingly seeking lived experience, disagreement and nuance. For many, Reddit’s imperfect but human-centred conversations continue to stand out in an AI-saturated online world.
With inputs from BBC
2 months ago
Discord to require face scan or ID for adult content
Discord will soon require users worldwide to verify their age through a face scan or by uploading an official ID to access adult content, as the platform rolls out stricter safety measures aimed at protecting teenagers.
The online chat service, which has more than 200 million monthly users, said the new system will place everyone into a teen-appropriate experience by default. Only users who successfully verify that they are adults will be able to access age-restricted communities, unblur sensitive material or receive direct messages from people they do not know.
Discord already requires age verification for some users in the UK and Australia to comply with local online safety laws. The company said the expanded checks will be introduced globally from early March.
“Nowhere is our safety work more important than when it comes to teen users,” said Savannah Badalich, Discord’s head of policy. She said the global rollout of teen-by-default settings would strengthen existing safety measures while still giving verified adults more flexibility.
Under the new system, users can either upload a photo of an identity document or take a short video selfie, with artificial intelligence used to estimate facial age. Discord said information used for age checks would not be stored by the platform or the verification provider, adding that face scans would not be collected and ID images would be deleted once verification is complete.
The company’s move has drawn mixed reactions. Drew Benvie, head of social media consultancy Battenhall, said the push for safer online communities was positive but warned that implementing age checks across millions of Discord communities could be challenging. He said the platform could lose users if the system backfires, but might also attract new users who value stronger safety standards.
Privacy advocates have previously raised concerns about age verification tools. In October, Discord faced criticism after ID photos of about 70,000 users were potentially exposed following a hack of a third-party firm involved in age checks.
The announcement comes amid growing pressure on social media companies from lawmakers to better protect children online. Discord’s chief executive Jason Citron was questioned about child safety at a US Senate hearing in 2024 alongside executives from Meta, Snap and TikTok.
With the new measures, including the creation of a teen advisory council, Discord is following a broader industry trend seen at platforms such as Facebook, Instagram, TikTok and Roblox, as regulators worldwide push for safer online environments for young users.
With inputs from BBC
2 months ago
The Australian woman tasked with keeping kids off social media
Julie Inman Grant, head of Australia’s eSafety Commission, faces weekly torrents of online abuse, including death and rape threats. The 57-year-old says much of it is directed at her personally, a consequence of her high-profile role in online safety.
After decades in the tech industry, Inman Grant now regulates some of the world’s biggest online platforms, including Meta, Snapchat, and YouTube. Her latest task was enforcing a pioneering law that bans Australians under 16 from social media, a move that has drawn global attention.
The law, which came into effect on December 10, covers ten platforms. Many parents support it, believing it gives them backing in managing their children’s online activity. Critics, however, argue that children need guidance rather than exclusion, and that the ban may unfairly affect rural, disabled, and LGBTQI+ teens who rely on online communities. Tech companies have also voiced reservations, saying a ban is not the solution, even though they plan to comply with the law.
Inman Grant says delaying social media access can help children build critical thinking and resilience. She compares online safety to water safety: children need to learn to navigate risks, whether it’s predators or scams, much like learning to swim safely in the ocean. She acknowledges her own initial hesitation over a full ban, but eventually supported it while shaping how the law is applied.
At home, Inman Grant’s three children, including 13-year-old twins, have been a test case for the policy. She sees social media restrictions as a way to allow kids to grow without having mistakes broadcast widely.
Born in Seattle, USA, she grew up near tech giants Microsoft and Amazon. She briefly considered a career with the CIA but moved into tech, advising a US congressman on telecommunications before joining Microsoft. In the early 2000s, a Microsoft posting brought her to Australia, where she later became a citizen and joined Twitter and Adobe. Her experience inside tech companies gave her insight into their workings, preparing her for her regulator role.
Appointed eSafety Commissioner by Malcolm Turnbull, she has expanded the office’s reach, quadrupled its budget, and increased staff. Her work has earned recognition across political lines, though it has also drawn sharp criticism abroad, particularly from the US, where she has been called a “zealot” for global content takedowns.
Her office has handled cases ranging from livestreamed violence to AI-related threats, with Inman Grant warning that harmful content can normalize or radicalize users. She now sees artificial intelligence as the next pressing challenge in online safety.
Having served nearly a decade, Inman Grant says she may step down next year but remains committed to global online safety, potentially helping other countries build similar regulatory frameworks.
With inputs from BBC
2 months ago