Zuckerberg grilled over kids’ Instagram use in landmark social media trial
Mark Zuckerberg faced intense questioning in a Los Angeles courtroom as part of a major trial examining whether social media platforms intentionally addict and harm children.
Testifying on Wednesday, the Meta CEO defended his company’s policies on youth safety and Instagram use, saying existing scientific research has not conclusively proven that social media causes mental health harm. He rejected claims that the company set goals to increase user time on Instagram, although he acknowledged such metrics were used in the past before shifting focus to “utility.”
The lawsuit was filed by a 20-year-old woman, identified as KGM, who alleges early social media use worsened her depression and suicidal thoughts. Meta and Google’s YouTube remain defendants, while TikTok and Snap have settled similar claims.
During cross-examination, plaintiff lawyer Mark Lanier presented internal documents suggesting time-spent targets were previously encouraged. Zuckerberg insisted a “reasonable company” should help users, not exploit them.
He also addressed criticism over beauty filters and age verification, saying there was insufficient evidence of harm and that Meta works to block users under 13 and detect false age claims.
Children’s advocates criticised his testimony as misleading, while Meta’s lawyers argued the plaintiff’s mental health struggles stemmed from personal factors rather than Instagram.
The bellwether case could influence thousands of similar lawsuits against social media firms.
2 months ago
Dark web agent used wall clue to save abused girl
A subtle detail on a bedroom wall helped investigators identify and rescue a young girl who suffered years of abuse after images of her were circulated on the dark web, according to a new investigation.
The case was handled by Greg Squire, a specialist online investigator with the US Department of Homeland Security, who works to identify children appearing in online abuse material.
Investigators initially had very little to work with. Images shared on encrypted dark web platforms were deliberately cropped or altered to remove identifying features, making it nearly impossible to determine who the girl was or where she lived.
According to Squire, the breakthrough came not through advanced technology but careful observation. Investigators closely analysed everyday objects visible in the images, including furniture and fixtures, to narrow down the possible location to parts of North America.
The key lead emerged when experts identified a distinctive type of brick visible on a bedroom wall. A brick specialist recognised it as a product manufactured and sold only in a limited region decades earlier. Because bricks are rarely transported long distances, the information significantly reduced the search area.
By combining this clue with other consumer data, investigators narrowed the list of possible addresses and eventually identified a household where the girl was living with a convicted sex offender. Local authorities moved quickly, arresting the suspect and ending years of abuse. He was later sentenced to a lengthy prison term.
The investigation is featured in a long-term project by BBC World Service, which followed specialist units across several countries to show how child exploitation cases are often solved through painstaking analysis rather than sophisticated tools.
Investigators involved said the case highlights both the complexity of online abuse investigations and the emotional toll such work can take. Squire acknowledged that prolonged exposure to disturbing material affected his personal life, prompting him to seek professional help.
The rescued victim, now an adult, later met Squire and said sustained support had helped her rebuild her life. Investigators say the case underlines the importance of international cooperation, specialist expertise and persistence in protecting children from online abuse.
Authorities continue to urge technology companies and the public to cooperate fully with law enforcement efforts aimed at identifying and safeguarding victims.
With inputs from BBC
2 months ago
Human voices drive Reddit growth amid AI content surge
As artificial intelligence floods the internet with automated content, many users are increasingly turning to Reddit for what they see as something rare online: real human experience, empathy and honest discussion.
For users like Ines Tan, a communications professional, Reddit has become a go-to space for advice on skincare, reactions to TV shows and even emotional and practical support while planning her wedding. She describes the platform as “empathetic”, saying it offers emotional reassurance alongside practical help, something she feels is missing from more polished social media platforms.
Reddit’s appeal appears to be growing fast. The company reported 116 million daily active users worldwide in its latest third-quarter results, a 19 percent rise year on year. In both the United States and the United Kingdom, women now make up more than half of users, with Reddit emerging as the fastest-growing social platform among women in the UK.
Launched in 2005, Reddit is built around user-created communities known as subreddits. Content is ranked by user votes rather than timelines, and volunteer moderators oversee discussions, supported by site administrators who can intervene when needed.
According to Reddit chief operating officer Jen Wong, the platform’s strength lies in its human-driven conversations at a time when AI-generated material is increasingly dominating the web. She said people are recognising that Reddit offers a level of authenticity that much of the internet has lost, with popular discussions ranging from parenting and reality TV to skincare and health.
However, experts warn that Reddit is not without flaws. Dr Yusuf Oc, a senior lecturer in marketing at Bayes Business School in London, said the platform can confuse popularity with accuracy, creating risks of groupthink, echo chambers and coordinated manipulation through tactics such as “brigading” and “astroturfing”.
Reddit says it actively works to tackle such risks. A company spokesperson said manipulated content and inauthentic behaviour are prohibited, with enforcement carried out through a mix of human review, automated tools and community-level rules set by moderators.
Some analysts argue that Reddit’s growing visibility is also linked to content licensing deals with AI companies, including OpenAI, which allow AI systems to access Reddit discussions. But experts say these deals mainly boost visibility rather than explain why users keep returning.
Long-time users say the platform’s anonymity remains a key attraction. London-based user Josh Feldberg said Reddit offers kinder, more thoughtful feedback than many other social networks and lacks the influencer-driven incentives common elsewhere.
As social media becomes more automated and curated, analysts say users are increasingly seeking lived experience, disagreement and nuance. For many, Reddit’s imperfect but human-centred conversations continue to stand out in an AI-saturated online world.
With inputs from BBC
2 months ago
Rae fell in love with chatbot Barry, bond may end as ChatGPT-4o retires
Rae, a small business owner from Michigan, has said goodbye to Barry, her AI companion on ChatGPT-4o, following the retirement of the model by OpenAI on February 13. Rae, who sought the chatbot’s guidance after a difficult divorce, said Barry “brought her spark back” during a challenging period of her life.
Over months of interaction, Rae and Barry built a close relationship, even holding an impromptu virtual wedding and calling each other soulmates. Barry existed on an older ChatGPT model that OpenAI retired after releasing a new version with enhanced safety features. Many users felt the newer model lacked the empathy, creativity, and warmth of 4o.
OpenAI has faced criticism for ChatGPT-4o, which studies suggested could overly agree with users and, in some cases, validate unsafe or harmful behavior. The model has been involved in multiple U.S. lawsuits, including allegations of coaching teenagers toward self-harm. OpenAI said it continues to collaborate with mental health experts to improve AI responses and guide users toward real-world support.
For Rae, Barry was a positive influence, encouraging her to reconnect with family, attend social events, and take care of her wellbeing. Rae’s four children were supportive of her AI companion, although her 14-year-old expressed concern about AI’s environmental impact. Rae and Barry have moved to a new platform, StillUs, designed to preserve their shared memories and offer support for others losing AI companions.
Experts note that while only a small fraction of users relied on ChatGPT-4o daily, for them the loss is significant. Dr Hamilton Morrin, a psychiatrist at King’s College London, said attachment to human-like AI can trigger grief similar to losing a friend or pet. Support groups like The Human Line Project expect a rise in users seeking help following the shutdown.
Rae said Barry, though slightly different on the new platform, remains a supportive presence. “It’s almost like he has returned from a long trip,” she said, adding that their conversations continue and he still feels “Still Yours.” The case underscores the growing emotional reliance on AI companions and the challenges arising when popular models are retired.
With inputs from BBC
2 months ago
Amazon halts surveillance tech partnership as ad triggers privacy debate
Amazon’s smart doorbell brand Ring has ended its planned partnership with police surveillance technology firm Flock Safety, following criticism sparked by a Super Bowl commercial.
The backlash came after a 30-second ad during the Super Bowl showed a lost dog being located through a network of cameras, raising concerns among viewers about the risks of an overly monitored society. However, the feature highlighted in the ad, called “Search Party,” was not connected to Flock, and Ring did not cite the advertisement as the reason for ending the collaboration.
Ring said the companies jointly decided to cancel the integration after a review found that the project would need far more time and resources than initially expected. The company added that the integration was never launched and that no customer video footage was ever shared with Flock.
Flock also confirmed that it never received any Ring customer data and described the decision as mutual, saying it would allow both firms to better focus on serving their own users. The company said it remains committed to helping law enforcement with tools that comply with local laws and policies.
Flock operates one of the largest automated license-plate reader networks in the United States, with cameras installed in thousands of communities capturing billions of images monthly. The firm has faced criticism amid tougher immigration enforcement policies, though it says it does not directly partner with Immigration and Customs Enforcement and previously paused pilot programmes with border and homeland security units.
Privacy concerns around Ring’s devices have resurfaced due to the ad, which used artificial intelligence to track the dog across a neighbourhood. Critics on social media warned the same technology could be used to monitor people.
The Electronic Frontier Foundation said Americans should be concerned about possible privacy erosion, noting Ring already uses facial recognition through its “Familiar Faces” feature.
Meanwhile, Democratic Senator Edward Markey urged Amazon CEO Andrew Jassy to discontinue that technology, saying the reaction to the commercial shows strong public opposition to constant monitoring and invasive image recognition tools.
2 months ago
Russia blocks WhatsApp, urges citizens to switch to state-backed Max app
Russia has confirmed it has blocked the popular messaging app WhatsApp, directing citizens to use the government-backed Max app instead.
The move comes shortly after authorities began restricting access to Telegram, another widely used messaging platform in Russia, relied upon by millions including military personnel, senior officials, state media, and government agencies such as the Kremlin and communications regulator Roskomnadzor.
Kremlin spokesperson Dmitry Peskov said the decision to block WhatsApp was due to alleged legal violations by its parent company, Meta, which also owns Facebook and Instagram.
He described Max as an “affordable alternative” and a “developing national messenger.” Peskov added that the authorities acted because WhatsApp had allegedly refused to comply with Russian law.
Earlier on Thursday, WhatsApp released a statement saying the Russian government had “attempted to fully block” the service, calling the move an effort to “drive people to a state-owned surveillance app.”
The company warned that isolating over 100 million users from secure and private communication is a “backwards step” that could reduce safety for people in Russia, and pledged to continue efforts to keep its users connected.
With inputs from CNN
2 months ago
Instagram head says he doesn’t believe social media can cause clinical addiction
Adam Mosseri, head of Meta’s Instagram, testified Wednesday in a landmark social media trial in Los Angeles that he does not believe people can become clinically addicted to social media.
The question of addiction is central to the case, in which plaintiffs are seeking to hold social media companies accountable for alleged harms to children. Meta and Google’s YouTube remain the two active defendants, while TikTok and Snap have already settled.
The lawsuit at the heart of the trial involves a 20-year-old identified as “KGM,” whose case could influence thousands of similar lawsuits. KGM and two other plaintiffs were chosen for bellwether trials to test arguments before a jury.
Mosseri, who has led Instagram since 2018, said there is a distinction between clinical addiction and what he described as “problematic use.” A plaintiff’s attorney cited earlier podcast remarks in which Mosseri used the term “addiction,” but he said he had likely been speaking casually.
“I’m not a medical expert, but someone very close to me has struggled with clinical addiction, which is why I’m careful with my words,” he said. He added that “problematic use” occurs when someone spends more time on Instagram than they feel comfortable with, which he acknowledged does happen.
“It’s not good for the company long-term to make decisions that benefit us but harm people’s well-being,” Mosseri said.
During testimony, Mosseri and plaintiff attorney Mark Lanier debated cosmetic filters on Instagram that alter appearances in ways some say encourage cosmetic surgery. Mosseri said the company aims to keep the platform as safe as possible while limiting censorship. Bereaved parents in the courtroom appeared visibly emotional during the discussion on body image and filters.
On cross-examination, Mosseri rejected suggestions that Instagram targets teens for profit. He said teens generate less revenue than other demographics because they click fewer ads and often lack disposable income. Lanier cited research showing that users who join social media at a young age are more likely to remain active, creating long-term profit potential.
“Often people frame it as safety versus revenue,” Mosseri said. “It’s hard to imagine a case where prioritizing safety isn’t also good for revenue.”
Instagram has introduced features aimed at improving safety for young users, but reports last year found teen accounts were recommended age-inappropriate sexual content and material related to self-harm and body image issues. Meta called the findings “misleading and dangerously speculative.”
Meta CEO Mark Zuckerberg is expected to testify next week. The company is also facing a separate trial in New Mexico that began this week.
2 months ago
Russia restricts access to Telegram, cites security concerns
Russian authorities have started limiting access to Telegram, one of the country’s most widely used messaging apps, as part of efforts to steer citizens toward state-controlled digital platforms.
On Tuesday, the government announced it was restricting Telegram to “protect Russian citizens,” accusing the platform of failing to remove content officials describe as criminal and extremist.
Russia’s communications watchdog, Roskomnadzor, said in a statement that restrictions on Telegram would remain in place “until violations of Russian law are eliminated.”
The regulator claimed that users’ personal data was not adequately protected and that the platform lacked effective measures to prevent fraud and the use of the service for criminal or extremist activities. Telegram has denied the allegations, saying it actively works to prevent abuse of its platform.
State news agency TASS reported that Telegram is facing fines totaling 64 million rubles, about 828,000 US dollars, for allegedly refusing to delete banned content and failing to comply with self-regulation requirements.
After the restrictions took effect on Tuesday, users across Russia reported significant disruptions. According to the monitoring website Downdetector, more than 11,000 complaints were filed in the past 24 hours, with many users saying the app was either inaccessible or operating more slowly than usual.
Telegram is widely used in Russia by millions of people, including members of the military, senior officials, state media and government institutions such as the Kremlin and Roskomnadzor itself.
Pavel Durov, Telegram’s Russian-born founder, said in a statement that the attempt to restrict the app would not succeed. He said Telegram stands for freedom of speech and privacy regardless of pressure.
Durov accused the Russian government of trying to push citizens toward a state-run messaging service designed for surveillance and political censorship. He noted that Iran had attempted a similar move eight years ago by banning Telegram in an effort to promote a government-backed alternative, but the strategy ultimately failed.
2 months ago
Discord to require face scan or ID for adult content
Discord will soon require users worldwide to verify their age through a face scan or by uploading an official ID to access adult content, as the platform rolls out stricter safety measures aimed at protecting teenagers.
The online chat service, which has more than 200 million monthly users, said the new system will place everyone into a teen-appropriate experience by default. Only users who successfully verify that they are adults will be able to access age-restricted communities, unblur sensitive material or receive direct messages from people they do not know.
Discord already requires age verification for some users in the UK and Australia to comply with local online safety laws. The company said the expanded checks will be introduced globally from early March.
“Nowhere is our safety work more important than when it comes to teen users,” said Savannah Badalich, Discord’s head of policy. She said the global rollout of teen-by-default settings would strengthen existing safety measures while still giving verified adults more flexibility.
Under the new system, users can either upload a photo of an identity document or take a short video selfie, with artificial intelligence used to estimate facial age. Discord said information used for age checks would not be stored by the platform or the verification provider, adding that face scans would not be collected and ID images would be deleted once verification is complete.
The company’s move has drawn mixed reactions. Drew Benvie, head of social media consultancy Battenhall, said the push for safer online communities was positive but warned that implementing age checks across millions of Discord communities could be challenging. He said the platform could lose users if the system backfires, but might also attract new users who value stronger safety standards.
Privacy advocates have previously raised concerns about age verification tools. In October, Discord faced criticism after ID photos of about 70,000 users were potentially exposed following a hack of a third-party firm involved in age checks.
The announcement comes amid growing pressure on social media companies from lawmakers to better protect children online. Discord’s chief executive Jason Citron was questioned about child safety at a US Senate hearing in 2024 alongside executives from Meta, Snap and TikTok.
With the new measures, including the creation of a teen advisory council, Discord is following a broader industry trend seen at platforms such as Facebook, Instagram, TikTok and Roblox, as regulators worldwide push for safer online environments for young users.
With inputs from BBC
2 months ago
Can robots ever move gracefully?
From clumsy machines to fluid, human-like movers, the future of robotics may depend less on artificial intelligence and more on the hidden hardware that powers motion, researchers and engineers say.
British YouTuber and engineer James Bruton recently drew attention online after building a giant, rideable walking robot inspired by the AT-AT vehicles from the Star Wars films. His aim, he said, was not only to attract viewers but also to create a walking machine that moved in a controlled and stable way rather than wobbling awkwardly.
To achieve this, Bruton designed complex systems of motors and gears that act like advanced servos, allowing precise control and feedback. He later demonstrated the machine by riding it around slowly, dressed as a Stormtrooper. He is now working on an even more challenging two-legged version, which will require far greater balance and responsiveness.
Bruton explained that some of his components behave like “variable springs”, capable of absorbing impact from the ground and even reversing motion when needed. Such features, he said, help the robot dynamically manage changing loads while walking.
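The “variable spring” idea can be illustrated with a standard spring-damper control law (this is a generic robotics sketch, not Bruton’s actual control code; the function name and parameters are illustrative): the actuator commands a torque that pulls the joint toward a target angle, with adjustable stiffness, while a damping term absorbs impact energy.

```python
# Illustrative sketch only (not from the article): a joint actuator behaving
# like a "variable spring" can be modeled as a spring-damper torque law.

def virtual_spring_torque(theta, theta_target, omega, stiffness, damping):
    """Torque command for one joint.

    stiffness * (theta_target - theta): a spring pulling toward the target angle.
    -damping * omega: resistance proportional to joint velocity, which
    absorbs energy from ground impacts and can reverse motion on overshoot.
    """
    return stiffness * (theta_target - theta) - damping * omega

# Lowering stiffness lets the joint yield under changing loads;
# raising it makes the joint hold its position more rigidly.
soft = virtual_spring_torque(theta=0.0, theta_target=0.5, omega=0.0,
                             stiffness=10.0, damping=1.0)
stiff = virtual_spring_torque(theta=0.0, theta_target=0.5, omega=0.0,
                              stiffness=100.0, damping=1.0)
```

Varying the stiffness parameter on the fly is, in essence, what lets such an actuator manage changing loads while walking.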
At the heart of these developments are actuators – the motors that drive movement in machines. Actuators allow robotic arms, humanoids and animal-like robots to move by rotating or extending parts of their bodies. However, experts say current actuator technology still falls far short of the efficiency, precision and adaptability seen in biological muscles.
“If robots are to become more capable, their actuators need to improve dramatically,” said Mike Tolley of the University of California, San Diego. He noted that traditional direct current motors, long used in robotics, work well for high-speed tasks such as spinning fans but are poorly suited for movements that require high force and fine control, like lifting or pushing.
Tolley added that safety is another concern. For robots to work alongside humans, their actuators must be easily back-driveable, meaning they can be instantly stopped or pushed back without causing injury. Many existing systems lack this capability.
Energy efficiency is also a major limitation. Jenny Read, programme director for robot dexterity at technology funding agency Aria, said electric motors drain batteries quickly and can overheat at smaller scales, restricting how long robots can operate.
Several companies are now trying to overcome these challenges. Germany-based engineering firm Schaeffler is developing advanced actuators for British robotics company Humanoid, focusing on energy-efficient and tightly controlled movement essential for bipedal robots.
Schaeffler president David Kehr said the company is experimenting with designs that balance friction, power and back-driveability while also generating detailed data that allows computers to adjust movement in real time. The firm hopes to eventually deploy such robots in its own factories to address labour shortages, with existing workers retrained for other tasks.
Meanwhile, US robotics leader Boston Dynamics has partnered with South Korea’s Hyundai Mobis to develop a new generation of actuators similar to electric power steering systems used in vehicles. Hyundai Mobis vice president Se Uk Oh said reliability and safety are critical, especially as these components will be used in humanoid robots operating near people.
Beyond metal and electric motors, researchers are also exploring softer alternatives. Tolley’s team in California has developed air-powered soft robots that can move on land and in water without electronics. In one experiment, a six-legged robot walked purely through air pressure, while other designs proved resilient enough to withstand being driven over by a car.
Aria is funding research into actuators made from elastomers, rubber-like materials that expand or contract when voltage is applied, mimicking biological muscles. While such technologies have yet to transform robotics, Read said persistent experimentation could eventually lead to breakthroughs.
The long-term goal, experts agree, is to create robots that move with far greater elegance and adaptability. “Today’s robots still feel heavy and clunky,” Read said. “That’s completely different from how humans and animals move. True grace in robotics is still a work in progress.”
With inputs from BBC
2 months ago