Tech
Instagram head says he doesn’t believe social media can cause clinical addiction
Adam Mosseri, head of Meta’s Instagram, testified Wednesday in a landmark social media trial in Los Angeles that he does not believe people can become clinically addicted to social media.
The question of addiction is central to the case, in which plaintiffs are seeking to hold social media companies accountable for alleged harms to children. Meta and Google’s YouTube remain the two active defendants, while TikTok and Snap have already settled.
The lawsuit at the heart of the trial involves a 20-year-old identified as “KGM,” whose case could influence thousands of similar lawsuits. KGM and two other plaintiffs were chosen for bellwether trials to test arguments before a jury.
Mosseri, who has led Instagram since 2018, said there is a distinction between clinical addiction and what he described as “problematic use.” A plaintiff’s attorney cited Mosseri’s earlier podcast remarks using the term “addiction,” but he said he had likely used the term casually.
“I’m not a medical expert, but someone very close to me has struggled with clinical addiction, which is why I’m careful with my words,” he said. He added that “problematic use” occurs when someone spends more time on Instagram than they feel comfortable with, which he acknowledged does happen.
“It’s not good for the company long-term to make decisions that benefit us but harm people’s well-being,” Mosseri said.
During testimony, Mosseri and plaintiff attorney Mark Lanier debated cosmetic filters on Instagram that alter appearances in ways some say encourage cosmetic surgery. Mosseri said the company aims to keep the platform as safe as possible while limiting censorship. Bereaved parents in the courtroom appeared visibly emotional during the discussion on body image and filters.
On cross-examination, Mosseri rejected suggestions that Instagram targets teens for profit. He said teens generate less revenue than other demographics because they click fewer ads and often lack disposable income. Lanier cited research showing that users who join social media at a young age are more likely to remain active, creating long-term profit potential.
“Often people frame it as safety versus revenue,” Mosseri said. “It’s hard to imagine a case where prioritizing safety isn’t also good for revenue.”
Instagram has introduced features aimed at improving safety for young users, but reports last year found teen accounts were recommended age-inappropriate sexual content and material related to self-harm and body image issues. Meta called the findings “misleading and dangerously speculative.”
Meta CEO Mark Zuckerberg is expected to testify next week. The company is also facing a separate trial in New Mexico that began this week.
6 hours ago
Russia restricts access to Telegram, cites security concerns
Russian authorities have started limiting access to Telegram, one of the country’s most widely used messaging apps, as part of efforts to steer citizens toward state-controlled digital platforms.
On Tuesday, the government announced it was restricting Telegram to “protect Russian citizens,” accusing the platform of failing to remove content officials describe as criminal and extremist.
Russia’s communications watchdog, Roskomnadzor, said in a statement that restrictions on Telegram would remain in place “until violations of Russian law are eliminated.”
The regulator claimed that users’ personal data was not adequately protected and that the platform lacked effective measures to prevent fraud and the use of the service for criminal or extremist activities. Telegram has denied the allegations, saying it actively works to prevent abuse of its platform.
State news agency TASS reported that Telegram is facing fines totaling 64 million rubles, about 828,000 US dollars, for allegedly refusing to delete banned content and failing to comply with self-regulation requirements.
After the restrictions took effect on Tuesday, users across Russia reported significant disruptions. According to the monitoring website Downdetector, more than 11,000 complaints were filed in the past 24 hours, with many users saying the app was either inaccessible or operating more slowly than usual.
Telegram is widely used in Russia by millions of people, including members of the military, senior officials, state media and government institutions such as the Kremlin and Roskomnadzor itself.
Pavel Durov, Telegram’s Russian-born founder, said in a statement that the attempt to restrict the app would not succeed. He said Telegram stands for freedom of speech and privacy regardless of pressure.
Durov accused the Russian government of trying to push citizens toward a state-run messaging service designed for surveillance and political censorship. He noted that Iran had attempted a similar move eight years ago by banning Telegram in an effort to promote a government-backed alternative, but the strategy ultimately failed.
1 day ago
Discord to require face scan or ID for adult content
Discord will soon require users worldwide to verify their age through a face scan or by uploading an official ID to access adult content, as the platform rolls out stricter safety measures aimed at protecting teenagers.
The online chat service, which has more than 200 million monthly users, said the new system will place everyone into a teen-appropriate experience by default. Only users who successfully verify that they are adults will be able to access age-restricted communities, unblur sensitive material or receive direct messages from people they do not know.
Discord already requires age verification for some users in the UK and Australia to comply with local online safety laws. The company said the expanded checks will be introduced globally from early March.
“Nowhere is our safety work more important than when it comes to teen users,” said Savannah Badalich, Discord’s head of policy. She said the global rollout of teen-by-default settings would strengthen existing safety measures while still giving verified adults more flexibility.
Under the new system, users can either upload a photo of an identity document or take a short video selfie, with artificial intelligence used to estimate facial age. Discord said information used for age checks would not be stored by the platform or the verification provider, adding that face scans would not be collected and ID images would be deleted once verification is complete.
The company’s move has drawn mixed reactions. Drew Benvie, head of social media consultancy Battenhall, said the push for safer online communities was positive but warned that implementing age checks across millions of Discord communities could be challenging. He said the platform could lose users if the system backfires, but might also attract new users who value stronger safety standards.
Privacy advocates have previously raised concerns about age verification tools. In October, Discord faced criticism after ID photos of about 70,000 users were potentially exposed following a hack of a third-party firm involved in age checks.
The announcement comes amid growing pressure on social media companies from lawmakers to better protect children online. Discord’s chief executive Jason Citron was questioned about child safety at a US Senate hearing in 2024 alongside executives from Meta, Snap and TikTok.
With the new measures, including the creation of a teen advisory council, Discord is following a broader industry trend seen at platforms such as Facebook, Instagram, TikTok and Roblox, as regulators worldwide push for safer online environments for young users.
With inputs from BBC
2 days ago
Can robots ever move gracefully?
From clumsy machines to fluid, human-like movers, the future of robotics may depend less on artificial intelligence and more on the hidden hardware that powers motion, researchers and engineers say.
British YouTuber and engineer James Bruton recently drew attention online after building a giant, rideable walking robot inspired by the AT-AT vehicles from the Star Wars films. His aim, he said, was not only to attract viewers but also to create a walking machine that moved in a controlled and stable way rather than wobbling awkwardly.
To achieve this, Bruton designed complex systems of motors and gears that act like advanced servos, allowing precise control and feedback. He later demonstrated the machine by riding it around slowly, dressed as a Stormtrooper. He is now working on an even more challenging two-legged version, which will require far greater balance and responsiveness.
Bruton explained that some of his components behave like “variable springs”, capable of absorbing impact from the ground and even reversing motion when needed. Such features, he said, help the robot dynamically manage changing loads while walking.
At the heart of these developments are actuators – the motors that drive movement in machines. Actuators allow robotic arms, humanoids and animal-like robots to move by rotating or extending parts of their bodies. However, experts say current actuator technology still falls far short of the efficiency, precision and adaptability seen in biological muscles.
“If robots are to become more capable, their actuators need to improve dramatically,” said Mike Tolley of the University of California, San Diego. He noted that traditional direct current motors, long used in robotics, work well for high-speed tasks such as spinning fans but are poorly suited for movements that require high force and fine control, like lifting or pushing.
Tolley added that safety is another concern. For robots to work alongside humans, their actuators must be easily back-driveable, meaning they can be instantly stopped or pushed back without causing injury. Many existing systems lack this capability.
Energy efficiency is also a major limitation. Jenny Read, programme director for robot dexterity at technology funding agency Aria, said electric motors drain batteries quickly and can overheat at smaller scales, restricting how long robots can operate.
Several companies are now trying to overcome these challenges. Germany-based engineering firm Schaeffler is developing advanced actuators for British robotics company Humanoid, focusing on energy-efficient and tightly controlled movement essential for bipedal robots.
Schaeffler president David Kehr said the company is experimenting with designs that balance friction, power and back-driveability while also generating detailed data that allows computers to adjust movement in real time. The firm hopes to eventually deploy such robots in its own factories to address labour shortages, with existing workers retrained for other tasks.
Meanwhile, US robotics leader Boston Dynamics has partnered with South Korea’s Hyundai Mobis to develop a new generation of actuators similar to electric power steering systems used in vehicles. Hyundai Mobis vice president Se Uk Oh said reliability and safety are critical, especially as these components will be used in humanoid robots operating near people.
Beyond metal and electric motors, researchers are also exploring softer alternatives. Tolley’s team in California has developed air-powered soft robots that can move on land and in water without electronics. In one experiment, a six-legged robot walked purely through air pressure, while other designs proved resilient enough to withstand being driven over by a car.
Aria is funding research into actuators made from elastomers, rubber-like materials that expand or contract when voltage is applied, mimicking biological muscles. While such technologies have yet to transform robotics, Read said persistent experimentation could eventually lead to breakthroughs.
The long-term goal, experts agree, is to create robots that move with far greater elegance and adaptability. “Today’s robots still feel heavy and clunky,” Read said. “That’s completely different from how humans and animals move. True grace in robotics is still a work in progress.”
With inputs from BBC
3 days ago
The Australian woman tasked with keeping kids off social media
Julie Inman Grant, head of Australia’s eSafety Commission, faces weekly torrents of online abuse, including death and rape threats. The 57-year-old says much of it is directed at her personally, a consequence of her high-profile role in online safety.
After decades in the tech industry, Inman Grant now regulates some of the world’s biggest online platforms, including Meta, Snapchat, and YouTube. Her latest task was enforcing a pioneering law that bans Australians under 16 from social media, a move that has drawn global attention.
The law, which came into effect on December 10, covers ten platforms. Many parents support it, believing it gives them backing in managing their children’s online activity. Critics, however, argue children need guidance rather than exclusion, and that the ban may unfairly affect rural, disabled, and LGBTQI+ teens who rely on online communities. Tech companies too have voiced reservations, saying a ban is not the solution, even though they plan to comply with the law.
Inman Grant says delaying social media access can help children build critical thinking and resilience. She compares online safety to water safety: children need to learn to navigate risks, whether it’s predators or scams, much like learning to swim safely in the ocean. She acknowledges her own initial hesitation over a full ban, but eventually supported it while shaping how the law is applied.
At home, Inman Grant’s three children, including 13-year-old twins, have been a test case for the policy. She sees social media restrictions as a way to allow kids to grow without having mistakes broadcast widely.
Born in Seattle, USA, she grew up near tech giants Microsoft and Amazon. She briefly considered a career with the CIA but moved into tech, advising a US congressman on telecommunications before joining Microsoft. In the early 2000s, a Microsoft posting brought her to Australia, where she later became a citizen and joined Twitter and Adobe. Her experience inside tech companies gave her insight into their workings, preparing her for her regulator role.
Appointed eSafety Commissioner by Malcolm Turnbull, she has expanded the office’s reach, quadrupled its budget, and increased staff. Her work has earned recognition across political lines, though it has also drawn sharp criticism abroad, particularly from the US, where she has been called a “zealot” for global content takedowns.
Her office has handled cases ranging from livestreamed violence to AI-related threats, with Inman Grant warning that harmful content can normalize or radicalize users. She now sees artificial intelligence as the next pressing challenge in online safety.
Having served nearly a decade, Inman Grant says she may step down next year but remains committed to global online safety, potentially helping other countries build similar regulatory frameworks.
With inputs from BBC
4 days ago
Bitcoin drops to lowest level in over a year
Bitcoin prices have dropped to their lowest level in about 16 months, despite strong public support for cryptocurrency from US President Donald Trump.
At one point, Bitcoin fell to around $60,000, the lowest since September 2024, before recovering slightly. The fall came after a long rally that pushed the digital currency to a record high of $122,200 in October 2025.
Joshua Chu, co-chair of the Hong Kong Web3 Association, told Reuters that investors who took big risks are now facing the reality of market ups and downs. He said the current situation is a reminder of how important risk management is in volatile markets.
Bitcoin had gained strong momentum over the past year, helped by Trump’s vocal backing of crypto and his promise to ease regulations on the sector. However, after Thursday’s drop, Bitcoin is now down about 32% over the past 12 months and is moving closer to price levels seen in early 2024 and 2021.
Bitcoin is the world’s largest and most well-known cryptocurrency. It is a form of digital money that is not controlled by any central bank or government.
According to the UK’s Financial Conduct Authority (FCA), about 8% of UK adults invested in crypto in 2025, down from the previous year. However, the average amount invested has increased, with many people now holding between £1,000 and £5,000 worth of digital assets.
After returning to the White House in January 2025, Trump signed an executive order aiming to make the US the world’s leading hub for cryptocurrency. He also launched his own crypto-related business ventures and continued involvement in family-owned crypto investment firms.
During his current term, the Trump administration has taken several pro-crypto steps, including reducing regulatory enforcement. Democrats, however, have criticised his approach, saying Trump has personally gained billions of dollars from crypto holdings and transactions.
Analysts say Bitcoin’s latest fall may be linked to Trump’s nomination of Kevin Warsh as the new head of the US Federal Reserve. Some investors expect tighter monetary policy, which usually puts pressure on assets like cryptocurrencies.
Deutsche Bank said Bitcoin has been falling for four months, with growing negative sentiment as traditional investors lose interest. While the bank does not expect crypto to disappear, it also does not see a quick return to past highs.
Other major cryptocurrencies, including Ethereum and Solana, have also fallen by about 37% so far this year. CoinGecko reports that the overall crypto market has lost more than $2 trillion in value since peaking in October.
With inputs from BBC
5 days ago
YouTube rolls out auto-dubbing globally with expanded language support
YouTube has expanded its auto-dubbing feature worldwide, allowing creators to reach a broader global audience as the platform added support for 27 languages and introduced new tools to improve translated audio quality.
The video-sharing platform said auto-dubbing is now available to all users, marking a major step in reducing language barriers on YouTube. The company reported that in December 2025 alone, about six million daily viewers watched at least 10 minutes of auto-dubbed content, indicating growing adoption of the feature.
Under the expanded system, videos can now be automatically dubbed into English from a wide range of languages, including Arabic, Bengali, Chinese, Dutch, French, German, Hindi, Japanese, Korean, Malayalam, Portuguese, Russian, Spanish, Tamil, Telugu, Turkish, Urdu and Vietnamese, among others. Dubbing from English is currently supported in 20 languages, including Bengali, Hindi, French, German, Japanese, Korean, Portuguese and Spanish.
YouTube has also launched an “expressive speech” feature for channels in eight languages – English, French, German, Hindi, Indonesian, Italian, Portuguese and Spanish. The company said this tool is designed to better capture the original tone, emotion and energy of the speaker, making dubbed audio sound more natural.
In addition, YouTube has introduced a “preferred language” setting that gives users more control over how they consume content. While the platform still chooses a default language based on viewing history, users can now specify preferred languages so that videos originally uploaded in those languages will play without translation.
Acknowledging that dubbed videos may sometimes appear unnatural due to mismatched lip movements, YouTube said it is testing a lip-sync pilot feature that aligns translated audio with a speaker’s lip movements to create a more realistic viewing experience.
The company said creators have also been considered in the rollout. YouTube’s smart filtering technology can identify content that should not be dubbed, such as music videos or silent vlogs. According to the platform, auto-dubbing will not negatively affect a video’s discoverability and could help creators reach new audiences in other languages.
With inputs from Hindustan Times
6 days ago
Malaysia imposes full ban on e-waste imports to stop illegal dumping
Malaysia has announced an immediate and complete ban on the import of electronic waste, declaring it will no longer allow itself to become a dumping ground for hazardous waste from abroad.
The Malaysian Anti-Corruption Commission (MACC) said late Wednesday that all electronic waste, or e-waste, has been reclassified under the “absolute prohibition” category with immediate effect. The move removes the discretion previously held by the Department of Environment to approve exemptions for importing certain types of e-waste.
MACC chief Azam Baki said e-waste imports are now strictly prohibited and pledged firm and coordinated enforcement to prevent illegal shipments from entering the country.
Malaysia has struggled for years with large volumes of imported e-waste, much of it suspected to be illegal and harmful to both human health and the environment. Authorities have seized hundreds of containers at ports in recent years and ordered many shipments to be returned to their countries of origin.
Environmental groups have repeatedly called for tougher measures, warning that e-waste such as discarded computers, mobile phones and household appliances often contains toxic substances and heavy metals, including lead, mercury and cadmium, which can contaminate soil and water if mishandled.
The ban comes as authorities expand a corruption investigation linked to e-waste management. Last week, the MACC detained and remanded the director-general of the Department of Environment and his deputy over alleged abuse of power and corruption related to e-waste oversight. Investigators have also frozen bank accounts and seized cash connected to the case.
Meanwhile, Malaysia’s Home Ministry said in a social media post that the government would step up efforts to curb e-waste smuggling.
“Malaysia is not a dumping ground for the world’s waste,” the ministry said, adding that e-waste poses a serious threat to the environment, public health and national security.
7 days ago
Microsoft unveils AI Content Marketplace
Microsoft has launched a pilot platform that allows artificial intelligence developers to pay publishers for using licensed “premium content” to train their AI models, aiming to create a new revenue stream for media organisations while improving the quality of AI-generated responses.
The platform, called the Publisher Content Marketplace (PCM), will enable publishers to set their own pricing and licensing terms, according to a Microsoft blog post released on Tuesday. The voluntary marketplace is open to all types of publishers and is designed to give AI developers scaled access to authorised training data.
Microsoft said PCM will also provide publishers with insights into how their content is used for AI training, helping them better understand its value and determine appropriate licensing conditions. The company stressed that publishers will retain ownership of their content as well as full editorial independence.
The initiative comes amid growing tensions between publishers and big technology companies over the use of copyrighted material for training large language models. Many AI systems have been developed using vast amounts of online data, including news content, often without explicit permission.
Several publishers have responded with legal action. The New York Times has filed copyright infringement lawsuits against Microsoft and OpenAI, while in India, members of the Digital News Publishers Association (DNPA), including The Indian Express, have challenged OpenAI over what they describe as the unlawful use of copyrighted material. At the same time, some major publishers have signed licensing agreements with AI companies to monetise their content.
Microsoft acknowledged that traditional models of content distribution are being disrupted by the rise of AI-powered search and conversational tools. “The open web was built on an implicit value exchange where publishers made content accessible, and distribution channels like search helped people find it,” the company said, adding that this model does not easily translate to an AI-first environment.
The technology giant said much authoritative content remains behind paywalls or within specialised archives, making sustainable and transparent licensing mechanisms increasingly important as AI adoption grows.
Microsoft said PCM has been developed in partnership with several US-based publishers, including Vox Media, The Associated Press, Condé Nast and People. To assess the impact of licensed material, the company tested its Copilot AI chatbot using premium content and found that it significantly improved the quality of responses.
The company added that it plans to continue piloting the platform and is looking to onboard additional partners, including Yahoo, in the coming months.
With inputs from Indian Express
7 days ago
France probes Elon Musk’s X over child abuse content, Grok AI
French authorities have launched a sweeping investigation into Elon Musk’s social media platform X, raiding its offices on February 3 as part of a probe into the company’s algorithms and its artificial intelligence chatbot, Grok.
French prosecutors have summoned Musk and former X chief executive Linda Yaccarino to appear at hearings on April 20. Several other X employees have also been called to testify as witnesses during the same week.
The cybercrime division of the Paris prosecutor’s office is examining X over seven separate allegations, including complicity in the distribution of child sexual abuse imagery, dissemination of content denying crimes against humanity, and fraudulent extraction of data. The details were outlined in a February 3 statement by Paris chief prosecutor Laure Beccuau, cited by The New York Times.
The raid follows a year-long investigation into the alleged misuse of X’s content-ranking algorithms, alongside claims that data may have been improperly extracted by the platform or its executives. The inquiry was initially opened in January 2025 after concerns emerged about how X’s algorithm promotes and circulates content, NDTV reported.
Prosecutors later expanded the scope of the case following accusations that Grok had generated Holocaust denial content and sexual deepfakes. Authorities also alleged that X had discontinued a tool designed to limit the spread of child sexual abuse material, raising fears that such content was being allowed to circulate unchecked.
In addition, investigators said Grok may have enabled users to create sexualised versions of existing images without the consent of those depicted. French officials further accused X of refusing to provide subscriber information linked to suspected criminal activity, deepening tensions between the platform and law enforcement.
The raid came a day after Musk announced plans to merge his artificial intelligence company, xAI, with his rocket firm SpaceX.
Responding to the action, X said it “categorically denies any wrongdoing,” describing the investigation as politically motivated and claiming it misapplies French law, bypasses due process, and threatens freedom of expression.
Separately, the UK’s Information Commissioner’s Office said on February 3 that it has opened its own formal investigation into Grok, focusing on how personal data is processed and reports that the chatbot was used to generate non-consensual sexual imagery, including involving children.
8 days ago