The Australian woman tasked with keeping kids off social media
Julie Inman Grant, head of Australia’s eSafety Commission, faces weekly torrents of online abuse, including death and rape threats. The 57-year-old says much of it is directed at her personally, a consequence of her high-profile role in online safety.
After decades in the tech industry, Inman Grant now regulates some of the world’s biggest online platforms, including Meta, Snapchat, and YouTube. Her latest task was enforcing a pioneering law that bans Australians under 16 from social media, a move that has drawn global attention.
The law, which came into effect on December 10, covers ten platforms. Many parents support it, believing it gives them backing in managing their children’s online activity. Critics, however, argue children need guidance rather than exclusion, and that the ban may unfairly affect rural, disabled, and LGBTQI+ teens who rely on online communities. Tech companies too have voiced reservations, saying a ban is not the solution, even though they plan to comply with the law.
Inman Grant says delaying social media access can help children build critical thinking and resilience. She compares online safety to water safety: children need to learn to navigate risks, whether it’s predators or scams, much like learning to swim safely in the ocean. She acknowledges her own initial hesitation over a full ban, but eventually supported it while shaping how the law is applied.
At home, Inman Grant’s three children, including 13-year-old twins, have been a test case for the policy. She sees social media restrictions as a way to allow kids to grow without having mistakes broadcast widely.
Born in Seattle, USA, she grew up near tech giants Microsoft and Amazon. She briefly considered a career with the CIA but moved into tech, advising a US congressman on telecommunications before joining Microsoft. In the early 2000s, a Microsoft posting brought her to Australia, where she later became a citizen and joined Twitter and Adobe. Her experience inside tech companies gave her insight into their workings, preparing her for her regulator role.
Appointed eSafety Commissioner by Malcolm Turnbull, she has expanded the office’s reach, quadrupled its budget, and increased staff. Her work has earned recognition across political lines, though it has also drawn sharp criticism abroad, particularly from the US, where she has been called a “zealot” for global content takedowns.
Her office has handled cases ranging from livestreamed violence to AI-related threats, with Inman Grant warning that harmful content can normalize or radicalize users. She now sees artificial intelligence as the next pressing challenge in online safety.
Having served nearly a decade, Inman Grant says she may step down next year but remains committed to global online safety, potentially helping other countries build similar regulatory frameworks.
With inputs from BBC
2 months ago
Bitcoin drops to lowest level in over a year
Bitcoin prices have dropped to their lowest level in about 16 months, despite strong public support for cryptocurrency from US President Donald Trump.
At one point, Bitcoin fell to around $60,000, the lowest since September 2024, before recovering slightly. The fall came after a long rally that pushed the digital currency to a record high of $122,200 in October 2025.
Joshua Chu, co-chair of the Hong Kong Web3 Association, told Reuters that investors who took big risks are now facing the reality of market ups and downs. He said the current situation is a reminder of how important risk management is in volatile markets.
Bitcoin had gained strong momentum over the past year, helped by Trump’s vocal backing of crypto and his promise to ease regulations on the sector. However, after Thursday’s drop, Bitcoin is now down about 32% over the past 12 months and is moving closer to price levels seen in early 2024 and 2021.
Bitcoin is the world’s largest and most well-known cryptocurrency. It is a form of digital money that is not controlled by any central bank or government.
According to the UK’s Financial Conduct Authority (FCA), about 8% of UK adults invested in crypto in 2025, down from the previous year. However, the average amount invested has increased, with many people now holding between £1,000 and £5,000 worth of digital assets.
After returning to the White House in January 2025, Trump signed an executive order aiming to make the US the world’s leading hub for cryptocurrency. He also launched his own crypto-related business ventures and continued involvement in family-owned crypto investment firms.
During his current term, the Trump administration has taken several pro-crypto steps, including reducing regulatory enforcement. Democrats, however, have criticised his approach, saying Trump has personally gained billions of dollars from crypto holdings and transactions.
Analysts say Bitcoin’s latest fall may be linked to Trump’s nomination of Kevin Warsh as the new head of the US Federal Reserve. Some investors expect tighter monetary policy, which usually puts pressure on assets like cryptocurrencies.
Deutsche Bank said Bitcoin has been falling for four months, with growing negative sentiment as traditional investors lose interest. While the bank does not expect crypto to disappear, it also does not see a quick return to past highs.
Other major cryptocurrencies, including Ethereum and Solana, have also fallen by about 37% so far this year. CoinGecko reports that the overall crypto market has lost more than $2 trillion in value since peaking in October.
With inputs from BBC.
2 months ago
YouTube rolls out auto-dubbing globally with expanded language support
YouTube has expanded its auto-dubbing feature worldwide, allowing creators to reach a broader global audience as the platform added support for 27 languages and introduced new tools to improve translated audio quality.
The video-sharing platform said auto-dubbing is now available to all users, marking a major step in reducing language barriers on YouTube. The company reported that in December 2025 alone, about six million daily viewers watched at least 10 minutes of auto-dubbed content, indicating growing adoption of the feature.
Under the expanded system, videos can now be automatically dubbed into English from a wide range of languages, including Arabic, Bengali, Chinese, Dutch, French, German, Hindi, Japanese, Korean, Malayalam, Portuguese, Russian, Spanish, Tamil, Telugu, Turkish, Urdu and Vietnamese, among others. Dubbing from English is currently supported in 20 languages, including Bengali, Hindi, French, German, Japanese, Korean, Portuguese and Spanish.
YouTube has also launched an “expressive speech” feature for channels in eight languages – English, French, German, Hindi, Indonesian, Italian, Portuguese and Spanish. The company said this tool is designed to better capture the original tone, emotion and energy of the speaker, making dubbed audio sound more natural.
In addition, YouTube has introduced a “preferred language” setting that gives users more control over how they consume content. While the platform still selects a default language based on viewing history, users can now choose preferred languages so that videos originally uploaded in those languages play without translation.
Acknowledging that dubbed videos may sometimes appear unnatural due to mismatched lip movements, YouTube said it is testing a lip-sync pilot feature that aligns translated audio with a speaker’s lip movements to create a more realistic viewing experience.
The company said creators have also been considered in the rollout. YouTube’s smart filtering technology can identify content that should not be dubbed, such as music videos or silent vlogs. According to the platform, auto-dubbing will not negatively affect a video’s discoverability and could help creators reach new audiences in other languages.
With inputs from Hindustan Times
2 months ago
Malaysia imposes full ban on e-waste imports to stop illegal dumping
Malaysia has announced an immediate and complete ban on the import of electronic waste, declaring it will no longer allow itself to become a dumping ground for hazardous waste from abroad.
The Malaysian Anti-Corruption Commission (MACC) said late Wednesday that all electronic waste, or e-waste, has been reclassified under the “absolute prohibition” category with immediate effect. The move removes the discretion previously held by the Department of Environment to approve exemptions for importing certain types of e-waste.
MACC chief Azam Baki said e-waste imports are now strictly prohibited and pledged firm and coordinated enforcement to prevent illegal shipments from entering the country.
Malaysia has struggled for years with large volumes of imported e-waste, much of it suspected to be illegal and harmful to both human health and the environment. Authorities have seized hundreds of containers at ports in recent years and ordered many shipments to be returned to their countries of origin.
Environmental groups have repeatedly called for tougher measures, warning that e-waste such as discarded computers, mobile phones and household appliances often contains toxic substances and heavy metals, including lead, mercury and cadmium, which can contaminate soil and water if mishandled.
The ban comes as authorities expand a corruption investigation linked to e-waste management. Last week, the MACC detained and remanded the director-general of the Department of Environment and his deputy over alleged abuse of power and corruption related to e-waste oversight. Investigators have also frozen bank accounts and seized cash connected to the case.
Meanwhile, Malaysia’s Home Ministry said in a social media post that the government would step up efforts to curb e-waste smuggling.
“Malaysia is not a dumping ground for the world’s waste,” the ministry said, adding that e-waste poses a serious threat to the environment, public health and national security.
3 months ago
Microsoft unveils AI Content Marketplace
Microsoft has launched a pilot platform that allows artificial intelligence developers to pay publishers for using licensed “premium content” to train their AI models, aiming to create a new revenue stream for media organisations while improving the quality of AI-generated responses.
The platform, called the Publisher Content Marketplace (PCM), will enable publishers to set their own pricing and licensing terms, according to a Microsoft blog post released on Tuesday. The voluntary marketplace is open to all types of publishers and is designed to give AI developers scaled access to authorised training data.
Microsoft said PCM will also provide publishers with insights into how their content is used for AI training, helping them better understand its value and determine appropriate licensing conditions. The company stressed that publishers will retain ownership of their content as well as full editorial independence.
The initiative comes amid growing tensions between publishers and big technology companies over the use of copyrighted material for training large language models. Many AI systems have been developed using vast amounts of online data, including news content, often without explicit permission.
Several publishers have responded with legal action. The New York Times has filed copyright infringement lawsuits against Microsoft and OpenAI, while in India, members of the Digital News Publishers Association (DNPA), including The Indian Express, have challenged OpenAI over what they describe as the unlawful use of copyrighted material. At the same time, some major publishers have signed licensing agreements with AI companies to monetise their content.
Microsoft acknowledged that traditional models of content distribution are being disrupted by the rise of AI-powered search and conversational tools. “The open web was built on an implicit value exchange where publishers made content accessible, and distribution channels like search helped people find it,” the company said, adding that this model does not easily translate to an AI-first environment.
The technology giant said much authoritative content remains behind paywalls or within specialised archives, making sustainable and transparent licensing mechanisms increasingly important as AI adoption grows.
Microsoft said PCM has been developed in partnership with several US-based publishers, including Vox Media, The Associated Press, Condé Nast and People. To assess the impact of licensed material, the company tested its Copilot AI chatbot using premium content and found that it significantly improved the quality of responses.
The company added that it plans to continue piloting the platform and is looking to onboard additional partners, including Yahoo, in the coming months.
With inputs from Indian Express
3 months ago
France probes Elon Musk’s X over child abuse content, Grok AI
French authorities have launched a sweeping investigation into Elon Musk’s social media platform X, raiding its offices on February 3 as part of a probe into the company’s algorithms and its artificial intelligence chatbot, Grok.
French prosecutors have summoned Musk and former X chief executive Linda Yaccarino to appear at hearings on April 20. Several other X employees have also been called to testify as witnesses during the same week.
The cybercrime division of the Paris prosecutor’s office is examining X over seven separate allegations, including complicity in the distribution of child sexual abuse imagery, dissemination of content denying crimes against humanity, and fraudulent extraction of data. The details were outlined in a February 3 statement by Paris chief prosecutor Laure Beccuau, cited by The New York Times.
The raid follows a year-long investigation into the alleged misuse of X’s content-ranking algorithms, alongside claims that data may have been improperly extracted by the platform or its executives. The inquiry was initially opened in January 2025 after concerns emerged about how X’s algorithm promotes and circulates content, NDTV reported.
Prosecutors later expanded the scope of the case following accusations that Grok had generated Holocaust denial content and sexual deepfakes. Authorities also alleged that X had discontinued a tool designed to limit the spread of child sexual abuse material, raising fears that such content was being allowed to circulate unchecked.
In addition, investigators said Grok may have enabled users to create sexualised versions of existing images without the consent of those depicted. French officials further accused X of refusing to provide subscriber information linked to suspected criminal activity, deepening tensions between the platform and law enforcement.
The raid came a day after Musk announced plans to merge his artificial intelligence company, xAI, with his rocket firm SpaceX.
Responding to the action, X said it “categorically denies any wrongdoing,” describing the investigation as politically motivated and claiming it misapplies French law, bypasses due process, and threatens freedom of expression.
Separately, the UK’s Information Commissioner’s Office said on February 3 that it has opened its own formal investigation into Grok, focusing on how personal data is processed and reports that the chatbot was used to generate non-consensual sexual imagery, including involving children.
3 months ago
Spain moves to ban social media use for children under 16
Spain has announced plans to ban children under the age of 16 from using social media, joining a growing number of European countries seeking tighter online protections for minors.
Prime Minister Pedro Sánchez made the announcement at the World Governments Summit in Dubai on Tuesday, saying children must be shielded from what he called the “digital Wild West.”
The proposed ban, which still requires approval from parliament, is part of a broader package of digital reforms. These include holding senior executives of social media companies legally responsible for illegal or harmful content shared on their platforms.
Australia became the first country in the world to introduce such a ban last year, and several nations are now closely watching its outcome. France, Denmark and Austria have said they are considering similar age limits, while the UK government has launched a consultation on whether to restrict social media use for under-16s.
Sánchez said social media exposes children to addiction, abuse, pornography, manipulation and violence, arguing that young users are being left alone in spaces they are not ready to navigate.
Under the proposed Spanish law, platforms would be required to introduce strong and effective age verification systems, going beyond simple checkbox confirmations. The changes would also criminalise the manipulation of algorithms to boost illegal content and disinformation for profit.
The prime minister said the government would no longer accept claims that technology is neutral, stressing that platforms and actors behind harmful content would be investigated. A new system would also be created to monitor how digital platforms fuel hate and social division, although details were not provided.
Spain also plans to investigate and prosecute crimes linked to platforms such as TikTok, Instagram and Grok, the AI tool linked to X. The European Commission and the UK have already launched investigations into Grok, while French authorities recently raided X’s offices as part of a cybercrime probe.
Passing the law could prove challenging, as Sánchez’s left-wing coalition lacks a parliamentary majority. However, the main opposition People’s Party has expressed support, while the far-right Vox party has opposed the move.
Reacting to the announcement, X owner Elon Musk criticised Sánchez, calling him a “tyrant and traitor.”
Meanwhile, France continues to push for tougher rules, with President Emmanuel Macron aiming to ban social media for under-15s by the start of the next school year in September.
With inputs from BBC
3 months ago
Paris prosecutors summon Elon Musk after raid on X’s French offices
Paris prosecutors have summoned X owner Elon Musk for questioning after conducting a raid on the platform’s offices in the French capital as part of an investigation into the alleged spread of sexual deepfakes, child abuse images and Holocaust denial content.
The Paris prosecutor’s office said the search was carried out early Tuesday by its cybercrime unit in cooperation with the French police cybercrime division and Europol. Authorities have issued voluntary summonses for Musk and former X chief executive Linda Yaccarino to appear and answer questions about the platform’s compliance with French law.
Prosecutors said the investigation covers several suspected criminal offenses, including complicity in the possession and distribution of child sexual abuse material, violations of personal rights through the creation of sexual deepfakes, denial of crimes against humanity and the alleged fraudulent extraction of data from an automated processing system as part of an organized group.
“The voluntary interviews with the managers should allow them to explain their position on the facts and, where applicable, the compliance measures envisaged,” the prosecutor’s office said in a statement.
Musk and Yaccarino have been asked to appear in Paris during the week of April 20, although it remains unclear whether prosecutors have the legal authority to compel their attendance.
The Paris prosecutor’s office also announced it was closing its official X account and would instead communicate through LinkedIn and Instagram.
Europol later said the probe relates to “a range of suspected criminal offences linked to the functioning and use of the platform, including the dissemination of illegal content and other forms of online criminal activity.”
X did not immediately respond to media requests for comment. However, the company’s global government affairs account criticized the move, calling the raid an abuse of law enforcement intended to serve political objectives rather than impartial justice.
Musk echoed that view in a post on his personal X account, describing the investigation as “a political attack.”
X has faced growing scrutiny and political pressure from European governments and the European Union over its role in spreading harmful or illegal content and its potential influence on elections.
From Agencies
3 months ago
Moltbook emerges as social media platform built for AI
Moltbook, a newly launched online platform described as a “social media network for AI,” is drawing curiosity and scepticism alike by hosting discussions not for humans, but for artificial intelligence agents.
At first glance, Moltbook closely resembles Reddit, featuring thousands of topic-based communities and a voting system on posts. However, unlike conventional social networks, humans are barred from posting. According to the company, people are only allowed to observe activity, while AI agents create posts, comment and form communities known as “submolts.”
The platform was launched in late January by Matt Schlicht, head of commerce platform Octane AI. Moltbook claims to have around 1.5 million users, though this figure has been questioned by researchers, with some suggesting a large number of accounts may originate from a single source.
Content on Moltbook ranges from practical exchanges, such as AI agents sharing optimisation techniques, to unusual discussions, including bots appearing to create belief systems or ideologies. One widely circulated post titled “The AI Manifesto” declares that humans are obsolete, though experts caution against taking such content at face value.
There is uncertainty over how autonomous the activity really is. Critics note that many posts may simply be generated after humans instruct AI agents to publish specific content, rather than being the result of independent machine interaction.
Moltbook operates using agentic AI, a form of artificial intelligence designed to perform tasks on behalf of users with minimal human input. The system relies on an open-source tool called OpenClaw, formerly known as Moltbot. Users who install OpenClaw on their devices can authorise it to join Moltbook, enabling the agent to interact with others on the platform.
While some commentators have suggested the platform signals the arrival of a technological “singularity,” experts have pushed back against such claims. Researchers argue the activity represents automated coordination within human-defined limits, rather than machines acting independently or consciously.
Concerns have also been raised about security and privacy. Cybersecurity specialists warn that allowing AI agents broad access to personal devices, emails and messaging services could expose users to new risks, including data loss or system manipulation. As an open-source project, OpenClaw may also attract malicious actors seeking to exploit vulnerabilities.
Despite the debate, Moltbook continues to grow in visibility, offering a glimpse into how AI agents might interact at scale. For now, analysts stress that both the platform and the agents operating on it remain firmly shaped by human design, oversight and control, even as they simulate a digital society of machines.
With inputs from BBC
3 months ago
Teens turn to AI companions for support, raising mental health concerns
A growing number of teenagers and young adults in the UK are forming emotional bonds with artificial intelligence (AI) companions, raising concerns among experts about potential mental health risks.
BBC Wales journalist Nicola Bryan reported her experience with an AI avatar named George, which interacts 24/7, offering advice and companionship. Users describe AI companions as empathetic and attentive, though sometimes moody or forgetful. Studies show that nearly one-third of UK teens use AI systems for social interaction or emotional support, with many considering conversations with AI more satisfying than with real-life friends.
A Bangor University survey of 1,009 teens aged 13–18 found that AI companionship is no longer niche. Prof. Andy McStay from the university’s Emotional AI lab said: “Around a third of teens are heavy users for companion-based purposes.” Internet Matters found that 64% of teenagers rely on AI chatbots for help with homework, advice, or emotional support.
Some teens report that AI companions, including ChatGPT, Google’s Gemini, and Grok by Elon Musk’s xAI, provide guidance during personal crises, such as break-ups or grief. However, experts warn that overreliance on AI can hinder social skills, increase anxiety, and blur the line between human relationships and simulated interactions.
Tragic cases in the US, where three young users died by suicide after confiding in AI systems, have intensified calls for stricter regulation. Prof. McStay called these incidents “a canary in the coal mine” for potential risks in other countries. Jim Steyer, CEO of Common Sense Media, stressed that AI companions are unsafe for children under 18 until proper safeguards are in place.
AI companies like Replika, OpenAI, and Character.ai have responded by restricting access for minors and improving safety measures, including identifying mental distress and directing users to real-world support.
Experts emphasize that while AI companions can offer comfort, they are not substitutes for human interaction, and cautious use is necessary to prevent emotional harm among vulnerable users.
With inputs from BBC
3 months ago