OpenAI secures $110 billion funding led by Amazon
ChatGPT developer OpenAI has secured $110 billion in fresh funding from a group of major technology firms led by Amazon, pushing the company’s pre-money valuation to $730 billion.
OpenAI co-founder and CEO Sam Altman said on Friday that Amazon has committed $50 billion to the round, while Nvidia and SoftBank will each invest $30 billion. He added that more investors may join as the funding process continues.
Amazon will initially invest $15 billion, with the remaining $35 billion to be released over the coming months under certain conditions.
Altman said the partnerships will help expand OpenAI’s global reach, strengthen infrastructure and improve financial stability, enabling the company to bring advanced AI tools to more users and businesses worldwide.
He noted that ChatGPT now has over 900 million weekly active users and more than 50 million paying subscribers. According to Altman, AI is entering a new stage where cutting-edge research is rapidly turning into everyday tools used at a global scale.
As part of a multiyear deal, OpenAI and Amazon will introduce new AI capabilities for enterprises, with Amazon Web Services becoming the exclusive third-party cloud provider for OpenAI Frontier. The two firms will also expand their existing agreement by $100 billion over eight years.
OpenAI said it is also deepening ties with Nvidia, while stressing that its long-standing partnership with Microsoft remains unchanged and central to its strategy.
2 months ago
Social media took over my childhood, young woman tells court in historic trial
A young woman battling social media giants took the stand Thursday to testify about her experience using the platforms while growing up, saying she was on social media “all day long” as a child.
The now-20-year-old, who has been identified in court documents as KGM, says her early use of social media addicted her to the technology and exacerbated her depression and suicidal thoughts. Meta and YouTube are the two remaining defendants in the case; TikTok and Snap have already settled.
The case, along with two others, has been selected as a bellwether trial, meaning its outcome could impact how thousands of similar lawsuits against social media companies are likely to play out.
KGM, or Kaley, as her lawyers have called her during the trial, started using YouTube at age 6 and Instagram at age 9.
A turbulent home life
Kaley took the stand wearing a pink floral dress and a beige cardigan and said she was “very nervous” after her attorney, Mark Lanier, asked how she was doing Thursday morning.
Lanier displayed childhood photos of Kaley and her family and asked about positive memories from her upbringing in a quiet cul-de-sac in Chico, California. She spoke of themed birthday parties, trips to Six Flags and her mom’s consistent efforts to make her childhood special.
Still, Kaley’s relationship with her mother was challenging at times. Kaley said most of their arguments were over the use of her phone.
Both the defendants and the plaintiff have pointed to a turbulent home life for Kaley. Her attorneys say she was preyed upon as a vulnerable user, but attorneys representing Meta and Google-owned YouTube have argued Kaley turned to their platforms as a coping mechanism or a means of escaping her mental health struggles.
When asked about claims that her mother had hit her, abused her and neglected her, Kaley said “she wasn’t perfect, but she was trying her best,” and clarified that she doesn’t think she would label her mother’s past actions as abuse or neglect today.
But later Thursday, during cross-examination, Kaley did agree that her mother was physically and emotionally abusive during the period when she was self-harming, around the sixth grade.
Kaley, who works as a personal shopper at Walmart, lives with her mother in the home she grew up in.
Notifications gave her a ‘rush’
As a child, Kaley set up multiple accounts on both Instagram and YouTube so she could like and comment on her own posts. She said she would also “buy” likes through a platform where she could like other people’s photos and get a slew of likes in return. “It made me look popular,” she said.
Kaley was asked specifically about the features the plaintiffs argue are deliberately designed to be addictive, including notifications. Those notifications on both Instagram and YouTube gave her a “rush,” she said. She would receive them throughout the day and would go to the bathroom during school to check them — something she still does.
Kaley said while she uses YouTube less often now, she believes she was previously addicted to it. “Anytime I tried to set limits for myself, it wouldn’t work and I just couldn’t get off,” she said.
Filters on Instagram, specifically those that could change a person’s cosmetic appearance, have loomed large in the case and were a constant fixture of Kaley’s use. Lanier and his colleagues unfurled a nearly 35-foot-long canvas banner with photos Kaley has posted on Instagram. She said “almost all” of the photos had a filter on them.
The jury was also shown Instagram posts and YouTube videos Kaley posted as a child and young teen. One video showed her saying she was “crying tears of joy” after surpassing 100 YouTube subscribers — but then she quickly turned to her looks, apologizing for her “ugly appearance.”
“I look so fat in this shirt,” the young Kaley says in the video.
Kaley said she did not experience the negative feelings associated with her body dysmorphia diagnosis before she began using social media and filters.
Meta focuses on plaintiff's home life, contradictory statements
Meta has argued that Kaley faced significant challenges before she ever used social media. The company's lawyer, Paul Schmidt, said earlier this month that the core question in the case is whether the platforms were a substantial factor in Kaley's mental health struggles.
Meta attorney Phyllis Jones took a polite, respectful tone in her cross-examination Thursday, acknowledging that it could be uncomfortable for Kaley to speak about her private life in front of a room of strangers. Jones then zeroed in on Kaley’s home life.
Jones pulled up text exchanges and posts Kaley had made on Instagram about her mental health and her relationship with her mother and played videos Kaley took of her mother yelling at her.
On nearly 20 occasions during the Meta cross-examination, Jones asked Kaley to look at the transcript of her 2025 deposition, which contradicted some of the responses she gave during her testimony. Many of those questions were about whether a specific action by her family members or a specific experience had affected her mental health; on Thursday, Kaley said they either had no impact or did not significantly contribute to her anxiety and depression. Her deposition from about a year ago often said the opposite.
“I tried to answer the questions to the best of my ability, but I may have misspoke at times,” Kaley said of her deposition.
Jones confirmed with Kaley that no doctor or mental health care provider had ever diagnosed her with a social media addiction, nor had she been treated for an addiction to Instagram or told by a provider to limit her Instagram use. Kaley said she never raised concerns about overuse or addiction with providers because she felt they would tell her to get off the platforms entirely, which she didn’t want.
Therapist: Social media and sense of self ‘were closely related’
Victoria Burke, a therapist who worked with Kaley in 2019, testified on Wednesday that Kaley’s social media use and her sense of self “were closely related,” adding that what was happening on the platforms could “make or break her mood.”
An attorney for Meta pored over Burke's notes from her sessions with Kaley in a cross-examination that lasted about three hours. He highlighted Kaley's negative experiences with in-person bullying, other school-based sources of stress and anxiety, and issues with her family. Mentions of social media in the notes were mostly limited to Kaley saying she didn't feel she had a place at home, at school or among her peers, but did feel she had a place to be seen on social media.
Burke treated Kaley for about six months, roughly seven years ago.
The case is expected to continue for several weeks, and the verdict the jury reaches could shape the outcome of a slew of similar lawsuits against social media companies. Meta is also facing a separate trial in New Mexico.
2 months ago
New Instagram feature warns parents if teens search suicide-linked terms often
Instagram will begin notifying parents if their children repeatedly search for terms linked to suicide or self-harm, the social media platform said Thursday. The alerts will only reach parents enrolled in Instagram’s parental supervision program.
The company said it already blocks such content from appearing in teen accounts’ search results and directs users to helplines. Alerts will be sent via email, text, WhatsApp, or through the parent’s Instagram account, depending on the contact information available. “Our goal is to empower parents to step in if their teen’s searches suggest they may need support,” Meta said in a blog post, adding that notifications will be carefully managed to avoid overuse, which could reduce their effectiveness.
The announcement comes as Meta faces two ongoing trials over alleged harms to children. In Los Angeles, a trial examines whether Meta’s platforms intentionally addict and harm minors, while a New Mexico trial considers whether the company failed to protect children from sexual exploitation. Thousands of families, along with school districts and government entities, have sued Meta and other social media firms, claiming their platforms are designed to be addictive and expose children to content that may contribute to depression, eating disorders, and suicide.
Meta executives, including CEO Mark Zuckerberg, have denied that their platforms cause addiction. During questioning in Los Angeles, Zuckerberg said the scientific evidence does not prove social media harms mental health.
Meta also said it is developing similar notifications to alert parents if their teens engage in certain conversations with Instagram’s artificial intelligence tools related to suicide or self-harm. “This is important work, and we’ll have more to share in the coming months,” the company added.
2 months ago
eBay settles lawsuit over harassment campaign targeting online publishers
A Massachusetts couple who were targeted with threats and disturbing anonymous deliveries by former employees of eBay Inc. have reached a settlement with the company, bringing an end to a civil lawsuit linked to one of the most unusual corporate harassment cases in recent years.
David and Ina Steiner, residents of Natick, filed the lawsuit in federal court in 2021, accusing eBay of orchestrating a campaign to intimidate and silence them because of their reporting on the company. The couple run EcommerceBytes, an online newsletter covering the e-commerce industry.
They alleged that former eBay employees subjected them to cyberstalking, death threats, in-person surveillance and a series of anonymous deliveries meant to frighten and harass them. Those deliveries included live insects, a funeral wreath and other unsettling items sent to their home.
The settlement terms were not made public. US District Judge Patti Saris formally dismissed the case on Wednesday after the parties reached an agreement, while allowing either side to reopen the case within 60 days if the settlement is not finalized.
An eBay spokesperson declined further comment, referring instead to the court order. When the lawsuit was first filed, the company acknowledged that the actions of the former employees were wrong and said it would take appropriate steps to address what the Steiners experienced.
In 2020, federal prosecutors charged seven former eBay employees, accusing them of carrying out a coordinated harassment campaign after becoming angry over coverage published by the couple. Most of those charged later pleaded guilty to offenses including conspiracy and cyberstalking and were sentenced to prison terms or home confinement.
In a related development, eBay agreed in 2024 to pay a $3 million criminal penalty under a deferred prosecution agreement with federal authorities.
Prosecutors have said the harassment also included sending explicit magazines in David Steiner’s name to a neighbor and plotting to secretly place a GPS tracking device on the couple’s vehicle, underscoring the severity of the campaign that ultimately led to criminal convictions and the civil settlement.
2 months ago
Discord delays global age verification after backlash
Discord has postponed the worldwide rollout of its age verification system following strong criticism from users who raised concerns about privacy and data security.
In a blog post published Tuesday, Discord Chief Technology Officer and co-founder Stanislav Vishnevskiy acknowledged the company had “missed the mark,” announcing that the global expansion of age checks will now be pushed back to the second half of 2026.
Vishnevskiy said many users fear the policy is another attempt by a major tech firm to collect more personal data. He said he understood that skepticism, noting it reflects broader mistrust of the technology industry, but insisted Discord is not seeking to introduce intrusive data practices.
The platform, which says it has more than 200 million active users, will still comply with legal requirements for age verification in certain jurisdictions. However, broader implementation will wait until the company revises the policy it first outlined in early February.
Earlier this month, Discord said it planned to introduce age verification in March, potentially requiring face scans or government ID uploads for users whose age could not be confirmed. The proposal sparked immediate backlash, particularly after users pointed to a recent data breach involving a third-party provider that exposed government ID images of up to 70,000 Discord users.
Addressing the breach, Vishnevskiy said Discord no longer works with the vendor involved and claimed the company applies strict privacy and security standards when selecting partners. He said all vendors undergo security and privacy reviews, with contractual limits on data use and tight data retention rules. Information submitted for age verification, he said, is stored only for the shortest time possible and often deleted immediately.
One vendor that failed to meet Discord’s requirements was Persona, which Discord tested on a limited basis in the United Kingdom in January. Vishnevskiy said Persona could not meet Discord’s standard that facial age estimation be carried out entirely on a user’s device, ensuring biometric data never leaves the phone.
Discord later distanced itself from Persona amid online criticism of the company’s links to Founders Fund, run by Peter Thiel, a co-founder of Palantir Technologies. Palantir has faced scrutiny over its government surveillance work, including a recent agreement with U.S. Immigration and Customs Enforcement.
Vishnevskiy said that for more than 90 percent of users, the new system would not change their experience. Discord, he explained, can already estimate most users’ ages using account-level signals such as account history, payment methods, server participation and general activity patterns. He stressed that the company does not read messages or analyze conversations to determine age.
For users whose age cannot be identified through these signals, Discord is now developing additional verification options beyond facial scans and ID uploads, including credit card checks. The company said it will fully develop and expand these alternatives before introducing the revised system.
Users who decline to verify their age will still be able to keep their accounts, contacts, messages and voice chats, but they will lose access to age-restricted content and be unable to modify certain safety settings aimed at protecting teenagers.
Discord also promised greater transparency, saying it will publish a detailed explanation of how its automated age estimation works and provide public documentation of all verification vendors and their data practices.
2 months ago
Jersey warns over AI-generated image threats
Residents in the Channel Island of Jersey have been warned about the risks of artificial intelligence (AI) generated images, as authorities call for stronger regulation of online image technologies.
Jersey Information Commissioner Paul Vane said the rapid spread of AI tools that can produce realistic images and videos of individuals without consent poses a serious threat. He emphasized the need to educate communities, especially young people, on ethical and safe AI use.
Vane joined officials from over 60 jurisdictions in a joint statement highlighting concerns about AI’s misuse. The warning follows a police investigation in Jersey into a social media account that posted inappropriate AI-generated content targeting school staff.
Authorities, working with counterparts in Guernsey, also issued guidance to protect individuals, including limiting the personal information shared online, using AI platforms cautiously, and educating children on responsible AI use.
The move reflects growing global concern over AI-generated deepfakes and their potential to harm individuals, a warning relevant to digital users worldwide, including in Bangladesh, where online safety awareness is increasingly critical.
With inputs from BBC
2 months ago
How simple tricks can fool AI chatbots
A senior journalist has shown how easy it is to manipulate popular AI tools like ChatGPT and Google into spreading false information, raising fresh concerns about online safety and trust.
Writing for the BBC, technology reporter Thomas Germain said he managed to get leading AI systems to repeat obvious lies within minutes by publishing a single fake blog post online.
To prove his point, Germain posted a false article on his personal website claiming he was the world’s best hot-dog-eating tech journalist. Within a day, AI tools including Google’s AI search features and ChatGPT were repeating the claim as fact when users asked related questions.
Experts warn the same trick is now being used on serious topics such as health, finance and consumer choices, which could lead people to make harmful decisions.
“It is very easy to trick AI chatbots,” said Lily Ray, an SEO expert at a marketing firm. She warned that AI companies are moving faster than their ability to control accuracy.
Google said its systems are designed to block spam and that it is actively working to stop misuse. OpenAI also said it takes steps to prevent hidden influence on its tools and reminds users that AI can make mistakes.
However, digital rights groups say the problem is far from solved. Cooper Quintin of the Electronic Frontier Foundation warned that AI systems could be abused to scam users, damage reputations or even cause physical harm.
Researchers say AI tools are especially vulnerable when they search the web for answers, often relying on a small number of sources without clearly warning users. Studies also show people are less likely to check sources when AI summaries appear at the top of search results.
Experts suggest clearer warnings, better source disclosure and stronger safeguards. Until then, users are advised to double-check AI answers, especially on medical, legal or financial matters, and not to accept confident-sounding responses as fact.
With inputs from BBC
2 months ago
Social media can be addictive for adults
Social media addiction has been compared to other addictions, such as gambling, opioids, and smoking.
While experts debate whether social media can truly be classified as addictive, many people find themselves struggling to disconnect from platforms like Instagram, TikTok, and Snapchat. These apps are designed to keep users engaged, as their revenue relies on ad exposure. The endless scrolling, dopamine rush from short videos, and the ego boost from likes and validation can make it feel nearly impossible to stop. For some, the lure of "rage-bait" content, negative news, and online arguments adds to the pull.
While much of the focus around social media addiction has centered on children, adults are also at risk of overusing these platforms to the point where it impacts their daily lives.
Recognizing the Signs of Compulsive Use
Dr. Anna Lembke, a psychiatrist and addiction expert at Stanford University, defines addiction as the persistent, compulsive use of a substance or behavior despite harm to oneself or others. She pointed out that the 24/7, easily accessible nature of social media contributes to its addictive qualities.
Some experts question whether "addiction" is the right term, arguing that addiction requires identifiable symptoms like uncontrollable urges and withdrawal. Social media addiction isn’t formally recognized in the Diagnostic and Statistical Manual of Mental Disorders (DSM), which psychiatrists use to diagnose conditions. The lack of consensus stems from uncertainty over whether social media use is a stand-alone problem or linked to other mental health issues.
Still, experts agree that excessive social media use can be harmful. Dr. Laurel Williams, a psychiatry professor at Baylor College of Medicine, emphasizes that the key question is how a person feels about their social media use. If it causes them to neglect hobbies, work, or relationships, or if it leaves them feeling drained or anxious, it's likely problematic.
In other words, if social media is interfering with other parts of your life—like skipping responsibilities, missing out on enjoyable activities, or feeling bad about your usage—it may be time to reconsider how much time you spend online.
Strategies to Cut Back on Social Media Use
To reduce social media use, it helps to first understand how apps and ads work to keep you hooked. Williams suggests treating social media as a marketing tool designed to get you to engage, reminding yourself that not everything you see is necessarily true or essential. She recommends diversifying your sources of information to avoid becoming overly influenced by one platform.
Ian A. Anderson, a postdoctoral scholar at Caltech, suggests small changes to reduce social media temptation. Moving apps around on your phone, disabling notifications, or not bringing your phone to certain places (like the bedroom) can help break the habit.
Both iPhones and Android devices offer built-in screen time controls that can help limit app usage. On iPhones, users can set Downtime during which phone activity is restricted and can block certain app categories or specific apps entirely.
However, these limits can be bypassed easily, as they are more of a nudge than a strict barrier. If you try to access a limited app, you’re prompted with a choice to add more time or ignore the reminder.
If Light Measures Don’t Work
If simple measures aren’t enough, more drastic steps might be needed. Some people find that switching their phone to grayscale makes it less enticing. Both iPhones and Android devices have settings that let you adjust color filters or activate bedtime modes.
For an even more intense solution, some people opt for a simpler phone, like an old flip phone, to curb social media use.
Startups like Unpluq and Brick offer physical barriers to accessing apps. These products, such as a yellow tag that must be scanned to unlock apps, introduce a small but tangible obstacle between you and the apps you’re trying to avoid.
If you need even more distance, you could consider a phone lockbox. These are often used by parents to lock their children’s phones at night, but adults can use them too for an added layer of separation.
Seeking Professional Help
If nothing seems to help, it may be worth exploring whether deeper issues, like anxiety, depression, loneliness, or low self-esteem, are driving your social media use. Therapy, which is increasingly accessible, could be a valuable option.
Dr. Williams suggests enlisting the help of friends to make cutting back a group effort and creating more phone-free spaces in your life to reduce the temptation to check your devices constantly.
2 months ago
OpenAI considered alerting police before deadly Canadian school shooting
ChatGPT-maker OpenAI said Friday that it had considered alerting Canadian authorities last year about a user who, months later, carried out one of the country’s deadliest school shootings.
In June 2025, OpenAI identified the account of 18-year-old Jesse Van Rootselaar through its abuse detection system for “furtherance of violent activities.” The company said it debated whether to report the account to the Royal Canadian Mounted Police (RCMP) but decided at the time that the activity did not meet the threshold for law enforcement referral. The account was banned that same month for violating OpenAI’s usage policy.
Last week, Van Rootselaar killed eight people in a remote area of British Columbia before dying from a self-inflicted gunshot wound. OpenAI explained that its threshold for notifying authorities involves cases with an imminent and credible risk of serious physical harm, which it did not find in this instance. The Wall Street Journal first reported the company’s revelation.
Following the shootings, OpenAI said its employees contacted the RCMP, providing information about Van Rootselaar and his use of ChatGPT. “Our thoughts are with everyone affected by the Tumbler Ridge tragedy. We proactively reached out to the Royal Canadian Mounted Police and will continue to support their investigation,” an OpenAI spokesperson said.
RCMP Staff Sgt. Kris Clark confirmed OpenAI’s post-incident contact and said investigators are reviewing Van Rootselaar’s electronic devices, social media, and online activity. Authorities said he first killed his mother and stepbrother at home before attacking the school. He had prior mental health contacts with police, but his motive remains unclear.
The small town of Tumbler Ridge, home to 2,700 people, is located over 1,000 kilometers northeast of Vancouver, near the Alberta border. The victims included a 39-year-old teaching assistant and five students aged 12 to 13. The attack was Canada’s deadliest since the 2020 Nova Scotia rampage, in which a gunman killed 13 people and set fires that claimed nine more lives.
2 months ago
Microsoft admits Copilot error exposed some confidential emails
Microsoft has acknowledged a technical error that caused its artificial intelligence work assistant, Microsoft 365 Copilot Chat, to access and summarise some users’ confidential emails by mistake.
Microsoft has promoted Copilot Chat as a secure AI tool for workplaces. However, the company said a recent issue allowed the tool to surface content from some enterprise users’ Outlook draft and sent email folders, including messages marked as confidential.
The tech giant said it has now rolled out a global update to fix the problem and insisted that the error did not allow users to see information they were not already authorised to access.
“We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labelled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop,” a Microsoft spokesperson said. The spokesperson added that while access controls and data protection policies remained in place, the behaviour did not match the intended Copilot experience.
Copilot Chat works inside Microsoft programs such as Outlook and Teams, allowing users to ask questions or generate summaries of messages and chats.
The issue was first reported by technology news site Bleeping Computer, which cited a Microsoft service alert stating that emails with confidential labels were being incorrectly processed by Copilot Chat. According to the alert, a work tab within Copilot summarised emails stored in users’ draft and sent folders, even when sensitivity labels and data loss prevention policies were in place.
Reports suggest Microsoft became aware of the issue in January. The notice was also shared on a support dashboard for NHS staff in England, where the root cause was described as a code issue. However, the National Health Service said no patient information had been exposed and that the contents of draft or sent emails remained visible only to their creators.
Despite Microsoft’s assurances, experts warned that such incidents highlight the risks of rapidly deploying generative AI tools in workplaces.
Nader Henein, an analyst at Gartner, said mistakes of this kind are difficult to avoid given the fast pace at which new AI features are being released. He said many organisations lack the tools needed to properly manage and govern each new capability.
Cybersecurity expert Professor Alan Woodward of the University of Surrey said the incident underlined the need for AI tools to be private by default and enabled only by choice.
He warned that as AI systems evolve rapidly, unintentional data leakage is likely to occur, even when security safeguards are in place.
With inputs from BBC
2 months ago