Tech News
OpenAI unveils GPT-5.4 with stronger reasoning, coding and computer-use abilities
OpenAI has launched GPT-5.4, its newest frontier artificial intelligence model, introducing major upgrades in reasoning, coding and automated task execution.
The company said the model combines several of its recent advancements into a single system and is available in different variants, including GPT-5.4 Thinking and GPT-5.4 Pro.
One of the most significant features of GPT-5.4 is its 1 million-token context window, allowing it to analyse very large datasets such as entire codebases or extensive collections of documents more efficiently.
OpenAI also said GPT-5.4 is the first mainline model with built-in computer-use capabilities, enabling AI agents to directly interact with software to complete tasks. This means the system can operate computers by using screenshots, mouse clicks and keyboard commands, allowing it to work across applications and websites and automate complex workflows.
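OpenAI has not published implementation details, but the pattern described, observe a screenshot, decide on an action, act, and repeat, is straightforward to sketch. The illustrative Python loop below uses the real pyautogui library for clicks and typing; the propose_action function is a hypothetical stand-in for the model call, not an actual OpenAI API.

```python
# A minimal sketch of a screenshot-act loop, the pattern behind
# "computer use" agents. propose_action is a hypothetical stand-in
# for the model call; pyautogui performs the clicks and typing.
import pyautogui

def propose_action(screenshot, goal):
    """Hypothetical model call: given the current screen and a goal,
    return an action dict such as {'type': 'click', 'x': ..., 'y': ...},
    {'type': 'type', 'text': ...}, or {'type': 'done'}."""
    raise NotImplementedError("replace with a real model API call")

def run_agent(goal, max_steps=20):
    for _ in range(max_steps):
        shot = pyautogui.screenshot()        # observe the screen
        action = propose_action(shot, goal)  # ask the model what to do next
        if action["type"] == "done":
            break
        elif action["type"] == "click":
            pyautogui.click(action["x"], action["y"])
        elif action["type"] == "type":
            pyautogui.write(action["text"], interval=0.02)
```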
According to the company, the latest model introduces six major improvements: enhanced coding abilities; better image perception and multimodal performance; stronger execution of long-running tasks and multi-step agent workflows; improved token efficiency for tool-heavy workloads; advanced web search and multi-source information synthesis; and more effective document-heavy analytics.
Addressing concerns about inaccuracies often referred to as “hallucinations,” OpenAI said GPT-5.4 is 33% less likely to produce false information compared with earlier models.
The company said the model is designed for professional environments and performs strongly in tasks such as legal analysis, financial modelling, creating presentation slides and writing or debugging code. Developers can also build AI agents capable of planning tasks, carrying them out and adjusting when problems arise.
The release reflects a broader shift in the evolution of AI systems. Early versions of ChatGPT primarily answered questions, while the GPT-4 era enabled more advanced capabilities such as writing essays, code and summaries. With GPT-5, models began to demonstrate stronger reasoning skills, and GPT-5.4 moves further by allowing AI systems to directly perform tasks on computers.
In practical use, GPT-5.4 can operate within common workplace tools such as spreadsheets and document editors. It can analyse financial data in Excel, automatically create dashboards, generate reports from raw datasets and process large legal or contractual documents.
For software development, the model can generate extensive codebases, detect and fix bugs, run automated software tests and even control web browsers through automation tools.
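The article does not name the automation tools involved, but browser control of this kind is commonly built on libraries such as Playwright. A minimal sketch, with hard-coded steps standing in for model-generated ones:

```python
# Illustrative only: driving a browser with Playwright, the kind of
# automation tool the article alludes to. The model's role (choosing
# which steps to run) is omitted; these steps are hard-coded.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com")
    print(page.title())               # inspect the loaded page
    page.screenshot(path="page.png")  # capture the state for review
    browser.close()
```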
OpenAI’s latest release comes amid intensifying competition in the AI sector. Rival company Anthropic, led by Dario Amodei, recently introduced Claude Opus 4.6 and Claude Sonnet 4.6, which have been described as faster and more efficient for everyday enterprise tasks.
While the latest models from OpenAI and Anthropic focus on different strengths, the developments highlight a growing race to create AI systems capable of functioning as practical digital workers.
#From Indian Express
1 day ago
Apple unveils $599 devices targeting budget buyers
Apple has introduced a range of new products, including two devices priced at $599, as part of what CEO Tim Cook described as a “big week” of announcements aimed partly at budget-conscious buyers.
The new lineup was presented during hands-on media events in New York, London and Shanghai on Wednesday. The announcements include the new iPhone 17e, an entry-level laptop called MacBook Neo, updated iPad Air M4 tablets, refreshed monitors and upgraded chips for the company’s high-end laptops. Preorders for the devices began Wednesday.
The announcements come after the company reported record quarterly earnings driven by strong sales of the iPhone 17 series, although Apple has yet to roll out its previously promised artificial intelligence upgrades for Siri.
iPhone 17e
The iPhone 17e is designed for budget buyers and starts at $599, about $200 cheaper than the base iPhone 17. It uses the same A19 chip as the standard model and offers 256GB of storage, double the capacity of the previous 16e version.
The phone features a 48-megapixel camera and a C1X modem that supports faster cellular speeds. It also includes Apple’s Super Retina display, Ceramic Shield 2 protection and MagSafe charging with Qi2 support.
The device will be available in black, white and light pink.
iPad Air update
Apple also introduced an updated iPad Air powered by the M4 chip. While the higher-end iPad Pro uses the newer M5 chip, the Air still provides strong performance for everyday tasks such as streaming, browsing, email and video editing.
The company increased the tablet’s memory from 8GB to 12GB without raising the price. The 11-inch model starts at $599, while the 13-inch version starts at $799, both with 128GB of storage.
MacBook and chip upgrades
Apple upgraded its MacBook Pro laptops with new M5 Pro and M5 Max chips aimed at improving performance and battery efficiency.
The 14-inch MacBook Pro with the M5 Pro chip starts at $2,199, while the 16-inch model starts at $2,699. Both offer 24GB of RAM and 1TB of storage, along with support for Wi-Fi 7 and Bluetooth 6.
The new MacBook Neo, Apple’s most affordable laptop yet, features a 13-inch display, an A18 Pro chip, 256GB storage and two USB-C ports. The base model costs $599, while a 512GB version with Touch ID is priced at $699. Students and educators can get a $100 discount.
Apple also refreshed the MacBook Air with the base M5 chip and doubled storage to 512GB. The 13-inch model starts at $1,099 and the 15-inch version at $1,299.
New monitors
The company also launched two 27-inch 5K monitors: the Studio Display and the higher-end Studio Display XDR. Both feature 5,120×2,880 resolution, 12-megapixel Center Stage cameras, six-speaker systems, two Thunderbolt 5 ports and two USB-C ports.
The Studio Display costs $1,599, while the advanced XDR version, which includes mini-LED backlighting and a 120Hz refresh rate, starts at $3,299.
1 day ago
TikTok rules out end-to-end encryption, citing user safety concerns
TikTok has said it will not introduce end-to-end encryption in direct messages, distancing itself from most major social media rivals and arguing that the feature could reduce user safety.
End-to-end encryption ensures that only the sender and recipient can read a message, making it one of the most secure communication methods available to the public. Platforms such as Facebook, Instagram, Messenger and X have adopted the system, saying it strengthens user privacy.
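To make that guarantee concrete, here is a minimal sketch of the idea using the PyNaCl library, an illustration of public-key end-to-end encryption in general, not any platform's actual implementation: the server relaying the message sees only ciphertext it cannot decrypt.

```python
# A minimal sketch of end-to-end encryption with PyNaCl (libsodium):
# only the holders of the two private keys can read the message, so
# the platform carrying the ciphertext cannot inspect it.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Sender encrypts with their private key and the recipient's public key.
sender_box = Box(alice_key, bob_key.public_key)
ciphertext = sender_box.encrypt(b"meet at noon")

# Only the recipient, holding the matching private key, can decrypt.
receiver_box = Box(bob_key, alice_key.public_key)
assert receiver_box.decrypt(ciphertext) == b"meet at noon"
```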
However, critics argue that such encryption can make it more difficult to monitor and prevent harmful content, as it blocks technology companies and law enforcement agencies from accessing messages when concerns arise.
The debate is further complicated by long-standing allegations that TikTok’s links to the Chinese state could expose user data to risk. The company has repeatedly rejected those claims. Earlier this year, its US operations were separated from its global business following directives from American lawmakers.
In a security briefing at its London office, TikTok told the BBC that it believes end-to-end encryption would prevent police and safety teams from accessing direct messages when necessary. The company said its decision is aimed at protecting users, particularly young people, from online harm, and described the move as a conscious effort to differentiate itself from competitors.
TikTok says it has around 30 million monthly users in the UK and more than one billion worldwide. The platform is headquartered in Los Angeles and Singapore and is owned by Chinese technology firm ByteDance. It has faced ongoing scrutiny over its data protection practices.
Social media analyst Matt Navarra described TikTok’s approach as strategically bold but potentially controversial. He said the company could argue that it is prioritising proactive safety over absolute privacy, especially given concerns about grooming and harassment in direct messages.
At the same time, Navarra noted that the decision could place TikTok at odds with global privacy standards and may heighten concerns among some users about the company’s ownership.
Privacy advocates generally consider end-to-end encryption the strongest safeguard against hacking, corporate surveillance and intrusive state monitoring.
#From BBC
3 days ago
What to know before seeking health advice from an AI chatbot
As hundreds of millions of people turn to artificial intelligence chatbots for advice, tech companies are now rolling out tools designed specifically to answer health-related questions.
In January, OpenAI launched ChatGPT Health, a version of its chatbot that can review users’ medical records, wellness apps and data from wearable devices to respond to health queries. The service is currently available through a waiting list. Rival company Anthropic offers similar features to some users of its Claude chatbot.
Both firms stress that their large language models are not a replacement for doctors and should not be used to diagnose illnesses. Instead, they say the tools can explain complex test results, help users prepare for medical appointments and identify health trends in records and app data.
Experts say chatbots can provide more tailored responses than a standard Google search, especially when users share detailed health information such as age, prescriptions and medical history. “If used responsibly, these tools can offer useful information,” said Dr. Robert Wachter of the University of California, San Francisco. However, he advised users to provide as much relevant detail as possible to improve accuracy.
Doctors warn that AI should never be used during medical emergencies. Symptoms like chest pain, shortness of breath or severe headache require immediate medical attention. Even in non-urgent cases, experts recommend approaching AI-generated advice with caution. Dr. Lloyd Minor, dean of Stanford’s medical school, said major health decisions should not rely solely on chatbot responses.
Privacy is another key concern. Health data shared with AI companies is not protected under the US federal health privacy law known as HIPAA, which applies to doctors and hospitals. While OpenAI and Anthropic say health data is kept separate and not used to train their models, users must actively choose to share their information.
Early studies show mixed results. Research from Oxford University in 2024 found that people using AI chatbots did not make better health decisions than those using online searches. Although chatbots correctly identified medical conditions in written scenarios 95% of the time, they often struggled during real-life interactions.
Experts suggest seeking a second AI opinion or consulting a medical professional for added confidence.
4 days ago
OpenAI secures $110 billion funding led by Amazon
ChatGPT developer OpenAI has secured $110 billion in fresh funding from a group of major technology firms led by Amazon, pushing the company’s pre-money valuation to $730 billion.
OpenAI co-founder and CEO Sam Altman said on Friday that Amazon has committed $50 billion to the round, while Nvidia and SoftBank will each invest $30 billion. He added that more investors may join as the funding process continues.
Amazon will initially invest $15 billion, with the remaining $35 billion to be released over the coming months under certain conditions.
Altman said the partnerships will help expand OpenAI’s global reach, strengthen infrastructure and improve financial stability, enabling the company to bring advanced AI tools to more users and businesses worldwide.
He noted that ChatGPT now has over 900 million weekly active users and more than 50 million paying subscribers. According to Altman, AI is entering a new stage where cutting-edge research is rapidly turning into everyday tools used at a global scale.
As part of a multiyear deal, OpenAI and Amazon will introduce new AI capabilities for enterprises, with Amazon Web Services becoming the exclusive third-party cloud provider for OpenAI Frontier. The two firms will also expand their existing agreement by $100 billion over eight years.
OpenAI said it is also deepening ties with Nvidia, while stressing that its long-standing partnership with Microsoft remains unchanged and central to its strategy.
7 days ago
eBay settles lawsuit over harassment campaign targeting online publishers
A Massachusetts couple who were targeted with threats and disturbing anonymous deliveries by former employees of eBay Inc. have reached a settlement with the company, bringing an end to a civil lawsuit linked to one of the most unusual corporate harassment cases in recent years.
David and Ina Steiner, residents of Natick, filed the lawsuit in federal court in 2021, accusing eBay of orchestrating a campaign to intimidate and silence them because of their reporting on the company. The couple run EcommerceBytes, an online newsletter covering the e-commerce industry.
They alleged that former eBay employees subjected them to cyberstalking, death threats, in-person surveillance and a series of anonymous deliveries meant to frighten and harass them. Those deliveries included live insects, a funeral wreath and other unsettling items sent to their home.
The settlement terms were not made public. US District Judge Patti Saris formally dismissed the case on Wednesday after the parties reached an agreement, while allowing either side to reopen the case within 60 days if the settlement is not finalized.
An eBay spokesperson declined further comment, referring instead to the court order. When the lawsuit was first filed, the company acknowledged that the actions of the former employees were wrong and said it would take appropriate steps to address what the Steiners experienced.
In 2020, federal prosecutors charged seven former eBay employees, accusing them of carrying out a coordinated harassment campaign after becoming angry over coverage published by the couple. Most of those charged later pleaded guilty to offenses including conspiracy and cyberstalking and were sentenced to prison terms or home confinement.
In a related development, eBay agreed in 2024 to pay a $3 million criminal penalty under a deferred prosecution agreement with federal authorities.
Prosecutors have said the harassment also included sending explicit magazines in David Steiner’s name to a neighbor and plotting to secretly place a GPS tracking device on the couple’s vehicle, underscoring the severity of the campaign that ultimately led to criminal convictions and the civil settlement.
9 days ago
Social media can be addictive for adults
Social media addiction has been compared to other addictions, like gambling, opioids, and smoking.
While experts debate whether social media can truly be classified as addictive, many people find themselves struggling to disconnect from platforms like Instagram, TikTok, and Snapchat. These apps are designed to keep users engaged, as their revenue relies on ad exposure. The endless scrolling, dopamine rush from short videos, and the ego boost from likes and validation can make it feel nearly impossible to stop. For some, the lure of "rage-bait" content, negative news, and online arguments adds to the pull.
While much of the focus around social media addiction has centered on children, adults are also at risk of overusing these platforms to the point where it impacts their daily lives.
Recognizing the Signs of Compulsive Use
Dr. Anna Lembke, a psychiatrist and addiction expert at Stanford University, defines addiction as the persistent, compulsive use of a substance or behavior despite harm to oneself or others. She pointed out that the 24/7, easily accessible nature of social media contributes to its addictive qualities.
Some experts question whether "addiction" is the right term, arguing that addiction requires identifiable symptoms like uncontrollable urges and withdrawal. Social media addiction isn’t formally recognized in the Diagnostic and Statistical Manual of Mental Disorders (DSM), which psychiatrists use to diagnose conditions. The lack of consensus stems from uncertainty over whether social media use is a stand-alone problem or linked to other mental health issues.
Still, experts agree that excessive social media use can be harmful. Dr. Laurel Williams, a psychiatry professor at Baylor College of Medicine, emphasizes that the key question is how a person feels about their social media use. If it causes them to neglect hobbies, work, or relationships, or if it leaves them feeling drained or anxious, it's likely problematic.
In other words, if social media is interfering with other parts of your life—like skipping responsibilities, missing out on enjoyable activities, or feeling bad about your usage—it may be time to reconsider how much time you spend online.
Strategies to Cut Back on Social Media Use
To reduce social media use, it helps to first understand how apps and ads work to keep you hooked. Williams suggests treating social media as a marketing tool designed to get you to engage, reminding yourself that not everything you see is necessarily true or essential. She recommends diversifying your sources of information to avoid becoming overly influenced by one platform.
Ian A. Anderson, a postdoctoral scholar at Caltech, suggests small changes to reduce social media temptation. Moving apps around on your phone, disabling notifications, or not bringing your phone to certain places (like the bedroom) can help break the habit.
Both iPhones and Android devices offer built-in screen time controls that can help limit app usage. On iPhones, users can set Downtime, during which phone activity is restricted, and can block certain app categories or specific apps entirely.
However, these limits can be bypassed easily, as they are more of a nudge than a strict barrier. If you try to access a limited app, you’re prompted with a choice to add more time or ignore the reminder.
If Light Measures Don’t Work
If simple measures aren’t enough, more drastic steps might be needed. Some people find that switching their phone to grayscale makes it less enticing. Both iPhones and Android devices have settings that let you adjust color filters or activate bedtime modes.
For an even more intense solution, some people opt for a simpler phone, like an old flip phone, to curb social media use.
Startups like Unpluq and Brick offer physical barriers to accessing apps. These products, such as a yellow tag that must be scanned to unlock apps, introduce a small but tangible obstacle between you and the apps you’re trying to avoid.
If you need even more distance, you could consider a phone lockbox. These are often used by parents to lock their children’s phones at night, but adults can use them too for an added layer of separation.
Seeking Professional Help
If nothing seems to help, it may be worth exploring whether deeper issues—like anxiety, depression, loneliness, or low self-esteem—are driving your social media use. Therapy, which is increasingly accessible, could be a valuable option.
Dr. Williams suggests enlisting the help of friends to make cutting back a group effort, and creating more phone-free spaces in your life to reduce the temptation to check your devices constantly.
13 days ago
OpenAI considered alerting police before deadly Canadian school shooting
ChatGPT-maker OpenAI said Friday that it had considered alerting Canadian authorities last year about a user who, months later, carried out one of the country’s deadliest school shootings.
In June 2025, OpenAI identified the account of 18-year-old Jesse Van Rootselaar through its abuse detection system for “furtherance of violent activities.” The company said it debated whether to report the account to the Royal Canadian Mounted Police (RCMP) but decided at the time that the activity did not meet the threshold for law enforcement referral. The account was banned that same month for violating OpenAI’s usage policy.
Last week, Van Rootselaar killed eight people in a remote area of British Columbia before dying from a self-inflicted gunshot wound. OpenAI explained that its threshold for notifying authorities involves cases with an imminent and credible risk of serious physical harm, which it did not find in this instance. The Wall Street Journal first reported the company’s revelation.
Following the shootings, OpenAI said its employees contacted the RCMP, providing information about Van Rootselaar and his use of ChatGPT. “Our thoughts are with everyone affected by the Tumbler Ridge tragedy. We proactively reached out to the Royal Canadian Mounted Police and will continue to support their investigation,” an OpenAI spokesperson said.
RCMP Staff Sgt. Kris Clark confirmed OpenAI’s post-incident contact and said investigators are reviewing Van Rootselaar’s electronic devices, social media, and online activity. Authorities said he first killed his mother and stepbrother at home before attacking the school. He had prior mental health contacts with police, but his motive remains unclear.
The small town of Tumbler Ridge, home to 2,700 people, is located over 1,000 kilometers northeast of Vancouver, near the Alberta border. The victims included a 39-year-old teaching assistant and five students aged 12 to 13. The attack was Canada’s deadliest since the 2020 Nova Scotia rampage, in which a gunman killed 13 people and set fires that claimed nine more lives.
13 days ago
Microsoft admits Copilot error exposed some confidential emails
Microsoft has acknowledged a technical error that caused its artificial intelligence work assistant, Microsoft 365 Copilot Chat, to access and summarise some users’ confidential emails by mistake.
Microsoft has promoted Copilot Chat as a secure AI tool for workplaces. However, the company said a recent issue allowed the tool to surface content from some enterprise users’ Outlook draft and sent email folders, including messages marked as confidential.
The tech giant said it has now rolled out a global update to fix the problem and insisted that the error did not allow users to see information they were not already authorised to access.
“We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labelled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop,” a Microsoft spokesperson said. The spokesperson added that while access controls and data protection policies remained in place, the behaviour did not match the intended Copilot experience.
Copilot Chat works inside Microsoft programs such as Outlook and Teams, allowing users to ask questions or generate summaries of messages and chats.
The issue was first reported by technology news site Bleeping Computer, which cited a Microsoft service alert stating that emails with confidential labels were being incorrectly processed by Copilot Chat. According to the alert, a work tab within Copilot summarised emails stored in users’ draft and sent folders, even when sensitivity labels and data loss prevention policies were in place.
Reports suggest Microsoft became aware of the issue in January. The notice was also shared on a support dashboard for NHS staff in England, where the root cause was described as a code issue. However, the National Health Service said no patient information had been exposed and that the contents of draft or sent emails remained visible only to their creators.
Despite Microsoft’s assurances, experts warned that such incidents highlight the risks of rapidly deploying generative AI tools in workplaces.
Nader Henein, an analyst at Gartner, said mistakes of this kind are difficult to avoid given the fast pace at which new AI features are being released. He said many organisations lack the tools needed to properly manage and govern each new capability.
Cybersecurity expert Professor Alan Woodward of the University of Surrey said the incident underlined the need for AI tools to be private by default and enabled only by choice.
He warned that as AI systems evolve rapidly, unintentional data leakage is likely to occur, even when security safeguards are in place.
#From BBC
15 days ago
Dark web agent used wall clue to save abused girl
A subtle detail on a bedroom wall helped investigators identify and rescue a young girl who suffered years of abuse after images of her were circulated on the dark web, according to a new investigation.
The case was handled by Greg Squire, a specialist online investigator with the US Department of Homeland Security, who works to identify children appearing in online abuse material.
Investigators initially had very little to work with. Images shared on encrypted dark web platforms were deliberately cropped or altered to remove identifying features, making it nearly impossible to determine who the girl was or where she lived.
According to Squire, the breakthrough came not through advanced technology but careful observation. Investigators closely analysed everyday objects visible in the images, including furniture and fixtures, to narrow down the possible location to parts of North America.
The key lead emerged when experts identified a distinctive type of brick visible on a bedroom wall. A brick specialist recognised it as a product manufactured and sold only in a limited region decades earlier. Because bricks are rarely transported long distances, the information significantly reduced the search area.
By combining this clue with other consumer data, investigators narrowed the list of possible addresses and eventually identified a household where the girl was living with a convicted sex offender. Local authorities moved quickly, arresting the suspect and ending years of abuse. He was later sentenced to a lengthy prison term.
The investigation is featured in a long-term project by BBC World Service, which followed specialist units across several countries to show how child exploitation cases are often solved through painstaking analysis rather than sophisticated tools.
Investigators involved said the case highlights both the complexity of online abuse investigations and the emotional toll such work can take. Squire acknowledged that prolonged exposure to disturbing material affected his personal life, prompting him to seek professional help.
The rescued victim, now an adult, later met Squire and said sustained support had helped her rebuild her life. Investigators say the case underlines the importance of international cooperation, specialist expertise and persistence in protecting children from online abuse.
Authorities continue to urge technology companies and the public to cooperate fully with law enforcement efforts aimed at identifying and safeguarding victims.
With inputs from BBC
17 days ago