What to Stream This Week: ‘Zootopia 2,’ Oscars, Kim Gordon, ‘One Piece’ and ‘Scarpetta’
Viewers have several new streaming options this week, including Taylor Sheridan's neo-Western family drama series ‘The Madison’ on Paramount+ and Disney’s animated hit ‘Zootopia 2’ on Disney+. Other highlights selected by AP entertainment journalists include the Academy Awards on Hulu, Nicole Kidman starring as forensic pathologist Kay Scarpetta in a new series, and Kim Gordon’s third solo album, ‘Play Me’.
Movies (March 9-15)
Disney’s ‘Zootopia 2’, the sequel to the 2016 hit, arrives on Disney+ Wednesday after earning $1.85 billion at the box office. The story continues the adventures of rabbit cop Judy Hopps (Ginnifer Goodwin) and fox partner Nick Wilde (Jason Bateman), as a mysterious viper (Ke Huy Quan) uncovers new secrets in the animal metropolis. AP’s review described it as “a more timid and tame movie that leans largely on the duo of Hopps and Wilde.”
For the first time, the Oscars will be streamed on Hulu alongside ABC’s live broadcast on Sunday, March 15. Subscribers can watch without a cable connection. Viewers can also catch nominated films on various platforms, including HBO Max, Netflix, Peacock, Apple TV+, and Hulu.
Music (March 13)
Kim Gordon, co-founder of Sonic Youth, releases her third solo album, ‘Play Me’, following her Grammy-nominated ‘The Collective’ (2024). The album features propulsive, confrontational tracks exploring themes from convenience culture to billionaire obsession with space.
Heavy metal band Lamb of God launches their tenth studio album, ‘Into Oblivion’, reflecting frontman Randy Blythe’s take on current world affairs.
Series (March 9-15)
Netflix adds four new Sesame Street episodes Monday as it continues the show’s 56th season.
One Piece returns for Season 2 Tuesday, following Monkey D. Luffy and the Straw Hat pirates on their quest through the Grand Line.
Nicole Kidman stars as Kay Scarpetta in a new series released Wednesday, portraying the character across two timelines with Rosy McEwen as the younger Scarpetta.
Taylor Sheridan’s ‘The Madison’ premieres Saturday on Paramount+, centering on the Clyburn family, who move to Montana after a tragedy.
Video Games (March 9-15)
Monster Hunter Stories 3: Twisted Reflection launches Friday on PS5, Xbox Series X/S, Switch 2, and PC, letting players team up with monsters and engage in turn-based battles amid warring kingdoms.
12 hours ago
Pokémon criticises White House for using its imagery in political meme
The Pokémon Company International has criticised the White House for using its imagery, including the popular character Pikachu, in a political meme posted online with the slogan “Make America Great Again”.
The company said it had no involvement in the creation or distribution of the meme and had not given permission to use its intellectual property.
Pokémon spokeswoman Sravanthi Dev said, “We were not involved in its creation or distribution, and no permission was granted for the use of our intellectual property.”
She added that the company’s mission is to bring people together and that it is not linked to any political viewpoint or agenda.
This is not the first time the company has objected to the Trump administration’s use of its content. In September, Pokémon also criticised a video that used its theme song and the slogan “Gotta catch ’em all” while showing arrests made by US border patrol and immigration agents as part of the administration’s deportation campaign.
The latest meme appears to use an image from the recently released game Pokopia for Nintendo. The slogan was written in a font similar to the game’s style, with a small version of Pikachu appearing behind the letter “e” in the word “make”.
When asked about the criticism, the White House referred the BBC to a post on X by spokesman Kaelan Dorr. In the post, Dorr shared a 10-year-old Wall Street Journal article about former Democratic presidential candidate Hillary Clinton, who referenced the mobile game Pokémon Go during the 2016 election campaign, saying she was trying to get supporters to “have Pokémon go to the polls”.
“Hey Mr Pikachu, big fan. Question for you – why no response to articles like this?” Dorr wrote on X, suggesting the company might have a political bias.
The Pokémon Company did not say whether it plans to take legal action over the use of its content.
During Donald Trump’s second term, the White House has frequently used popular internet memes on official social media accounts to promote its policies.
White House spokeswoman Abigail Jackson earlier defended the approach, saying the administration was using engaging posts and memes to communicate the president’s agenda.
Recently, the White House also posted a video combining images from the war with Iran and scenes from the video game series Call of Duty.
Several artists and public figures have criticised the administration for using their content without permission. Comedian and podcaster Theo Von last year objected after the Department of Homeland Security used a clip of him in a video highlighting deportation numbers.
Von responded on X saying he did not approve the use of the clip and asked the agency to remove it.
Source: BBC
1 day ago
AI-generated misinformation about Iran war spreads widely online as creators profit from new technology
Online content creators are fuelling an extraordinary surge of AI-generated misinformation linked to the US-Israel war with Iran, using advanced generative AI tools to generate revenue, experts have told BBC Verify.
Analysis by BBC Verify uncovered numerous instances of AI-created videos and manipulated satellite images being circulated online to support false or misleading claims about the conflict. Collectively, such content has drawn hundreds of millions of views across social media platforms.
“The scale is deeply concerning and the current war has brought the issue into sharp focus,” said Timothy Graham, a digital media specialist at Queensland University of Technology.
“What previously required professional video production teams can now be produced within minutes using AI tools. The barrier to creating convincing synthetic footage of conflict has effectively disappeared,” he added.
The United States and Israel began launching military strikes on Iran on February 28. In response, Iran has carried out drone and missile attacks targeting Israel as well as several Gulf countries and US military assets across the region.
As the conflict escalated rapidly over the past week, many people turned to social media platforms to follow developments, seek updates and share information about the unfolding situation.
Social media platform X announced this week that it will temporarily remove creators from its monetisation programme if they share AI-generated videos of armed conflicts without clearly labelling them.
Under the programme, eligible users receive payments when their posts attract large numbers of views, likes, shares and comments.
Mahsa Alimardani, a researcher on Iran at the Oxford Internet Institute, said the decision signals that the platform recognises the scale of the problem.
“It’s a significant indication that they understand this is a major issue,” she said.
BBC Verify contacted TikTok and Meta, the parent company of Facebook and Instagram, to ask whether they plan to introduce similar measures. Neither company responded to requests for comment.
One example of misleading AI-generated content identified by BBC Verify appears to show missiles hitting the Israeli city of Tel Aviv while explosions can be heard in the background.
The clip has appeared in more than 300 separate posts and has been shared tens of thousands of times across multiple social media platforms.
Some users on X asked the platform’s AI chatbot Grok to verify whether the footage was authentic. However, BBC Verify found that in several cases the chatbot incorrectly claimed the AI-generated footage was real.
Another fabricated video, which has been viewed tens of millions of times, purports to show the Burj Khalifa skyscraper in Dubai engulfed in flames while crowds appear to run toward the building.
The AI-generated clip circulated widely online during a period of heightened anxiety among residents and tourists following reports of drone and missile strikes targeting the city.
According to Alimardani, such fabricated content damages public confidence in reliable information.
“Videos like these undermine trust in verified information available online and make it far more difficult to document genuine evidence,” she said.
BBC Verify also identified a new element emerging in the conflict: the spread of AI-generated satellite images.
On the first day of the war, BBC Verify confirmed several authentic videos showing Iranian drones and missiles striking the headquarters of the US Navy’s Fifth Fleet in Bahrain.
However, a manipulated satellite image shared on X by the state-linked newspaper The Tehran Times began circulating the following day, claiming to show severe destruction at the military facility.
The fabricated image appears to have been derived from a real satellite photo of a US naval base in Bahrain taken in February 2025, which is publicly available online.
Google’s SynthID watermark detection system indicates that the altered image was generated or modified using a Google AI tool.
Further examination shows that three vehicles parked outside the base appear in exactly the same positions in both the genuine satellite photo and the manipulated AI image, even though the pictures supposedly represent scenes captured a year apart.
Google’s AI products, including the video-generation tool Veo, are among a growing number of widely used AI platforms. Others include OpenAI’s Sora model, the Chinese AI application Seedance, and Grok, which is integrated into X.
Henry Ajder, a specialist in generative AI, said the range and accessibility of such tools has grown dramatically.
“The number of tools now available to create highly realistic AI manipulations across different formats is unprecedented,” he said.
“We have never seen these technologies so accessible, so simple to use and so inexpensive,” Ajder added.
Victoire Rio, executive director of the technology policy non-profit What To Fix, said this has contributed to a sharp rise in AI-generated material online because the process of producing and distributing such content can now be largely automated.
Meanwhile, X’s head of product said on Tuesday that about 99 percent of accounts sharing AI-generated war footage were attempting to “game monetisation” by posting content designed to attract high engagement and earn payments through the platform’s Creator Revenue Sharing programme.
X does not disclose how many accounts participate in the programme or the amount of money creators can earn from it.
However, Graham estimates that X may pay between $8 and $12 for every one million verified user impressions.
To qualify for the programme, creators must generate at least five million organic impressions within three months and maintain an X Premium subscription, he said.
“Once creators qualify, viral AI-generated content effectively becomes a money-making machine,” Graham added. “It has created the ultimate misinformation enterprise.”
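Graham's estimates above imply a rough payout range that is simple to work out. The sketch below uses his $8-$12 per million figure, which is a researcher's estimate rather than a published X rate:

```python
def estimated_payout(impressions, rate_low=8.0, rate_high=12.0):
    """Estimate a creator's payout in dollars for a given number of
    verified impressions, using Graham's estimated $8-$12 per million
    verified user impressions (not an official X figure)."""
    millions = impressions / 1_000_000
    return millions * rate_low, millions * rate_high

# Applied to the 5-million-impression qualifying threshold:
low, high = estimated_payout(5_000_000)
print(f"${low:.0f} to ${high:.0f}")  # $40 to $60
```

By this arithmetic, a creator just clearing the qualifying threshold would earn only tens of dollars; the incentive Graham describes comes from viral posts that rack up impressions far beyond that floor.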
X did not respond to BBC Verify’s requests for comment or questions about the Creator Revenue Sharing programme.
Experts told BBC Verify that although social media companies say they are attempting to improve moderation and detection systems to manage the rapid spread of AI-generated content, addressing the issue remains complex.
“The deeper problem is that monetisation driven by engagement and the distribution of accurate information are fundamentally at odds,” Graham said. “No platform has fully solved that conflict, and perhaps none ever will.”
2 days ago
OpenAI unveils GPT-5.4 with stronger reasoning, coding and computer-use abilities
OpenAI has launched GPT-5.4, its newest frontier artificial intelligence model, introducing major upgrades in reasoning, coding and automated task execution.
The company said the model combines several of its recent advancements into a single system and is available in different variants, including GPT-5.4 Thinking and GPT-5.4 Pro.
One of the most significant features of GPT-5.4 is its 1 million-token context window, allowing it to analyse very large datasets such as entire codebases or extensive collections of documents more efficiently.
OpenAI also said GPT-5.4 is the first mainline model with built-in computer-use capabilities, enabling AI agents to directly interact with software to complete tasks. This means the system can operate computers by using screenshots, mouse clicks and keyboard commands, allowing it to work across applications and websites and automate complex workflows.
According to the company, the latest model introduces six major improvements, including enhanced coding abilities, better image perception and multimodal performance, stronger execution of long-running tasks and multi-step agent workflows, improved token efficiency for tool-heavy workloads, advanced web search and multi-source information synthesis, and more effective document-heavy analytics.
Addressing concerns about inaccuracies often referred to as “hallucinations,” OpenAI said GPT-5.4 is 33% less likely to produce false information compared with earlier models.
The company said the model is designed for professional environments and performs strongly in tasks such as legal analysis, financial modelling, creating presentation slides and writing or debugging code. Developers can also build AI agents capable of planning tasks, carrying them out and adjusting when problems arise.
The release reflects a broader shift in the evolution of AI systems. Early versions of ChatGPT primarily answered questions, while the GPT-4 era enabled more advanced capabilities such as writing essays, code and summaries. With GPT-5, models began to demonstrate stronger reasoning skills, and GPT-5.4 moves further by allowing AI systems to directly perform tasks on computers.
In practical use, GPT-5.4 can operate within common workplace tools such as spreadsheets and document editors. It can analyse financial data in Excel, automatically create dashboards, generate reports from raw datasets and process large legal or contractual documents.
For software development, the model can generate extensive codebases, detect and fix bugs, run automated software tests and even control web browsers through automation tools.
OpenAI’s latest release comes amid intensifying competition in the AI sector. Rival company Anthropic, led by Dario Amodei, recently introduced Claude Opus 4.6 and Claude Sonnet 4.6, which have been described as faster and more efficient for everyday enterprise tasks.
While the latest models from OpenAI and Anthropic focus on different strengths, the developments highlight a growing race to create AI systems capable of functioning as practical digital workers.
Source: Indian Express
3 days ago
Apple unveils $599 devices targeting budget buyers
Apple has introduced a range of new products, including two devices priced at $599, as part of what CEO Tim Cook described as a “big week” of announcements aimed partly at budget-conscious buyers.
The new lineup was presented during hands-on media events in New York, London and Shanghai on Wednesday. The announcements include the new iPhone 17e, an entry-level laptop called MacBook Neo, updated iPad Air M4 tablets, refreshed monitors and upgraded chips for the company’s high-end laptops. Preorders for the devices began Wednesday.
The announcements come after the company reported record quarterly earnings driven by strong sales of the iPhone 17 series, although Apple has yet to roll out its previously promised artificial intelligence upgrades for Siri.
iPhone 17e
The iPhone 17e is designed for budget buyers and starts at $599, about $200 cheaper than the base iPhone 17. It uses the same A19 chip as the standard model and offers 256GB of storage, double the capacity of the previous 16e version.
The phone features a 48-megapixel camera and a C1X modem that supports faster cellular speeds. It also includes Apple’s Super Retina display, Ceramic Shield 2 protection and MagSafe charging with Qi2 support.
The device will be available in black, white and light pink.
iPad Air update
Apple also introduced an updated iPad Air powered by the M4 chip. While the higher-end iPad Pro uses the newer M5 chip, the Air still provides strong performance for everyday tasks such as streaming, browsing, email and video editing.
The company increased the tablet’s memory from 8GB to 12GB without raising the price. The 11-inch model starts at $599, while the 13-inch version starts at $799, both with 128GB of storage.
MacBook and chip upgrades
Apple upgraded its MacBook Pro laptops with new M5 Pro and M5 Max chips aimed at improving performance and battery efficiency.
The 14-inch MacBook Pro with the M5 Pro chip starts at $2,199, while the 16-inch model starts at $2,699. Both offer 24GB of RAM and 1TB of storage, along with support for Wi-Fi 7 and Bluetooth 6.
The new MacBook Neo, Apple’s most affordable laptop yet, features a 13-inch display, an A18 Pro chip, 256GB storage and two USB-C ports. The base model costs $599, while a 512GB version with Touch ID is priced at $699. Students and educators can get a $100 discount.
Apple also refreshed the MacBook Air with the base M5 chip and doubled storage to 512GB. The 13-inch model starts at $1,099 and the 15-inch version at $1,299.
New monitors
The company also launched two 27-inch 5K monitors: the Studio Display and the higher-end Studio Display XDR. Both feature 5,120×2,880 resolution, 12-megapixel Center Stage cameras, six-speaker systems, two Thunderbolt 5 ports and two USB-C ports.
The Studio Display costs $1,599, while the advanced XDR version, which includes mini-LED backlighting and a 120Hz refresh rate, starts at $3,299.
4 days ago
South Korean chip industry worries Iran war could affect Middle East data centre plans
South Korea’s chip industry has expressed concern that the ongoing conflict in Iran could disrupt plans by major technology firms to establish AI data centres in the Middle East, lawmaker Kim Young-bae said on Wednesday.
Speaking to Reuters after meetings with executives from companies including Samsung Electronics, Kim warned that prolonged regional instability could delay infrastructure projects, potentially affecting the already strong global demand for semiconductors.
Industry officials also highlighted risks to the supply of critical chip-making materials, such as helium, sourced from the Middle East. Kim said companies were closely monitoring the situation, noting that any disruption could have ripple effects on production and logistics in the semiconductor sector.
The remarks come amid heightened tensions in the Gulf region, with the Iran war raising concerns over global supply chains and prompting technology firms to reassess investment timelines for advanced computing facilities.
4 days ago
Over 2.5 million users boycott ChatGPT after OpenAI-Pentagon deal
More than 2.5 million users have pledged to boycott ChatGPT following OpenAI’s agreement with the Pentagon, triggering widespread criticism of the AI developer.
A website tracking the boycott reported that over 2.5 million users have already left ChatGPT, which has a global user base exceeding 900 million, after OpenAI signed a contract last week allowing the U.S. Department of Defense to use the AI model on its classified network. The boycott figures are based on website pledges, social media shares, and app usage data, indicating growing disillusionment among users.
“We’re organising Americans and people worldwide to quit ChatGPT,” the boycott website said, adding that the campaign aims to send a strong message to technology enablers that such actions will not go unchallenged.
Following the backlash, competitor chatbots gained traction. Claude, developed by Anthropic, surged to the top of Apple’s App Store charts, surpassing ChatGPT, while U.S. mobile app uninstalls for ChatGPT jumped 295 percent in a single day, according to TechCrunch and analysis by Sensor Tower.
The criticism intensified as OpenAI signed the deal shortly after Anthropic, the Pentagon’s previous AI contractor, withdrew, citing concerns that the AI would be used for domestic surveillance, conflicting with the company’s democratic values.
OpenAI CEO Sam Altman acknowledged the misstep on social media, saying the announcement was rushed and that the company should have communicated more clearly. “We shouldn’t have rushed to get this out on Friday,” Altman wrote. “The issues are super complex and demand clear communication. Our intention was to de-escalate, but it appeared opportunistic and sloppy.”
According to The Guardian, OpenAI is now revising the agreement, explicitly prohibiting the use of its technology for mass surveillance or deployment by intelligence agencies such as the National Security Agency (NSA).
With inputs from NDTV
5 days ago
TikTok rules out end-to-end encryption, citing user safety concerns
TikTok has said it will not introduce end-to-end encryption in direct messages, distancing itself from most major social media rivals and arguing that the feature could reduce user safety.
End-to-end encryption ensures that only the sender and recipient can read a message, making it one of the most secure communication methods available to the public. Platforms such as Facebook, Instagram, Messenger and X have adopted the system, saying it strengthens user privacy.
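The core property described above is that the platform relaying messages never holds the key needed to read them. A toy Diffie-Hellman exchange illustrates the idea: each endpoint derives the same shared secret from values that are safe to send over the network. This is a teaching sketch only; real messengers use vetted protocols such as the Signal protocol, not these parameters.

```python
import secrets

# Toy Diffie-Hellman parameters. 2**127 - 1 is a Mersenne prime, far
# too small for real-world security but fine for illustration.
P = 2**127 - 1
G = 2

# Each party keeps a private key to itself...
alice_private = secrets.randbelow(P - 2) + 2
bob_private = secrets.randbelow(P - 2) + 2

# ...and publishes only a derived public value. These public values
# are all a relaying platform (or an eavesdropper) ever sees.
alice_public = pow(G, alice_private, P)
bob_public = pow(G, bob_private, P)

# Each side combines its own private key with the other's public value
# and arrives at the same shared secret, derived independently.
alice_shared = pow(bob_public, alice_private, P)
bob_shared = pow(alice_public, bob_private, P)
assert alice_shared == bob_shared
```

Because the shared secret is computed at the endpoints and never transmitted, a service in the middle can deliver the ciphertext but cannot decrypt it, which is precisely the property TikTok says would hinder its safety teams.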
However, critics argue that such encryption can make it more difficult to monitor and prevent harmful content, as it blocks technology companies and law enforcement agencies from accessing messages when concerns arise.
The debate is further complicated by long-standing allegations that TikTok’s links to the Chinese state could expose user data to risk. The company has repeatedly rejected those claims. Earlier this year, its US operations were separated from its global business following directives from American lawmakers.
In a security briefing at its London office, TikTok told the BBC that it believes end-to-end encryption would prevent police and safety teams from accessing direct messages when necessary. The company said its decision is aimed at protecting users, particularly young people, from online harm, and described the move as a conscious effort to differentiate itself from competitors.
TikTok says it has around 30 million monthly users in the UK and more than one billion worldwide. The platform is headquartered in Los Angeles and Singapore and is owned by Chinese technology firm ByteDance. It has faced ongoing scrutiny over its data protection practices.
Social media analyst Matt Navarra described TikTok’s approach as strategically bold but potentially controversial. He said the company could argue that it is prioritising proactive safety over absolute privacy, especially given concerns about grooming and harassment in direct messages.
At the same time, Navarra noted that the decision could place TikTok at odds with global privacy standards and may heighten concerns among some users about the company’s ownership.
Privacy advocates generally consider end-to-end encryption the strongest safeguard against hacking, corporate surveillance and intrusive state monitoring.
Source: BBC
5 days ago
What to know before seeking health advice from an AI chatbot
As hundreds of millions of people turn to artificial intelligence chatbots for advice, tech companies are now rolling out tools designed specifically to answer health-related questions.
In January, OpenAI launched ChatGPT Health, a version of its chatbot that can review users’ medical records, wellness apps and data from wearable devices to respond to health queries. The service is currently available through a waiting list. Rival company Anthropic offers similar features to some users of its Claude chatbot.
Both firms stress that their large language models are not a replacement for doctors and should not be used to diagnose illnesses. Instead, they say the tools can explain complex test results, help users prepare for medical appointments and identify health trends in records and app data.
Experts say chatbots can provide more tailored responses than a standard Google search, especially when users share detailed health information such as age, prescriptions and medical history. “If used responsibly, these tools can offer useful information,” said Dr. Robert Wachter of the University of California, San Francisco. However, he advised users to provide as much relevant detail as possible to improve accuracy.
Doctors warn that AI should never be used during medical emergencies. Symptoms like chest pain, shortness of breath or severe headache require immediate medical attention. Even in non-urgent cases, experts recommend approaching AI-generated advice with caution. Dr. Lloyd Minor, dean of Stanford’s medical school, said major health decisions should not rely solely on chatbot responses.
Privacy is another key concern. Health data shared with AI companies is not protected under the US federal health privacy law known as HIPAA, which applies to doctors and hospitals. While OpenAI and Anthropic say health data is kept separate and not used to train their models, users must actively choose to share their information.
Early studies show mixed results. Research from Oxford University in 2024 found that people using AI chatbots did not make better health decisions than those using online searches. Although chatbots correctly identified medical conditions in written scenarios 95% of the time, they often struggled during real-life interactions.
Experts suggest seeking a second AI opinion or consulting a medical professional for added confidence.
6 days ago
AI edges closer to decoding human thoughts
Artificial intelligence is rapidly reshaping scientists’ ability to interpret the brain’s complex electrical signals, bringing researchers closer than ever to decoding human thoughts and inner speech.
In a recent breakthrough, a 52-year-old woman who lost her ability to speak clearly after a stroke nearly two decades ago was able to see her unspoken thoughts appear as text on a screen. The woman, identified only as participant T16, had a tiny array of electrodes surgically implanted in the front part of her brain. As she imagined speaking words, a computer system powered by artificial intelligence translated her neural signals into readable sentences in real time.
The experiment was conducted by researchers at Stanford University in the United States as part of a wider study involving patients with amyotrophic lateral sclerosis (ALS), a progressive neurodegenerative disease. Scientists described the achievement as the closest step yet towards a form of “mind reading”.
The findings were unveiled in August 2025. Soon after, researchers in Japan reported another major advance, demonstrating a “mind captioning” technique that could generate detailed descriptions of images people were seeing or imagining, using non-invasive brain scans combined with multiple AI systems.
Experts say such breakthroughs are opening an unprecedented window into the inner workings of the human brain while offering new communication pathways for people who are unable to speak or move.
“In the next few years, we will begin to see these technologies being commercialised and deployed at scale,” said neuroengineer Maitreyee Wairagkar of the University of California, Davis, who works on brain-computer interfaces. Several companies, including Elon Musk’s Neuralink, are already pursuing commercial brain implants designed to move the technology from laboratories into everyday use.
Brain-computer interfaces, or BCIs, are not new. Scientists have been experimenting with direct brain communication since the late 1960s. For decades, BCIs have allowed users to control prosthetic limbs or computer cursors by decoding brain signals linked to movement. However, translating speech and complex thoughts has proven far more challenging.
Progress has accelerated in recent years, particularly for patients with severe communication impairments. In 2021, Stanford researchers showed that a paralysed man could form English sentences by imagining himself writing letters in the air. More recently, Wairagkar’s team demonstrated a system that converted the attempted speech of an ALS patient into text at about 32 words per minute with nearly 98% accuracy.
These systems rely on tiny microelectrode arrays implanted on the brain’s surface, typically over regions involved in speech and movement. Machine-learning algorithms then analyse vast amounts of neural data, identifying patterns associated with different sounds or phonemes. Researchers often compare the process to voice assistants such as Amazon Alexa—except that instead of interpreting sound waves, the AI decodes neural activity.
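The pattern-identification step described above can be sketched with a deliberately simple classifier. The feature vectors below are invented for illustration (real recordings carry hundreds of channels), and nearest-centroid matching stands in for the recurrent neural networks production decoders actually use:

```python
import math

# Hypothetical "neural activity" centroids: the average feature vector
# recorded while a participant attempts each phoneme. Values are
# made up for illustration only.
centroids = {
    "AA": [0.9, 0.1, 0.4],
    "EE": [0.2, 0.8, 0.5],
    "SS": [0.1, 0.3, 0.9],
}

def classify(sample):
    """Return the phoneme whose stored centroid lies nearest to the
    incoming sample (nearest-centroid classification)."""
    return min(centroids, key=lambda p: math.dist(sample, centroids[p]))

# A new recording close to the "AA" pattern is labelled accordingly.
print(classify([0.85, 0.15, 0.35]))  # -> AA
```

Real decoders face the same matching problem at vastly larger scale, streaming high-dimensional neural data and resolving it into phoneme sequences quickly enough for real-time conversation.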
A major challenge, however, is that patients usually need to actively attempt speech for accurate decoding, a process that can be tiring and slow. To address this, Stanford scientists explored whether “inner speech”—the words people silently say in their minds—could also be detected.
The results were promising but limited. When participants imagined specific sentences, the system achieved accuracy rates of up to 74% in real time. Performance dropped for more spontaneous thoughts, and open-ended prompts often produced meaningless output. Researchers said the findings suggest inner speech uses neural pathways similar to spoken speech, though the signals are weaker.
Beyond text, scientists are now pushing towards capturing the full richness of human speech. In 2025, Wairagkar’s lab showed it could decode not just words, but also tone, pitch and rhythm, allowing an ALS patient to convey emotion and emphasis. While only about 60% of the generated speech was judged clearly understandable, researchers say it points to a future where brain-driven speech sounds increasingly natural.
Further advances are expected as technology improves. Current studies typically sample only a few hundred neurons, a tiny fraction of the brain’s total. Expanding electrode coverage could significantly boost accuracy and speed, researchers say.
Meanwhile, other teams are using AI to reconstruct what people see or hear by analysing brain scans. By combining functional MRI data with image-generation tools such as Stable Diffusion, scientists have managed to recreate rough versions of images viewed by participants. Japanese researcher Yu Takagi of the Nagoya Institute of Technology says the work has revealed how different brain regions process visual information.
Similar efforts are under way to reconstruct music from brain activity, using advanced algorithms developed by companies such as Google. Although results remain imperfect, researchers believe the approach could eventually help explain how the brain interprets sound, images and even dreams.
While experts caution that fully decoding unfiltered thoughts remains far off, many believe the rapid pace of progress signals a profound shift ahead. As AI continues to unlock the brain’s hidden signals, technologies once confined to science fiction are moving steadily closer to reality.
With inputs from BBC