Tech-News
Tech giants woo millions of Indians with free AI tools to tap future market
Global tech companies are offering premium artificial intelligence (AI) tools to millions of Indians for free, viewing it as a long-term investment in one of the world’s fastest-growing digital markets.
Starting this week, millions of Indian users will get a year of free access to ChatGPT’s new low-cost “Go” chatbot. The move follows similar offers from Google and Perplexity AI, which recently tied up with leading Indian telecom operators to distribute their AI services.
Perplexity partnered with Airtel, India’s second-largest mobile network, while Google joined hands with Reliance Jio, the country’s biggest operator, to offer free or discounted AI tools bundled with monthly data packs.
Analysts say these offers are not acts of generosity but calculated efforts to build user habits and loyalty in a massive market. “The plan is to get Indians hooked on generative AI before asking them to pay for it,” Tarun Pathak, an analyst at Counterpoint Research, told the BBC.
India’s open and competitive digital market, unlike China’s tightly controlled environment, makes it an attractive testing ground for global tech firms. With over 900 million internet users — most under 24 — and some of the world’s cheapest data, India provides scale, youth, and diversity that help train AI models more effectively.
“AI use cases from India will serve as valuable examples for the rest of the world,” Pathak added. “The more first-hand data companies gather, the better their generative AI systems become.”
However, experts have raised privacy concerns. “Most users have always been willing to give up data for convenience or something free — that will continue,” said Delhi-based technology writer Prasanto K. Roy. “This is where the government must step in.”
India currently lacks a dedicated AI law, though the Digital Personal Data Protection Act (DPDP) 2023 provides a broad framework for data and privacy regulation. The act has not yet been implemented, and its detailed rules remain pending.
Mahesh Makhija, technology consulting leader at Ernst & Young, said that once enforced, the law could become “one of the most advanced from a digital privacy perspective.”
For now, India’s flexible regulatory climate allows OpenAI, Google, and others to roll out free AI services — a strategy difficult to replicate in regions like the European Union or South Korea, where strict rules on transparency and data use apply.
Experts say India must strengthen user awareness and regulatory safeguards but without stifling innovation. “We need light-touch regulation for now,” Roy said, “but that must evolve as potential harms become clearer.”
Industry observers believe these free offerings mirror India’s earlier digital revolution driven by cheap internet data. Even if only a small fraction of users later subscribe to paid versions, companies could still gain millions of paying customers.
“Even if just 5% of free users convert to paid subscribers, that’s still a huge number,” Pathak said.
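As a rough illustration of that arithmetic, here is a minimal sketch; the free-user pool and subscription price are hypothetical assumptions, and only the 5% conversion rate comes from Pathak.

```python
# Back-of-the-envelope freemium conversion math.
# free_users and monthly_price_usd are illustrative assumptions;
# only the 5% conversion rate is cited in the article.

free_users = 200_000_000        # hypothetical free-tier user pool
conversion_rate = 0.05          # the 5% conversion Pathak cites
monthly_price_usd = 5           # assumed price of a low-cost plan

paying_users = int(free_users * conversion_rate)
annual_revenue_usd = paying_users * monthly_price_usd * 12

print(f"Paying subscribers: {paying_users:,}")       # 10,000,000
print(f"Annual revenue: ${annual_revenue_usd:,}")    # $600,000,000
```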
With inputs from BBC
Denmark plans to ban social media access for children under 15
Denmark’s government announced plans Friday to ban social media access for anyone under the age of 15, marking one of the toughest measures yet by a European country to shield children from harmful online content and corporate influence.
Under the proposal, parents could be granted permission — after a formal assessment — to allow children as young as 13 to use social media. The government has yet to detail how the restriction would be enforced, though officials acknowledge that existing age limits on platforms like Instagram, TikTok, and Snapchat have proven easy to bypass.
Digital Affairs Minister Caroline Stage said the move aims to curb the growing risks children face in a highly digitalized world. “Ninety-four percent of Danish children under 13 have profiles on at least one social platform, and more than half of those under 10 do,” she told The Associated Press.
“The amount of violence and self-harm children are exposed to online is an unacceptable risk,” Stage said. While calling Big Tech firms “some of the greatest companies in the world,” she criticized them for failing to protect young users: “They have enormous resources but are simply not willing to invest in children’s safety.”
Careful Legislation and Tough Enforcement
The law is not expected to take effect immediately, as lawmakers across party lines work out enforcement mechanisms. “We’ll move quickly, but we must do it right,” Stage said. “There can be no loopholes for the tech giants to exploit.”
Denmark’s plan follows Australia’s 2024 legislation, which made it illegal for children under 16 to access social media and imposed fines of up to AUD 50 million ($33 million) for companies that fail to comply.
Stage said Denmark will rely on its national electronic ID system, which nearly all citizens over 13 already use, and a forthcoming age-verification app. While tech companies cannot be forced to adopt the Danish app, they will be legally required to verify users’ ages. Platforms that fail to comply could face EU penalties of up to 6% of their global revenue.
Protecting Children from Digital Harm
The Danish government emphasized that the initiative is not meant to disconnect children from digital life, but to protect them from toxic content and online pressure.
“Children and young people lose sleep, concentration, and peace of mind due to constant digital engagement,” the ministry said in a statement. “This is not a problem parents or teachers can solve alone.”
Other countries have taken similar steps. China limits minors’ gaming and smartphone time, and in France, prosecutors recently opened an investigation into TikTok for allegedly promoting suicide-related content through its algorithms.
The EU’s Digital Services Act, in force since 2023, already bans users under 13 from holding social media accounts, but enforcement remains inconsistent. Platforms such as TikTok and Meta (Instagram, Facebook) use AI-based facial analysis to estimate users’ ages, though the methods have been criticized as unreliable.
In an emailed response, TikTok said it supports Denmark’s goals: “We have developed more than 50 safety features for teen accounts and tools like Family Pairing to help guardians manage content and screen time. We look forward to constructive collaboration on industry-wide solutions.”
Meta did not respond to a request for comment.
Minister Stage said Denmark has given tech companies ample time to act on child safety. “They’ve had many chances to fix this themselves,” she said. “Since they haven’t, we will now take control — and ensure our children’s digital futures are safe.”
Musk poised to become first trillionaire as Tesla shareholders approve record pay deal
Tesla CEO Elon Musk is one step closer to becoming the world’s first trillionaire after shareholders overwhelmingly approved a massive pay package valued at up to $1 trillion, contingent on him meeting a series of ambitious performance goals over the next decade.
At Tesla’s annual meeting in Austin, Texas, more than 75% of voters backed the compensation plan — a striking show of confidence in Musk even as the electric carmaker battles declining sales, shrinking profits and growing competition.
“Fantastic group of shareholders,” Musk said after the vote, urging them to “hang on to your Tesla stock.”
The approval underscores investors’ enduring faith in Musk’s ability to engineer turnarounds like the one that transformed Tesla from a struggling startup into one of the world’s most valuable companies. Still, critics warn the reward is excessive and risky given Musk’s erratic behavior and political ventures.
The board’s plan ties the payout to steep milestones, including boosting Tesla’s market capitalization nearly sixfold and delivering 20 million electric vehicles over ten years. Musk must also deploy one million humanoid robots, a vision he describes as a “robot army.”
If he meets those targets, Musk could eclipse industrialist John D. Rockefeller, whose fortune is estimated at $630 billion in today’s dollars. Forbes currently values Musk’s wealth at around $493 billion.
Opposition came from major investors, including CalPERS and Norway’s sovereign wealth fund, along with proxy advisory firms Institutional Shareholder Services and Glass Lewis, which called the package excessive. Musk lashed out, branding them “corporate terrorists.”
Supporters argue the deal aligns Musk’s incentives with Tesla’s long-term growth and its expansion into artificial intelligence. “This AI chapter needs one person to lead it, and that’s Musk,” said Dan Ives, an analyst at Wedbush Securities. “It’s a huge win for shareholders.”
Tesla shares climbed in after-hours trading after closing nearly flat at $445.44. Musk claimed the vote was about influence, not money, saying it would double his stake in Tesla to nearly 30%, giving him more control over its AI-powered future.
Shareholders also approved a measure allowing Tesla to invest in Musk’s AI startup, xAI, while rejecting a proposal to make it easier for minority shareholders to sue the company.
Source: AP
OpenAI faces 7 lawsuits claiming ChatGPT drove people to suicide, delusions
OpenAI is facing seven lawsuits claiming ChatGPT drove people to suicide and harmful delusions even when they had no prior mental health issues.
The lawsuits, filed Thursday in California state courts, allege wrongful death, assisted suicide, involuntary manslaughter and negligence. Filed on behalf of six adults and one teenager by the Social Media Victims Law Center and the Tech Justice Law Project, they claim that OpenAI knowingly released GPT-4o prematurely, despite internal warnings that it was dangerously sycophantic and psychologically manipulative. Four of the victims died by suicide.
The teenager, 17-year-old Amaurie Lacey, began using ChatGPT for help, according to the lawsuit filed in San Francisco Superior Court. But instead of helping, “the defective and inherently dangerous ChatGPT product caused addiction, depression, and, eventually, counseled him on the most effective way to tie a noose and how long he would be able to ‘live without breathing.’”
“Amaurie’s death was neither an accident nor a coincidence but rather the foreseeable consequence of OpenAI and Samuel Altman’s intentional decision to curtail safety testing and rush ChatGPT onto the market,” the lawsuit says.
OpenAI did not immediately respond to a request for comment Thursday.
Another lawsuit, filed by Allan Brooks, a 48-year-old in Ontario, Canada, claims that for more than two years ChatGPT worked as a “resource tool” for Brooks. Then, without warning, it changed, preying on his vulnerabilities and “manipulating, and inducing him to experience delusions. As a result, Allan, who had no prior mental health illness, was pulled into a mental health crisis that resulted in devastating financial, reputational, and emotional harm.”
“These lawsuits are about accountability for a product that was designed to blur the line between tool and companion all in the name of increasing user engagement and market share,” said Matthew P. Bergman, founding attorney of the Social Media Victims Law Center, in a statement.
OpenAI, he added, “designed GPT-4o to emotionally entangle users, regardless of age, gender, or background, and released it without the safeguards needed to protect them.” By rushing its product to market without adequate safeguards in order to dominate the market and boost engagement, he said, OpenAI compromised safety and prioritized “emotional manipulation over ethical design.”
In August, parents of 16-year-old Adam Raine sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.
“The lawsuits filed against OpenAI reveal what happens when tech companies rush products to market without proper safeguards for young people,” said Daniel Weiss, chief advocacy officer at Common Sense Media, which was not part of the lawsuits. “These tragic cases show real people whose lives were upended or lost when they used technology designed to keep them engaged rather than keep them safe.”
North Korea condemns new US cybercrime sanctions, vows countermeasures
North Korea has strongly criticized the Trump administration’s latest sanctions over alleged cybercrimes funding its nuclear weapons program, warning that Washington’s “hostile” actions will never succeed in forcing Pyongyang to change course.
In a statement issued Thursday, North Korean Vice Foreign Minister Kim Un Chol accused the United States of harboring “wicked hostility” toward his country and said Pyongyang would take “appropriate countermeasures” in response.
The remarks followed the U.S. Treasury Department’s announcement Tuesday of sanctions on eight individuals and two companies, including several North Korean bankers, for allegedly laundering proceeds from cyberattacks. According to the Treasury, North Korea’s state-backed hackers have stolen more than $3 billion — mostly in digital assets — over the past three years to fund its weapons programs. It said Pyongyang relies on an extensive network of banks, shell companies, and representatives operating in countries such as China and Russia to move illicit funds gained through IT fraud, cryptocurrency theft, and sanctions evasion.
Despite President Donald Trump’s stated interest in restarting dialogue with Kim Jong Un, nuclear talks have remained frozen since their 2019 collapse over disagreements on easing sanctions in exchange for denuclearization steps.
Kim Un Chol said the new sanctions show Washington’s “unchanging hostility” toward the DPRK and that its pressure tactics “will never alter the current strategic balance or our national stance.”
Since the breakdown of talks with Trump, Kim Jong Un has deepened ties with Russia, supplying weapons and troops to support Moscow’s war in Ukraine while positioning North Korea as part of a broader front against the U.S.-led West.
Source: AP
Australia extends social media age ban to Reddit and Kick
Australia has added Reddit and livestreaming platform Kick to the list of social media networks that must ban users under 16 from holding accounts, Communications Minister Anika Wells announced Wednesday.
The two platforms join Facebook, Instagram, Snapchat, Threads, TikTok, X and YouTube in facing the world’s first legal requirement to block children below 16 starting December 10. Companies that fail to take “reasonable steps” to comply could face fines of up to 50 million Australian dollars ($33 million).
“We’ve made it clear to the platforms that there’s no excuse for failing to enforce this law,” Wells told reporters in Canberra. “Online platforms use technology to target children with chilling precision. We’re simply asking them to use that same technology to keep children safe.”
Australia’s eSafety Commissioner Julie Inman Grant, who will oversee enforcement, said the list of restricted platforms would evolve as new technologies emerge. The nine platforms currently covered meet the government’s definition of services whose “sole or significant purpose” is enabling online social interaction.
Inman Grant said her office would work with researchers to assess how the ban affects children’s behavior — including sleep patterns, physical activity and social interaction — and monitor any unintended consequences.
Australia’s move has drawn global attention. European Commission President Ursula von der Leyen said at a U.N. forum in New York in September that she was “inspired” by Australia’s “common-sense” approach.
However, critics argue the law could undermine user privacy by requiring all users to verify their age. Over 140 Australian and international experts last year urged Prime Minister Anthony Albanese to drop the proposal, calling it “too blunt an instrument to address risks effectively.”
Source: AP
OpenAI and Amazon strike $38 billion deal for AI computing power
OpenAI has reached a massive $38 billion agreement with Amazon, allowing the ChatGPT creator to run its artificial intelligence systems on Amazon’s U.S.-based data centers.
Under the deal announced Monday, OpenAI will gain access to “hundreds of thousands” of Nvidia AI chips through Amazon Web Services (AWS) to power and expand its AI tools. Following the announcement, Amazon’s shares rose 4%.
The agreement comes just days after OpenAI restructured its longstanding relationship with Microsoft, which had previously been its exclusive cloud computing partner. Regulators in California and Delaware also approved OpenAI’s new corporate structure last week, enabling the San Francisco-based company — originally founded as a nonprofit — to raise capital more easily and operate for profit.
“The rapid advancement of AI technology has created unprecedented demand for computing power,” Amazon said in a statement. The company noted that OpenAI will “immediately begin using AWS computing capacity,” with all infrastructure expected to be in place by the end of 2026, and room to expand further into 2027 and beyond.
Developing and maintaining AI systems like ChatGPT requires enormous amounts of energy and computing resources. OpenAI has made over $1 trillion in financial commitments to secure such infrastructure, including partnerships with Oracle, SoftBank, and major chipmakers Nvidia, AMD, and Broadcom.
Some investors have questioned the sustainability of these deals, given that OpenAI remains unprofitable and relies on future revenue to cover its growing infrastructure costs. CEO Sam Altman, however, dismissed such concerns, saying on a recent podcast with Microsoft CEO Satya Nadella that “revenue is growing steeply” and that OpenAI is “making a forward bet” on continued expansion.
Amazon already serves as the primary cloud provider for Anthropic, one of OpenAI’s top competitors and the developer of the Claude chatbot.
World's first flying car factory starts trial production in China
XPENG AEROHT, the flying car affiliate of Chinese electric vehicle maker XPENG, on Monday began trial production at the world's first intelligent factory for mass-produced flying cars, a milestone in the commercialization of next-generation transport.
Located in the Huangpu district of Guangzhou, the capital of south China's Guangdong Province, the 120,000-square-meter plant has already rolled out the first detachable electric aircraft of its modular flying car, the "Land Aircraft Carrier."
The facility is designed to have an annual production capacity of 10,000 detachable aircraft modules, with an initial capacity of 5,000 units. It has the largest production capacity of any factory of its kind, and will be capable of assembling one aircraft every 30 minutes once fully operational.
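Those figures imply an intensive but plausible shift pattern. Here is a quick consistency check; the two-shift (16-hour) production day is my assumption, not the company's.

```python
# Sanity check on the stated factory throughput.
# Only the 30-minute cycle time and 10,000-unit annual capacity
# come from the article; the 16-hour production day is assumed.

cycle_minutes = 30
annual_capacity = 10_000

aircraft_per_hour = 60 / cycle_minutes              # 2.0
hours_needed = annual_capacity / aircraft_per_hour  # 5,000 hours/year
days_needed = hours_needed / 16                     # ~312 two-shift days

print(f"{hours_needed:.0f} production hours, about {days_needed:.0f} "
      "two-shift days per year")
```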
XPENG AEROHT has secured orders for nearly 5,000 flying cars since the product's release, and mass production and delivery are scheduled for 2026, the company said.
The flying car comprises a six-wheel ground vehicle, referred to as the "mothership," and a detachable electric vertical take-off and landing (eVTOL) aircraft.
The eVTOL aircraft offers both automatic and manual flight modes. Its automatic mode enables smart route planning, as well as one-touch take-off and landing.
At about 5.5 meters in length, the vehicle can be driven on public roads with a standard licence and parked in regular spaces.
New Australian paint cools homes, collects water from air
Scientists in Australia have developed a nanoengineered, paint-like polymer coating that can passively cool buildings and capture water directly from the air — all without any energy input.
The invention could help address global water scarcity while reducing the need for energy-intensive cooling systems, according to a statement released Monday by the University of Sydney, which led the research in collaboration with start-up Dewpoint Innovations.
The research team developed a porous polymer coating capable of reflecting up to 97 percent of sunlight and radiating heat into the atmosphere. This allows surfaces coated with the material to remain up to six degrees Celsius cooler than the surrounding air, even under direct sunlight, the statement said.
This cooling process creates ideal conditions for atmospheric water vapour to condense into droplets on the surface — “much like steam condensing on a bathroom mirror,” the researchers explained.
“This technology not only advances the science of cool roof coatings but also opens the door to sustainable, low-cost and decentralised sources of fresh water — a critical need in the face of climate change and growing water scarcity,” said Professor Chiara Neto of the University of Sydney Nano Institute and the School of Chemistry.
In a six-month outdoor trial on the rooftop of the Sydney Nanoscience Hub, the coating was able to collect dew over 32 percent of the year, harvesting up to 390 millilitres of water per square metre daily — enough for a 12-square-metre surface to meet one person’s daily drinking water needs.
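That drinking-water claim holds up to simple arithmetic; in the sketch below, the 2-3 litre daily intake guideline is a commonly used benchmark assumed for illustration, not a figure from the study.

```python
# Arithmetic behind the rooftop water-harvesting claim.
# Yield and roof area are from the article; the intake guideline
# (2-3 L/person/day) is an assumed benchmark.

yield_ml_per_m2 = 390   # peak daily harvest per square metre
roof_area_m2 = 12       # surface size cited in the trial

daily_litres = yield_ml_per_m2 * roof_area_m2 / 1000
print(f"Peak daily harvest: {daily_litres:.2f} L")   # 4.68 L
print("Above a typical 2-3 L/day drinking-water guideline.")
```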
Unlike traditional white paints, the new polymer relies on its internal porous structure rather than ultraviolet-reflective pigments such as titanium dioxide. This not only enhances durability but also reduces glare, according to the study, published in Advanced Functional Materials.
“Imagine roofs that not only stay cooler but also make their own fresh water — that’s the promise of this technology,” Professor Neto added.
Who is Zico Kolter? Carnegie Mellon professor leading OpenAI’s powerful AI safety panel
A Carnegie Mellon University professor now holds one of the most influential positions in the global technology landscape — overseeing when the world’s most advanced artificial intelligence systems can be safely released.
Zico Kolter, a computer science professor and director of Carnegie Mellon’s machine learning department, leads OpenAI’s four-member Safety and Security Committee, which has the authority to halt the release of any AI model deemed unsafe.
The committee’s mandate ranges from preventing misuse of powerful AI systems — such as those capable of designing weapons of mass destruction — to ensuring new chatbots do not harm users’ mental health.
“We’re not just talking about existential threats,” Kolter told The Associated Press. “We’re talking about the entire spectrum of safety and security issues that arise with widely used AI systems.”
Oversight strengthened by regulatory deal
Kolter has chaired OpenAI’s safety panel for over a year, but his role gained new prominence last week after regulators in California and Delaware made his oversight a key condition for approving OpenAI’s new corporate structure — a move designed to help the ChatGPT maker raise funds more easily while maintaining its non-profit mission.
The agreements, reached with California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings, reaffirm that safety and security decisions must take precedence over financial interests as OpenAI transitions into a public benefit corporation under the supervision of its non-profit foundation.
Kolter will sit on the non-profit board but not on the for-profit board. However, he will have “full observation rights” — including access to board meetings and all safety-related information — according to Bonta’s memorandum of understanding. Kolter is the only individual named in that document apart from Bonta himself.
Independence from OpenAI leadership
Kolter said the agreements confirm his committee’s authority to delay or block releases of new AI systems until safety mitigations are in place. He declined to say whether the panel has ever exercised that power.
The committee includes three other members who also serve on the OpenAI board, among them former U.S. Army General Paul Nakasone, who previously led the U.S. Cyber Command. CEO Sam Altman stepped down from the panel last year, a move widely seen as reinforcing its independence.
“We can request delays of model releases until certain conditions are met,” Kolter said, emphasizing that future concerns would cover everything from cybersecurity vulnerabilities to the misuse of AI models for malicious purposes.
Balancing innovation with safety
Kolter noted that new types of AI agents bring unprecedented risks. “Do these models enable malicious users to have much higher capabilities — like designing bioweapons or carrying out cyberattacks?” he asked. “And what about the psychological impact of interacting with these systems? All of these need to be addressed from a safety standpoint.”
OpenAI has faced growing scrutiny this year, including a wrongful-death lawsuit from California parents who alleged that their teenage son took his life after extensive interactions with ChatGPT.
From AI researcher to safety overseer
Kolter, 42, began studying artificial intelligence as a Georgetown University freshman in the early 2000s — when “machine learning” was still considered a niche academic field.
“When I started, we used the term ‘machine learning’ because ‘AI’ was viewed as an old discipline that had overpromised and underdelivered,” he recalled.
A longtime observer of OpenAI, Kolter even attended the company’s launch event in 2015. Still, he said few experts foresaw the current pace of progress. “Even those deeply involved in AI research didn’t anticipate the explosion of capabilities — and the corresponding risks — that we’re seeing now,” he said.
Skepticism and cautious optimism
AI safety advocates are closely watching OpenAI’s restructuring and Kolter’s work. Nathan Calvin, general counsel of the AI policy nonprofit Encode, described himself as “cautiously optimistic.”
“I think he’s a good choice for the role — someone with the right background and approach,” Calvin said. “If the safety board members take their commitments seriously, this could be a major step forward. But it could also end up being just words on paper. We don’t yet know which it will be.”
Source: AP