AI is at its best not when it replaces human thinking: NGOAB DG
An orientation workshop on “NGOAB Online Solution and Artificial Intelligence (AI) for NGOs” was held in Dhaka on Thursday, to introduce new digital systems and promote the responsible use of AI to modernise NGO service delivery, improve efficiency, and strengthen governance and accountability.
The workshop called for continued collaboration among government, development partners, and civil society to scale up digital innovation while safeguarding rights and promoting trust in emerging technologies.
The workshop was organised by the NGO Affairs Bureau, in partnership with the United Nations Development Programme (UNDP) and with support from the Australian government under the Institutional Strengthening for Promoting Accelerated Transformation (ISPAT) project.
The workshop brought together government officials, development partners, and NGO representatives to explore how digital innovation and AI can enhance service delivery and operational effectiveness across the sector.
Barrister Md. Khalilur Rahman Khan, Director General (In-charge), NGOAB, noted the critical role of NGOs in Bangladesh’s development.
“AI is at its best not when it replaces human thinking, but when it sharpens it. It should serve as a tool we guide, not a force that guides us,” he said while speaking as the chief guest, emphasising balanced use of technology.
Chairing the session, Dr. K. M. Mamun Uzzaman, Director, NGOAB, highlighted the urgency of adapting to technological change. “Adopting new technologies is now a necessity, but it must be done with accountability and ethical consideration,” he said.
Asif Kashem, Senior Programme Manager, Australian High Commission, underscored the importance of impact and responsible use. “Technology alone is not sufficient. We need to ensure it benefits people,” he said, highlighting the need for safety and data privacy.
Sheela Tasneem Haq, Senior Governance Specialist, UNDP, emphasised responsible and inclusive AI adoption. “We are the pilot, and AI is the co-pilot,” she noted, underscoring the importance of addressing data bias, ethics, and the digital divide.
She also stressed the need for public trust and multi-stakeholder engagement in managing risks such as misinformation and online harm.
A key highlight was the live demonstration of the NGOAB Online Solution, which marks a shift from paper-based processes to a fully digital system enabling online registration, application tracking, document submission, and integrated payments, UNDP said in a media release.
Participants engaged actively, raising practical questions on system usability, timelines, and future features.
Another session focused on practical applications of AI for NGOs, including analytics, compliance support, content generation, and chatbot services.
Technical experts demonstrated tools for reporting, data analysis, and workflow efficiency, emphasising responsible use and verification.
3 days ago
Microsoft adds high-volume email sending to Exchange Online
Microsoft has rolled out High Volume Email (HVE) for Exchange Online, enabling organizations to send large volumes of automated internal messages without encountering traditional sending limits designed for person-to-person emails.
HVE is a tenant-native feature built specifically for application-to-person communications. It uses dedicated HVE accounts separate from user or shared mailboxes, ensuring automated messages do not interfere with normal employee email workflows. All mail sent through HVE remains within Microsoft infrastructure and is subject to Exchange Online’s existing security, compliance, and policy controls. Administrators can configure and manage HVE through the Mail flow section in the Exchange admin center.
Key use cases
HVE is intended for transactional and operational messaging to internal recipients. Microsoft identifies its primary applications as payroll and HR notifications, IT monitoring and service alerts, line-of-business application messaging, device-driven workflows such as printers and scanners, and security or compliance alerts.
Jeremy Carlson, Director of Product Marketing for M365 Portfolio Growth at Microsoft, clarified that HVE “does not include campaign tooling, templates, or engagement tracking. Instead, it supports the high-trust, high-reliability use cases that organizations depend on every day.”
Pricing and availability
HVE is now generally available. Usage will be metered starting June 1, 2026, based on the number of expanded email recipients. Microsoft has set the cost at $42 per one million recipients, equivalent to $0.000042 per recipient, aligning pricing with internal email volume.
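The per-recipient arithmetic above is simple enough to sketch in a few lines. The $42-per-million rate comes from Microsoft's announcement; the sample volume below is a hypothetical illustration, not a figure from the article:

```python
# Metered HVE pricing: $42 per one million expanded recipients,
# i.e. $0.000042 per recipient (rate from Microsoft's announcement).
RATE_PER_MILLION = 42.0

def hve_monthly_cost(recipients: int) -> float:
    """Estimated metered cost in USD for a given expanded-recipient count."""
    return recipients / 1_000_000 * RATE_PER_MILLION

# Hypothetical example: 250,000 internal notifications in a month.
print(f"${hve_monthly_cost(250_000):.2f}")  # $10.50
```

At this rate, even very large internal notification volumes stay inexpensive: a full million recipients meters at $42.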
Infrastructure advantages
For organizations previously using on-premises Exchange servers, third-party SMTP relays, or repurposed user mailboxes to handle high-volume automated emails, HVE provides a streamlined solution to consolidate this traffic within Microsoft 365. The service requires no additional infrastructure or third-party dependencies, running natively within an existing Exchange Online tenant.
#From helpnetsecurity.com
3 days ago
Meta, Snapchat, TikTok and YouTube aren't fully complying with child account ban: Australia
Australia’s online safety watchdog on Tuesday said it is considering court action against Meta, Snap Inc., TikTok and Alphabet Inc., alleging they are not doing enough to keep children under 16 off their platforms.
eSafety Commissioner Julie Inman Grant released her first compliance report since the law took effect on Dec. 10, calling on 10 platforms to remove all Australian account holders younger than 16.
The report said that although around 5 million Australian accounts had been deactivated, a significant number of children were still able to retain accounts, create new ones and bypass age assurance systems.
Inman Grant said her office had “significant concerns” about the compliance of half of the platforms and was gathering evidence to determine whether they failed to take “reasonable steps” to prevent underage users.
Courts could impose fines of up to 49.5 million Australian dollars (about $33 million) for systemic non-compliance. A decision on possible legal action is expected by midyear.
Platforms not under investigation include Reddit, X, Kick, Threads and Twitch.
Communications Minister Anika Wells accused some platforms of doing the bare minimum to comply with the law, saying they do not want the legislation to succeed.
The watchdog identified “poor practices” such as allowing unlimited attempts to pass age verification and prompting users to retry even after declaring themselves underage.
Meta said it is committed to complying with the law but acknowledged that accurately determining users’ ages remains a challenge.
Snap Inc. said it had locked 450,000 accounts in line with the rules and continues to take action.
TikTok declined to comment, while Alphabet did not immediately respond.
Lisa Given of RMIT University said courts would ultimately decide what constitutes “reasonable steps,” noting that age-verification technologies are not fully reliable.
Reddit has filed one of two constitutional challenges to the law in Australia’s High Court, along with the Digital Freedom Project, arguing it infringes on implied freedom of political communication.
A preliminary hearing is scheduled for May 21 to set a date for further proceedings, Reddit said.
4 days ago
Australia warns social media giants over under-16 ban compliance
Australia’s internet regulator has warned that major social media platforms are not doing enough to keep children under 16 off their services, despite a law that came into effect in December 2025.
The legislation prohibits anyone under 16 from using 10 platforms, including Facebook, Instagram, Snapchat, TikTok, and YouTube. However, eSafety expressed “significant concerns” about how these companies are implementing the restrictions.
The regulator’s first report since the ban found several compliance issues, such as allowing under-16 users to bypass age verification, insufficient measures to stop new underage accounts, and limited reporting options for parents to flag violations.
In the first month after the ban, about 4.7 million accounts were restricted or removed, according to eSafety. Commissioner Julie Inman Grant said the regulator will now begin actively enforcing the rules and gathering evidence to determine whether platforms have failed to take reasonable steps to prevent underage access.
Meta, which owns Facebook, Instagram, WhatsApp, Messenger, and Threads, said it is committed to complying with the law but highlighted that accurate age verification is a challenge across the industry. Snap, which operates Snapchat, said it had locked 450,000 accounts and continues to block more daily.
Despite the ban, many under-16s are still able to access social media. A BBC visit to a Sydney school found that most students who used social media before the law remained active on the platforms, some bypassing age checks entirely.
Parents have largely welcomed the policy, seeing it as support in limiting their children’s social media use. Critics, however, argue that educating children on online risks would be more effective than banning them. Some also say the law disproportionately affects minority groups, including rural, disabled, and LGBTQ+ youth, who often rely on online communities for support.
Inman Grant acknowledged that the reform is challenging entrenched social media habits built over two decades but said platforms are capable of complying immediately. She emphasized the role of parents as key partners in enforcing the ban and said Australia will continue pushing for cultural change despite resistance from tech companies.
#From BBC
5 days ago
DeepSeek chatbot suffers over seven-hour outage in China
DeepSeek’s chatbot experienced a major outage of more than seven hours overnight in China, prompting the AI firm to release multiple updates to fix the problem.
Users first reported issues on Sunday evening, according to Downdetector. DeepSeek’s status page noted an initial disruption at 9:35 p.m., which was marked resolved two hours later. However, further performance problems emerged on Monday, taking until 10:33 a.m. to fully address.
The exact cause of the outages remains unclear, and DeepSeek did not immediately respond to requests for comment.
Extended downtime is rare for DeepSeek, a globally used AI app that has maintained roughly 99% uptime since the launch of its R1 model in January 2025. The company has been recognized as one of China’s leading AI innovators.
Industry speculation suggests that DeepSeek, headquartered in Hangzhou, may be preparing a major update to follow its high-profile debut on January 20 last year. The anticipated rollout has fueled competition, prompting rivals such as Alibaba Group, ByteDance, and Tencent Holdings to release new AI models over the Lunar New Year holiday. While anticipation for DeepSeek’s next move remains high, the company has not disclosed a timeline.
#From Bloomberg
6 days ago
Spyware links sent amid missile strikes highlight Iran-linked cyber threat
As Iranian missiles hit Israel, some Android users received text messages promising real-time updates on nearby bomb shelters. But instead of helpful information, the links installed spyware, giving hackers access to cameras, location data, and personal information.
The attack, linked to Iran, shows how cyber operations are now a key part of modern warfare. Experts say Tehran and its allied groups are using digital tactics to make up for military disadvantages, combining hacking, disinformation, and artificial intelligence.
Gil Messing, chief of staff at cybersecurity firm Check Point Research, said the texts were timed to coincide with missile strikes, creating a “digital-physical” attack. “This was sent to people while they were running to shelters,” he said. “The exact timing is unprecedented.”
Even if a ceasefire is reached, cyberattacks are expected to continue because they are cheap, fast, and focus on spying, theft, and intimidation rather than outright destruction.
High-volume, low-impact attacks
Most attacks so far have caused little direct damage but forced U.S. and Israeli companies to patch security weaknesses. DigiCert, a Utah-based cybersecurity firm, has tracked nearly 5,800 attacks by about 50 Iran-linked groups targeting networks in the U.S., Israel, and Gulf countries. Many attacks aim to intimidate rather than inflict major damage.
Recently, a pro-Iranian group claimed responsibility for breaching an account of FBI Director Kash Patel, posting old personal documents. Such attacks are often designed to boost supporters’ morale and unsettle opponents.
Hospitals and data centers under threat
Iran is likely to target weak points in U.S. infrastructure, including hospitals, supply chains, and critical data centers. Recent targets included Michigan-based medical company Stryker and another, unnamed healthcare firm, both hit with ransomware that aimed to disrupt rather than demand money.
Cynthia Kaiser of Halcyon said, “There is a deliberate focus on the medical sector, and targeting is expected to increase.”
AI’s role in cyber warfare
Artificial intelligence is speeding up attacks and spreading false information, including deepfakes. One fake image of sunken U.S. warships received over 100 million views. Iranian authorities also control internet access to shape domestic perceptions of the war, sometimes labeling real footage as fake.
In response, the U.S. created a Bureau of Emerging Threats last year to counter risks from new technologies. AI also helps defenders respond faster, according to Director of National Intelligence Tulsi Gabbard.
While Russia and China remain the largest cyber threats, Iran has shown it can target American systems, including political campaigns, water plants, military networks, and online movements opposing Israel.
7 days ago
Yahoo bets on AI tool Scout to revive search ambitions
Yahoo is turning to artificial intelligence with its new answer engine, Scout, in a fresh attempt to regain its position in online search.
The AI-powered tool provides direct answers along with links to supporting sources. In a response to an AP query, Scout said Yahoo’s decline showed how early success can fade without constant innovation.
Yahoo CEO Jim Lanzone hopes to use AI to tap into the company’s global base of about 700 million users who still rely on its finance, sports, news and email services despite years of setbacks.
Lanzone took charge after Apollo Global Management acquired Yahoo for $5 billion in 2021, far below its peak value of $125 billion during the dot-com boom of the early 2000s. Before that, Verizon had bought Yahoo’s core business in 2017 but failed to integrate it successfully with AOL.
Years of missteps under multiple leaders weakened Yahoo’s standing, though it managed to survive, unlike some former tech giants, analysts say.
Since taking over, Lanzone has focused on cutting underperforming units, selling assets like TechCrunch and shutting down AOL’s dial-up service. He says Yahoo is now profitable and generating billions in revenue.
The company has also upgraded key products, including its fantasy sports platform and email service, which remains the second largest after Gmail.
With Scout now rolled out to 250 million users in the US, Yahoo aims to offer simpler and more personalised search results. However, it faces tough competition from Google and AI platforms like ChatGPT, Claude and Perplexity.
Yahoo is currently using AI technology licensed from Anthropic to run Scout. Lanzone said the tool is designed to deliver answers without mimicking human conversation.
Founded in the 1990s as a web directory, Yahoo lost its edge after shifting focus away from search, allowing Google to dominate the space.
8 days ago
Melania Trump shares spotlight with humanoid robot at White House tech event
Melania Trump drew attention at a recent education and technology summit in Washington, but this time she shared the spotlight with a humanoid robot.
On Wednesday, the former first lady attended the final day of a global summit held in the White House East Room, organized under her “Fostering the Future Together” initiative. The event brought together international representatives to explore how education, innovation, and technologies like artificial intelligence can help empower children.
Melania Trump entered the venue walking alongside the robot, both moving slowly down a red carpet. Just before entering the East Room, she paused while the robot continued forward, circling a table of panelists before stopping at the center of the room.
After briefly scanning the audience, the robot introduced itself as “Figure 03,” a humanoid created in the United States. It expressed gratitude for being invited and highlighted its role in supporting efforts to advance children’s education through technology. The robot also greeted attendees in multiple languages before exiting the room the same way it had entered.
Melania Trump later thanked the robot, joking that it was her first American-made humanoid guest at the White House.
The robot, developed by California-based Figure AI, was unveiled in October 2025 as a third-generation model designed to assist with everyday household chores such as cleaning, laundry, and dishwashing.
Figure AI’s CEO, Brett Adcock, said he was proud to see the robot become the first of its kind to appear at the White House. The company is among several competitors, including Boston Dynamics, Tesla, and firms in China, working to develop advanced human-like robots capable of performing practical tasks.
10 days ago
Meta ordered to pay $375m over misleading claims on child safety
A court in New Mexico has ordered Meta Platforms to pay $375 million in damages after a jury found the company misled users about the safety of its platforms for children.
The verdict followed a lawsuit brought by New Mexico Attorney General Raul Torrez, who described the ruling as “historic” and said it marked the first successful case by a US state against Meta over child safety concerns.
The jury concluded that Meta, which owns Facebook, Instagram and WhatsApp, violated the state’s Unfair Practices Act by misleading the public about the risks faced by young users. Jurors found that the company’s platforms exposed children to sexually explicit content and contact with predators.
The case was heard over seven weeks, during which jurors reviewed internal company documents and heard testimony from former employees indicating that Meta was aware of such risks.
Among them was whistleblower Arturo Béjar, who told the court that his internal experiments showed underage users on Instagram were being served sexualised content. He also said his own daughter had received inappropriate sexual advances from a stranger on the platform.
Prosecutors also presented internal research suggesting that at one stage, 16% of Instagram users reported encountering unwanted nudity or sexual activity within a single week.
Meta, led by chief executive Mark Zuckerberg, rejected the findings and said it plans to appeal the decision. A company spokesperson said Meta continues to invest in safety measures and acknowledged the challenges of identifying harmful content and bad actors online, while maintaining confidence in its efforts to protect young users.
The total penalty of $375 million was calculated after the jury determined there had been thousands of violations, each carrying a potential fine of up to $5,000.
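A back-of-envelope check of the penalty arithmetic: the $375 million total and the $5,000 per-violation cap are from the verdict as reported, but the assumption that every violation drew the maximum fine is ours, so the result is a lower bound on the violation count, not a figure from the case:

```python
# Implied minimum violation count if each violation drew the
# maximum $5,000 fine. (Assumption: max fine per violation; the
# jury's actual per-violation amounts were not reported.)
TOTAL_PENALTY = 375_000_000  # USD, from the verdict
MAX_FINE = 5_000             # USD per violation, statutory cap

implied_violations = TOTAL_PENALTY // MAX_FINE
print(implied_violations)  # 75000
```

In other words, the total implies at least 75,000 violations, with the count higher if any individual fines fell below the cap.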
Meta argued that it has taken steps in recent years to improve user safety, including launching “Teen Accounts” on Instagram in 2024 to give younger users greater control, and introducing a feature last month to alert parents if their children search for self-harm-related content.
The company is also facing a separate trial in Los Angeles, where a woman alleges she became addicted to platforms such as Instagram and Google-owned YouTube during her childhood due to their design.
Thousands of similar lawsuits are currently pending across US courts.
New Mexico filed the case in 2023, accusing Meta of directing young users towards sexually explicit material, child abuse content, and even solicitation and trafficking-related risks through its recommendation algorithms.
“Meta executives knew their products harmed children, ignored warnings from their own staff, and misled the public,” Torrez said, adding that the jury’s decision reflects growing concern among families, educators and child safety advocates.
#From BBC
11 days ago
Three charged in US with conspiring to smuggle AI servers to China
A senior vice president of Super Micro Computer and two associates have been charged in the United States with conspiring to smuggle billions of dollars’ worth of computer servers equipped with advanced chips to China in violation of U.S. export control laws.
Federal prosecutors said the defendants diverted large quantities of high-performance servers assembled in the U.S. to China between 2024 and 2025. Investigators allege they used fabricated documents, staged equipment to pass audits and relied on a pass-through company to conceal their activities and true customers.
The accused include Yih-Shyan “Wally” Liaw, 71, a U.S. citizen and senior vice president and board member of Super Micro Computer; Ting-Wei “Willy” Sun, 44, a company contractor; and Ruei-Tsang “Steven” Chang, a Taiwan-based sales manager who remains at large. Liaw was arrested in California and released on bail, while Sun was held pending a bail hearing.
According to court papers, Liaw and Chang directed a Southeast Asian firm to place about $2.5 billion in server orders from the California-based company, with at least $510 million later diverted to China.
Super Micro said the alleged conduct violated company policies and that it is cooperating with investigators. Nvidia said it maintains strict compliance measures and does not support systems diverted in breach of export regulations.
16 days ago