Tech-News
Militant groups experimenting with AI as risks rise
While the world races to leverage artificial intelligence, militant groups are also exploring the technology, even if their exact objectives remain unclear.
US national security experts and intelligence agencies warn that extremist organizations could use AI to recruit members, produce realistic deepfake content, and enhance cyberattacks.
A user on a pro-Islamic State website last month encouraged supporters to incorporate AI into their operations. “One of the best things about AI is how easy it is to use,” the user wrote in English.
“Some intelligence agencies worry that AI will contribute (to) recruiting,” the user continued. “So make their nightmares into reality.”
Though IS no longer controls territory in Iraq and Syria, the group operates as a decentralized network sharing a violent ideology. Experts say its early recognition of social media’s power for recruitment and disinformation makes its interest in AI unsurprising.
For loosely organized, under-resourced extremist groups—or even a single individual with internet access—AI can mass-produce propaganda or deepfakes, amplifying influence.
“For any adversary, AI really makes it much easier to do things,” said John Laliberte, former NSA vulnerability researcher and CEO of cybersecurity firm ClearVector. “With AI, even a small group that doesn't have a lot of money is still able to make an impact.”
How extremists are using AI
Since programs like ChatGPT became widely available, militant groups have experimented with AI to generate realistic photos and videos. Combined with social media algorithms, such content can attract new recruits, intimidate opponents, and spread propaganda on an unprecedented scale.
Two years ago, extremist groups circulated fabricated images of the Israel-Hamas war showing bloodied, abandoned children in destroyed buildings. The images fueled outrage and polarization while obscuring the actual horrors of the conflict. Similar tactics were used by violent groups in the Middle East and antisemitic organizations abroad.
Following a concert attack in Russia last year that killed nearly 140 people, AI-generated propaganda videos were widely shared online to recruit supporters.
IS has also created deepfake audio of its leaders reciting scripture and used AI to rapidly translate messages into multiple languages, according to SITE Intelligence Group, which monitors extremist activity.
‘Aspirational’ for now
Experts say these groups still lag behind state actors like China, Russia, or Iran and consider advanced uses of AI “aspirational.”
But Marcus Fowler, former CIA agent and CEO of Darktrace Federal, warned that the risks are growing as accessible AI tools expand. Hackers already use synthetic audio and video for phishing, impersonating officials to access sensitive networks. AI can also automate cyberattacks and generate malicious code.
A greater concern is that extremists could attempt to employ AI in developing biological or chemical weapons, compensating for technical gaps, a risk highlighted in the Department of Homeland Security’s recent Homeland Threat Assessment.
“ISIS got on Twitter early and found ways to use social media to their advantage,” Fowler said. “They are always looking for the next thing to add to their arsenal.”
Efforts to counter the threat
Lawmakers are pushing measures to address these dangers.
Sen. Mark Warner of Virginia, top Democrat on the Senate Intelligence Committee, said AI developers should be able to share information about malicious uses by extremists, hackers, or foreign spies.
“It has been obvious since late 2022, with the public release of ChatGPT, that the same fascination and experimentation with generative AI the public has had would also apply to a range of malign actors,” Warner said.
House lawmakers recently learned that IS and al-Qaida have held AI training workshops for their supporters.
Legislation passed by the U.S. House last month requires homeland security officials to assess AI threats from extremist groups annually.
Guarding against AI misuse, Rep. August Pfluger, R-Texas, said, is similar to preparing for conventional attacks.
“Our policies and capabilities must keep pace with the threats of tomorrow,” he said.
8 hours ago
Militant groups experimenting with AI, raising security concerns
As artificial intelligence (AI) spreads globally, militant groups are experimenting with the technology, raising concerns among national security experts. Extremist organizations could use AI to recruit followers, produce realistic deepfakes, and refine cyberattacks.
A recent post on a pro-Islamic State (IS) forum encouraged supporters to integrate AI into operations, highlighting its ease of use. IS, once a territorial force in Iraq and Syria and now a decentralized network, has long exploited social media for recruitment and propaganda. Experts warn AI allows even small, poorly resourced groups to amplify their influence.
Researchers say extremist groups have created AI-generated photos and videos depicting conflict scenarios to recruit members and spread disinformation. AI is also used to produce deepfake audio of leaders and translate messages rapidly into multiple languages.
While sophisticated AI use remains “aspirational,” officials caution the risks are growing. Hackers are already using synthetic media for phishing and cyberattacks, and militant groups could potentially pursue AI-assisted chemical or biological weapons.
U.S. lawmakers are calling for urgent measures, including better information sharing among AI developers and annual assessments of AI threats posed by extremist organizations.
Source: AP
14 hours ago
Humanoid robots draw attention at Silicon Valley summit amid lingering doubts
Once viewed as an unattractive investment due to high costs and complexity, humanoid robots are again in the spotlight as advances in artificial intelligence revive ambitions to create machines that move and work like humans.
That renewed interest was on display at the Humanoids Summit in Mountain View, where more than 2,000 attendees, including engineers from Disney, Google and numerous startups, gathered to demonstrate emerging technologies and discuss how to speed up development. Summit founder and venture capitalist Modar Alaoui said many researchers now believe humanoid robots, or other physical forms of AI, could eventually become commonplace, though the timeline remains uncertain.
Despite the enthusiasm, skepticism was widespread. Experts cautioned that major technical challenges remain before robots can serve as reliable workers in homes or offices. Cosima du Pasquier, founder of Haptica Robotics, said significant research gaps still need to be addressed, particularly in areas such as touch and dexterity.
China currently leads the sector, backed by government incentives and a national push to build a humanoid robotics ecosystem by 2025, according to McKinsey & Company. Chinese-made robots dominated displays at the summit, while U.S. firms are benefiting from advances in generative AI that help robots better understand and navigate their environments.
Even so, veteran roboticists warn that fully capable humanoid robots are still a distant goal, with questions remaining over whether current investments will deliver the promised breakthroughs.
Source: AP
1 day ago
South Africa relaxes affirmative action rules, clearing path for Starlink after Musk criticism
South Africa has adjusted its communications licensing rules to allow Elon Musk’s Starlink and other foreign-owned satellite internet companies to operate without transferring a 30% ownership stake to Black or other non-white South Africans.
Under the revised policy, announced Friday by Communications Minister Solly Malatsi, foreign firms seeking licenses in the communications sector can meet affirmative action requirements through alternative “equity equivalent” measures. These may include investments in skills development, training programs, or other initiatives designed to support historically disadvantaged communities, rather than direct shareholding.
Similar provisions already exist for foreign companies operating in other industries across South Africa.
Musk, who was born in South Africa, has previously criticized the country’s ownership rules, calling them “openly racist.” Earlier this year, he claimed on social media that Starlink was barred from operating in the country because he is not Black. Former U.S. President Donald Trump has also condemned South Africa’s affirmative action framework, portraying it as discriminatory against white people.
The regulations stem from South Africa’s Broad-Based Black Economic Empowerment policy, a key post-apartheid initiative intended to address decades of racial inequality under white minority rule. While the policy remains central to the government’s transformation agenda, critics argue it discourages foreign investment.
Starlink, a subsidiary of SpaceX, already provides low-Earth orbit satellite internet services in more than a dozen African nations, including several that border South Africa.
Minister Malatsi said the updated policy could help expand fast and reliable internet access, particularly in rural and underserved parts of the country, where connectivity remains limited.
2 days ago
Google facing new EU antitrust probe over content used for AI
Google is facing fresh antitrust scrutiny in Europe as EU regulators on Tuesday opened a new investigation into the company’s use of online content to develop its artificial intelligence models and services.
The European Commission, the bloc’s top competition watchdog, is examining whether Google violated EU rules by using content from web publishers and YouTube uploads for AI purposes without compensating creators or allowing them to opt out. Regulators are particularly concerned about two services — AI Overviews, which produces automated summaries at the top of search results, and AI Mode, which provides chatbot-style responses.
The probe will also assess whether Google uses YouTube videos under similar terms to train its generative AI models while restricting access for rival developers.
Officials said they aim to determine whether Google gave itself an unfair competitive edge through restrictive conditions or privileged access to content.
Google said the complaint “risks stifling innovation” and vowed to continue working with news and creative industries as they transition into the AI era.
The investigation falls under the EU’s traditional competition rules, not the newer Digital Markets Act designed to curb Big Tech dominance.
EU competition chief Teresa Ribera said AI innovation must not undermine core societal principles.
Last week, the Commission launched a separate antitrust probe into WhatsApp’s AI policy and fined Elon Musk's platform X €120 million for digital rule violations, prompting criticism from Trump administration officials.
The EU is “agnostic” about company nationality and focuses solely on potential anti-competitive behavior, spokeswoman Arianna Podesta said.
Google will be able to respond to the concerns, and U.S. authorities have been notified. The case has no deadline and could lead to fines of up to 10% of Google’s global annual revenue.
Source: AP
5 days ago
Microsoft to invest $17.5 billion in India for AI and Cloud infrastructure
Microsoft on Tuesday announced its largest-ever investment in Asia, pledging $17.5 billion over the next four years to expand India’s cloud computing and artificial intelligence infrastructure.
CEO Satya Nadella revealed the plan on X following a meeting with Indian Prime Minister Narendra Modi in New Delhi. He said the investment aims to help India develop “infrastructure, skills, and sovereign capabilities” to support its AI ambitions.
The announcement highlights intensifying global competition among tech giants in India, one of the world’s fastest-growing digital markets. In October, Google committed $15 billion to establish its first AI hub in Visakhapatnam.
Nadella’s three-day India visit includes policy discussions and participation in AI-focused events in Bengaluru and Mumbai. The government has set ambitious targets to become a global AI and semiconductor hub, offering incentives to attract multinational technology firms.
Microsoft, which has been in India for over three decades and employs more than 22,000 people, plans to scale up cloud and data center operations nationwide, including a new hyperscale data center expected to go live by mid-2026.
Source: AP
6 days ago
Bangladesh inks MoU with Thales Alenia Space to boost earth observation capabilities
Bangladesh Satellite Company Limited (BSCL) on Tuesday signed a Memorandum of Understanding (MoU) with Thales Alenia Space of Italy to enhance the country’s capacity in Earth Observation (EO) systems and expand the use of satellite imagery.
The MoU was signed at the conference room of the Posts and Telecommunications Division at the Secretariat in the presence of Foyez Ahmed Tayyeb, Special Assistant to the Chief Adviser and Antonio Alessandro, Ambassador of Italy to Bangladesh.
Under the MoU both organisations will collaborate on local skills development, knowledge transfer, and pilot applications of EO data to support national priorities such as disaster management, climate monitoring, agriculture, and urban planning.
Foyez Ahmed thanked Italian Ambassador Antonio Alessandro for Italy’s continued support to Bangladesh, calling Italy “a trusted friend and partner.”
He said Bangladesh is prioritising advanced technologies, especially satellite and space-based solutions to strengthen land management, agriculture, disaster monitoring, climate resilience and national security.
“Every year around 25,000 technology graduates enter our workforce. Creating opportunities for them is our national responsibility,” he said, stressing the need for a National Satellite Image Repository and unified data access for all ministries.
He called for greater collaboration with Thales in capacity building, institutional training, university partnerships and cybersecurity, noting that global best practices can help Bangladesh accelerate digital transformation.
Foyez Ahmed said the MoU will open new doors of cooperation between Bangladesh and Italy in emerging technologies.
Italian Ambassador Antonio Alessandro expressed his pleasure at attending the ceremony.
He highlighted the significance of the partnership in Earth observation and satellite technologies, noting its strategic importance for national planning, disaster management, and environmental monitoring.
Antonio Alessandro said the programme combines optical and radar observation, which is particularly well suited to Bangladesh given its weather conditions.
This partnership marks the beginning of a long-term collaboration between Italy and Bangladesh in advanced technologies.
The Ambassador praised Bangladesh’s commitment to digitalisation, modernisation, and technological advancement, emphasising Italy’s readiness to support the country’s journey toward becoming a technology-driven nation.
Posts and Telecommunications Secretary Abdun Naser Khan, BSCL Managing Director and CEO Dr. Muhammad Imadur Rahman, officials from Thales Alenia Space, and other officials from the Posts and Telecommunications Division were present at the event.
6 days ago
Paramount launches hostile bid for Warner Bros., aiming to top Netflix’s $72 billion offer
Paramount on Monday unveiled a hostile takeover attempt for Warner Bros. Discovery, setting the stage for an intense showdown with rival bidder Netflix for control of the company behind HBO, CNN and one of Hollywood’s most iconic studios — and with it, enormous influence over America’s entertainment industry.
The move comes just days after Warner executives agreed to Netflix’s $72 billion acquisition proposal. Paramount’s rival offer, valued at $74.4 billion, bypasses Warner’s leadership and appeals directly to shareholders with a richer deal that also includes purchasing Warner’s entire business — including its cable networks, which Netflix does not want.
Paramount said it went hostile only after making several earlier proposals that Warner management largely ignored following the company’s October announcement that it was open to a sale.
In its message to investors, Paramount emphasized that its bid includes $18 billion more in cash than Netflix’s and argued it would face fewer regulatory hurdles under President Donald Trump, who often inserts himself into major corporate decisions.
Over the weekend, Trump suggested that a Netflix–Warner merger “could be a problem” because of its potential market dominance and said he planned to review the deal personally.
Netflix, however, insists Warner will ultimately reject Paramount’s offer and that both regulators and Trump will support its acquisition. Co-CEO Ted Sarandos pointed to several conversations he has had with Trump focused on Netflix’s hiring and growth. “The president’s interest is the same as ours — protecting and creating jobs,” Sarandos said Monday.
Political spotlight intensifies
Paramount’s bid gained immediate attention in Washington, where lawmakers from both parties raised concerns about how the competing deals might affect streaming prices, movie theater jobs, and the diversity of media voices.
Paramount CEO David Ellison — whose family has deep ties to Trump — said the company had submitted six proposals to Warner over the last three months. He argued that his offer would strengthen Hollywood, boost competition rather than reduce it, and increase the number of films released in theaters.
Regulatory filings also revealed another possible advantage for Paramount: an investment firm run by Trump’s son-in-law Jared Kushner plans to join the deal. Also participating are sovereign wealth funds from three Persian Gulf countries, widely believed to be Saudi Arabia, Abu Dhabi and Qatar — nations where Trump’s family business has recently expanded with major real estate partnerships.
Recent editorial changes at CBS News, such as installing Bari Weiss as editor-in-chief after Paramount’s acquisition of The Free Press, could also appeal to conservatives who view the network as historically left-leaning.
Trump remains unpredictable
Despite the connections, Trump’s involvement may not favor Paramount consistently. On Monday, he criticized the company for allowing 60 Minutes to interview Rep. Marjorie Taylor Greene, calling the network “NO BETTER THAN THE OLD OWNERSHIP.”
The struggle for control of Warner escalated Friday when Netflix unexpectedly announced it had struck a deal with Warner management to acquire the studios behind “Harry Potter,” HBO Max, and the DC franchise.
Netflix’s proposal includes cash and stock valued at $27.75 per Warner share, for a total enterprise value of $82.7 billion including debt. Paramount is offering $30 per share and values the deal at $108 billion including assumed debt. Its offer expires Jan. 8 unless extended.
However, the two bids are difficult to compare because they would result in different acquisitions. Netflix’s offer only proceeds after Warner spins off its cable networks, meaning CNN and Discovery are excluded — and the transaction is unlikely to close for at least a year.
Although the DOJ typically evaluates such mergers, Trump has broken precedent by taking a hands-on approach, alarming experts. Usha Haley of Wichita State University said Trump’s personal interest may be driven by a desire for “greater control over the media,” pointing to Paramount’s ties to Trump supporter Larry Ellison.
John Mayo, an antitrust expert at Georgetown, noted that although political rhetoric may intensify, DOJ analysts are likely to maintain nonpartisan standards regardless of the administration.
On Monday, Paramount shares rose 9%, Warner Bros. climbed 4.4%, and Netflix stock dropped 3.4%.
6 days ago
AI-powered police body cameras tested on Edmonton’s “high-risk” list
Police in Edmonton, Canada, have begun testing artificial intelligence–enabled body cameras capable of recognizing about 7,000 people on a “high-risk” watch list — a trial that could signal a major shift toward adopting facial recognition technology long deemed too invasive for law enforcement in North America.
The program marks a sharp turn from 2019, when Axon Enterprise, Inc., the top body-camera manufacturer, backed away from facial recognition amid serious ethical concerns. Now, the new pilot — launched last week — is drawing intense scrutiny well beyond Edmonton, the northernmost city in North America with over a million residents.
Barry Friedman, the former chair of Axon’s AI ethics board who once helped block the technology, told the Associated Press he fears the company is moving ahead without adequate transparency, public discussion or expert review.
“These tools carry major costs and risks,” said Friedman, now an NYU law professor. “There must be clear evidence of their benefits before deploying them.”
Axon CEO Rick Smith insists the Edmonton trial is not a full-scale rollout but “early-stage field research” to evaluate performance and determine proper safeguards.
Testing the system in Canada allows the company to gather independent insights and refine oversight frameworks before any future U.S. consideration, Smith wrote in a blog post.
Edmonton police say the system is meant to enhance officer safety by detecting individuals flagged as violent, armed, dangerous or high-risk. The main list contains 6,341 names, with another 724 listed for serious outstanding warrants.
“We want this focused strictly on serious offenders,” said Ann-Li Cooke, Axon’s director of responsible AI.
The outcome could influence policing globally: Axon dominates the U.S. body-camera market and is expanding in Canada, recently beating Motorola Solutions for an RCMP contract. Motorola says it can enable facial recognition on its cameras but has purposely chosen not to use the feature for proactive identification — at least for now.
Alberta’s government mandated police body cameras provincewide in 2023 to increase accountability and speed up investigations. But real-time facial recognition remains divisive, with critics warning of surveillance overreach and racial bias. Some U.S. states have restricted the technology, while the European Union banned public real-time face scanning except in extreme cases.
In contrast, the U.K. has embraced it, with London’s system contributing to 1,300 arrests in two years.
Details about Edmonton’s pilot remain limited. Axon declined to disclose which third-party facial recognition model it uses. Police say the trial runs only in daylight through December due to Edmonton’s harsh winters and lighting challenges.
About 50 officers are participating, but they won’t see any real-time match alerts; results will be reviewed afterward. In the future, police hope it may warn officers of nearby high-risk individuals when responding to calls.
Privacy concerns are growing. Alberta’s privacy commissioner received a privacy impact assessment only on Dec. 2 — the day the trial was publicly announced — and is now reviewing it.
University of Alberta criminologist Temitope Oriola said Edmonton’s past tensions with Indigenous and Black communities make this experiment particularly sensitive. “Edmonton is essentially a testing ground,” he said. “It could lead to improvements — but that’s not guaranteed.”
Axon acknowledges accuracy challenges, especially under poor lighting, long distances or angles that disproportionately affect darker-skinned people. It insists every match will undergo human verification and says part of the test is determining how human reviewers must be trained to reduce risks.
Friedman argues Axon must release its findings — and that decisions about such technology shouldn’t be left to police agencies or private companies alone.
“A pilot can be valuable,” he said. “But it requires transparency and accountability. None of that is happening here. They’ve found a department willing to proceed, and they’re simply moving forward.”
7 days ago
EU fines Elon Musk’s X €120 million for violating social media regulations
The European Union on Friday slapped a 120 million euro ($140 million) fine on X, Elon Musk’s social media platform, for violating the bloc’s digital governance rules — a move likely to heighten tensions with Washington over issues of online speech.
The penalty follows a two-year investigation under the EU’s Digital Services Act (DSA), which requires major platforms to better protect users, curb illegal or harmful content, and increase transparency or face heavy sanctions. This is the first formal non-compliance ruling issued under the DSA.
EU officials said X committed three violations involving transparency, prompting the fine. The decision risks angering U.S. President Donald Trump, whose administration has criticized European digital rules as unfairly aimed at American tech firms.
U.S. Secretary of State Marco Rubio condemned the penalty on X, calling it an attack on American companies and citizens. Musk echoed Rubio’s message. Vice President JD Vance also accused the EU of trying to punish X for refusing to “censor” content.
EU officials rejected those claims. Commission spokesperson Thomas Regnier insisted the enforcement action is based solely on legal standards, not political motives or the nationality of companies.
X did not immediately respond to requests for comment.
Regulators first laid out their concerns in mid-2024, focusing on X’s blue checkmark system, which they described as a “deceptive design” that could mislead users and expose them to manipulation. Prior to Musk’s 2022 takeover, the badges signified verified public figures. Musk’s decision to sell checkmarks for $8 a month, without robust verification, left users unable to reliably assess account authenticity, the Commission said.
Officials also criticized X’s ad transparency database, which — under EU law — must display all ads, their funders, and target audiences. The Commission said X’s database suffers from poor design, limited accessibility, and long delays, hindering efforts to detect fraud and influence operations.
Additionally, the platform was accused of blocking researchers from accessing public data, limiting their ability to study risks faced by European users.
“Misleading users with blue checkmarks, hiding ad information, and restricting researchers have no place online in the EU,” said Henna Virkkunen, the Commission’s executive vice-president for tech sovereignty, security and democracy.
In a separate DSA case concluded Friday, TikTok agreed to modify its ad database to meet EU transparency standards.
9 days ago