AI boom hits Bangladesh amid global race in ‘Fourth Industrial Revolution’
Artificial intelligence (AI) is no longer just about ChatGPT, Gemini, Grok, or Perplexity. In an age charged with the excitement of the Fourth Industrial Revolution, global tech giants are pouring billions of dollars into AI and automation as they compete fiercely, and that technological wave has now reached Bangladesh.
When businessman Selim Hossain called a private bank’s customer service recently, he expected to go through the usual menu options. Instead, he encountered something entirely different.
“An AI answered my call. It responded exactly as a human executive would — it felt like I was talking to a real customer service officer on a personal line,” he said, still astonished by the experience.
It is not just banking. Where once large customer service teams were required, now most tasks are handled through AI chatbots, and these have already evolved beyond text-based chat to live voice calls.
Faraz Ahmed, CEO of Global Leads Telesolution, a local teleservice company, said the industry has transformed drastically over the past five years.
“Previously, handling foreign clients required at least 15–20 team members, sometimes even 50 for large companies. Now, five people can manage an entire teleservice team — thanks to AI and automation,” he said.
He explained that AI can be trained to handle specific client interactions. Human intervention is only needed when an issue arises. “By subscribing to advanced AI software instead of maintaining large teams, we’ve redefined the entire teleservice job structure in Bangladesh,” Faraz added.
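The workflow Faraz describes, where software handles routine interactions and hands over to a person only when something goes wrong, can be shown with a minimal sketch. Everything in it (the canned intents, the confidence threshold, the escalation helper) is an assumption for illustration and is not tied to any specific teleservice platform.

```python
# Minimal sketch of an AI-first support flow with a human fallback.
# Intents, threshold, and helper names are illustrative only.

CANNED_INTENTS = {
    "balance": "Your current balance is available in the mobile app under 'Accounts'.",
    "card block": "Your card has been temporarily blocked. A confirmation SMS is on its way.",
    "branch hours": "Branches are open Sunday to Thursday, 10 am to 4 pm.",
}

CONFIDENCE_THRESHOLD = 0.75  # below this, hand the call to a human agent


def classify_intent(utterance: str) -> tuple[str, float]:
    """Toy intent classifier: keyword match with a crude confidence score."""
    text = utterance.lower()
    for intent in CANNED_INTENTS:
        if intent in text:
            return intent, 0.9
    return "unknown", 0.2


def escalate_to_human(utterance: str) -> str:
    # In a real deployment this would enqueue the call for a live agent.
    return "Connecting you to a customer service officer..."


def handle_call(utterance: str) -> str:
    intent, confidence = classify_intent(utterance)
    if confidence < CONFIDENCE_THRESHOLD:
        return escalate_to_human(utterance)
    return CANNED_INTENTS[intent]


if __name__ == "__main__":
    print(handle_call("What are your branch hours?"))
    print(handle_call("My transfer failed and I was charged twice"))
```

The second call falls below the confidence threshold and is routed to a person, which is the pattern described above: the software answers the routine questions and people handle the exceptions.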
In the private job market, the familiar ‘curriculum vitae’ or ‘résumé’ is also seeing a shift.
According to a study titled ‘Application of Artificial Intelligence in Human Resource Management: A Bangladesh Perspective’ by the University Library of Munich, most Bangladeshi companies now use AI-based automation for CV and résumé screening.
AI is not only handling candidate screening but also the first stages of interviews. Various applications now replicate the functions of an entire HR team.
Mahmudul Hasan, Assistant Manager in the HR department of a software company, said most firms now rely on AI for attendance tracking, résumé screening, and even conducting preliminary interviews.
He mentioned AI-powered software like Olivia, HireVue, Leena, and Latis, which can conduct video interviews and assess candidates’ coding skills — even through complex tasks that human evaluators might find challenging.
“In a mid-sized company, an HR manager might earn around Tk 150,000 a month for tasks like talent acquisition, interviewing, attendance and performance checks. A single AI software can now handle all that for just Tk 100,000–200,000 a year,” Hasan noted.
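The screening step such tools automate can be pictured as weighted keyword matching against a job description. The sketch below is only an illustration of that idea; the skill list, weights, and pass mark are invented here and do not reflect Olivia, HireVue, Leena, or any other product mentioned above.

```python
# Illustrative CV screening by weighted keyword match.
# Skills, weights and the cut-off score are invented for this sketch.

REQUIRED_SKILLS = {"python": 3, "sql": 2, "django": 2, "rest api": 1, "git": 1}
PASS_SCORE = 5


def score_cv(cv_text: str) -> int:
    text = cv_text.lower()
    return sum(weight for skill, weight in REQUIRED_SKILLS.items() if skill in text)


def shortlist(cvs: dict[str, str]) -> list[str]:
    """Return candidate names whose CV text meets the pass score."""
    return [name for name, text in cvs.items() if score_cv(text) >= PASS_SCORE]


if __name__ == "__main__":
    candidates = {
        "Candidate A": "5 years of Python and Django, REST API design, Git, PostgreSQL/SQL",
        "Candidate B": "Experienced graphic designer, Adobe Photoshop and Illustrator",
    }
    print(shortlist(candidates))  # ['Candidate A']
```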
In Bangladesh’s garment industry, the once-common position of ‘supervisor’ is disappearing fast.
“Our sewing machines now have screens displaying daily targets. If production falls below 50%, a red light flashes; above 70%, orange; and 100% completion triggers a green light,” explained one worker.
This monitoring system is now entirely AI-driven. A semi-automated application named Nidle is commonly used in the sector to track how long each worker operates a sewing machine and how much time is spent idle.
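The colour thresholds the worker describes map directly to a small rule. The sketch below reproduces only that described logic; it is not the actual Nidle software, and the function and field names are assumptions.

```python
# Sketch of the target-versus-output colour rule described by the worker,
# plus a simple idle-time calculation. Names and gaps are assumptions.

def status_light(completed: int, daily_target: int) -> str:
    progress = completed / daily_target * 100
    if progress >= 100:
        return "green"   # target fully met
    if progress > 70:
        return "orange"  # above 70% of target
    if progress < 50:
        return "red"     # below 50% of target
    return "none"        # 50-70%: no light was described in the article


def idle_minutes(shift_minutes: int, active_minutes: int) -> int:
    """Idle time is the shift length minus time the machine was running."""
    return max(shift_minutes - active_minutes, 0)


if __name__ == "__main__":
    print(status_light(45, 100))   # red
    print(status_light(80, 100))   # orange
    print(status_light(100, 100))  # green
    print(idle_minutes(480, 410))  # 70
```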
AI is also transforming video editing, content writing and voiceover work. Shamim Ahmed, CEO of View Motion360, a contract-based video content production firm, said Adobe’s AI tools have made their work much easier.
“With Adobe Firefly, we can now handle graphics, image generation, and video editing with basic skills. In a few years, Photoshop and Premiere Pro will become fully AI-driven. Then, professional-quality content can be produced without hiring designers or editors,” Shamim added.
A 2019 UNDP study on Bangladesh’s job market projected that by 2030, around 5.38 million people may lose their jobs as automation replaces traditional roles. To survive, workers will need to adapt and upskill in line with new technologies.
According to a 2023 McKinsey study, half of all jobs worldwide could be AI-driven by 2060 — meaning offices that once needed 100 employees may function more efficiently with 50 or fewer.
Visiting Professor of Economics at the University of Reading, UK, Dr Niaz Asadullah, cautioned that Bangladesh’s pace of automation is outstripping the development of a skilled workforce. “Pursuing automation without upskilling people will lead to severe unemployment,” he warned.
He urged the government to overhaul the education system to ensure graduates leave with practical skills. “Existing workers also need proper training to remain relevant in an automated economy,” he added.
Bangladeshi IT expert Imtiaz Hasan, now working as a cybersecurity researcher at trading firm Deriv in Malaysia, said, “Many think automation is a threat to humans — but it’s actually two-sided. If you fall behind, AI becomes a threat. If you adapt and upskill, AI becomes your tool.”
He emphasised that while Bangladesh is advancing in automation and AI, the country should now focus on developing homegrown software and building a skilled, automation-resilient workforce instead of relying solely on foreign solutions.
Nvidia tops Q1 forecasts despite tariff hurdles
Artificial intelligence technology bellwether Nvidia overcame a wave of tariff-driven turbulence to deliver another quarter of robust growth amid feverish demand for its high-powered chips that are making computers seem more human.
The results announced Wednesday for the February-April period came against the backdrop of President Donald Trump’s on-again, off-again trade war that has whipsawed Nvidia and other Big Tech companies riding AI mania to propel their revenue and stock prices upward, AP reports.
But Trump’s tariffs — many of which have been reduced or temporarily suspended — hammered the market values of Nvidia and other tech powerhouses heading into the springtime earnings season as investors fretted about the trade turmoil dimming the industry’s prospects.
Those worries have eased during the past six weeks as most Big Tech companies lived up to or exceeded the analyst projections that steer investors, capped by Nvidia’s report for its fiscal first quarter.
Nvidia earned $18.8 billion, or 76 cents per share, for the period, a 26% increase from the same time last year. Revenue surged 69% from a year ago to $44.1 billion.
If not for a $4.5 billion charge that Nvidia absorbed to account for the US government’s restrictions on its chip sales to China, Nvidia would have made 96 cents per share, far above the 73 cents per share envisioned by analysts.
In another positive sign, Nvidia predicted its revenue for the May-July period would be about $45 billion, roughly the level that investors had been anticipating. The forecast includes an estimated $8 billion loss in sales to China due to the export controls during its fiscal second quarter, after the restrictions cost it about $2.5 billion in revenue during the first quarter.
In a conference call with analysts, Nvidia CEO Jensen Huang lamented that the US government had effectively blocked off AI chip sales to China — a market that he estimated at $50 billion.
Huang warned the export controls have spurred China to build more of its own chips in a shift that he predicted the US will eventually regret.
“The US based its policy on the assumption that China cannot make AI chips. That assumption was always questionable, and now it’s clearly wrong,” Huang said.
Despite Nvidia's lost opportunities in China, investors were heartened by the company's first-quarter performance. Nvidia's shares gained more than 4% in extended trading after the numbers came out.
Nvidia’s stock price ended Wednesday’s regular trading session at $134.81, just slightly below where it stood before Trump’s January 20 inauguration.
The price had plunged to as low as $86.62 last month during a nosedive that temporarily erased $1.2 trillion in shareholder wealth.
Watch The Skies: World's First AI-dubbed Feature Film to Change The Way of Watching Movies
In the ever-evolving landscape of global cinema, ‘Watch the Skies’ emerges as a bold statement in technological storytelling. This Swedish sci-fi adventure made its American debut on May 9, 2025, signalling a shift in how a film can transcend language. With artificial intelligence now stepping into the dubbing booth, a new chapter begins for international film experiences. Let’s get to know the 2025 film 'Watch The Skies' and uncover the impact of this groundbreaking leap into AI-dubbed cinema.
Plot, Direction, Cast
Set in 1996, this nostalgic sci-fi adventure follows the journey of a teenage girl named Denise. Her world is still haunted by the sudden disappearance of her father. Years ago, he vanished one night while chasing a report of a flying saucer in the woods near their quiet town.
Denise’s search for answers draws her to a group of misfit investigators at a local UFO club, UFO Sweden. The story unfolds with eerie mystery and quiet intensity, echoing the spirit of the iconic ‘The X-Files.’
This sci-fi movie comes to life under the direction of Victor Danell, produced by his banner, Crazy Pictures. Danell also co-wrote the script with Jimmy Nivren Olsson.
The movie stars Inez Dahl Torhaug, Jesper Barkselius, Sara Shirpey, Eva Melander, Hakan Ehn, Isabelle Kyed, Jean-Paul Lucasson, Joakim Sallquist, and Christoffer Nordenrot.
Albin Pettersson and Olle Tholen steer the production.
New Era of the Film Industry
‘Watch the Skies’ stands as the first feature film to showcase an innovative leap in cinematic technology: AI-powered visual dubbing. At its core is TrueSync, a tool developed by British AI startup Flawless. It adjusts actors’ facial movements and spoken lines to align seamlessly with dubbed audio. To ensure authenticity, the original Swedish cast re-recorded their lines in English. It allows the AI to sync both voice and lip movement without replacing the actors themselves.
The result is a dubbed version that preserves the actors' on-screen presence while eliminating the usual disconnect often found in traditional dubbing. XYZ Films, in partnership with Flawless, led this initiative.
It addresses the long-standing language barrier that foreign films face beyond their domestic markets. This method opens doors for international stories to be experienced by broader audiences, without losing their visual and emotional integrity.
Producer Pettersson, in a behind-the-scenes trailer, remarked that this shift represents a breakthrough for the global film industry. He emphasised how language can limit reach and that this technology overcomes that challenge.
Director Danell acknowledged that while initial apprehension is natural among creatives, the process allowed them to retain full artistic control. He described the experience of re-performing the film in English as both unusual and exciting.
Following the 2025 movie ‘Watch the Skies,’ XYZ Films and Flawless are continuing their AI-dubbing collaborations with more multinational titles. The lineup includes ‘The Book of Solutions’ (France), ‘Smugglers’ (Korea), ‘Tatami’ (Iran), and ‘The Light’ (Germany).
Challenges to Embrace
Despite the growing excitement around visual dubbing, not all voices in the global film community are in full support.
Simon Kennedy, president of the Australian Association of Voice Actors (AAVA), acknowledged that the technology behind visual dubbing is undeniably fascinating. However, he pointed out that the ethical framework guiding its use remains unclear. He emphasised that issues of consent, control, and fair compensation must be taken seriously.
Kennedy stressed that artists should be fully informed and involved when their vocal likeness is used. He warned against scenarios where voices are manipulated without permission, producing unauthorised content that an actor may never have agreed to deliver.
He also noted that Australia does not have a large dubbing industry. Instead, much of the English-language dubbing work for foreign films is carried out in the UK and the US.
Replacing Human Voice Artists
Kennedy expressed apprehension over how rapidly advancing technology could displace human talent. He noted that if Australian film companies gain access to sufficiently convincing AI-generated voices, many would likely adopt them without hesitation. Kennedy pointed to two recent ad campaigns that had quietly employed AI voices without disclosing their use.
These fears intensified in April 2025, when it emerged that CADA, a Sydney-based station under the Australian Radio Network, had used an AI-generated radio host for roughly six months without informing its audience.
Teresa Lim, vice president of AAVA and a fellow voice actor, reinforced the need for transparency. She acknowledged that ‘Watch the Skies’ still involved human translation and performance, despite relying on AI tools. In her view, this balanced approach remained acceptable. The real issue, she argued, emerges when human contributions are entirely removed. That’s when ethical risks deepen, ranging from the erosion of artistic integrity to the cultural dilution of performances.
Lim also warned that the technology is dangerously efficient and inexpensive. She predicted that such cost-saving appeal might lead producers to bypass traditional dubbing altogether.
Concerns raised by Australian voice artists align with those voiced across the global dubbing community. On March 28, prominent German dubbers, including Peter Flechtner and Claudia Urbschat-Mingues, released a widely circulated video. In it, they warned of growing threats to their profession and urged the public to prioritise ‘artistic intelligence’ over AI.
Flechtner and Urbschat-Mingues are known, respectively, as the German voices for renowned Hollywood artists Ben Affleck and Angelina Jolie.
In France, resistance to AI-dubbing gained traction through the petition #TouchePasMaVF (Don’t Touch My Dubbing). It was initiated by the French Union of Performing Artists and Les Voix, a leading dubbing association. At the time of reporting, it had amassed 221,693 signatures, underscoring the depth of unease within the industry.
Palace Cinemas CEO Benjamin Zeccola, speaking from the Cannes Film Festival, reacted to the Swedish film with visible emotion. He admitted feeling disheartened, explaining that much of his joy in cinema stemmed from hearing actors speak in their native languages. For him, the natural cadence carried by original dialogue reflects one of humanity’s richest traits.
Still, Zeccola also recognised the potential of AI-dubbing to make international cinema more accessible. He acknowledged that such technology might help many reach wider audiences, especially in places like Australia. There, foreign-language titles often remain underexposed.
In a Nutshell
As the world’s first AI-dubbed feature, ‘Watch The Skies’ is set to shift how audiences experience foreign films. Backed by Flawless and XYZ Films, the TrueSync tool syncs English dialogue with actors' lip movements. Yet concerns over ethics, consent, and job loss dominate global voice artist communities, with Australian and European professionals warning of fading artistic integrity. With accessibility gains weighed against clear risks, this early stage of innovation is likely to spark further debate.
Speakers urge finalisation of AI policy through consultative process
Speakers on Saturday urged the government to publish an update on the National Artificial Intelligence Policy 2024 and finalise it through a consultative process.
They made the call at a webinar titled “Artificial Intelligence and the Future of Journalism in Bangladesh,” organised by Voices for Interactive Choice and Empowerment (VOICE), a rights-based research and advocacy organisation, to mark World Press Freedom Day 2025.
The panel featured journalists, civil society members, human rights defenders, legal experts, technologists and researchers.
They said AI literacy must be ensured, given the growing threats posed by the spread of 'deepfakes', misinformation, algorithmic bias, enhanced surveillance, and the risk of job displacement for human journalists.
Ahmed Swapan Mahmud, Executive Director of VOICE, said, “Journalists, human rights activists, and civil society actors must be consulted for formulation of a people-friendly AI policy.”
Md Saimum Reza Talukder, Senior Lecturer at BRAC University’s School of Law, raised concerns about how keyword filtering is influenced by political decisions and how platforms are shadow-banning content, limiting its reach to audiences.
He called for AI regulation to be grounded in human safety, emphasising that local norms and values must be incorporated into AI policies. He also underscored the need for Bangladesh to be added to the global AI readiness index.
Highlighting an absence of AI policy in newsrooms, Rezwan Islam from Engage Media said, “AI is beneficial to research and helps save time but it cannot write an article as it does not have the experience and judgement of the journalist.”
Miraj Ahmed Chowdhury, Founder of Digitally Right, discussed the challenge of copyright in the context of AI, noting that the issue will be tool-specific for derivative content.
Sharabon Tohura, Consultant at Nijera Kori, highlighted how misinformation on platforms like YouTube is consumed by elderly people and can reach epidemic levels, stressing the need for digital literacy campaigns.
Sharmin Khan, Legal Consultant at the International Center for Not-for-Profit Law (ICNL), Minhaj Aman, Research Coordinator at Digitally Right, and Tajul Islam from The Business Standard also spoke at the webinar.
Americans remain sceptical of generative AI in journalism, study reveals
Leading US newsrooms are experimenting with generative artificial intelligence (AI) tools to enhance the reader experience, but research reveals a significant gap between newsroom innovation and audience readiness.
A wide-ranging study by the Poynter Institute and the University of Minnesota indicates nearly half of Americans are not comfortable receiving news from generative AI, while one in five believe publishers should avoid AI entirely.
Dozens of America’s most well-known newsrooms, including the San Francisco Chronicle, the Texas Tribune, Time magazine and the Washington Post, are experimenting with chatbots to help readers pick restaurants, learn more about political candidates and dive deeper into articles.
However, researchers suggest that public hesitation remains a key challenge.
Benjamin Toff, associate professor at the Hubbard School of Journalism and director of the Minnesota Journalism Center, presented the findings at the second Summit on AI, Ethics and Journalism, organised by Poynter and The Associated Press in New York City last week.
“The data suggests if you build it, do not expect overwhelming demand for it,” said Toff, who has been studying news audiences — and avoiders — for nearly a decade.
According to the Poynter Institute and the University of Minnesota, the survey found that 49.1% of respondents had no interest in using AI-based tools for information. Meanwhile, 39.3% said they would only use such a tool if editors verified its responses for factual accuracy, and just 9.9% expressed willingness to use the tools even if they occasionally misinterpreted published reporting.
Meredith Broussard, data journalist and associate professor at the Arthur L. Carter Journalism Institute of New York University, delivered a keynote at the summit where she spoke bluntly about user experiences with chatbots.
“Anybody really like using a chatbot? No. I can’t stand it. So, guess what? Your users feel like that, too,” she said. “They’re not excited about interacting with a chatbot on your site.”
The findings also revealed that many people have yet to interact with generative AI beyond customer service settings, making their scepticism about AI in journalism even more pronounced.
Furthermore, younger audiences, often perceived as early adopters of technology, are not as engaged with AI as expected. Nearly half of those aged 18 to 29 reported they hadn’t used or even heard of tools like ChatGPT.
Despite these reservations, some media organisations are pushing ahead with innovation. Hearst Newspapers launched the “Chowbot” in early 2024, an AI chatbot recommending restaurants based on decades of reporting.
Ryan Serpico, deputy director of newsroom AI and automation at Hearst, defended the strategy, saying: “We are basing this off of 30 years of high-quality reporting, high-quality editing, that Google might push to the side or not value in their model.”
Christina Bruno, digital growth strategist at Spotlight PA, echoed the importance of exploring new formats. “We need to be experimenting with more formats of information delivery. I think chatbots are one way of doing that,” she said.
Internationally, audience-facing AI tools have gained more traction. In Sweden, publisher Aftonbladet’s EU election chatbot answered over 150,000 questions, while in Poland, a virtual assistant from Ringier Axel Springer helped generate 33,000 unique travel plans to promote German tourism.
“Experiments are great,” Broussard noted. “But you’ve got to pay attention to the results of the experiment.”
The study also showed a disconnect between public perception and newsroom practices. Respondents were asked, “Thinking about news media in general in the US right now, how often, if at all, do you think they currently use AI to do any of the following?” Of the 1,128 surveyed, 31.6% said AI is often used to make charts and infographics, and 6.2% said always.
For image creation when photographs are unavailable, 25.2% said often, and 6.2% said always. Meanwhile, 29.2% believed AI was often used to convert articles into audio or video, with 5.8% responding always.
Despite these assumptions, trust remains a pressing issue. More than half of respondents reported little or no confidence in newsrooms using AI to write articles or create imagery. Among respondents with high news literacy, over 90% demanded clear disclosures when AI tools were used to generate text or edit photos.
Zuri Berry, digital strategy editor at The Baltimore Banner, sees this as a validation of their cautious approach. “It also serves as a confirmation of our current approach to AI, which entails disclosures, human review and verification and limitations on some tools that undermine our trust and credibility with readers,” he said.
As AI reshapes hospital care, human nurses push back
The next time you schedule a medical check-up, you might receive a call from someone like Ana—a reassuring voice ready to help you prepare for your appointment and answer any urgent queries.
With a soothing, friendly manner, Ana is designed to put patients at ease, much like many nurses across the U.S. However, unlike them, she is available round-the-clock and can communicate in multiple languages, from Hindi to Haitian Creole.
That’s because Ana isn’t a person but an AI-powered programme developed by Hippocratic AI, one of several emerging companies focused on automating time-consuming tasks traditionally handled by nurses and medical assistants.
This marks the most visible integration of AI into healthcare, where hundreds of hospitals are increasingly relying on advanced computer systems to monitor patient vitals, identify emergencies, and initiate detailed care plans—responsibilities that were once solely managed by nurses and other healthcare professionals.
Hospitals argue that AI is enhancing efficiency among nurses while also tackling issues like burnout and staff shortages. However, nursing unions contend that this technology, which is not fully understood, is undermining nurses’ expertise and compromising the quality of patient care.
“Hospitals have long been waiting for a tool that seems credible enough to replace nurses,” said Michelle Mahon of National Nurses United. “The entire system is being shaped to automate, deskill, and eventually replace caregivers.”
National Nurses United, the largest nursing union in the U.S., has led over 20 protests at hospitals nationwide, demanding the right to influence AI implementation and protections against disciplinary action if nurses choose to override automated recommendations. Concerns escalated in January when Robert F. Kennedy Jr., the incoming health secretary, suggested AI nurses, "as good as any doctor," could be deployed in rural areas. On Friday, Dr. Mehmet Oz, nominated to oversee Medicare and Medicaid, expressed confidence that AI could “free doctors and nurses from excessive paperwork.”
Initially, Hippocratic AI advertised its AI assistants at $9 per hour, significantly lower than the approximately $40 per hour paid to registered nurses. The company has since removed this pricing from its promotional materials, instead focusing on demonstrating its capabilities and assuring customers of rigorous testing. The company declined requests for an interview.
AI in Hospitals Can Trigger False Alarms and Risky Advice
For years, hospitals have been testing various technologies aimed at improving patient care and reducing costs, incorporating tools like sensors, microphones, and motion-detecting cameras. Now, these devices are being connected to electronic medical records and analysed to predict medical conditions and guide nurses—sometimes even before a patient is examined.
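How such monitoring can flag a patient before a clinician has examined them, and why false alarms follow, can be sketched with a simple rule-based screen. The thresholds below are the widely taught SIRS-style vital-sign criteria, used here purely as an illustration; they are not the algorithm any particular hospital or vendor runs.

```python
# Illustrative rule-based sepsis screen over a patient's vitals.
# Thresholds are the textbook SIRS criteria; real systems are more complex,
# but the trade-off is the same: broad rules catch more true cases and
# also generate more false alarms.

from dataclasses import dataclass


@dataclass
class Vitals:
    temperature_c: float
    heart_rate: int        # beats per minute
    respiratory_rate: int  # breaths per minute
    wbc_per_ul: int        # white blood cells per microlitre


def sirs_criteria_met(v: Vitals) -> int:
    """Count how many SIRS criteria the current vitals satisfy."""
    count = 0
    if v.temperature_c > 38.0 or v.temperature_c < 36.0:
        count += 1
    if v.heart_rate > 90:
        count += 1
    if v.respiratory_rate > 20:
        count += 1
    if v.wbc_per_ul > 12_000 or v.wbc_per_ul < 4_000:
        count += 1
    return count


def sepsis_alert(v: Vitals) -> bool:
    # Two or more criteria trigger an alert for nurse review.
    return sirs_criteria_met(v) >= 2


if __name__ == "__main__":
    # A patient with a mild fever and a fast heart rate is flagged even if
    # the standard fluid protocol may be wrong for them, which is exactly
    # the judgement call described in the story that follows.
    print(sepsis_alert(Vitals(38.4, 104, 22, 9_000)))  # True
```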
Adam Hart, an emergency room nurse at Dignity Health in Henderson, Nevada, encountered this first-hand when the hospital’s AI system flagged a new patient as potentially having sepsis, a serious infection-related complication. According to protocol, he was required to administer a large dose of IV fluids immediately. However, upon further examination, Hart realised the patient was undergoing dialysis, meaning excessive fluids could strain their kidneys.
When he raised the issue with the supervising nurse, he was told to follow the protocol as instructed. It wasn’t until a nearby doctor intervened that the patient instead received a controlled IV fluid infusion.
“Nurses are being paid to think critically,” Hart said. “Handing over our decision-making to these systems is reckless and dangerous.”
While nurses acknowledge AI’s potential to assist in monitoring multiple patients and responding to emergencies swiftly, they argue that, in practice, it often results in an overwhelming number of false alerts. Some even mistakenly classify basic bodily functions—like a patient having a bowel movement—as emergencies.
“You’re trying to concentrate on your work, but you keep getting all these notifications that may or may not be significant,” said Melissa Beebe, a cancer nurse at UC Davis Medical Center in Sacramento. “It’s difficult to determine which alerts are accurate because there are so many false ones.”
Can AI Be Beneficial in Hospitals?
Even the most advanced AI systems can overlook subtle signs that nurses instinctively notice, such as changes in facial expressions or unusual odours, noted Michelle Collins, dean of Loyola University’s College of Nursing. However, human nurses are not infallible either.
“It would be unwise to dismiss AI entirely,” Collins stated. “We should leverage its capabilities to enhance care, but we must also ensure it does not replace the human touch.”
An estimated 100,000 nurses left the workforce during the COVID-19 pandemic—the largest decline in staffing in four decades. With the aging population and more nurses retiring, the U.S. government projects over 190,000 nursing vacancies annually through 2032.
Given these challenges, hospital administrators view AI as a crucial support system—not to replace human care but to assist nurses and doctors in gathering information and communicating with patients.
‘Sometimes Patients Are Speaking to a Human, Sometimes They’re Not’
At the University of Arkansas for Medical Sciences in Little Rock, staff make hundreds of calls each week to prepare patients for surgery. Nurses confirm prescription details, heart conditions, and other concerns—such as sleep apnea—that must be addressed before anaesthesia.
The challenge is that many patients are only available in the evenings, often during dinner or their children’s bedtime.
“So, we need to contact several hundred patients within a two-hour window—but I don’t want to pay my staff overtime to do that,” explained Dr. Joseph Sanford, the hospital’s health IT director.
Since January, the hospital has been using an AI assistant from Qventus to handle calls, exchange medical records, and summarise information for human staff. Qventus reports that 115 hospitals currently use its technology to improve hospital efficiency, reduce cancellations, and alleviate staff burnout.
Each call begins with a disclaimer identifying the programme as an AI assistant.
“We prioritise full transparency so that patients always know whether they are speaking to a person or an AI,” Sanford said.
While Qventus focuses on administrative tasks, other AI developers are aiming for a broader role in healthcare.
Israeli startup Xoltar has created lifelike avatars capable of conducting video calls with patients. In collaboration with the Mayo Clinic, the company is developing an AI assistant to teach cognitive techniques for managing chronic pain. Additionally, Xoltar is working on an AI avatar designed to assist smokers in quitting. During early trials, patients spent an average of 14 minutes interacting with the programme, which can detect facial expressions, body language, and other cues.
Nursing experts studying AI believe these tools might be effective for relatively healthy individuals who proactively manage their health. However, that doesn’t apply to the majority of patients.
“It’s the seriously ill who account for most healthcare needs in the U.S., and we need to carefully assess whether chatbots are truly suited for those cases,” said Roschelle Fritz of the University of California Davis School of Nursing.
Top Free Prompt Engineering Courses Online in 2025
Prompt engineering has become a critical skill in the age of artificial intelligence (AI), empowering users to create clear and effective instructions for generating accurate outputs. To meet growing demand, several online platforms are offering free online prompt engineering courses that teach you how to unlock the full potential of AI tools.
Best Free Online Prompt Engineering Courses
ChatGPT for Everyone by OpenAI and Learn Prompting
This 1-hour beginner-level course, led by Sander Schulhoff and Shyamal Anadkat, introduces the fundamentals of ChatGPT and generative AI. It explains how ChatGPT works, its diverse applications, and techniques for crafting effective prompts. The syllabus covers using ChatGPT as a personal assistant, enhancing productivity, and creating content.
Participants will learn prompt-writing strategies, including role assignment, while also covering ethical considerations and ChatGPT's limitations. Real-world case studies further enhance the learning experience. The course is free, self-paced, and offers a certificate of completion through Learn Prompting Plus to showcase your skills.
Course Link: https://learnprompting.org/courses/chatgpt-for-everyone
Free Prompt Engineering Course by Simplilearn
This 1-hour beginner-level course offers a free and comprehensive introduction to AI, NLP, and prompt engineering. It covers the fundamentals of AI and NLP, the concept and applications of prompt engineering, types of prompts, and techniques for creating effective and engaging prompts.
Taught by industry experts, the course combines theory with practical examples, real-world case studies, and hands-on exercises to enhance learning. Upon completion, participants receive a certificate, which can be shared on LinkedIn. Perfect for AIML engineers, chatbot developers, and data scientists, this course equips learners to design and optimize prompts for conversational AI systems.
Course Link: https://www.simplilearn.com/prompt-engineering-free-course-skillup
Prompt Engineering for Everyone by IBM
Led by Antonio Cangiano, IBM’s AI specialist, this 5-hour beginner-level course offers a comprehensive introduction to prompt engineering. It uses notes, audio recordings, and hands-on labs to teach the art of crafting compelling prompts. The course covers foundational techniques, such as Persona and Interview Patterns, and advanced approaches like Chain-of-Thought and Tree-of-Thought prompting.
Learners will also explore bias mitigation, verbosity control, and IBM's Watsonx Prompt Lab. An optional final project allows participants to apply their knowledge. This free course, offering a certificate, is perfect for professionals aiming to revolutionize their interactions with AI systems.
Course Link: https://community.ibm.com/community/user/watsonx/blogs/nickolus-plowden/2023/10/15/learn-to-build-with-ai-series
Essentials of Prompt Engineering by Coursera
This 1-hour beginner-level course by Amazon Web Services introduces the foundational concepts of prompt engineering. It covers crafting, refining, and optimizing prompts, with techniques such as zero-shot, few-shot, and chain-of-thought prompting. Participants will also learn to identify and mitigate potential risks in prompt engineering.
A hands-on assignment allows learners to apply the skills acquired. Offered via a free trial with an optional $49/month subscription, the course includes a certificate upon completion. Updated in July 2024, this course is ideal for those interested in AI/ML and generative AI, providing in-demand skills for a competitive edge.
Course Link: https://www.coursera.org/learn/essentials-of-prompt-engineering
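For readers new to the jargon, the three techniques named in the course above differ mainly in how much guidance the prompt itself carries. The strings below are example prompts written for this article, not material from any of the listed courses, and no particular model or API is assumed.

```python
# Illustrative zero-shot, few-shot and chain-of-thought prompts.
# The wording is invented for this example; no model call is made.

zero_shot = (
    "Classify the sentiment of this review as positive or negative: "
    "'The battery dies within two hours.'"
)

few_shot = """Classify the sentiment of each review as positive or negative.
Review: 'Fast delivery and great build quality.' -> positive
Review: 'Stopped working after a week.' -> negative
Review: 'The battery dies within two hours.' ->"""

chain_of_thought = """A shop sells a phone for Tk 30,000 with a 10% discount,
then adds 5% VAT on the discounted price.
Think step by step: first compute the discounted price, then add VAT,
then state the final amount."""

for name, prompt in [("zero-shot", zero_shot),
                     ("few-shot", few_shot),
                     ("chain-of-thought", chain_of_thought)]:
    print(f"--- {name} ---\n{prompt}\n")
```

Zero-shot gives only the task, few-shot adds worked examples for the model to imitate, and chain-of-thought asks for intermediate reasoning steps before the final answer.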
Advanced Prompt Engineering by Learn Prompting
Designed for intermediate to advanced learners, this 1-week course led by Sander Schulhoff provides in-depth training on advanced prompt engineering techniques. It explores concepts like in-context learning, chain-of-thought (CoT) prompting, problem decomposition, and self-criticism methods to craft effective prompts for complex AI applications.
Learners will enhance their understanding of AI tools like ChatGPT, DALL·E 3, GPT-3.5, and GPT-4. Taught by a renowned AI expert, the course combines theory with practical strategies, offering a certificate upon completion. Available with a free trial, access to all paid courses is $39/month via Learn Prompting Plus.
Course Link: https://learnprompting.org/courses/advanced-prompt-engineering
Prompt Engineering Specialization by Vanderbilt University on Coursera
Led by Dr. Jules White, this beginner to intermediate-level specialization spans 1 month (10 hours/week) and teaches participants to use generative AI for automation, productivity, and intelligence augmentation. The course includes three modules: composing queries for ChatGPT, advanced data analysis, and trusted generative AI.
Participants will gain hands-on experience in crafting prompts, automating tasks, and applying AI tools to real-world scenarios like social media content creation, data visualization from Excel, and PDF information extraction. The course is free with a trial and offers a certificate from Vanderbilt University upon completion.
Course Link: https://www.coursera.org/specializations/prompt-engineering
Prompt Engineering and Advanced ChatGPT on edX
The Advanced ChatGPT course is an intermediate-level program designed to teach advanced techniques for using ChatGPT effectively. Spanning one week with 1-2 hours of learning per week, the course covers critical areas such as advanced prompting methods to generate accurate and engaging responses.
Learners explore how ChatGPT can be applied across various industries like healthcare, finance, education, and customer service. The course also addresses the integration of ChatGPT with tools like NLP and ML for developing sophisticated chatbot applications. Additionally, it discusses ChatGPT's limitations and how to mitigate them to build more robust applications. This self-paced course is free with limited access, but a certificate can be earned for $40.
Course Link: https://www.edx.org/learn/computer-programming/edx-advanced-chatgpt
Takeaways
These free online prompt engineering courses offer excellent opportunities to master AI tools like ChatGPT and enhance your skills in crafting effective prompts. With courses catering to different levels, from beginners to advanced learners, they provide valuable insights, hands-on exercises, and certification options to help you excel in AI applications and improve productivity in various industries.
Apple to update AI news feature after BBC raises concerns
Apple has announced that it will update, rather than suspend, its new artificial intelligence (AI) feature that generated inaccurate news alerts on its latest iPhones.
In its first response to concerns, the company confirmed on Monday that it is working on a software update to "further clarify" when notifications are summaries generated by Apple’s AI system, the BBC reports.
The BBC raised concerns last month after an AI-generated summary of its headline mistakenly informed readers that Luigi Mangione, the man accused of killing UnitedHealthcare CEO Brian Thompson, had shot himself.
Recently, Apple’s AI inaccurately summarised BBC app notifications, claiming that Luke Littler had won the PDC World Darts Championship hours before it started, and that Spanish tennis player Rafael Nadal had come out as gay.
This is the first time Apple has formally acknowledged the issues raised by the BBC, which pointed out that these errors appeared as though they originated from the BBC’s own app.
The BBC said that these AI summaries by Apple do not reflect – and in some cases completely contradict – the original BBC content.
"It is critical that Apple urgently addresses these issues as the accuracy of our news is vital for maintaining trust."
Apple said that the update would be available "in the coming weeks."
The company had previously explained that its notification summaries aim to allow users to "scan for key details" by combining and rewriting multiple recent app notifications into a single alert on the lock screen.
"Apple Intelligence features are in beta and we are continuously making improvements with the help of user feedback," the company said in a statement on Monday. It also emphasised that receiving the summaries is optional.
The feature, along with other AI tools, is only available on iPhone 16 models, iPhone 15 Pro and Pro Max handsets running iOS 18.1 and above, and some iPads and Macs.
Reporters Without Borders, an organisation representing journalists, urged Apple to disable the feature in December, saying that the false headline about Mangione showed that "generative AI services are still too immature to produce reliable information for the public."
Apple is not the only company that has launched generative AI tools capable of creating text, images, and more. Google’s AI summary feature, which provides written summaries of search results, also faced criticism last year for delivering some erratic responses. A Google spokesperson said these were "isolated examples" and that the feature was generally working well.
AI pioneers Geoffrey Hinton and John Hopfield win Nobel Prize in Physics
The Royal Swedish Academy of Sciences has awarded the prestigious 2024 Nobel Prize in Physics to two visionary researchers, Dr Geoffrey Hinton and Dr John Hopfield, whose pioneering work in artificial intelligence has redefined the boundaries of human knowledge and technological innovation.
This remarkable honour is a testament to their trailblazing contributions that laid the groundwork for the development of machine learning, a field that is reshaping the future of humanity with unprecedented promise and peril.
Dr Hinton, often heralded as the “Godfather of AI,” and Dr Hopfield, a towering figure in both physics and computational neuroscience, have been celebrated for their foundational contributions to artificial neural networks — intricate computational systems inspired by the human brain.
This milestone accolade places AI’s influence on par with the monumental discoveries of classical physics, underscoring the transformative power of interdisciplinary research.
Dr Geoffrey Hinton, a dual citizen of Canada and the United Kingdom, currently affiliated with the University of Toronto, is renowned for his innovative work in deep learning and backpropagation — a learning mechanism that enables computer systems to improve by repeatedly adjusting their internal parameters to reduce errors.
His groundbreaking research in the 1980s not only changed the trajectory of AI but also served as a beacon for countless researchers and innovators across the globe.
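As a concrete illustration of what backpropagation does, the toy below trains a tiny two-layer network on the XOR problem by pushing the output error back through the layers and nudging the weights downhill. It is a teaching sketch only, not Hinton's formulation or code, and the layer sizes, learning rate, and iteration count are arbitrary choices.

```python
# Toy backpropagation: a tiny 2-4-1 network learning XOR with plain numpy.
# A teaching illustration of the idea, not anyone's production code.

import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10_000):
    # forward pass
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)

    # backward pass: propagate the output error back through each layer
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)

    # gradient-descent weight updates
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hidden
    b1 -= lr * d_hidden.sum(axis=0, keepdims=True)

print(np.round(out.ravel(), 2))  # typically converges toward [0, 1, 1, 0]
```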
The Nobel Committee recognised Dr John Hopfield’s equally pivotal role in the 1980s, particularly his development of associative memory models capable of storing and retrieving complex data patterns.
Dr Hopfield, now an emeritus professor at Princeton University, has long been celebrated for bridging the realms of physics, biology, and computer science to unravel some of the most intricate puzzles of the human mind.
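The associative-memory idea Hopfield is honoured for, storing patterns in a weight matrix and then recovering a stored pattern from a corrupted cue, fits in a few lines. This is a minimal sketch assuming binary ±1 patterns and the classic Hebbian storage rule; it illustrates the concept and is not Hopfield's published model code.

```python
# Minimal Hopfield-style associative memory: store two patterns, then
# recover one of them from a noisy cue. Purely illustrative.

import numpy as np

patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
])

# Hebbian storage: sum of outer products, with self-connections removed
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(cue, steps=10):
    state = cue.copy()
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)  # threshold update
    return state

# Corrupt the first pattern in two positions and recover it
noisy = patterns[0].copy()
noisy[0] *= -1
noisy[3] *= -1
print(recall(noisy))   # settles back to patterns[0]
print(patterns[0])
```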
“The work of these two gentlemen has not only paved the way for the current explosion in artificial intelligence but has also challenged and expanded our understanding of what it means to learn, to know, and to reason,” said Nobel Committee member Mark Pearce. “They have built the very bedrock upon which the modern era of AI stands.”
Shaping the Future
While the announcement has brought jubilation to the global scientific community, it also arrives at a time of heightened introspection.
The rapid advancement of artificial intelligence has sparked fervent debate about its implications for society, with even the laureates themselves voicing deep concern over the unintended consequences of these powerful systems.
In his acceptance remarks, Dr Hinton, who recently stepped down from a high-profile position at Google to speak more openly about his concerns, warned that AI’s meteoric rise poses profound challenges for the future.
“We have no experience of what it’s like to have something smarter than us. And it’s going to be wonderful in many respects,” Hinton observed with a mixture of awe and caution. “But we also have to worry about a number of possible bad consequences, particularly the threat of these things getting out of control.”
Dr Hopfield echoed these sentiments, drawing parallels between AI’s disruptive potential and past revolutionary scientific breakthroughs such as nuclear energy and virology. “With great power comes great responsibility,” he said somberly, invoking imagery from George Orwell’s dystopian masterpiece 1984 and Kurt Vonnegut’s cautionary tale Cat’s Cradle.
The laureates stressed the need for ethical guidelines and societal dialogue to harness AI’s vast benefits without compromising human values and autonomy. Dr Hinton, in particular, has long advocated for greater scrutiny and oversight, predicting that AI’s impact could be comparable to the Industrial Revolution in scale and scope.
A Celebration and a Call to Action
The Nobel Prize, which includes a cash award of 11 million Swedish kronor (£900,000), will be formally presented to Dr Hinton and Dr Hopfield at a ceremony in Stockholm on December 10, the anniversary of Alfred Nobel’s death.
As the world applauds their monumental achievements, the laureates’ reflections serve as a powerful reminder that while technology can elevate society to new heights, it can also bring forth profound ethical dilemmas that demand our collective wisdom and vigilance.
Dr Hinton’s words to the younger generation of researchers were tinged with both inspiration and caution: “Don’t be put off if everyone tells you what you are doing is silly. But remember, in the rush to build, we must also take the time to think.”
With this year’s Nobel Prize, the Royal Swedish Academy of Sciences has not merely recognised two individuals but has also ushered in a new era where the blurred lines between science, technology, and philosophy are explored with the hope of shaping a brighter, safer, and more enlightened future for all.
Source: With inputs from AP
How to Detect an AI-generated Image
The surge in AI (artificial intelligence) has revolutionized content creation, blurring the lines between what is genuine and what is computer-generated. As this technology becomes more sophisticated, the challenge of distinguishing between real and AI-generated content intensifies. This distinction is increasingly vital for maintaining credibility in the media. This article presents proven and reliable techniques for accurately identifying AI-made images.
Proven Strategies to Identify an AI-generated Image
Analyzing Image Details
AI-crafted visuals often exhibit subtle yet telling inconsistencies. For instance, textures might appear unnatural or overly smooth, while object alignments can seem off, creating a sense of visual distortion. Common issues include strange artifacts around edges or repetitive patterns uncommon in real-world photography.
Additionally, aspects like hair, hands, or reflections are often areas where AI struggles to replicate natural accuracy. Discrepancies that hint at artificial creation can be detected more readily by closely analyzing these details.
Metadata Examination
Genuine photographs typically contain detailed metadata, including camera model, lens type, exposure settings, and even GPS coordinates. Algorithmically generated images, however, often lack such comprehensive data. Instead, the metadata might show signs of image-editing software or specific AI tools used in the design process.
For example, metadata may include software names or unusual data entries that deviate from standard photographic metadata. Scrutinizing these elements often reveals whether the image was taken with a camera or generated through AI.
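As a starting point, a few lines with the Pillow imaging library can dump whatever EXIF data a file carries. The file name below is a placeholder, and missing camera metadata is only a prompt to look closer, not proof of AI generation on its own.

```python
# Read EXIF metadata from an image file with Pillow (pip install Pillow).
# "sample.jpg" is a placeholder path.

from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path: str) -> dict:
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    metadata = dump_exif("sample.jpg")
    if not metadata:
        print("No EXIF metadata found: worth examining the image more closely.")
    for key, value in metadata.items():
        print(f"{key}: {value}")
```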
Understanding AI Art Styles
Recognizing digitally fabricated images often involves understanding their distinct artistic styles. These computer-generated artworks tend to follow recognizable patterns, including surreal elements, exaggerated forms, and strikingly vibrant color schemes that often set them apart from conventional photography.
For instance, AI might produce images with inconsistent lighting or shadow effects, or details that appear overly smooth or enhanced. Familiarity with these stylistic traits and digital quirks makes it much easier to flag an image as a likely simulation.
Reverse Image Search
Submitting an image to a reverse search engine allows users to uncover its online appearances, including potential sources and related visuals. This method can reveal if a picture is linked to known AI databases or if it has been flagged as computer-generated in other contexts.
Furthermore, reverse searches can reveal whether the image has been used or modified elsewhere. This assessment helps verify whether the work is original.
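Full reverse image search relies on engines such as Google Images or TinEye, but the underlying matching idea, comparing compact fingerprints of images, can be tried locally with the imagehash library. The file names below are placeholders, and the distance threshold is an assumption chosen only for illustration.

```python
# Near-duplicate check with perceptual hashing
# (pip install Pillow imagehash). File names are placeholders.

from PIL import Image
import imagehash

def are_similar(path_a: str, path_b: str, max_distance: int = 8) -> bool:
    """Compare perceptual hashes; a small Hamming distance suggests the
    two files show the same picture, possibly resized or lightly edited."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= max_distance

if __name__ == "__main__":
    print(are_similar("suspect.jpg", "known_source.jpg"))
```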
Checking for AI-made Elements
Examining specific elements within an image can expose a lack of authenticity. For example, unnatural lighting and shadow effects generally do not match the real-world light sources in the scene. AI may struggle with accurate light source placement, leading to inconsistent or unrealistic shadowing.
In addition, synthesized imagery might feature unusual combinations of objects or scenarios that defy logical consistency, such as items appearing in impractical or improbable arrangements. Spotting these discrepancies helps determine if an image is created by AI rather than being a genuine photograph.
Summing Up
Detecting AI-generated images requires a keen eye and a strategic approach. Analyzing subtle inconsistencies in visual details and verifying metadata can reveal digital origins. Distinctive AI styles and reverse image searches help track image sources. Finally, assessing lighting and object placement ensures natural accuracy. With advancing AI technology, identifying these features is essential for ensuring the authenticity and credibility of visual media.
Read more: AI & Future of Jobs: Will Artificial Intelligence or Robots Take Your Job?