A senior journalist has shown how easy it is to manipulate popular AI tools such as ChatGPT and Google's AI-powered search into spreading false information, raising fresh concerns about online safety and trust.
Writing for the BBC, technology reporter Thomas Germain said he managed to make leading AI systems repeat obvious lies within minutes by publishing a single fake blog post online.
To prove his point, Germain posted a false article on his personal website claiming he was the world's best hot-dog-eating tech journalist. Within a day, AI tools including Google's AI search features and ChatGPT were repeating the claim as fact when users asked related questions.
Experts warn the same trick is now being used on serious topics such as health, finance and consumer choices, which could lead people to make harmful decisions.
“It is very easy to trick AI chatbots,” said Lily Ray, an SEO expert at a marketing firm. She warned that AI companies are rolling out products faster than they can control the accuracy of their output.
Google said its systems are designed to block spam and that it is actively working to stop misuse. OpenAI also said it takes steps to prevent hidden influence on its tools and reminds users that AI can make mistakes.
However, digital rights groups say the problem is far from solved. Cooper Quintin of the Electronic Frontier Foundation warned that AI systems could be abused to scam users, damage reputations or even cause physical harm.
Researchers say AI tools are especially vulnerable when they search the web for answers, often drawing on only a handful of sources without making that reliance clear to users. Studies also show people are less likely to check sources when AI-generated summaries appear at the top of search results.
Experts suggest clearer warnings, better source disclosure and stronger safeguards. Until then, users are advised to double-check AI answers, especially on medical, legal or financial matters, and not to accept confident-sounding responses as fact.
With inputs from the BBC