Cheapfakes remain a problem despite rise of deepfakes: US expert
Dr Heather Ashby, a US foreign policy and national security expert, said on Tuesday that cheapfakes remain a problem despite the rise of deepfakes.
She said a cheapfake is a form of manipulated media in which video, audio and images are altered using relatively simple, low-cost editing tools, while a deepfake is a form of synthetic media in which video, audio and images are manipulated using artificial intelligence (AI).
The US expert was sharing how technology impacts elections at an event titled "Leveraging Technology and AI for Accurate Foreign Affairs and Election Reporting" at the EMK Center on Tuesday. The event was jointly organised by the US Embassy in Dhaka and the Diplomatic Correspondents Association, Bangladesh (DCAB).
US Embassy in Dhaka Spokesperson Asha C. Beh and DCAB President Nurul Islam Hasib also spoke at the event.
On cheapfakes, Ashby said that Chinese foreign ministry spokesperson Lijian Zhao posted a fabricated image of an Australian soldier holding a bloody knife next to a child in late 2020.
"Days after the 2020 US presidential election, videos circulated on social media purporting to show election workers engaging in voting fraud. The misleading video circulated on Twitter gathering attention from users and serving as doctored evidence that was fraud during the election. Local law enforcement investigated the location in the video to prove that it was false," she said as examples of cheapfakes.
Ashby, whose work focuses on the intersection of national security, AI, and election integrity, said AI-generated images and videos are also used for satire and parody.
"Numerous deepfakes have circulated in the US presidential election which are clearly fake images used for humour," she said while giving examples.
There are tools to identify deepfakes and cheapfakes.
The most sophisticated tools, used by private companies and governments, include Sensity AI, the Content Authenticity Initiative, Hugging Face, Deep Media, Deepfake-o-Meter, Reality Defender, and TrueMedia, she said.
Replying to a question on how AI is being used in foreign policy practices, she said, "What I have noticed with the use of AI, AI works best if you have a problem or a challenge you're trying to identify that AI can then help with."
"In terms of AI and national security within the US, the US government, particularly in the security area, has been using AI a lot longer than what we are aware of with ChatGPT's release in late 2022, mainly because they process a lot of data and it's not possible for an individual to go through that data," said the US expert.
So instead of just having a software programme, she said, AI makes it easier for them to bring various data points together and look for anomalies that may signal, for example, that a terrorist attack is happening.
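The article does not describe any specific system, but the general idea Ashby sketches here, combining many data points and flagging the ones that deviate sharply from the norm, can be illustrated with a short, purely hypothetical Python sketch. The function name, data, and threshold below are invented for illustration and are not drawn from any government tool she mentions.

```python
# Hypothetical illustration of the anomaly detection idea described above:
# flag observations that deviate sharply from the historical average.
# All names, data, and thresholds are invented for illustration only.

import numpy as np

def flag_anomalies(counts, threshold=2.5):
    """Return indices of values more than `threshold` standard deviations from the mean."""
    counts = np.asarray(counts, dtype=float)
    mean, std = counts.mean(), counts.std()
    if std == 0:
        return []                       # no variation, nothing to flag
    z_scores = (counts - mean) / std    # how unusual each observation is
    return np.flatnonzero(np.abs(z_scores) > threshold).tolist()

# Example: mostly routine daily signal volumes, with one sharp spike.
daily_signals = [102, 98, 105, 99, 101, 97, 100, 430, 103, 96]
print(flag_anomalies(daily_signals))    # -> [7], the spike stands out
```

A real system would combine far more heterogeneous data sources and more sophisticated models; the point of the sketch is only that software can scan volumes of data no individual analyst could review.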
"Or if you go to the Department of Homeland Security's website, they provide insight into the various ways that they're using AI within their law enforcement security operations, as well as within Department of Homeland Security’s emergency response, so the Federal Emergency Management Agency, if disaster strikes, they respond to it, and so they're using AI within employees," she said.