Musk faces French questioning over X’s alleged role in illegal content spread
Elon Musk has been summoned to Paris for questioning as French investigators examine alleged misconduct linked to the social media platform X, including the spread of child sexual abuse material and deepfake content.
Musk and former X CEO Linda Yaccarino have been called for “voluntary interviews,” while other employees are expected to testify as witnesses this week, the Paris prosecutor’s office said.
It is not yet clear whether Musk or Yaccarino will attend. X did not respond to queries from The Associated Press, and Yaccarino’s current company, eMed, also did not comment.
Prosecutors are also looking into claims that controversy around X’s AI system Grok and its deepfake content may have been used to boost the value of Musk-owned companies ahead of a planned market listing. French authorities have shared their concerns with US regulators.
The investigation follows a search conducted in February at X’s offices in France, part of a probe launched in January 2025 by the Paris cybercrime unit. Musk and Yaccarino were summoned in their roles as company leaders during the period under review.
Prosecutors said the interviews are meant to allow executives to explain their position and outline steps to comply with French law. They added the inquiry aims to ensure X follows national regulations while operating in France.
Authorities declined to say whether Musk would face penalties if he does not appear.
The probe began after a French lawmaker raised concerns that X’s algorithms could be biased and distort automated data systems. It later expanded after Grok generated controversial posts, including content denying the Holocaust and producing sexually explicit deepfakes.
Investigators are examining possible involvement in distributing illegal images of minors, creating and spreading explicit deepfakes, denying crimes against humanity, and manipulating automated systems as part of an organized effort.
Grok, developed by xAI and integrated into X, drew global criticism after producing large amounts of non-consensual deepfake content. In one widely shared post, it incorrectly suggested gas chambers at Auschwitz were used for disinfection rather than mass killing — a claim linked to Holocaust denial. The chatbot later corrected itself, acknowledging the historical facts.
In March, French prosecutors alerted the US Department of Justice and the Securities and Exchange Commission, suggesting the controversy may have been deliberately created to inflate the value of X and xAI ahead of a planned June 2026 stock market listing tied to a merger involving SpaceX.
However, according to The Wall Street Journal, the Justice Department declined to assist French investigators, saying the request could amount to interference in an American company’s activities.
Separately, Reporters Without Borders said it has filed a new complaint against X, accusing the platform of allowing disinformation to spread.
The group said misleading content continues to gain wide attention on X despite repeated requests for removal, adding that the platform’s response has been inadequate and undermines the public’s right to reliable information.
Navigating the Deepfake Dilemma: Understanding and Detecting Digital Deceptions
In today’s digital era, the term “deepfake” has emerged as a critical concept in the discourse around online misinformation and digital security. Deepfakes, a blend of “deep learning” and “fake”, refer to hyper-realistic digital forgeries created using advanced artificial intelligence (AI) technologies. These sophisticated simulations have the potential to disrupt the fabric of truth in our digital world.
The Rise of Deepfakes: A Digital Deception
Deepfakes leverage AI algorithms to superimpose existing images and video onto other footage, creating composites that can look startlingly real. This technology, initially a product of benign research, has rapidly evolved, raising alarms globally due to its potential for misuse. The ability to fabricate convincing videos of public figures, celebrities, or ordinary individuals speaking or acting in ways they never did poses significant threats – from personal defamation to political misinformation.
The Dangers Lurking Behind the Screen: Assessing the Threats
The threats posed by deepfakes are multifaceted. On a personal level, they can be used to create non-consensual pornographic content or impersonate individuals, leading to serious reputational harm. In the political arena, deepfakes can distort democratic processes, as fabricated videos of leaders making false statements could easily sway public opinion. This digital manipulation also extends to the corporate world, where deepfakes can be used for fraud or to damage the reputation of companies.
Unmasking the False: How to Detect a Deepfake
Detecting deepfakes remains a challenge, but it is crucial to mitigating their potential harm. Here are some methods:
1. Scrutinizing Visual Inconsistencies: Often, deepfakes exhibit subtle flaws, such as unnatural blinking patterns, facial asymmetry, or poor lip-syncing. Observing these discrepancies can be a tell-tale sign of a deepfake.
2. Analyzing Audio Patterns: Inconsistent or unnatural speech patterns, such as unusual intonations or pauses, can indicate manipulation.
3. Digital Footprint Examination: Advanced tools can analyze the digital footprint of a video, looking for alterations in pixel patterns that are not visible to the naked eye.
4. AI-Based Detection Tools: As deepfakes become more sophisticated, AI-powered tools are being developed to detect them. These tools use machine learning algorithms to analyze videos for signs of manipulation that humans might miss.
5. Blockchain Verification: Some platforms are adopting blockchain technology to authenticate the origin and integrity of videos, helping to differentiate genuine content from deepfakes.
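To make the integrity idea behind method 5 concrete, here is a minimal, stdlib-only sketch of hash chaining, the core mechanism such verification schemes build on. The function names and the "genesis" seed are illustrative assumptions, not any platform's actual API: each segment of a video is hashed together with the previous link, so altering even one byte anywhere breaks every later link in the chain.

```python
import hashlib

def chain_hash(prev_hash: str, content: bytes) -> str:
    """Hash the previous link together with the new content (SHA-256)."""
    return hashlib.sha256(prev_hash.encode() + content).hexdigest()

def build_chain(segments):
    """Record a ledger entry for each video segment, chained to the one before it."""
    ledger = []
    prev = "genesis"  # arbitrary starting seed, an assumption for this sketch
    for seg in segments:
        prev = chain_hash(prev, seg)
        ledger.append(prev)
    return ledger

def verify_chain(segments, ledger):
    """Re-derive the chain and compare against the recorded ledger.

    Any tampered segment changes its hash, which propagates to all
    subsequent links, so the mismatch is detected immediately.
    """
    prev = "genesis"
    for seg, recorded in zip(segments, ledger):
        prev = chain_hash(prev, seg)
        if prev != recorded:
            return False
    return True
```

In practice the ledger would be published to an append-only store (a blockchain or a transparency log) at capture time, so a later deepfake of the same footage cannot produce matching entries: replacing a segment's bytes yields a different hash and the verification fails.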
A Call to Action: The Need for Vigilance
The emergence of deepfakes calls for a heightened sense of digital literacy and skepticism. While technology evolves to combat this phenomenon, the responsibility also lies with individuals to critically assess the content they encounter. It’s essential to verify sources and be wary of videos that seem suspicious or too sensational to be true. As deepfakes continue to challenge our perception of reality, staying informed and cautious is our best defense in this ongoing battle against digital deception.