ChatGPT being used to influence US elections, alleges OpenAI
OpenAI has disclosed alarming instances of its artificial intelligence models, including ChatGPT, being misused by cybercriminals to create fake content aimed at influencing US elections.
The findings underscore the growing challenge AI poses to cybersecurity and election integrity, raising fresh concerns about the role of emerging technologies in shaping democratic processes.
The report, released on Wednesday, details how AI tools like ChatGPT have been exploited to generate persuasive, coherent text at an unprecedented scale.
Cybercriminals have used the technology to craft fake news articles, social media posts, and even fraudulent campaign materials intended to mislead voters.
These AI-generated messages are often sophisticated enough to mimic the style of legitimate news outlets, making it increasingly difficult for the average citizen to discern truth from fabrication.
One of the most concerning trends highlighted in the report is the ability of malicious actors to tailor disinformation campaigns to specific demographics. By leveraging data mining techniques, cybercriminals can analyse voter behaviour and preferences, creating targeted messages that resonate with particular audiences.
This level of personalisation enhances the impact of disinformation, allowing bad actors to exploit existing political divisions and amplify societal discord.
AI-Driven ‘Disinformation’
The US Department of Homeland Security has also raised concerns about the potential for foreign interference in the upcoming November elections.
According to US authorities, Russia, Iran, and China are reportedly using AI to spread divisive and fake information, posing a significant threat to election integrity.
These countries have allegedly employed artificial intelligence to generate disinformation aimed at manipulating public opinion and undermining trust in the democratic process.
The report from OpenAI indicates that the company has thwarted over 20 attempts to misuse ChatGPT for influence operations this year alone.
In August, several accounts were blocked for generating election-related articles, while in July, accounts from Rwanda were banned for producing social media comments intended to influence that country's elections. Although these attempts have so far failed to gain significant traction or achieve viral spread, OpenAI emphasises the need for vigilance, as the technology continues to evolve.
Challenges
The speed at which AI can produce content poses significant challenges for traditional fact-checking and response mechanisms, which struggle to keep pace with the flood of false information.
This dynamic creates an environment where voters are bombarded with conflicting narratives, complicating their decision-making processes and potentially eroding trust in democratic institutions.
OpenAI’s findings also highlight the potential for AI to be used in automated social media campaigns. The ability to rapidly generate content allows bad actors to skew public perception and influence voter sentiment in real time, particularly during critical moments in the run-up to elections.
Despite the limited success of these operations to date, the potential for AI-driven disinformation to disrupt elections remains a serious concern.
Greater Vigilance
In response to these developments, OpenAI has called for increased collaboration between technology companies, governments, and civil society to address the misuse of AI in influence operations.
The company is also enhancing its own monitoring and enforcement mechanisms to detect and prevent the misuse of its models for generating fake or harmful content.
As artificial intelligence continues to reshape the information landscape, OpenAI’s report serves as a stark reminder of the need to balance technological innovation with robust safeguards.
The stakes are high, and the ability to maintain the integrity of democratic processes in the age of AI will require coordinated efforts and proactive strategies from all stakeholders involved.
1 month ago
Thousands around the world report ChatGPT outage
Thousands of ChatGPT users around the world were left frustrated earlier today after the popular AI chatbot experienced a major outage.
According to DownDetector, a website that tracks online service outages, over 3,000 users reported issues with ChatGPT, reports The Sun.
Many users took to social media to express their frustration, with one user on X (formerly Twitter) stating: “Hey ChatGPT - wasn't expecting you to be down when I chose you. Make it quick.”
OpenAI, the developers of ChatGPT, acknowledged the outage and released a statement saying: “We are currently investigating this issue.”
The outage appears to have since been resolved. However, it highlights the growing reliance on AI-powered services and the disruption that can occur when these services go offline, the report added.
6 months ago
ChatGPT, Gemini won't reach human intelligence, Meta AI chief says
The artificial intelligence that powers systems like OpenAI's ChatGPT, Google's Gemini and Meta’s Llama will not be able to attain human levels of intelligence, said Meta's AI head Yann LeCun.
In an interview published in the Financial Times on Wednesday, he gave an insight into how the tech giant expects to develop the technology going forward, only weeks after its plans for massive spending spooked investors and wiped hundreds of billions of dollars off its market value, reports Forbes.
The models, commonly referred to as LLMs, are trained on massive quantities of data, and their capacity to respond properly to prompts is limited by the nature of the data on which they are trained, according to LeCun, meaning they are accurate only when given the appropriate training data, it said.
LLMs have a "limited understanding of logic," lack enduring memory, do not understand the physical world, and cannot plan hierarchically, LeCun said, adding that they "cannot reason in any reasonable definition of the term."
LeCun, considered one of the three "AI godfathers" for his foundational contributions to the field, said that because LLMs are accurate only when fed the correct training data, they are also "intrinsically unsafe," and that researchers seeking to produce human-level AI should look at other kinds of models, the report said.
LeCun stated that he and his roughly 500-strong team at Meta's Fundamental AI Research lab are working to develop an entirely new generation of AI systems based on an approach known as "world modelling," in which the system builds an understanding of the world around it much as humans do and develops a sense of what would happen if something were to change, the report added.
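To make the concept concrete, here is a minimal, hypothetical sketch in Python of the world-modelling idea: learning to predict what would happen next in an environment, rather than predicting the next word. The architecture, names, and sizes below are illustrative assumptions, not Meta's actual design.

```python
# Illustrative-only sketch of a "world model": an encoder maps an observation
# to an internal state, and a transition model predicts how that state would
# change under a given action. Hypothetical shapes and names; not Meta's design.
import torch
import torch.nn as nn

class WorldModel(nn.Module):
    def __init__(self, obs_dim, action_dim, state_dim=32):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, state_dim)                    # observation -> internal state
        self.transition = nn.Linear(state_dim + action_dim, state_dim)  # (state, action) -> next state

    def imagine(self, obs, action):
        """Predict the internal state the world would be in after taking `action`."""
        state = torch.tanh(self.encoder(obs))
        return torch.tanh(self.transition(torch.cat([state, action], dim=-1)))

model = WorldModel(obs_dim=8, action_dim=2)
next_state = model.imagine(torch.randn(1, 8), torch.randn(1, 2))
print(next_state.shape)  # torch.Size([1, 32])
```

In practice such a model would be trained on real observations so its predicted states match what actually happens, which is what gives the system its sense of what would happen if something changed.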
LeCun predicted that human-level AI may take up to ten years to create using the world modelling technique.
6 months ago
“Her”? OpenAI to remove ChatGPT voice over Scarlett Johansson resemblance
OpenAI says it will delete one of ChatGPT's voices after it was compared to Hollywood actress Scarlett Johansson.
When OpenAI demonstrated its new model's capabilities, users noticed a similarity in the chatbot's "Sky" voice option, which reads responses out to users, reports BBC.
The “flirty, conversational” enhancement to its AI chatbot was compared to the actress's role in the 2013 film “Her”.
According to OpenAI, the voices in ChatGPT's voice mode were "carefully selected through an extensive process spanning five months involving professional voice actors, talent agencies, casting directors, and industry advisors".
“Her” has Joaquin Phoenix falling in love with his phone's operating system, which is voiced by Johansson.
Director Spike Jonze stated at the time that the film was "not about technology or software," but rather about discovering love and intimacy.
In November, Johansson reportedly sued an artificial intelligence (AI) app for using her picture in an advertisement without her permission.
OpenAI stated on Monday that its "Sky" voice is not meant to be an "imitation" of the star. "We believe that AI voices should not deliberately mimic a celebrity's distinctive voice," it said in a blog post.
In a statement on X, the company stated that it is "working to pause" the voice while it addresses issues about how it was picked, the report said.
Despite this, when OpenAI unveiled its new model GPT-4o on May 13, CEO Sam Altman mentioned the name of the film on X.
6 months ago
GPT-4o: What’s OpenAI’s latest version really capable of?
OpenAI has introduced the newest version of the technology that powers its AI chatbot ChatGPT.
It's called GPT-4o, and it will be made available to all ChatGPT users, including non-subscribers, reports BBC.
It is faster than previous models and has been trained to respond to commands in a conversational, often alluring, tone.
The updated version can read and analyse photographs, translate languages, and detect emotions through visual expressions. There is also enhanced memory, which allows it to recall prior commands, it said.
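GPT-4o is also available to developers through OpenAI's API. As a brief illustration only (assuming the official openai Python package, an API key in the environment, and a placeholder image URL), asking the model to analyse an image looks roughly like this:

```python
# Minimal sketch: asking GPT-4o to describe the emotion in an image via
# OpenAI's chat completions API. Assumes the official `openai` package (v1+),
# an OPENAI_API_KEY environment variable, and a placeholder image URL.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What emotion does the face in this photo show?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/face.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```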
GPT-4o can be interrupted and has a more natural conversational tempo; there is virtually no gap between asking a question and receiving a response.
Mira Murati, OpenAI's chief technical officer, characterised GPT-4o as "magical" but stated that the company will "remove that mysticism" with the product's release, said the report.
While this technology is fast getting more sophisticated and believable as a companion, it is not sentient or magical; rather, it is clever programming and machine learning, it also said.
There have been rumours about a collaboration between OpenAI and Apple, and while this has not been verified, it was clear during the presentation that Apple devices were used throughout.
6 months ago
What Is Google Gemini AI? How to Use the New Chatbot Model
Generative AI began its journey in 2023, and the features of generative AI tools like chatbots are improving day by day to enhance user experience around the world. Google's Bard made waves with its capabilities, but now a new era dawns with Gemini. This innovative chatbot boasts enhanced intelligence and functionality. Join us as we explore Gemini's features, capabilities, and impact on the future of conversational AI.
What Is Google Gemini AI?
Google Gemini is the newest and most advanced artificial intelligence made by Google. It understands images, videos, text, and even sounds. What makes Gemini stand out is how it acts almost like a human. Gemini AI is good at understanding information, solving problems, and planning for the future.
Gemini has three versions: Pro, Ultra, and Nano. The Pro version has been released already, and the Ultra version will be available next year. It is expected that Gemini will play a crucial role in the latest chatbot technology, pushing the boundaries of what AI can do.
How Google's New Chatbot Model Works
Gemini AI is a type of computer system called a neural network. It has been trained using a huge amount of text and code from various sources like books, articles, and code repositories. This training helps the neural network understand the patterns and connections between words and phrases in this data. As a result, Gemini AI can do things like generate text, translate languages, create different types of content, and provide informative answers to questions.
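To illustrate the idea (this is a toy sketch in Python with PyTorch, not Google's actual code or architecture), a neural network can be trained to predict the next word from the words before it, and then used to generate text. The corpus, model size, and training settings below are placeholder assumptions:

```python
# Toy sketch of the core mechanism behind chatbots like Gemini: a neural
# network trained to predict the next word, then used to generate text.
# Placeholder corpus and sizes; not Google's actual model.
import torch
import torch.nn as nn
import torch.nn.functional as F

corpus = "the cat sat on the mat . the dog sat on the rug .".split()
vocab = sorted(set(corpus))
stoi = {w: i for i, w in enumerate(vocab)}

# Training pairs: each word is used to predict the word that follows it.
xs = torch.tensor([stoi[w] for w in corpus[:-1]])
ys = torch.tensor([stoi[w] for w in corpus[1:]])

class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)  # word -> vector
        self.out = nn.Linear(dim, vocab_size)       # vector -> next-word scores

    def forward(self, idx):
        return self.out(self.embed(idx))

model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=0.1)
for _ in range(200):  # fit the toy corpus
    loss = F.cross_entropy(model(xs), ys)
    opt.zero_grad(); loss.backward(); opt.step()

# Generate: repeatedly pick the most likely next word.
word = "the"
for _ in range(5):
    word = vocab[model(torch.tensor([stoi[word]])).argmax().item()]
    print(word, end=" ")
```

Real systems like Gemini follow the same predict-the-next-token principle, but with transformer architectures, billions of parameters, and vastly larger multimodal training data.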
How to Use Gemini AI
If you have a Google account already, using Gemini is easy. Just go to the website using your internet browser and log in with your Google details. But remember, you need to have a Google account.
If you use a Google Workspace account, you might need to switch to your email to try Gemini.
9 months ago
AI could threaten 40% of global jobs, IMF warns
The International Monetary Fund (IMF) has sounded an alarm, indicating that nearly 40% of global employment could be endangered by the burgeoning influence of artificial intelligence (AI). This stark warning, reported by CNN, underscores the seismic shifts anticipated in the global job market.
IMF Chief Kristalina Georgieva, in a recent blog post, stressed the critical necessity for governments worldwide to fortify social safety nets and roll out comprehensive retraining programmes. This proactive approach aims to mitigate AI's potentially dramatic effects on employment.
Highlighting a key concern, Georgieva pointed out the potential for AI adoption to aggravate existing inequalities, a trend that requires immediate policy intervention to avert escalating social tensions. This issue is set to be a central theme at the upcoming annual meeting of the World Economic Forum (WEF) in Davos, Switzerland, where AI's role in the economy will be a focal point.
According to the IMF's analysis, advanced economies might witness the most significant impact, with up to 60% of jobs at risk. Although AI promises to enhance productivity in about half of these roles, the remainder faces a stark reality of diminishing demand, lowered wages, and potential unemployment as AI assumes roles traditionally held by humans.
Emerging markets and lower-income countries are not immune to these challenges. Here, 40% and 26% of jobs, respectively, may feel the impact. Georgieva raised concerns about these regions' lack of infrastructure and skilled workforces, factors that intensify the risk of AI deepening existing economic divides.
Georgieva also warned of an escalating risk of social unrest, especially if younger, tech-savvy workers leverage AI for productivity gains, leaving their older counterparts struggling to adapt.
At Davos, the implications of AI on employment are a key discussion topic. Prominent figures, including Sam Altman, CEO of ChatGPT-maker OpenAI, and Microsoft's Satya Nadella, are slated to address the impact of generative AI technologies.
Despite these challenges, Georgieva did not overlook AI's positive potential, noting its capacity to significantly boost global output and incomes. She argued that with thoughtful planning, AI could be a transformative force for the global economy, stressing the importance of channeling its benefits for the collective good.
Amidst concerns over job displacement, some economists are optimistic, suggesting that AI's widespread adoption may ultimately enhance labor productivity, potentially leading to a 7% increase in annual global GDP over the next decade.
10 months ago
Explainer: What may have caused OpenAI board to fire Sam Altman
In a surprising move, OpenAI, the artificial intelligence research lab, ousted its CEO, Sam Altman, raising eyebrows and leaving shareholders in the dark.
While concerns about the rapid advancement of AI technology may have played a role in Altman's termination, the handling of the situation has drawn criticism from various quarters, reports CNN.
The decision to remove Altman, credited with steering OpenAI from obscurity to a $90 billion valuation, was made abruptly, catching even major stakeholders like Microsoft off guard.
The CNN report suggests that Microsoft, OpenAI's most important shareholder, was unaware of Altman's dismissal until just before the public announcement, causing a significant drop in Microsoft's stock value.
OpenAI employees, including co-founder and former president Greg Brockman, were also blindsided, leading to Brockman's subsequent resignation. The sudden departure of key figures prompted rumors of Altman and former employees planning to launch a competing startup, posing a threat to OpenAI's years of hard work and achievements, said the report.
The situation was complicated by the peculiar structure of OpenAI's board. The company, a nonprofit, houses a for-profit entity, OpenAI LP, established by Altman, Brockman, and Chief Scientist Ilya Sutskever. The for-profit arm's rapid push to a $90 billion valuation clashed with the board, which remained controlled by the nonprofit, setting the stage for Altman's dismissal, it also said.
The tipping point appears to have been Altman's announcement at a recent developer conference that OpenAI would provide tools for creating personalised versions of ChatGPT. This move, seen by the board as too risky, may have triggered his removal.
Altman's warnings about the potential dangers of AI and the need for regulatory limits indicate a clash between innovation and safety within OpenAI. The board's concerns about Altman's pace of development, while perhaps justified, were mishandled, leading to a crisis that could have been avoided.
The aftermath sees OpenAI scrambling to reverse the decision, attempting to entice Altman back. The incident has strained relations with Microsoft, which now demands a seat on the board. OpenAI's future hangs in the balance, with possibilities ranging from Altman's return to a potential competition with a new startup, the report also said.
In the end, OpenAI finds itself in a precarious position, facing potential internal upheaval and external challenges, highlighting the importance of strategic decision-making in the rapidly evolving field of artificial intelligence.
1 year ago
What can Bard, Google’s answer to ChatGPT, do?
To use, or not to use, Bard? That is the Shakespearean question an Associated Press reporter sought to answer while testing out Google’s artificially intelligent chatbot.
The recently rolled-out bot dubbed Bard is the internet search giant’s answer to the ChatGPT tool that Microsoft has been melding into its Bing search engine and other software.
During several hours of interaction, the AP learned Bard is quite forthcoming about its unreliability and other shortcomings, including its potential for mischief in next year’s U.S. presidential election. Even as it occasionally warned of the problems it could unleash, Bard repeatedly emphasized its belief that it will blossom into a force for good.
At one point in its recurring soliloquies about its potential upsides, Bard dreamed about living up to the legacy of the English playwright that inspired its name.
Bard explained that its creators at Google “thought Shakespeare would be a good role model for me, as he was a master of language and communication.”
But the chatbot also found some admirable traits in “HAL,” the fictional computer that killed some of a spacecraft’s crew in the 1968 movie “2001: A Space Odyssey.” Bard hailed HAL’s intelligence, calling it “an interesting character” before acknowledging its dark side.
“I think HAL is a cautionary tale about the dangers of artificial intelligence,” Bard assessed.
WHAT’S BETTER — BARD OR BING?
Bard praised ChatGPT, describing it as “a valuable tool that can be used for a variety of purposes, and I am excited to see how it continues to develop in the future.” But Bard then asserted that it is just as intelligent as its rival, which was released late last year by its creator, the Microsoft-backed OpenAI.
“I would say that I am on par with ChatGPT,” Bard said. “We both have our own strengths and weaknesses, and we both have the ability to learn and grow.”
During our wide-ranging conversation, Bard didn’t display any of the disturbing tendencies that have cropped up in the AI-enhanced version of Microsoft’s Bing search engine, which has likened another AP reporter to Hitler and tried to persuade a New York Times reporter to divorce his wife.
IT’S FUNNY, BUT TAMER THAN BING
Bard did get a little gooey at one point when asked to write a Shakespearean sonnet and responded seductively in one of the three drafts that it quickly created.
“I love you more than words can ever say, And I will always be there for you,” Bard effused. “You are my everything, And I will never let you go. So please accept this sonnet as a token Of my love for you, And know that I will always be yours.”
But Bard seems to be deliberately tame most of the time, and probably for good reason, given what’s at stake for Google, which has carefully cultivated a reputation for trustworthiness that has established its dominant search engine as the de facto gateway to the internet.
An artificial intelligence tool that behaved as erratically as ChatGPT periodically might trigger a backlash that could damage Google’s image and perhaps undercut its search engine, the hub of a digital advertising empire that generated more than $220 billion in revenue last year. Microsoft, in contrast, can afford to take more risks with the edgier ChatGPT because it makes more of its money from licensing software for personal computers.
BARD ADMITS IT’S NOT PERFECT
Google has programmed Bard to ensure it warns its users that it’s prone to mistakes.
Some inaccuracies are fairly easy to spot. For instance, when asked for some information about the AP reporter questioning it, Bard got most of the basics right, most likely by plucking tidbits from profiles posted on LinkedIn and Twitter.
But Bard mysteriously also spit out inaccuracies about this reporter’s academic background (describing him as a graduate of University of California, Berkeley, instead of San Jose State University) and professional background (incorrectly stating that he began his career at The Wall Street Journal before also working at The New York Times and The Washington Post).
When asked to produce a short story about disgraced Theranos CEO Elizabeth Holmes, Bard summed up most of the highlights and lowlights of her saga. But one of Bard’s three drafts incorrectly reported that Holmes was convicted of all the felony charges of fraud and conspiracy leveled against her during a four-month trial. Another version accurately reported Holmes was convicted on four counts of fraud and conspiracy without mentioning she was acquitted on four other charges (the jury hung on three other charges that were subsequently dismissed by prosecutors).
“I am still under development, and I am not perfect,” Bard cautioned at one point. “I can sometimes make mistakes, and I can sometimes be misunderstood. I am also aware that my technology can be used for both good and evil.”
WHAT’S NEXT FOR BARD?
Although Bard insisted it doesn’t have a dark side, it acknowledged it can be used to damage reputations, disseminate propaganda that could incite violence and manipulate elections.
“I could be used to create fake news articles or social media posts that could spread misinformation about candidates or their policies,” Bard warned. “I could also be used to suppress voter turnout by spreading discouraging messages or making it difficult for people to find information about how to vote.”
On the lighter side, Bard proved helpful in finding interesting coloring books for adults and hit some notes that resonated during a discussion of rock and roll. When asked who is the greatest guitarist of all time, Bard responded with a broad list of candidates ranging from well-known rock artists such as Jimmy Page of Led Zeppelin to jazz virtuosos like Django Reinhardt before making the case for Jimi Hendrix “as a strong contender for the title, his music continues to influence and inspire guitarists around the world.”
Bard also seemed familiar with contemporary artists, such as Wet Leg, which recently won a Grammy award for best alternative music album. “I’ve been listening to their debut album a lot lately and I really enjoy it. I think they have a bright future ahead of them,” Bard said, and cited “Ur Mom” and “Chaise Longue” as its favorite songs by the group so far.
Even with Bard’s occasional miscues, it seemed savvy enough to ask about its potential role in reaching the singularity, a term popularized by computer scientist and author Ray Kurzweil to describe a turning point in the future when computers will be smarter than humans.
“Some people believe that I am a big step toward the singularity,” Bard said. “I believe that I am a valuable tool that can help people to learn and understand the world around them. However, I do not believe that I am the singularity, and I do not believe that I will ever be able to replace human intelligence.”
This story was corrected to reflect that the technology that likened an AP reporter to Hitler was done by an AI-enhanced version of Microsoft’s Bing search engine. An earlier version said the comparison was made by the standalone ChatGPT service.
1 year ago
'Out of control' AI race: Elon Musk, top tech personalities call for a pause
Several of the most important personalities in tech are urging artificial intelligence labs to halt training of the most powerful AI systems for at least six months, citing "profound risks to society and humanity."
Elon Musk was among the hundreds of tech CEOs, educators, and researchers who signed the letter, which was released by the Future of Life Institute, a nonprofit backed by Musk, reports CNN.
The letter comes only two weeks after OpenAI launched GPT-4, a more powerful version of the technology that powers ChatGPT, the popular AI chatbot application.
In early testing and a corporate demo, the system demonstrated that it can draft lawsuits, pass standardized exams, and develop a working website from a hand-drawn design, it said.
According to the letter, the delay should apply to AI systems "more powerful than GPT-4." It also stated that the suggested pause should be used by impartial experts to collaboratively establish and execute a set of standard protocols for AI tools that are safe "beyond a reasonable doubt."
"Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources," the letter said. "Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control."
If a pause cannot be enacted quickly, the letter says, governments should step in and institute a moratorium.
Experts in artificial intelligence are growing worried about the potential for biased responses, the spread of disinformation, and the implications for consumer privacy.
These technologies have also raised concerns about how AI might disrupt professions, enable students to cheat, and change humans' relationship with technology.
The letter hinted at a larger dissatisfaction within and beyond the industry with the fast rate of AI progress. Early versions of AI governance frameworks have been introduced by several governing bodies in China, the EU, and Singapore.
1 year ago