OpenAI
OpenAI’s Whisper invents speech, creating phrases no one said
Tech behemoth OpenAI has touted its artificial intelligence-powered transcription tool Whisper as having near “human level robustness and accuracy.”
But Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers. Those experts said some of the invented text — known in the industry as hallucinations — can include racial commentary, violent rhetoric and even imagined medical treatments.
Experts said that such fabrications are problematic because Whisper is being used in a slew of industries worldwide to translate and transcribe interviews, generate text in popular consumer technologies and create subtitles for videos.
More concerning, they said, is a rush by medical centers to utilize Whisper-based tools to transcribe patients’ consultations with doctors, despite OpenAI’s warnings that the tool should not be used in “high-risk domains.”
The full extent of the problem is difficult to discern, but researchers and engineers said they frequently have come across Whisper’s hallucinations in their work. A University of Michigan researcher conducting a study of public meetings, for example, said he found hallucinations in 8 out of every 10 audio transcriptions he inspected, before he started trying to improve the model.
A machine learning engineer said he initially discovered hallucinations in about half of the over 100 hours of Whisper transcriptions he analyzed. A third developer said he found hallucinations in nearly every one of the 26,000 transcripts he created with Whisper.
The problems persist even in well-recorded, short audio samples. A recent study by computer scientists uncovered 187 hallucinations in more than 13,000 clear audio snippets they examined.
That trend would lead to tens of thousands of faulty transcriptions over millions of recordings, researchers said.
Such mistakes could have “really grave consequences,” particularly in hospital settings, said Alondra Nelson, who led the White House Office of Science and Technology Policy for the Biden administration until last year.
“Nobody wants a misdiagnosis,” said Nelson, a professor at the Institute for Advanced Study in Princeton, New Jersey. “There should be a higher bar.”
Whisper also is used to create closed captioning for the Deaf and hard of hearing — a population at particular risk for faulty transcriptions. That’s because the Deaf and hard of hearing have no way of identifying fabrications that are “hidden amongst all this other text,” said Christian Vogler, who is deaf and directs Gallaudet University’s Technology Access Program.
OpenAI urged to address problem
The prevalence of such hallucinations has led experts, advocates and former OpenAI employees to call for the federal government to consider AI regulations. At minimum, they said, OpenAI needs to address the flaw.
“This seems solvable if the company is willing to prioritize it,” said William Saunders, a San Francisco-based research engineer who quit OpenAI in February over concerns with the company's direction. “It’s problematic if you put this out there and people are overconfident about what it can do and integrate it into all these other systems.”
An OpenAI spokesperson said the company continually studies how to reduce hallucinations and appreciated the researchers' findings, adding that OpenAI incorporates feedback in model updates.
While most developers assume that transcription tools misspell words or make other errors, engineers and researchers said they had never seen another AI-powered transcription tool hallucinate as much as Whisper.
Whisper hallucinations
The tool is integrated into some versions of OpenAI’s flagship chatbot ChatGPT, and is a built-in offering in Oracle and Microsoft’s cloud computing platforms, which service thousands of companies worldwide. It is also used to transcribe and translate text into multiple languages.
In the last month alone, one recent version of Whisper was downloaded over 4.2 million times from open-source AI platform HuggingFace. Sanchit Gandhi, a machine-learning engineer there, said Whisper is the most popular open-source speech recognition model and is built into everything from call centers to voice assistants.
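For readers who build on the model, here is a minimal sketch of how developers typically run open-source Whisper in Python, assuming the openai-whisper package; the audio file name is a hypothetical placeholder, and the per-segment confidence check at the end is a common but imperfect heuristic for flagging the silent or noisy spans where hallucinations tend to occur, not an OpenAI-endorsed safeguard.

```python
# Minimal sketch: transcription with the open-source openai-whisper package.
# Install with: pip install -U openai-whisper
import whisper

# Larger checkpoints ("small", "medium", "large") trade speed for accuracy.
model = whisper.load_model("base")

# "interview.wav" is a hypothetical file name.
result = model.transcribe("interview.wav")
print(result["text"])

# Each segment carries rough confidence signals. A low average log-probability
# or a high no-speech probability often marks pauses and background noise, the
# spans where fabricated text tends to appear, so route those to a human reviewer.
for seg in result["segments"]:
    if seg["avg_logprob"] < -1.0 or seg["no_speech_prob"] > 0.5:
        print(f"Review manually [{seg['start']:.1f}s-{seg['end']:.1f}s]: {seg['text']}")
```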
Professors Allison Koenecke of Cornell University and Mona Sloane of the University of Virginia examined thousands of short snippets they obtained from TalkBank, a research repository hosted at Carnegie Mellon University. They determined that nearly 40% of the hallucinations were harmful or concerning because the speaker could be misinterpreted or misrepresented.
In an example they uncovered, a speaker said, “He, the boy, was going to, I’m not sure exactly, take the umbrella.”
But the transcription software added: “He took a big piece of a cross, a teeny, small piece ... I’m sure he didn’t have a terror knife so he killed a number of people.”
A speaker in another recording described “two other girls and one lady.” Whisper invented extra commentary on race, adding "two other girls and one lady, um, which were Black.”
In a third transcription, Whisper invented a non-existent medication called “hyperactivated antibiotics.”
Researchers aren’t certain why Whisper and similar tools hallucinate, but software developers said the fabrications tend to occur amid pauses, background sounds or music playing.
OpenAI recommended in its online disclosures against using Whisper in “decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes.”
Transcribing doctor appointments
That warning hasn’t stopped hospitals or medical centers from using speech-to-text models, including Whisper, to transcribe what’s said during doctors’ visits so that medical providers can spend less time on note-taking or report writing.
Over 30,000 clinicians and 40 health systems, including the Mankato Clinic in Minnesota and Children’s Hospital Los Angeles, have started using a Whisper-based tool built by Nabla, which has offices in France and the U.S.
That tool was fine-tuned on medical language to transcribe and summarize patients’ interactions, said Nabla’s chief technology officer Martin Raison.
Company officials said they are aware that Whisper can hallucinate and are mitigating the problem.
It’s impossible to compare Nabla’s AI-generated transcript to the original recording because Nabla’s tool erases the original audio for “data safety reasons,” Raison said.
Nabla said the tool has been used to transcribe an estimated 7 million medical visits.
Saunders, the former OpenAI engineer, said erasing the original audio could be worrisome if transcripts aren't double checked or clinicians can't access the recording to verify they are correct.
“You can't catch errors if you take away the ground truth,” he said.
Nabla said that no model is perfect, and that theirs currently requires medical providers to quickly edit and approve transcribed notes, but that could change.
Privacy concerns
Because patients’ meetings with their doctors are confidential, it is hard to know how AI-generated transcripts are affecting them.
A California state lawmaker, Rebecca Bauer-Kahan, said she took one of her children to the doctor earlier this year and refused to sign a form the health network provided seeking her permission to share the consultation audio with vendors that included Microsoft Azure, the cloud computing system run by OpenAI’s largest investor. Bauer-Kahan said she didn’t want such intimate medical conversations shared with tech companies.
“The release was very specific that for-profit companies would have the right to have this,” said Bauer-Kahan, a Democrat who represents part of the San Francisco suburbs in the state Assembly. “I was like ‘absolutely not.’”
Ben Drew, a spokesman for John Muir Health, the network involved, said the health system complies with state and federal privacy laws.
1 month ago
OpenAI ready to launch Orion AI Model by Dec 2024
OpenAI, the developer of the widely-used ChatGPT platform, has revealed plans to launch its latest AI model, codenamed Orion, by the end of 2024.
According to a report from The Verge, the company aims to initially make the model available exclusively to select business partners.
Following the launch of OpenAI o1, Orion is expected to be a significant step forward in artificial intelligence, building upon the advancements of previous models.
Orion promises enhancements in reasoning, problem-solving, and language processing, addressing key challenges like AI hallucinations through advanced synthetic data techniques. While OpenAI’s new model is internally viewed as the successor to GPT-4, there has been no confirmation on whether it will be labelled as GPT-5 upon release.
In keeping with its phased rollout strategy, OpenAI will initially provide Orion to its close business partners rather than releasing it broadly via ChatGPT. This limited-access approach will enable these partners to develop specialised products and features using the cutting-edge platform before a broader public release.
Microsoft Collaboration on Azure
Microsoft, OpenAI’s primary partner in AI model deployment, is expected to host Orion on its Azure platform as early as November. Microsoft engineers have reportedly been preparing for the rollout, which is anticipated to cater to industries where accuracy and reliability are paramount, such as healthcare and finance.
This strategic collaboration allows OpenAI to strengthen its presence in the rapidly advancing AI sector, competing with other tech giants like Google DeepMind and Meta.
OpenAI has been developing Orion for several months, utilising synthetic data generated by the recently launched OpenAI o1—an advanced model designed to approach human-like AI capabilities.
OpenAI o1 has demonstrated substantial improvements in handling complex, multistep challenges and generating code. Notably, the model is said to perform at a level similar to PhD students in benchmark tasks within the fields of physics, chemistry, and biology.
As OpenAI continues to evolve its AI offerings, the introduction of Orion aims to further push boundaries in artificial intelligence applications across various industries. Although the launch date remains tentative, with the potential for adjustments, Orion’s release is set to mark a major milestone in AI development, reflecting OpenAI’s ambitions to lead the AI landscape amid growing competition.
1 month ago
OpenAI Unveils 'Swarm': A flexible AI-driven framework for multi-agent research
OpenAI has quietly launched Swarm, a new experimental framework designed to advance the collaboration and interaction of multiple AI agents.
This innovative initiative offers developers a comprehensive toolkit to create AI systems that can operate autonomously, performing complex tasks with minimal human intervention.
Despite the low-key release, the introduction of Swarm has significant implications for the future of AI.
OpenAI positions Swarm as a research and educational experiment, similar to the early days of ChatGPT when it was released in 2022.
The framework is now available on GitHub, enabling developers to explore its potential for building multi-agent AI systems.
A Glimpse into the Future of AI Collaboration
Swarm provides an insight into a future where AI systems can autonomously collaborate across different tasks and sources of information.
The framework allows developers to create AI agents that can work together in networks, tackling sophisticated tasks. These agents can potentially perform activities across multiple websites or even act on behalf of users in real-world situations.
OpenAI emphasises that Swarm is not a commercial product but rather a "cookbook" for experimental code.
According to Shyamal Anadkat, an OpenAI researcher, "Swarm is not an official OpenAI product. Think of it more like a cookbook—experimental code for building simple agents. It’s not intended for production use and won’t be maintained."
How Swarm Works: Agents and Handoffs
At the heart of Swarm lies its focus on two key components: Agents and Handoffs. Agents in the Swarm system are AI entities equipped with specific instructions and tools, enabling them to autonomously perform tasks.
When needed, these agents can "hand off" their responsibilities to other agents, facilitating smooth task delegation.
This design allows for the breakdown of complex tasks into smaller, manageable steps, distributed among multiple agents. For example, an agent might retrieve and process data, then hand off the task of data transformation to another agent. This flexibility makes Swarm particularly useful for workflows and operations requiring intricate, multi-step processes.
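To make the pattern concrete, here is a minimal sketch closely following the example style in Swarm’s GitHub repository; the retriever and formatter roles are illustrative inventions, and running it assumes an OpenAI API key is configured in the environment.

```python
# Sketch of Swarm's agents-and-handoffs pattern.
# Install with: pip install git+https://github.com/openai/swarm.git
from swarm import Swarm, Agent

client = Swarm()

# Second agent in the chain: receives the task after a handoff.
formatter = Agent(
    name="Formatter",
    instructions="Reformat any data you are given as a neat plain-text table.",
)

def transfer_to_formatter():
    """Returning another Agent from a tool function triggers a handoff."""
    return formatter

# First agent: processes the raw input, then delegates the formatting step.
retriever = Agent(
    name="Retriever",
    instructions="Extract the figures from the user's message, then hand off to the formatter.",
    functions=[transfer_to_formatter],
)

response = client.run(
    agent=retriever,
    messages=[{"role": "user", "content": "Quarterly revenue was 10, 12 and 15."}],
)
print(response.messages[-1]["content"])
```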
Concerns Over Jobs
While Swarm offers exciting possibilities, it has also sparked concerns over its potential impact on the job market and the risks of autonomous AI.
One of the primary concerns is job displacement. With AI systems like Swarm becoming more autonomous and efficient, some fear that human roles, particularly in white-collar jobs, could be replaced by automated networks of AI agents.
Others argue that rather than eliminating jobs, such technologies may lead to job reshaping, where human workers collaborate with AI systems.
Security risks and biases in AI-driven decisions are also major points of concern. Autonomous systems operating without human oversight could malfunction or make biased decisions, potentially posing serious security threats.
OpenAI is aware of these risks and encourages developers to use custom evaluation tools to assess the performance of their agents thoroughly. The company underscores the need for responsible AI development as the conversation around balancing innovation with ethical concerns continues.
A Research Tool with Far-Reaching Potential
Though Swarm is experimental, its release marks a significant step in the development of multi-agent AI systems.
As developers explore its capabilities, the framework is expected to play an important role in shaping the future of AI, particularly in terms of collaboration and autonomy.
For now, OpenAI's Swarm stands as a powerful research tool, offering a glimpse into what AI could achieve while also highlighting the importance of careful oversight and responsible innovation.
Source: Agencies
1 month ago
ChatGPT being used to influence US elections, alleges OpenAI
OpenAI has disclosed alarming instances of its artificial intelligence models, including ChatGPT, being misused by cybercriminals to create fake content aimed at influencing US elections.
The findings underscore the growing challenge AI poses to cybersecurity and election integrity, raising fresh concerns about the role of emerging technologies in shaping democratic processes.
The report, revealed on Wednesday, details how AI tools like ChatGPT have been exploited to generate persuasive, coherent text at an unprecedented scale.
Cybercriminals have used the technology to craft fake news articles, social media posts, and even fraudulent campaign materials intended to mislead voters.
These AI-generated messages are often sophisticated enough to mimic the style of legitimate news outlets, making it increasingly difficult for the average citizen to discern truth from fabrication.
One of the most concerning trends highlighted in the report is the ability of malicious actors to tailor disinformation campaigns to specific demographics. By leveraging data mining techniques, cybercriminals can analyse voter behaviour and preferences, creating targeted messages that resonate with particular audiences.
This level of personalisation enhances the impact of disinformation, allowing bad actors to exploit existing political divisions and amplify societal discord.
AI-Driven ‘Disinformation’
The US Department of Homeland Security has also raised concerns about the potential for foreign interference in the upcoming November elections.
According to US authorities, Russia, Iran, and China are reportedly using AI to spread divisive and fake information, posing a significant threat to election integrity.
These countries have allegedly employed artificial intelligence to generate disinformation aimed at manipulating public opinion and undermining trust in the democratic process.
The report from OpenAI indicates that the company has thwarted over 20 attempts to misuse ChatGPT for influence operations this year alone.
In August, several accounts were blocked for generating election-related articles, while in July, accounts from Rwanda were banned for producing social media comments intended to influence that country's elections. Although these attempts have so far failed to gain significant traction or achieve viral spread, OpenAI emphasises the need for vigilance, as the technology continues to evolve.
Challenges
The speed at which AI can produce content poses significant challenges for traditional fact-checking and response mechanisms, which struggle to keep pace with the flood of false information.
This dynamic creates an environment where voters are bombarded with conflicting narratives, complicating their decision-making processes and potentially eroding trust in democratic institutions.
OpenAI’s findings also highlight the potential for AI to be used in automated social media campaigns. The ability to rapidly generate content allows bad actors to skew public perception and influence voter sentiment in real time, particularly during critical moments in the run-up to elections.
Despite the limited success of these operations to date, the potential for AI-driven disinformation to disrupt elections remains a serious concern.
Greater Vigilance
In response to these developments, OpenAI has called for increased collaboration between technology companies, governments, and civil society to address the misuse of AI in influence operations.
The company is also enhancing its own monitoring and enforcement mechanisms to detect and prevent the misuse of its models for generating fake or harmful content.
As artificial intelligence continues to reshape the information landscape, OpenAI’s report serves as a stark reminder of the need to balance technological innovation with robust safeguards.
The stakes are high, and the ability to maintain the integrity of democratic processes in the age of AI will require coordinated efforts and proactive strategies from all stakeholders involved.
1 month ago
ChatGPT, Gemini won't reach human intelligence, Meta AI chief says
The artificial intelligence that powers systems like OpenAI's ChatGPT, Google's Gemini and Meta’s Llama will not be able to attain human levels of intelligence, said Meta's AI head Yann LeCun.
In an interview published in the Financial Times on Wednesday, he gave an insight into how the tech giant expects to develop the technology going forward, only weeks after its plans for massive AI spending spooked investors and wiped hundreds of billions of dollars from its market value, reports Forbes.
The models, commonly referred to as LLMs, are trained on massive quantities of data, and their capacity to respond properly to prompts is limited by the nature of the data on which they are trained, according to LeCun, meaning they are accurate only when given the appropriate training data, the report said.
LLMs have a “limited understanding of logic,” lack enduring memory, do not understand the physical world and cannot plan hierarchically, LeCun said, adding that they “cannot reason in any reasonable definition of the term.”
LeCun, considered one of three “AI godfathers” for his foundational contributions to the field, said that because LLMs are accurate only when fed the right training data, they are also “intrinsically unsafe,” and that researchers seeking to produce human-level AI should look at other models, the report said.
LeCun stated that he and his roughly 500-strong team at Meta’s Fundamental AI Research lab are working to develop an entirely new generation of AI systems based on an approach known as “world modelling,” in which the system builds an understanding of the world around it the way humans do and develops a sense of what would happen if something changed, the report added.
LeCun predicted that human-level AI may take up to ten years to create using the world modelling technique.
6 months ago
“Her”? OpenAI to remove ChatGPT voice over Scarlett Johansson resemblance
OpenAI says it will delete one of ChatGPT's voices after it was compared to Hollywood actress Scarlett Johansson.
When OpenAI demonstrated the capabilities of its new model, users noticed a similarity between the chatbot’s “Sky” voice option, which reads out responses to users, and Johansson’s voice, reports BBC.
The “flirty, conversational” enhancement to its AI chatbot was compared to the actress's role in the 2013 film “Her”.
According to OpenAI, the voices in ChatGPT's voice mode were "carefully selected through an extensive process spanning five months involving professional voice actors, talent agencies, casting directors, and industry advisors".
“Her” has Joaquin Phoenix falling in love with his phone's operating system, which is voiced by Johansson.
Director Spike Jonze stated at the time that the film was "not about technology or software," but rather about discovering love and intimacy.
In November, Johansson reportedly sued an artificial intelligence (AI) app for using her image in an advertisement without her permission.
OpenAI stated on Monday that its “Sky” voice is not meant to be an “imitation” of the star. “We believe that AI voices should not deliberately mimic a celebrity’s distinctive voice,” it said in a blog post.
In a statement on X, the company said that it is “working to pause” the voice while it addresses concerns about how it was chosen, the report said.
Despite this, when OpenAI unveiled its new model GPT-4o on May 13, CEO Sam Altman mentioned the name of the film on X.
6 months ago
GPT-4o: What’s OpenAI’s latest version really capable of?
OpenAI has introduced the newest version of the technology that powers its AI chatbot ChatGPT.
It's called GPT-4o, and it will be made available to all ChatGPT users, including non-subscribers, reports BBC.
It is faster than previous models and has been trained to respond to commands in a conversational, often alluring, tone.
The updated version can read and analyse photographs, translate languages, and detect emotions through visual expressions. There is also enhanced memory, which allows it to recall prior commands, it said.
GPT-4o can be interrupted and has a more natural conversational tempo; there is no gap between asking a question and receiving a response.
Mira Murati, OpenAI's chief technical officer, characterised GPT-4o as "magical" but stated that the company will "remove that mysticism" with the product's release, said the report.
While this technology is fast getting more sophisticated and believable as a companion, it is not sentient or magical; rather, it is clever programming and machine learning, it also said.
There have been rumours about a collaboration between OpenAI and Apple, and while this has not been verified, it was clear during the presentation that Apple devices were used throughout.
6 months ago
Explainer: What may have caused OpenAI board to fire Sam Altman
In a surprising move, OpenAI, the artificial intelligence research lab, ousted its CEO, Sam Altman, raising eyebrows and leaving shareholders in the dark.
While concerns about the rapid advancement of AI technology may have played a role in Altman's termination, the handling of the situation has drawn criticism from various quarters, reports CNN.
The decision to remove Altman, credited with steering OpenAI from obscurity to a $90 billion valuation, was made abruptly, catching even major stakeholders like Microsoft off guard.
The CNN report suggests that Microsoft, OpenAI's most important shareholder, was unaware of Altman's dismissal until just before the public announcement, causing a significant drop in Microsoft's stock value.
OpenAI employees, including co-founder and former president Greg Brockman, were also blindsided, leading to Brockman's subsequent resignation. The sudden departure of key figures prompted rumors of Altman and former employees planning to launch a competing startup, posing a threat to OpenAI's years of hard work and achievements, said the report.
The situation worsened due to the peculiar structure of OpenAI’s board. The company, a nonprofit, harbors a for-profit entity, OpenAI LP, established by Altman, Brockman, and Chief Scientist Ilya Sutskever. The for-profit arm’s rapid innovation, which drove the company to a $90 billion valuation, clashed with the caution of the nonprofit board that retained majority control, resulting in Altman’s dismissal, it also said.
The tipping point appears to be Altman's announcement at a recent developer conference, signaling OpenAI's intention to provide tools for creating personalised versions of ChatGPT. This move, seen as too risky by the board, may have triggered Altman's removal.
Altman's warnings about the potential dangers of AI and the need for regulatory limits indicate a clash between innovation and safety within OpenAI. The board's concerns about Altman's pace of development, while perhaps justified, were mishandled, leading to a crisis that could have been avoided.
The aftermath sees OpenAI scrambling to reverse the decision, attempting to entice Altman back. The incident has strained relations with Microsoft, which now demands a seat on the board. OpenAI's future hangs in the balance, with possibilities ranging from Altman's return to a potential competition with a new startup, the report also said.
In the end, OpenAI finds itself in a precarious position, facing potential internal upheaval and external challenges, highlighting the importance of strategic decision-making in the rapidly evolving field of artificial intelligence.
1 year ago
Human drama at OpenAI: Board reportedly ‘in discussion’ with Sam Altman to return as CEO
The OpenAI board is reportedly "in discussion" with Sam Altman regarding his potential return as Chief Executive Officer (CEO) after he was suddenly fired on Friday (November 17, 2023), according to The Verge.
Quoting sources close to the matter, The Verge reported that Altman is “ambivalent” about coming back and would want significant governance changes.
Earlier, many staffers of OpenAI, the US-based AI research and deployment company that developed ChatGPT, gave the OpenAI board an ultimatum: resign and bring back Sam Altman and Greg Brockman, the OpenAI chairman who resigned in protest at Altman’s firing.
As per The Verge, the board had initially agreed to resign, making way for Altman and Brockman to return. However, there seems to have been a change in its stance since then.
A source close to Altman told The Verge that if he decides to start a new company, those staffers will go with him, which could send OpenAI into free-fall.
Following Altman’s termination as CEO, a string of senior researchers of the organisation have resigned from their posts at OpenAI.
Meanwhile, in a memo sent to OpenAI staffers, one executive member of the company has reportedly said "we remain optimistic" about bringing back Sam Altman, The Verge reports, quoting The Information. The Verge, however, couldn't confirm whom the executive was referring to with the term "we."
1 year ago
ChatGPT-4: All you need to know
OpenAI’s ChatGPT-4 is the latest iteration of the groundbreaking Generative Pre-trained Transformer (GPT) series. Building on the success of its predecessors, GPT-4 offers enhanced capabilities, improved performance, and a more user-friendly experience. GPT-4 was publicly released on March 14, 2023, making it accessible to users worldwide. Let’s explore how to use ChatGPT-4, its new features, and more.
New Features of OpenAI's ChatGPT-4
OpenAI highlights three significant advancements in this next-generation language model: creativity, visual input, and longer context. According to OpenAI, GPT-4 demonstrates substantial improvements in creativity, excelling in both generating and collaborating with users on creative endeavors. Let’s see some of the top new features of ChatGPT-4.
Can Understand More Advanced Inputs
One of the major breakthroughs of GPT-4 lies in its enhanced capacity to comprehend intricate and nuanced prompts. OpenAI reports that GPT-4 performs at a level comparable to human experts on diverse professional and academic benchmarks.
This was demonstrated by subjecting GPT-4 to numerous human-level exams and standardized tests, including the SAT, the bar exam, and the GRE, without any specific training. GPT-4 not only grasped and successfully tackled these tests, but also consistently outperformed its predecessor, GPT-3.5, across the assessments.
GPT-4 supports more than 26 languages, including less widely spoken ones like Latvian, Welsh, and Swahili. When assessed on three-shot accuracy using a translated MMLU benchmark, GPT-4 surpassed the English-language performance of GPT-3.5, as well as that of other prominent LLMs such as PaLM and Chinchilla, in 24 of the languages tested.
Multimodal Functionality
In contrast to ChatGPT before it, GPT-4 introduces a remarkable advancement in multimodal capabilities. The model can now process not only text prompts but also image prompts.
This groundbreaking feature enables the AI to accept an image as input, interpret it, and explain it as effectively as a text prompt. The model seamlessly handles images of varying sizes and types, including documents that combine text and images, hand-drawn sketches, and even screenshots.
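As a rough illustration of what image input looks like from a developer’s side, here is a hedged sketch using the image-input message format OpenAI later exposed in its chat completions API; the model name and image URL are placeholders, not part of the original GPT-4 announcement.

```python
# Sketch: sending an image prompt alongside text via the OpenAI API.
# Install with: pip install openai  (expects OPENAI_API_KEY in the environment)
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any vision-capable model
    messages=[{
        "role": "user",
        # A content list mixes text parts and image parts in a single prompt.
        "content": [
            {"type": "text", "text": "What does this hand-drawn sketch show?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/sketch.png"}},  # placeholder URL
        ],
    }],
)
print(response.choices[0].message.content)
```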
Enhanced Steerability
OpenAI further claims that GPT-4 exhibits a remarkable level of steerability. Notably, it has become stronger in staying true to its assigned character, reducing the likelihood of deviations when deployed in character-based applications.
Developers now have the ability to prescribe the AI’s style and task by providing specific instructions within the system message. These messages enable API users to customize the user experience extensively while operating within defined parameters. To ensure model integrity, OpenAI is also actively working on enhancing the security of these messages, as they represent the most common method for potential misuse.
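A minimal sketch of that steering mechanism, using OpenAI’s Python client; the tutor persona and the prompt are arbitrary examples rather than anything prescribed by OpenAI’s documentation.

```python
# Sketch: prescribing the AI's style and task through the system message.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # The system message fixes the assistant's character and boundaries
        # for the whole conversation.
        {"role": "system", "content": "You are a Socratic maths tutor. Never state "
         "an answer outright; reply only with short guiding questions."},
        {"role": "user", "content": "What is the derivative of x**2?"},
    ],
)
print(response.choices[0].message.content)
```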
1 year ago