What Is Google Gemini AI? How to Use the New Chatbot Model
Generative AI entered the mainstream in 2023, and chatbot features have been improving steadily ever since to enhance user experience around the world. Google's Bard made waves with its capabilities, but now a new era dawns with Gemini. This innovative chatbot boasts enhanced intelligence and functionality. Join us as we explore Gemini's features, capabilities, and impact on the future of conversational AI.
What Is Google Gemini AI?
Google Gemini is the newest and most advanced artificial intelligence model made by Google. It understands images, videos, text, and even sound. What makes Gemini stand out is how closely it mimics human reasoning: it is good at understanding information, solving problems, and planning ahead.
Gemini comes in three versions: Ultra, Pro, and Nano. The Pro version has already been released, and the Ultra version is expected next year. Gemini is expected to play a crucial role in the latest chatbot technology, pushing the boundaries of what AI can do.
How to Use the New Chatbot Model of Google
Gemini AI is a type of computer system called a neural network. It has been trained using a huge amount of text and code from various sources like books, articles, and code repositories. This training helps the neural network understand the patterns and connections between words and phrases in this data. As a result, Gemini AI can do things like generate text, translate languages, create different types of content, and provide informative answers to questions.
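As a toy illustration of what "learning the patterns and connections between words" means, here is a minimal bigram model in Python. This is a drastic simplification of the large neural networks behind systems like Gemini, and the corpus and function names below are invented for illustration:

```python
from collections import defaultdict
import random

# Tiny stand-in for the books, articles, and code a real model trains on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which: a "bigram" model.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Generate text by repeatedly picking a word seen after the current one."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

Real models learn far richer statistics over far more context, but the principle is the same: generation is driven by patterns extracted from training data.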
How to Use Gemini AI
If you already have a Google account, using Gemini is easy: just go to the Gemini website in your browser and sign in with your Google credentials. But remember, a Google account is required.
If you use a Google Workspace account, you may need to switch to a personal account to try Gemini.
AI could threaten 40% of global jobs, IMF warns
The International Monetary Fund (IMF) has sounded an alarm, indicating that nearly 40% of global employment could be endangered by the burgeoning influence of artificial intelligence (AI). This stark warning, reported by CNN, underscores the seismic shifts anticipated in the global job market.
IMF Chief Kristalina Georgieva, in a recent blog post, stressed the critical necessity for governments worldwide to fortify social safety nets and roll out comprehensive retraining programmes. This proactive approach aims to mitigate AI's potentially dramatic effects on employment.
Highlighting a key concern, Georgieva pointed out the potential for AI adoption to aggravate existing inequalities, a trend that requires immediate policy intervention to avert escalating social tensions. This issue is set to be a central theme at the upcoming annual meeting of the World Economic Forum (WEF) in Davos, Switzerland, where AI's role in the economy will be a focal point.
According to the IMF's analysis, advanced economies might witness the most significant impact, with up to 60% of jobs at risk. Although AI promises to enhance productivity in about half of these roles, the remainder faces a stark reality of diminishing demand, lowered wages, and potential unemployment as AI assumes roles traditionally held by humans.
Emerging markets and lower-income countries are not immune to these challenges. Here, 40% and 26% of jobs, respectively, may feel the impact. Georgieva raised concerns about these regions' lack of infrastructure and skilled workforces, factors that intensify the risk of AI deepening existing economic divides.
Georgieva also warned of an escalating risk of social unrest, especially if younger, tech-savvy workers leverage AI for productivity gains, leaving their older counterparts struggling to adapt.
At Davos, the implications of AI on employment are a key discussion topic. Prominent figures, including Sam Altman, CEO of ChatGPT-maker OpenAI, and Microsoft's Satya Nadella, are slated to address the impact of generative AI technologies.
Despite these challenges, Georgieva did not overlook AI's positive potentials, noting its capacity to significantly boost global output and incomes. She argued that with thoughtful planning, AI could be a transformative force for the global economy, stressing the importance of channeling its benefits for the collective good.
Amidst concerns over job displacement, some economists are optimistic, suggesting that AI's widespread adoption may ultimately enhance labor productivity. This could potentially lead to a 7% annual increase in global GDP over the next decade.
Explainer: What may have caused OpenAI board to fire Sam Altman
In a surprising move, OpenAI, the artificial intelligence research lab, ousted its CEO, Sam Altman, raising eyebrows and leaving shareholders in the dark.
While concerns about the rapid advancement of AI technology may have played a role in Altman's termination, the handling of the situation has drawn criticism from various quarters, reports CNN.
The decision to remove Altman, credited with steering OpenAI from obscurity to a $90 billion valuation, was made abruptly, catching even major stakeholders like Microsoft off guard.
The CNN report suggests that Microsoft, OpenAI's most important shareholder, was unaware of Altman's dismissal until just before the public announcement, causing a significant drop in Microsoft's stock value.
OpenAI employees, including co-founder and former president Greg Brockman, were also blindsided, leading to Brockman's subsequent resignation. The sudden departure of key figures prompted rumors of Altman and former employees planning to launch a competing startup, posing a threat to OpenAI's years of hard work and achievements, said the report.
The situation worsened due to the peculiar structure of OpenAI's board. The company, a nonprofit, harbors a for-profit entity, OpenAI LP, established by Altman, Brockman, and Chief Scientist Ilya Sutskever. The for-profit arm's rapid innovation to achieve a $90 billion valuation clashed with the nonprofit's majority-controlled board, resulting in Altman's dismissal, it also said.
The tipping point appears to be Altman's announcement at a recent developer conference, signaling OpenAI's intention to provide tools for creating personalised versions of ChatGPT. This move, seen as too risky by the board, may have triggered Altman's removal.
Altman's warnings about the potential dangers of AI and the need for regulatory limits indicate a clash between innovation and safety within OpenAI. The board's concerns about Altman's pace of development, while perhaps justified, were mishandled, leading to a crisis that could have been avoided.
The aftermath sees OpenAI scrambling to reverse the decision, attempting to entice Altman back. The incident has strained relations with Microsoft, which now demands a seat on the board. OpenAI's future hangs in the balance, with possibilities ranging from Altman's return to a potential competition with a new startup, the report also said.
In the end, OpenAI finds itself in a precarious position, facing potential internal upheaval and external challenges, highlighting the importance of strategic decision-making in the rapidly evolving field of artificial intelligence.
What can Bard, Google’s answer to ChatGPT, do?
To use, or not to use, Bard? That is the Shakespearean question an Associated Press reporter sought to answer while testing out Google’s artificially intelligent chatbot.
The recently rolled-out bot dubbed Bard is the internet search giant’s answer to the ChatGPT tool that Microsoft has been melding into its Bing search engine and other software.
During several hours of interaction, the AP learned Bard is quite forthcoming about its unreliability and other shortcomings, including its potential for mischief in next year’s U.S. presidential election. Even as it occasionally warned of the problems it could unleash, Bard repeatedly emphasized its belief that it will blossom into a force for good.
At one point in its recurring soliloquies about its potential upsides, Bard dreamed about living up to the legacy of the English playwright that inspired its name.
Bard explained that its creators at Google “thought Shakespeare would be a good role model for me, as he was a master of language and communication.”
But the chatbot also found some admirable traits in "HAL," the fictional computer that killed some of a spacecraft's crew in the 1968 movie "2001: A Space Odyssey." Bard hailed HAL's intelligence, calling it "an interesting character" before acknowledging its dark side.
“I think HAL is a cautionary tale about the dangers of artificial intelligence,” Bard assessed.
WHAT’S BETTER — BARD OR BING?
Bard praised ChatGPT, describing it as “a valuable tool that can be used for a variety of purposes, and I am excited to see how it continues to develop in the future.” But Bard then asserted that it is just as intelligent as its rival, which was released late last year by its creator, the Microsoft-backed OpenAI.
“I would say that I am on par with ChatGPT,” Bard said. “We both have our own strengths and weaknesses, and we both have the ability to learn and grow.”
During our wide-ranging conversation, Bard didn’t display any of the disturbing tendencies that have cropped up in the AI-enhanced version of Microsoft’s Bing search engine, which has likened another AP reporter to Hitler and tried to persuade a New York Times reporter to divorce his wife.
IT’S FUNNY, BUT TAMER THAN BING
Bard did get a little gooey at one point when asked to write a Shakespearean sonnet and responded seductively in one of the three drafts that it quickly created.
“I love you more than words can ever say, And I will always be there for you,” Bard effused. “You are my everything, And I will never let you go. So please accept this sonnet as a token Of my love for you, And know that I will always be yours.”
But Bard seems to be deliberately tame most of the time, and probably for good reason, given what’s at stake for Google, which has carefully cultivated a reputation for trustworthiness that has established its dominant search engine as the de facto gateway to the internet.
An artificial intelligence tool that behaved as erratically as ChatGPT periodically might trigger a backlash that could damage Google’s image and perhaps undercut its search engine, the hub of a digital advertising empire that generated more than $220 billion in revenue last year. Microsoft, in contrast, can afford to take more risks with the edgier ChatGPT because it makes more of its money from licensing software for personal computers.
BARD ADMITS IT’S NOT PERFECT
Google has programmed Bard to ensure it warns its users that it’s prone to mistakes.
Some inaccuracies are fairly easy to spot. For instance, when asked for some information about the AP reporter questioning it, Bard got most of the basics right, most likely by plucking tidbits from profiles posted on LinkedIn and Twitter.
But Bard mysteriously also spit out inaccuracies about this reporter’s academic background (describing him as a graduate of University of California, Berkeley, instead of San Jose State University) and professional background (incorrectly stating that he began his career at The Wall Street Journal before also working at The New York Times and The Washington Post).
When asked to produce a short story about disgraced Theranos CEO Elizabeth Holmes, Bard summed up most of the highlights and lowlights of her saga. But one of Bard’s three drafts incorrectly reported that Holmes was convicted of all the felony charges of fraud and conspiracy leveled against her during a four-month trial. Another version accurately reported Holmes was convicted on four counts of fraud and conspiracy without mentioning she was acquitted on four other charges (the jury hung on three other charges that were subsequently dismissed by prosecutors).
“I am still under development, and I am not perfect,” Bard cautioned at one point. “I can sometimes make mistakes, and I can sometimes be misunderstood. I am also aware that my technology can be used for both good and evil.”
WHAT’S NEXT FOR BARD?
Although Bard insisted it doesn’t have a dark side, it acknowledged it can be used to damage reputations, disseminate propaganda that could incite violence and manipulate elections.
“I could be used to create fake news articles or social media posts that could spread misinformation about candidates or their policies,” Bard warned. “I could also be used to suppress voter turnout by spreading discouraging messages or making it difficult for people to find information about how to vote.”
On the lighter side, Bard proved helpful in finding interesting coloring books for adults and hit some notes that resonated during a discussion of rock and roll. When asked who is the greatest guitarist of all time, Bard responded with a broad list of candidates ranging from well-known rock artists such as Jimmy Page of Led Zeppelin to jazz virtuosos like Django Reinhardt before making the case for Jimi Hendrix “as a strong contender for the title, his music continues to influence and inspire guitarists around the world.”
Bard also seemed familiar with contemporary artists, such as Wet Leg, which recently won a Grammy award for best alternative music album. “I’ve been listening to their debut album a lot lately and I really enjoy it. I think they have a bright future ahead of them,” Bard said, and cited “Ur Mom” and “Chaise Longue” as its favorite songs by the group so far.
Even with Bard’s occasional miscues, it seemed savvy enough fielding questions about its potential role in reaching the singularity, a term popularized by computer scientist and author Ray Kurzweil to describe a turning point in the future when computers will be smarter than humans.
“Some people believe that I am a big step toward the singularity,” Bard said. “I believe that I am a valuable tool that can help people to learn and understand the world around them. However, I do not believe that I am the singularity, and I do not believe that I will ever be able to replace human intelligence.”
This story was corrected to reflect that the technology that likened an AP reporter to Hitler was done by an AI-enhanced version of Microsoft’s Bing search engine. An earlier version said the comparison was made by the standalone ChatGPT service.
'Out of control' AI race: Elon Musk, top tech personalities call for a pause
Several of the most important personalities in tech are urging artificial intelligence labs to halt training of the most powerful AI systems for at least six months, citing "profound risks to society and humanity."
Elon Musk was among the hundreds of tech CEOs, educators, and researchers who signed a letter, which was released by Musk's organization, the Future of Life Institute, reports CNN.
The letter comes only two weeks after OpenAI launched GPT-4, a more powerful version of the technology that powers ChatGPT, the popular AI chatbot application.
In early testing and a corporate demo, the letter said, the system demonstrated that it can write lawsuits, pass standardized exams, and develop a website from a hand-drawn design.
According to the letter, the delay should apply to AI systems "more powerful than GPT-4." It also stated that the suggested pause should be used by impartial experts to collaboratively establish and execute a set of standard protocols for AI tools that are safe "beyond a reasonable doubt."
"Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources," the letter said. "Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control."
If a pause is not implemented immediately, the letter suggests that countries step in and impose a moratorium.
Experts in artificial intelligence are growing worried about the potential for biased answers, the spread of disinformation, and the implications for consumer privacy.
These technologies have also raised concerns about how AI might disrupt professions, enable students to cheat, and change humans' relationship with technology.
The letter hinted at a larger dissatisfaction within and beyond the industry with the fast rate of AI progress. Early versions of AI governance frameworks have been introduced by several governing bodies in China, the EU, and Singapore.
ChatGPT ‘passed’ BCS exam, according to Science Bee’s experiment
Since it became publicly accessible in November last year, ChatGPT, an AI chatbot created by OpenAI, has dominated the discourse on the internet and social media. Based on the Generative Pre-Trained Transformer 3, or GPT-3, language model, ChatGPT is capable of carrying on a conversation, responding to inquiries, producing stories, poems, and comics, and resolving challenging programming problems.
ChatGPT has also participated in, and even passed, numerous challenging examinations across the globe as part of various experiments, including the Wharton MBA exam, the American Medical Licensing Exam, and a law school exam.
Although the chatbot recently failed the Indian UPSC (Union Public Service Commission) exam, which is the benchmark test for recruitment to higher civil services of the Government of India, Bangladeshi netizens wondered whether ChatGPT would be able to pass the BCS (Bangladesh Civil Service) exam or not.
Science Bee, one of the largest science-based education platforms for youths in the country, has recently revealed on its social media platforms that ChatGPT has “successfully passed” the BCS preliminary exam, scoring 130 out of 200 marks in total.
Talking about the experiment with UNB, Science Bee Founder Mobin Sikder and Executive Member Metheela Farzana Melody shared how the team tested the chatbot on the BCS exam, following a month of planning and preparation and seven days of frequent testing.
“First of all, we researched how to take the test to get the most realistic results,” Mobin told UNB. “Since ChatGPT is trained on a dataset available till September 2021, we decided to conduct the test on the questions of the latest BCS exam – 44th BCS, held in May 2022.”
“After selecting the exam, we collected the question papers and answers. Since the question paper is allowed to be taken away after the exam, securing it did not require much time. The answer sheet is, however, not published directly. So, we prepared the final answer sheet on our own, after multiple testing from various third-party sources,” team Science Bee explained.
The language barrier emerged as a headache during the experiment, as the BCS exam is conducted in Bangla while the chatbot works best in English. The questions had to be translated into English to keep the exam fair.
In the 44th BCS, each question carried 1 mark: candidates received 1 mark for a correct answer, and 0.5 marks were deducted for each wrong answer. Candidates could also skip a question, in which case no marks were added or deducted. The same marking scheme was given to ChatGPT. At the start, it was informed via text prompt about the MCQ exam and the instructions, and it became ready to take the exam.
However, some questions were picture-based, according to team Science Bee. Since GPT-3 is not multimodal, it cannot read or interpret images, so those questions could not be input and were rejected. Besides, some questions on Bangla language and literature could not be translated into English without changing their meaning.
“The total number of such rejected questions was 22. As these are weaknesses of ChatGPT, invalid questions were treated as unanswered and no negative marking was done,” according to team Science Bee.
The remaining 178 questions were put to ChatGPT along with their options. It answered 142 correctly and 24 incorrectly, and for the other 12 it stated that the correct answer was not among the options. That means the chatbot earned 142 marks for correct answers and lost 12 marks for the 24 wrong ones, with no marks added or deducted for unanswered questions. So, as per the 44th BCS exam questions, ChatGPT passed with a total of 130 marks.
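The arithmetic behind that reported score can be checked in a few lines of Python (the variable names here are ours, not Science Bee's):

```python
# Recomputing ChatGPT's reported 44th BCS preliminary score
# from the figures in Science Bee's experiment.
TOTAL_QUESTIONS = 200
rejected = 22      # image-based or untranslatable questions, skipped
correct = 142      # +1 mark each
wrong = 24         # -0.5 mark each
no_answer = 12     # no marks added or deducted

# Sanity check: all questions accounted for.
assert rejected + correct + wrong + no_answer == TOTAL_QUESTIONS

score = correct * 1.0 - wrong * 0.5
print(score)  # 130.0, matching the reported result
```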
In the 44th BCS exam, a total of 350,716 candidates applied, and of them, 276,760 participated in the preliminary exam. Only 15,708 candidates passed the preliminary exam, according to reports.
“As there is no specific pass mark for BCS and the cut-off mark is not officially released, we were in touch with several candidates who appeared for the 44th BCS exam. According to the information given by them, the cut-off mark in general cadre was 125±. Since ChatGPT secured 130 marks in our test, it can be said that ChatGPT has successfully passed BCS preliminary exam,” team Science Bee told UNB.
Further explaining the chatbot's performance, Science Bee said that, according to the test, ChatGPT was able to answer the questions quite well overall. However, it was quite weak in the Bangla language and literature category, where it correctly answered only 5 of 35 questions. On the other hand, it performed well in the science, computer, and English language and literature categories. It took a considerable amount of time to correctly answer most questions in the mental skills and mathematics categories.
“Besides, many times there have been incidents like getting stuck in the middle of answering. In that case, we had to take the help of ‘Regenerate Response’ to proceed and move forward,” team Science Bee said.
The questions for the exam were collected and translated by Metheela. Overall management of the test was conducted by Science Bee’s Content Production Head Annoy Debnath, and the final report was edited by Mobin and Sadia Binte Chowdhury.
“We did this test as part of an interesting experiment and will conduct further tests with other examinations when ChatGPT-4 becomes available. The chatbot is learning constantly and becoming more powerful every single day, and through this type of test, we want to convey a message to aspiring learners and students that we need to move one step ahead of ChatGPT with our learning.”
“That means, we need to stop relying on memorising and copy-paste practices because ChatGPT can do it and will be doing it even better with future versions, and also there are other AI projects in the pipeline such as Google’s Bard. It can be a great assistant and companion to humankind, and it will not replace anyone if we can continue to improve our learning. That is the motto of our research, aligned with our motto and tagline ‘learn like never before’. We want people to understand the importance of learning and be skilled in order to make AI useful,” Mobin and team Science Bee told UNB.
(Details of the test can be found on Science Bee's Facebook page and website.)
Top 5 AI Chatbot Platforms and Trends in 2023
Artificial Intelligence isn’t anything new. John McCarthy first proposed the idea of AI in the 1950s, a then-unique proposition that machines would one day think and interact like humans. This early conception of AI was a way to probe the limits of machines and whether humans could impart something like sentience to them.
While we’re still far off from sentience, AI has, however, started to transform our lives. From conceptual AI humanoid robots like Sophia to IoT and even chatbots, the application and benefits of AI are visible across the board.
Today we’ll talk about the most accessible form of AI for the general public, chatbots. It's fast, accurate, simple, and in most cases, free. Here’s our take on 5 of the most trending AI chatbots.
What is an AI Chatbot?
Just like AI, the concept of an AI chatbot isn't new. The story of AI chatbots started with ELIZA back in 1966, when Joseph Weizenbaum of MIT introduced a chatting program that could perform basic interaction with the user. It was based on the concept of matching pre-programmed phrases against the user's input to generate a somewhat meaningful response.
But the first proper use of Artificial Intelligence Markup Language (AIML) came decades later with ALICE, an interactive chatbot created by Richard Wallace in 1995. From then on, there has been no looking back: we had Jabberwacky by Rollo Carpenter and Mitsuku by Steve Worswick.
Big companies like Microsoft also jumped into the game with Cortana on the now-defunct Windows Phone. But all of these were limited to a handful of functions: intelligent, in a sense, but with highly limited abilities. That all changed with OpenAI.
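The pattern-matching approach those early bots used can be sketched in a few lines of Python (the rules below are invented for illustration and are far simpler than ELIZA's actual script):

```python
import re

# Minimal ELIZA-style exchange: match pre-programmed patterns
# against the user's input and fill in a canned response template.
rules = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "How long have you felt {0}?"),
    (re.compile(r".*"), "Please tell me more."),  # catch-all fallback
]

def respond(text):
    for pattern, template in rules:
        match = pattern.match(text)
        if match:
            return template.format(*match.groups())

print(respond("I am tired of work"))  # Why do you say you are tired of work?
print(respond("Hello there"))         # Please tell me more.
```

No understanding is involved: the program only reflects the user's own words back, which is why such bots break down outside their scripted patterns.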
Best AI Chatbots in 2023
There are probably thousands of chatbots out there catering to different niches: specialized chatbots for businesses, industries, and even events. Most, however, are built on some form of natural language processing (NLP) technology. We will focus on general-purpose chatbots that are multifarious in nature or cater to a broad niche.
ChatGPT
If you haven’t heard the name ChatGPT in the last couple of months, you’ve been living under a rock. This general-purpose chatbot gained over 100 million active users within two months of launch, reaching that milestone faster than any social media platform.
ChatGPT is based on the Generative Pre-trained Transformer, or GPT-3, model. This natural language processor combines AI and machine learning, with training data continually feeding the platform. The result is the most human-like interaction from a chat platform to date.
OpenAI has incorporated 570 GB of internet data, comprising over 300 billion words, into the machine learning model. With ChatGPT, interaction is not limited to small talk: you can have it draft a full study routine, a fitness regime, or even a marketing campaign. You can also ask it to write a poem or handle entry-level programming.
Surprised? Wait till you find out that ChatGPT has already passed the US medical licensing exam, a regional bar exam, a Google entry-level software engineer interview, and the AP English essay test.
Pros:
· Most realistic output to date
· STEM integration
· Highly interactive
Cons:
· The platform isn’t always available due to the high user base
· Data is available up until 2021 only
Google's AI Chatbot Bard: All You Need to Know
An AI chatbot is a computer program designed to simulate a conversation with a human. It uses natural language processing and artificial intelligence to understand user input and respond in a meaningful way. AI chatbots can be used for customer service, providing personalized recommendations, or other tasks.
Recently, an AI chatbot named ChatGPT has taken the world by storm. More than a usual chatbot, it draws on a huge collection of data and is seen as a threat to Google. To fight back, Google has announced its own chatbot, named Bard AI. Let's find out the details of Google's AI chatbot Bard.
What is AI Chatbot Bard?
At present, there is limited information on Google's AI-powered tool, which can only be accessed by those selected as "trusted testers." However, following the company's demonstration of the product in Paris on February 8, we can now provide answers to some of the most frequent questions posed about Bard AI. A public launch of the tool is expected in the near future.
Google Bard is essentially a chatbot that functions using AI, similar to ChatGPT. To enable its conversations, Bard utilizes the Language Model for Dialogue Applications (LaMDA) model. Initially, a less complex version of this language model will be used during the test phase.
Bard strives to bring together the depth of the world's knowledge with intelligence, creativity, and power using Google’s expansive language models. It utilizes data from the Internet to give up-to-date, top-notch results.
Bard can be a catalyst for creativity and a platform for inquiry, helping you explain fresh discoveries from NASA's James Webb Space Telescope to a nine-year-old, or learn more about the best strikers in soccer today and then get drills to improve your own skills.
Google hopes ‘Bard’ will outsmart ChatGPT, Microsoft in AI
Google is girding for a battle of wits in the field of artificial intelligence with “Bard,” a conversational service apparently aimed at countering the popularity of the ChatGPT tool backed by Microsoft.
Bard initially will be available exclusively to a group of “trusted testers” before being widely released later this year, according to a Monday blog post from Google CEO Sundar Pichai.
Google’s chatbot is supposed to be able to explain complex subjects such as outer space discoveries in terms simple enough for a child to understand. Google also says the service will perform other, more mundane tasks, such as providing tips for planning a party or lunch ideas based on what food is left in a refrigerator. Pichai didn’t say in his post whether Bard will be able to write prose in the vein of William Shakespeare, the playwright who apparently inspired the service’s name.
“Bard can be an outlet for creativity, and a launchpad for curiosity,” Pichai wrote.
Google announced Bard’s existence less than two weeks after Microsoft disclosed it’s pouring billions of dollars into OpenAI, the San Francisco-based maker of ChatGPT and other tools that can write readable text and generate new images.
Microsoft’s decision to up the ante on a $1 billion investment that it previously made in OpenAI in 2019 intensified the pressure on Google to demonstrate that it will be able to keep pace in a field of technology that many analysts believe will be as transformational as personal computers, the internet and smartphones have been in various stages over the past 40 years.
In a report last week, CNBC said a team of Google engineers working on artificial intelligence technology “has been asked to prioritize working on a response to ChatGPT.” Bard had been developed under a project called “Atlas,” part of Google’s “code red” effort to counter the success of ChatGPT, which has attracted tens of millions of users since its general release late last year, while also raising concerns in schools about its ability to write entire essays for students.
Pichai has been emphasizing the importance of artificial intelligence for the past six years, with one of the most visible byproducts materializing in 2021 as part of a system called “Language Model for Dialogue Applications,” or LaMDA, which will be used to power Bard.
Google also plans to begin incorporating LaMDA and other artificial intelligence advancements into its dominant search engine to provide more helpful answers to the increasingly complicated questions posed by its billions of users. Without giving a specific timeline, Pichai indicated the artificial intelligence tools will be deployed in Google’s search engine in the near future.
In another sign of Google’s deepening commitment to the field, Google announced last week that it is investing in and partnering with Anthropic, an AI startup led by some former leaders at OpenAI. Anthropic has also built its own AI chatbot named Claude and has a mission centered on AI safety.
ChatGPT maker releases tool to help teachers detect if AI wrote homework
The maker of ChatGPT is trying to curb its reputation as a freewheeling cheating machine with a new tool that can help teachers detect if a student or artificial intelligence wrote that homework.
The new AI Text Classifier launched Tuesday (January 31, 2023) by OpenAI follows a weeks-long discussion at schools and colleges over fears that ChatGPT’s ability to write just about anything on command could fuel academic dishonesty and hinder learning.
OpenAI cautions that its new tool – like others already available – is not foolproof. The method for detecting AI-written text “is imperfect and it will be wrong sometimes,” said Jan Leike, head of OpenAI’s alignment team tasked to make its systems safer.
“Because of that, it shouldn’t be solely relied upon when making decisions,” Leike said.
Teenagers and college students were among the millions of people who began experimenting with ChatGPT after it launched Nov. 30 as a free application on OpenAI’s website. And while many found ways to use it creatively and harmlessly, the ease with which it could answer take-home test questions and assist with other assignments sparked a panic among some educators.
By the time schools opened for the new year, New York City, Los Angeles and other big public school districts began to block its use in classrooms and on school devices.
The Seattle Public Schools district initially blocked ChatGPT on all school devices in December but then opened access to educators who want to use it as a teaching tool, said Tim Robinson, the district spokesman.
“We can’t afford to ignore it,” Robinson said.
The district is also discussing expanding the use of ChatGPT into classrooms, Robinson said, letting teachers use it to train students to be better critical thinkers and letting students use the application as a “personal tutor” or to generate new ideas when working on an assignment.
School districts around the country say they are seeing the conversation around ChatGPT evolve quickly.
“The initial reaction was ‘OMG, how are we going to stem the tide of all the cheating that will happen with ChatGPT,’” said Devin Page, a technology specialist with the Calvert County Public School District in Maryland. Now there is a growing realization that “this is the future” and blocking it is not the solution, he said.
“I think we would be naïve if we were not aware of the dangers this tool poses, but we also would fail to serve our students if we ban them and us from using it for all its potential power,” said Page, who thinks districts like his own will eventually unblock ChatGPT, especially once the company’s detection service is in place.
OpenAI emphasized the limitations of its detection tool in a blog post Tuesday, but said that in addition to deterring plagiarism, it could help to detect automated disinformation campaigns and other misuse of AI to mimic humans.
The longer a passage of text, the better the tool is at detecting whether an AI or a human wrote it. Type in any text, such as a college admissions essay or a literary analysis of Ralph Ellison’s “Invisible Man,” and the tool will label it as “very unlikely,” “unlikely,” “unclear if it is,” “possibly,” or “likely” AI-generated.
But much like ChatGPT itself, which was trained on a huge trove of digitized books, newspapers and online writings but often confidently spits out falsehoods or nonsense, it’s not easy to interpret how it came up with a result.
“We don’t fundamentally know what kind of pattern it pays attention to, or how it works internally,” Leike said. “There’s really not much we could say at this point about how the classifier actually works.”
Higher education institutions around the world also have begun debating responsible use of AI technology. Sciences Po, one of France’s most prestigious universities, prohibited its use last week and warned that anyone found surreptitiously using ChatGPT and other AI tools to produce written or oral work could be banned from Sciences Po and other institutions.
In response to the backlash, OpenAI said it has been working for several weeks to craft new guidelines to help educators.
“Like many other technologies, it may be that one district decides that it’s inappropriate for use in their classrooms,” said OpenAI policy researcher Lama Ahmad. “We don’t really push them one way or another. We just want to give them the information that they need to be able to make the right decisions for them.”
It’s an unusually public role for the research-oriented San Francisco startup, now backed by billions of dollars in investment from its partner Microsoft and facing growing interest from the public and governments.
France’s digital economy minister Jean-Noël Barrot recently met in California with OpenAI executives, including CEO Sam Altman, and a week later told an audience at the World Economic Forum in Davos, Switzerland that he was optimistic about the technology. But the government minister — a former professor at the Massachusetts Institute of Technology and the French business school HEC in Paris — said there are also difficult ethical questions that will need to be addressed.
“So if you’re in the law faculty, there is room for concern because obviously ChatGPT, among other tools, will be able to deliver exams that are relatively impressive,” he said. “If you are in the economics faculty, then you’re fine because ChatGPT will have a hard time finding or delivering something that is expected when you are in a graduate-level economics faculty.”
He said it will be increasingly important for users to understand the basics of how these systems work so they know what biases might exist.