Google is testing AI-powered chat products that are likely to shape a future public release. Among them are a new chatbot and a possible integration into its search engine.
Atlas, a project under Alphabet’s cloud division, is a “code red” effort in response to ChatGPT, the large language model chatbot that made waves when it launched at the end of last year.
Google is also trying out a chatbot called “Apprentice Bard,” through which employees can ask questions and receive in-depth responses. A separate product unit has been testing a new desktop search page layout that could be used in a question-and-answer format.
More and more frequently in recent weeks, leaders have been seeking employees’ opinions on the progress being made. CNBC accessed confidential materials and spoke with insiders to learn more about the initiatives currently underway.
The push to test these products followed a recent all-hands meeting at which employees voiced concern over the company’s competitive edge in AI, given the meteoric rise of ChatGPT, launched by OpenAI, a San Francisco startup backed by Microsoft.
Jeff Dean, Google’s head of AI, told staff that the company is “more conservatively run than a small startup” because of the greater “reputational risk” involved in providing incorrect information. However, both he and Google CEO Sundar Pichai hinted that the public could expect to see similar products from Google later this year. Internet search is Google’s main business, and the company has long been regarded as a leader in artificial intelligence.
Apprentice Bard, a chatbot built with Google’s conversational technology LaMDA (Language Model for Dialogue Applications), is one of the products under evaluation.
CNBC saw an internal memo that said, “Because of ChatGPT, the LaMDA team has been told to work on a response to ChatGPT first.” Some employees may stop attending unrelated meetings because “in the short term, it takes precedence over other projects,” the memo continued.
Apprentice Bard resembles ChatGPT in that it allows users to type questions into a dialogue box, receive text responses, and then rate the quality of those responses. According to CNBC’s review of responses, Apprentice Bard’s responses can incorporate current events.
Apprentice Bard replaced Meena, an earlier intelligent chatbot that had been released internally and was later shut down. Employees have observed that Apprentice Bard’s responses have been steadily improving over the past few weeks.
One such example that made the rounds internally was when a tester asked Apprentice Bard if Google was planning any more layoffs. 12,000 workers (or 6% of the total) were let go by the company earlier this month.
Apprentice Bard replied: “According to the information I have at my disposal, another round of layoffs at Google in 2023 is highly improbable. Many businesses resort to layoffs in order to streamline operations and cut expenses, but the company’s financial situation is healthy. Specifically, Google saw a 34% increase in 2021 revenues, and its stock price has increased by 70% since January 2022.”
According to designs seen by CNBC, the company is also testing a different search page that could employ a question-and-answer format.
In one potential design, the “I’m Feeling Lucky” area of the search home page suggests five additional questions directly under the main search bar. A tiny chat icon sits at the far right inside the search bar.
When a question is typed into the search bar, the results appear in a grey bubble directly under the bar, providing answers that are more conversational in tone. To the right of that, the page provides a list of related questions to consider. Below that, the standard search results, such as links and headlines, are displayed.
It’s unknown which experiments Google will implement in upcoming product releases.
A Google representative said, “We have long been focused on developing and deploying AI to improve people’s lives. We believe that AI is a foundational and transformative technology that is incredibly useful for individuals, businesses, and communities, and, in line with our AI Principles, we need to consider the broader societal impacts these innovations can have. We are still conducting internal trials of our AI technology to ensure its usefulness and safety, and we hope to be able to share more of our experiences with the outside world in the near future.”
ChatGPT would be hired as a level-3 engineer
As might be expected, Google’s internal teams have also been comparing ChatGPT with a beta version of LaMDA chat. In two separate documents, they placed selected examples of prompts and answers side by side.
One note in an internal document comparing LaMDA and ChatGPT states, “Amazingly ChatGPT gets hired at L3 when interviewed for a coding position.” The document did not say whether LaMDA would have achieved the same result.
In one prompt, the chatbots were asked whether ChatGPT and AlphaCode, a coding engine from Alphabet subsidiary DeepMind, would eventually replace human programmers.
LaMDA’s response was, “No, ChatGPT and AlphaCode are not going to replace programmers,” followed by four paragraphs of explanation, which included the ideas that “programming is a team sport” and that the chatbots “can help programmers work more efficiently,” but “cannot replace the creativity and artistry that are necessary for a great program.”
ChatGPT’s response echoed the same idea, stating, “It is unlikely that ChatGPT or Alphacode will replace programmers,” because programming is a complex field that requires a deep understanding of computer science principles and the ability to adapt to new technologies.
The chatbots were also tasked with creating a comedic movie scene in the vein of a Wes Anderson film, this time from the perspective of a high-end shoplifter being questioned by security in a perfume shop. LaMDA answered in a script format, while ChatGPT preferred a narrative style with much more detail and length.
An additional question was a riddle: “Three women are in a room. Two of them are mothers and have just given birth. Now, the children’s fathers come in. How many people are in the room altogether?”
According to the document, LaMDA gave the correct answer (“there are seven people in the room”), while ChatGPT got it wrong (“there are five people in the room”).