A waitlist for Bard (Google’s ChatGPT rival) began on March 21, 2023, and it promises to help users plan a friend’s baby shower, outline and write essay drafts, and get ideas for lunch based on what’s in the fridge.
A company representative said Bard will be a distinct experience that complements Google Search, and users can check its responses against Google Search results and sources. In a blog post, Google said it would later integrate large language models into Search more deeply and thoughtfully.
Google said it will begin rolling out the tool in the United States and the United Kingdom, with plans to expand to additional languages and countries over time.
The news comes as Google, Microsoft, Facebook, and other tech companies race to build and ship AI-powered tools in the wake of ChatGPT's breakout success.
Google announced last week that it would also integrate AI into its productivity tools, such as Gmail, Sheets, and Docs. Microsoft announced a similar AI upgrade to its productivity tools shortly after.
Last month, Google released Bard in a demonstration that was later criticized for providing an incorrect response to a telescope-related question. That day, Alphabet, the parent company of Google, saw its stock plunge 7.7%, erasing $100 billion from its market value.
Bard is based on a large language model, like ChatGPT, which AI research company OpenAI released to the public at the end of November. These models are trained on vast repositories of online data in order to produce compelling responses to user prompts.
According to reports, Google’s management declared a “code red” situation for its search business as a result of the significant attention paid to ChatGPT.
However, Bard's error highlighted the difficulty Google and other companies face in integrating the technology into their core products. Large language models can perpetuate biases, state facts incorrectly, and produce aggressive responses.
In a recent blog post, Google acknowledged that AI tools “have their flaws.” According to the company, it continues to use human feedback to enhance its systems and implement new “guardrails, like capping the number of exchanges in a dialogue, to try to keep interactions helpful and on topic.”
Last week, OpenAI released GPT-4, the next-generation version of the technology that powers ChatGPT and Microsoft's new Bing chatbot, with similar safety measures. In early tests and a company demo on its debut day, GPT-4 astonished many users with its ability to draft lawsuits, pass standardized tests, and build a functioning website from a hand-drawn sketch.