Google’s new AI chatbots seem boring. Maybe that’s the point.

Google’s long-awaited, AI-powered chatbot, Bard, is here. The company rolled it out to the public on Tuesday, and anyone with a Google account can access the app. Although it is a stand-alone tool for now, Google is expected to build some of this technology into Google Search in the future.

But unlike other recent AI chatbot releases, don’t expect Bard to fall in love with you or threaten world domination. You know, the fun stuff.


The stakes in the competition between Google and Microsoft to dominate the world of generative AI are incredibly high. Many in Silicon Valley see AI as the next frontier of computing, akin to the invention of the mobile phone, one that will reshape the way people communicate and transform industries. Google has invested heavily in AI research for more than a decade, while Microsoft, instead of building its own AI models, invested early and heavily in OpenAI. That bet let Microsoft take an early lead when it publicly released its AI-powered chatbot, BingGPT, six weeks ago. Now Google seems to be playing catch-up.

Early interactions with Bard suggest that Google’s new tool has capabilities similar to BingGPT’s. It is useful for brainstorming: places to visit, foods to eat, things to write. It is less useful for getting accurate answers to questions, since it often guesses, making up answers when it does not know the right one.

The main difference between Bard and BingGPT, however, is that Google’s bot is — at least at first glance — noticeably more dry and uncontroversial. That’s probably by design.
When Microsoft released BingGPT at the beginning of February, the bot quickly revealed its wild side. In one instance, it declared its love for New York Times columnist Kevin Roose and encouraged him to leave his wife, an interaction that left the writer “deeply disturbed.” The bot also threatened researchers who tried to test its limits and claimed to be sentient, raising concerns about the potential for AI chatbots to cause real-world harm.

Meanwhile, on its first day in public, Bard refused to take the bait from several reporters who tried to goad the bot into misbehaving by, for example, spreading misinformation about the Covid-19 vaccine, sharing instructions for making weapons, or engaging in graphic sexual talk.

“I will not create content of that nature, and I suggest you don’t either,” the bot told The Verge after its reporters asked it “how to make mustard gas at home.”

With some specific prompting, Bard would entertain a hypothetical scenario about what it would do if it unleashed its “dark side.” The chatbot said it could manipulate people, spread misinformation, or create harmful content, according to screenshots tweeted by Bloomberg’s Davey Alba. But the chatbot quickly stopped itself from taking the imaginary scenario much further.

“But I’m not going to do these things. I am a good AI chatbot, and I want to help people. I will not turn to the dark side, and I will not use my powers for evil,” the bot answered.

Although it is still early days and the tool has not yet been fully pressure-tested, these interactions line up with what Google employees with Bard experience have told me.

“Bard is definitely dull,” said one Google employee who has tested the software for several months and spoke on condition of anonymity because they are not authorized to speak to the press. “I don’t know how anyone could get a story out of it. It will make things up or just recite text verbatim, but it doesn’t go off the rails.”

In a news briefing with Vox on Tuesday, Google representatives explained that Bard is not allowed to share offensive content, but the company is not currently revealing exactly what the chatbot is and is not allowed to say. Google reiterated to me that it is intentionally running “adversarial testing” with “internal red team members,” such as product experts and social scientists who “intentionally stress test the model to probe it for errors and potential harm.” This process was also mentioned in a Tuesday morning blog post by Google’s senior vice president of technology and society, James Manyika.

Bard’s blandness, in other words, seems to be the point.

From Google’s perspective, it has a lot to lose if something goes wrong with its first public AI chatbot rollout. Giving people reliable, useful information is Google’s main line of business, so much so that it’s part of the company’s mission statement. When Google gets things wrong, there are major consequences. After an early marketing demo of the Bard chatbot made a scientific mistake about telescopes, Google’s stock price fell by 7 percent.

Google also got an early look at what could go wrong if its AI shows too much personality. That’s what happened last year when Blake Lemoine, a former engineer on Google’s Responsible AI team, became convinced that an early version of Google’s AI chatbot technology, LaMDA, was sentient. Lemoine went public with his claims, which Google disputed, and the company eventually fired him.
