According to a Bloomberg report, Google employees called Bard "a pathological liar" during internal tests, saying it frequently gave users bad advice on a variety of topics. Despite the concerns raised by testers, Google released Bard publicly in order to keep up with competition, turning its trusted search brand toward a product that could give unreliable information, Bloomberg reports.
One employee said in February in an internal message viewed by Bloomberg, "Bard is worse than useless: please do not launch."
The report argues that Google had good intentions behind the development of Bard, first pledging in 2021 to double its research team focused on AI ethics and to assess potential harms associated with the technology. All that changed when OpenAI launched the infamous ChatGPT in November and Microsoft moved to build the technology into its own products.
Before ChatGPT, Google was considered a leader in the "large language model" industry, the Washington Post reports, using these complex computer programs to better optimize search engine results and powering language tools like Google Translate.
Now, Google is pushing its Bard large language model in a way that seems to have left its AI ethics behind as the company instead focuses on rebuilding its search business around generative AI. The Bloomberg report says the company "overruled a risk evaluation" submitted by an internal safety team, which stated that the system was not ready and could cause harm to the general user, in order to speed up the pace of its product launches.
While Google's launch of Bard was lackluster -- with the chatbot notably providing false information in promotional content -- other AI chatbots on the market have also given unreliable or harmful answers to user queries. However, those shortcomings have not slowed their popularity, with Microsoft's Bing surpassing 100 million daily active users during its first month post-launch.
CEO Sundar Pichai recently went on a press tour to talk about AI competition, telling CBS News' 60 Minutes on Sunday that AI's latest popularity boom is an opportunity for the company and that society needs to quickly adapt with regulations for the technology that "align with human values including morality."
"It's not for a company to decide," Pichai said. "This is why I think the development of this needs to include not just engineers but social scientists, ethicists, philosophers and so on."
When asked if society is prepared for the new technology, Pichai answered: "There are two ways I think about it. On the one hand I feel, no, because the pace at which we can think and adapt as societal institutions, compared to the pace at which the technology's evolving, there seems to be a mismatch. On the other hand, compared to any other technology, I've seen more people worried about it earlier in its life cycle. So I feel optimistic."