Google launched its AI chatbot Bard despite internal warnings from employees that it was a “pathological liar” prone to spewing out false information that could lead to “serious injury or death.”
Current and former employees accuse Google of ignoring AI ethics in its effort to catch up to competitors, like Microsoft-backed OpenAI’s popular ChatGPT.
Google’s push to develop Bard reportedly ramped up late last year after ChatGPT’s success prompted top brass to declare a “competitive code red.”
Microsoft’s planned integration of ChatGPT into its Bing search engine is widely seen as a threat to Google’s dominant online search business.
However, many Google workers voiced concerns before the rollout, when the company tasked them with testing Bard to identify potential bugs or issues – a process known in tech circles as “dogfooding.”
Bard testers flagged concerns that the chatbot was spitting out information ranging from inaccurate to potentially dangerous.
© RawNews1st