Google's rush to win the artificial intelligence (AI) race has led to ethical lapses on the company's part. Eighteen current and former workers at the Mountain View, California-based tech firm, alongside internal documents reviewed by Bloomberg, have attested to this.
According to the employees, management asked them to test Google's AI chatbot Bard in March, shortly before its public launch. One worker said the chatbot was "a pathological liar," while another dubbed it "cringe-worthy." A third employee posted his thoughts about Bard in an internal message group in February: "Bard is worse than useless; please do not launch."
That note was viewed by nearly 7,000 people, with many agreeing that the chatbot's answers were contradictory or even egregiously wrong on simple factual queries. One employee recounted how Bard's advice about how to land a plane would lead to a crash. Another said Bard gave advice "which would likely result in serious injury or death" when asked about scuba diving.
The 18 current and former employees said the termination of AI researchers Timnit Gebru in December 2020 and Margaret Mitchell in February 2021 was a turning point for Google's foray into AI – which culminated with Bard. Gebru and Mitchell, who co-led Google's ethical AI team, left the search engine giant over a dispute regarding fairness in the company's AI research. Several other researchers subsequently left Google to join competitors – including Samy Bengio, who oversaw the work of Gebru and Mitchell.
Since then, employees have found it difficult to work on ethical AI at Google. One former Google staffer who asked to work on fairness in machine learning said they were routinely discouraged, to the point that it affected their performance review. The employee's managers complained that the fairness work was getting in the way of their "real work." (Related:
Ex-Google engineer warns Microsoft's AI-powered Bing chatbot could be sentient.)
Google unit responsible for ethical AI disempowered and demoralized
Google reiterated that responsible AI remains a top priority at the company. Company spokesman Brian Gabriel told Bloomberg: "We are continuing to invest in the teams that work on applying our AI principles to our technology."
The 18 current and former employees begged to differ, however. They said the Google unit that had been working on ethical AI, first established in 2018, is now disempowered and demoralized. Back in 2021, the Big Tech firm had pledged to double the headcount of the ethical AI team and pour more resources into it.
Now, staffers responsible for the safety and ethical implications of Bard and other new products have been admonished against impeding the development of the company's generative AI tools. At least three members of the team were also included in the January mass layoff.
Google's pivot toward Bard at the cost of potential ethical issues was mainly spurred by the release of rival chatbot ChatGPT. Microsoft invested in OpenAI, the chatbot's developer, with a view to incorporating ChatGPT into its products, specifically Microsoft Office. Thus, Google is racing to beat Microsoft by incorporating Bard into Google Workspace.
A minority of employees believe that Google has conducted sufficient safety checks on its new generative AI products, and that Bard is safer than competing chatbots. But now that Google has deemed the release of generative AI products its No. 1 priority, ethics employees said it has become futile to speak up.
Signal Foundation President Meredith Whittaker, herself a former manager at Google, lamented how "AI ethics has taken a back seat." She warned: "If ethics aren't positioned to take precedence over profit and growth, they will not ultimately work."
Visit EvilGoogle.news for more stories about Google's Bard AI chatbot.
Listen to Elon Musk's warning about how Bard and other AI chatbots will be far more dangerous than nuclear weapons. This video is from the DaKey2Eternity channel on Brighteon.com.
More related stories:
Google shares lose $100 billion after new AI chatbot gives an incorrect answer in demo.
Google unveils new AI medicine robots: Are HUMAN doctors about to become obsolete?
Google CEO admits he DOESN'T UNDERSTAND how his company's AI chatbot Bard works.
Sources include:
Finance.Yahoo.com
Brighteon.com