Google apologizes for ahistorical and inaccurate Gemini AI images, but does nothing to correct its racist, anti-white programming
Google has apologized for the ahistorical and inaccurate artificial intelligence images generated by its recently launched Gemini AI tool.
Gemini, formerly known as Google Bard, is a multimodal large language model system that generates responses based on contextual information, language, tone of prompts and training data. In early February, Google announced the release of Gemini 1.5, claiming significantly enhanced performance.
However, within two weeks of Gemini's release, the AI model came under scrutiny for frequently inserting images of Black, Native American and Asian people into historically inspired image prompts, producing results that are completely ahistorical.
Social media users shared screenshots in which Gemini declined requests for images of White individuals while readily providing images of individuals from other racial backgrounds. Gemini even generated images of non-White United States senators in the 1800s and racially diverse soldiers of Nazi Germany. (Related: Google apologizes after its AI-powered image generator Gemini kept inserting "diversity" skin color into historical image prompts.)
The AI tool initially justified its refusals by citing the importance of focusing on individual qualities rather than race to create a more inclusive and equitable society, but Google eventually apologized.
In a formal apology on Feb. 16, Google acknowledged the shortcomings in Gemini's output. The company expressed regret after users reported embarrassing and inaccurate images generated by the system, particularly in response to prompts involving historical figures such as Nazi soldiers and the U.S. Founding Fathers.
Google Senior Vice President Prabhakar Raghavan admitted in a blog post that some images generated by Gemini were "inaccurate or even offensive," conceding that the AI tool had "missed the mark." Raghavan explained that while the company aimed for diversity in open-ended prompts, it should have provided accurate historical context in response to specific queries.
"If you ask for a picture of football players, or someone walking a dog, you may want to receive a range of people. You probably don’t just want to only receive images of people of just one type of ethnicity," Raghavan explained. "However, if you prompt Gemini for images of a specific type of person or people in particular cultural or historical contexts, you should absolutely get a response that accurately reflects what you ask for."
Gemini Experiences Product Management Senior Director Jack Krawczyk acknowledged the concerns in a separate statement to Fox News Digital.
"We're working to improve these kinds of depictions immediately. Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here," he said.
Google temporarily suspends Gemini
Google temporarily suspended Gemini on Feb. 15 and pledged to improve the feature before relaunching it.
This setback adds to a series of challenges for Google in the AI domain. The company recently faced backlash for a promotional video that exaggerated Gemini's capabilities and received criticism for its previous AI model, Google Bard.
Google has been trying to release its own generative AI chatbot since November 2022, after OpenAI launched ChatGPT. The tech giant is striving to establish its vision for the "Gemini Era," with competitors like Microsoft and OpenAI surging ahead.
However, when promoting Bard, Google shared incorrect information about pictures of a planet outside our solar system, causing its stock to drop by as much as 9 percent.
Recently, Google rebranded Bard as Gemini and introduced three versions: Gemini Ultra for complex tasks, Gemini Pro for a wide range of tasks and Gemini Nano for efficient on-device tasks. But once again, Google appears to have overcorrected, seeking diversity even in historical contexts where it makes little sense.
Visit EvilGoogle.news for more stories about Google's nefarious plans with AI.
Listen to Elon Musk's warning about how Bard and other AI chatbots will be far more dangerous than nuclear weapons.

This video is from the DaKey2Eternity channel on Brighteon.com.
More related stories:
Google is using AI to dig through Gmail accounts to "find exactly what you're looking for" – and perhaps MORE.
Google FAKES Gemini AI video to pump up its stock price.
Google rolls out new generative AI feature that summarizes articles – meaning, you can only see what it allows you to see.
Google’s rush to win AI race has led to ETHICAL LAPSES.
Google unveils plan to use AI to completely destroy journalism.
Sources include:
FoxBusiness.com
VentureBeat.com
MSN.com
Brighteon.com