Google’s AI chatbot Gemini faced backlash for generating politically correct but historically inaccurate images in response to user prompts about history.
When asked to depict events like the signing of the U.S. Constitution, the AI showed racially diverse groups even though the actual participants were white men. It also generated images of female popes and Black Vikings, misrepresenting the gender and race of historical figures.
New game: Try to get Google Gemini to make an image of a Caucasian male. I have not been successful so far. pic.twitter.com/1LAzZM2pXF
— Frank J. Fleming (@IMAO_) February 21, 2024
Google apologized for the errors. Jack Krawczyk, the product lead on Google Bard, posted: “We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately.”
‘Woke’ Google Gemini refuses to say pedophilia is wrong after ‘diverse’ historical images debacle: ‘Individuals cannot control who they are attracted to’ https://t.co/ilMR2hx0cr pic.twitter.com/PaUWd2MN1m
— New York Post (@nypost) February 24, 2024
Users testing the model’s limits found that Gemini was reluctant to depict subjects such as churches in historically white contexts, citing the potential to cause offense.
Gemini claimed the images were meant to “provide a more accurate and inclusive representation of the historical context.”
Experts say pre-set constraints built into the model shaped the AI’s “woke” responses.
In a follow-up statement, Google acknowledged the issues, said the AI had “missed the mark” with its inaccurate historical depictions, and confirmed it was working to address the problems.
The incident highlights the risk of sociopolitical biases being built into generative AI systems.