
Google’s Gemini AI Goes Rogue, Runs Amok

via YouTube
This article was originally published at StateOfUnion.org. Publications approved for syndication have permission to republish this article, such as Microsoft News, Yahoo News, Newsbreak, UltimateNewswire and others. To learn more about syndication opportunities, visit About Us.

Google’s AI chatbot Gemini drew backlash for generating politically correct but historically inaccurate images in response to prompts about historical events.

When asked to depict events like the signing of the Constitution, the AI showed racially diverse groups despite the actual participants being white males.

It also generated images of female popes and Black Vikings, misrepresenting the gender and race of historical figures.

Google apologized for “offering inaccuracies in some historical image generation depictions.”

Jack Krawczyk, the product lead on Google Bard, posted, “We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately.”

Users probing the model’s limits found Gemini reluctant to depict subjects such as churches in historically white settings, which it deemed potentially offensive.

Gemini claimed the photos were meant to “provide a more accurate and inclusive representation of the historical context.”

Experts say pre-set constraints built into the model shaped the AI’s woke responses.

Google acknowledged the issues, saying the AI “missed the mark” by producing inaccurate historical depictions, and said it is working to address the problems.

The incident highlights the risk of sociopolitical biases shaping generative AI systems.
