A former high-level Google employee alleged that the company took shortcuts and abandoned its fairness principles in launching the Gemini chatbot to quickly compete with ChatGPT.
Google’s RESIN team, which reviewed projects for compliance with the company’s AI principles, had found Gemini unsafe, but its director reportedly pushed for release anyway.
“They said no, [expletive] it. We got to get to market because we are losing to ChatGPT,” the source stated.
Some employees reportedly wanted to examine Gemini’s training data and model to understand what it had learned, but were denied access.
The company prioritized speed to market over thorough reviews because it saw ChatGPT as an existential threat to its business.
“These are old claims that reflect a single former employee’s view and are an inaccurate representation of how Google launches its AI products. We are committed to developing this new technology in a responsible way,” a Google spokesperson said.
“The thing that has sustained them and made them who they are–they are no longer the market leader in. [ChatGPT] is a real, genuine threat to their whole business,” the source said.
“They basically made a strategy decision to say, generative AI; we have to get on it. We don’t care about fairness anymore. We don’t care about bias, ethics. As long as it’s not producing child sexual abuse material or doing something harmful to a politician that could potentially affect our image, we’re going to throw [expletive] out there,” the source said.
Google reoriented its strategies and teams, shifting priorities from fairness to speed and to minimizing risks to the company itself.
The source claimed Google’s culture rewarded product launches and new initiatives over ensuring that existing offerings worked properly.
“Even the AI principles review shifted. Even the risk assessment that we were using shifted from looking at how might these models impact our users to what is the business risk on us for these models?” the source noted.
“You want to get promoted, you have to launch and you have to land,” the former employee said.
Combined with leadership changes and a lack of coordination, this led to insufficient reviews and problems with Gemini’s public release, though Google maintained that its processes were appropriate.
“First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive,” Google senior vice president Prabhakar Raghavan wrote.