PR Scramble: Google CEO Blasts "Unacceptable" AI Gemini Amid Boycott Calls

by Tyler Durden
Wednesday, Feb 28, 2024 - 03:05 PM

The boycott of Google products accelerated this week after Americans were stunned to discover that the left-leaning big tech company's premier artificial intelligence bot, Gemini, previously known as Bard, was deeply infected with the woke mind virus by its purple-haired creators. 

"I've been reading Google's Gemini damage control posts. I think they're simply not telling the truth. For one, their text-only product has the same (if not worse) issues. And second, if you know a bit about how these models are built, you know you don't get these 'incorrect' answers through one-off innocent mistakes," one X user stated in a post that has more than 5 million views. He added that he's "done with Google." 

Gemini's inaccuracies were so egregious that they appeared not to be mistakes but instead a possible deliberate effort by its woke creators to rewrite history. Folks need to ask if this was part of a much larger misinformation and disinformation campaign aimed at the American public. 

Google's PR team has been in damage-control mode for about a week, and execs are scrambling to assure users that its products aren't woke trash. 

In an email to staff, which Bloomberg obtained, Alphabet CEO Sundar Pichai described the Gemini issue as "completely unacceptable." He told staff that teams were actively addressing the problems, emphasizing the importance of providing unbiased and accurate information.

Here's the full email: 

I want to address the recent issues with problematic text and image responses in the Gemini app (formerly Bard). I know that some of its responses have offended our users and shown bias – to be clear, that's completely unacceptable and we got it wrong.

Our teams have been working around the clock to address these issues. We're already seeing a substantial improvement on a wide range of prompts. No AI is perfect, especially at this emerging stage of the industry's development, but we know the bar is high for us and we will keep at it for however long it takes. And we'll review what happened and make sure we fix it at scale.

Our mission to organize the world's information and make it universally accessible and useful is sacrosanct. We've always sought to give users helpful, accurate, and unbiased information in our products. That's why people trust them. This has to be our approach for all our products, including our emerging AI products.

We'll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes.

Even as we learn from what went wrong here, we should also build on the product and technical announcements we've made in AI over the last several weeks. That includes some foundational advances in our underlying models e.g. our 1 million long-context window breakthrough and our open models, both of which have been well received.

We know what it takes to create great products that are used and beloved by billions of people and businesses, and with our infrastructure and research expertise we have an incredible springboard for the AI wave. Let's focus on what matters most: building helpful products that are deserving of our users' trust.

"It's a search engine controlled by hopelessly woke/liberal management. With AI, its crazy bias is revealed more clearly," one X user said. 

Gemini is the tip of the woke iceberg. Folks should reevaluate their use of Gmail, Google Maps, Google Docs, and every other Google product.