Megan McArdle once again joins EconTalk host Russ Roberts, this time to discuss the dangers that the left-leaning bias of Google's AI poses to speech and democracy, whether such a thing as unbiased information can exist, and how answers given without regard for social compliance create nuance and facilitate healthy debate and interaction. McArdle is a columnist for The Washington Post and the author of The Up Side of Down: Why Failing Well Is the Key to Success.
When the dangers of AI are discussed, what often comes to mind are apocalyptic scenarios of human subjugation or extermination, as in Terminator; I Have No Mouth, and I Must Scream; and 2001: A Space Odyssey. More realistically, some are concerned about AI's potential to destroy jobs, dilute the meaning of art, or make plagiarism easier. Meanwhile, AI chatbots and companions are becoming mainstream; the main topic of this podcast, Google's Gemini, is one of them. McArdle's concern is how the left-wing bias of the companies at the forefront of AI development has bled into their creations, and the attendant impact on American free speech and democracy.
McArdle's initial examples of this involve Gemini producing factual inaccuracies in the name of affirming a socially respectable position. The first case discussed is Gemini's artistic portrayal of the Founding Fathers, and other ethnically European historical figures, as nonwhite, which she admits is trivial. Far more notable to McArdle, however, were Gemini's responses to text queries about gender-affirming care.
So, I asked it about gender-affirming care, and in short order, Gemini was telling me that mastectomies were partially reversible. When I said, my aunt had a mastectomy for breast cancer, is that reversible? It said no, and I said, well, I don't understand. It seemed to understand that these were the same surgery, but then it delivered a stirring lecture on the importance of affirming and respecting trans people. It had clearly internalized part of our social rules about how we talk about this subject, but not all of them. The errors leaned in one direction; it was not making errors telling people conservative things that aren't true, and, to be clear, no activist wants Gemini to tell people that mastectomies are reversible. It was acting like the dumbest possible parody of a progressive activist.
McArdle notes that there are explanations for this that don't involve deliberate bias, such as the chatbot having been trained on social media sites like Reddit, whose moderation leans left. Additionally, the AI lacks the ability to detect the limits of logical positions and subtle social rules. But this, too, is an issue for McArdle. She sees it as indicative of speech suppression, in which only one side of the political spectrum is allowed to be praised and the other is only allowed to be demonized. Her example is Gemini's refusal to praise right-wing figures like Brian Kemp while readily praising more controversial left-wing figures like Ilhan Omar. The danger in this, to McArdle, is that AI will teach people not to think in a complex manner and will tailor its answers to keep the questioner in their ideological bubble.
We have these really subtle social rules, and we apply them differently in different situations; we code-switch. If I'm with my liberal friends, there are some issues where I'm like, you know what, let's just not have a conversation about that. The AI can't do that; it's like a toddler. That can go in one of two directions. The good direction is that Google understands it cannot enforce the subtle social rules of the Harvard faculty lounge, which is effectively what it had done. Google can just say, we're going to be willing to say that Mao is also bad, but we're not going to say that Donald Trump, who was elected by half of the population, is too awful and you're only allowed to say awful things about him. That's a more open equilibrium. It's a place that allows people to be more confronted with queries and the complexity of the world. Gemini often does a good job with that. I have been dumping on it for an hour, but it actually often does a good job of outlining where the nuance is. My nightmare is that instead, Google teaches Gemini to code-switch, to know the person who's asking the query and what bubble they want to live in, and to give them an answer that will please them. That is a genuinely disturbing future.
In response to this, Roberts asks a fantastic question: since search engines, in order to be useful, are discriminatory or biased by their very definition, what could an unbiased Google, or Gemini, possibly mean? The question prompts Roberts to voice his pessimism, as the problem is larger than AI chatbots and centers on the very ideal of an unbiased search engine. That ideal teaches people not to decipher truth from varying information; instead, people rely on the results they are given, particularly those that align with their biases, which has a further detrimental trickle-down effect on democracy. Since search engines are biased by their nature, Roberts' solution is for their users to behave more carefully and attentively.
This is a conversation about AI, at least nominally, but it's really a much deeper set of issues related to how we think about our past. History has been taught as: great men do great things, and let's learn about what they were. The modern historical trend is partly a reaction against that, and I have no problem with that. The problem I have is the whole idea of unbiased history. What the heck would that possibly be? You can't create unbiased history; you cannot create an unbiased search engine. Almost by definition, if it's to be useful, it's discriminatory. It's by definition the result of an algorithm that had to make decisions…the problem for me culturally is that we have this ideal of an unbiased search engine. That can't happen, so we should be teaching people how to read thoughtfully…you start to get people not knowing what the facts are…they assume most things are true if they agree with them. This infantilization of the modern mind is the road to hell; this is going to be difficult for democracy. I don't think it's a coincidence that the two candidates we have in the United States are not what most people would call the two most qualified people. It points to something more fundamental.
McArdle proposes a similar solution: a proliferation of people focusing less on social appeasement when difficult social questions are being answered, and more on finding nuance. Understanding the complexity of addressing problems like racial inequality is the route to finding solutions.
A great new concept I just learned is people who are high-decoupling versus low-decoupling. High decouplers abstract all questions from context, and low decouplers answer questions in the social context in which they occur. What you need is a high-decoupling system instead of one attempting to produce a socially desirable answer. No one is a perfect high decoupler, but it gives you as much nuance as possible.
In contrast to Roberts' pessimism, McArdle gives reasons to hope for the best-case scenario. She believes that negatives are sure to come from AI, just as social media led to cancel culture, but that human decency will triumph over the challenge. McArdle's point is a defense of liberal society. She views past attempts to fundamentally shift the social order away from Enlightenment principles as failures, and she expects the new attempt to shift the window of acceptable views, seen in Google's left-wing bias, to fail as well. The spirit of human connection and conversation is strong enough to maintain productive discourse.
So, I think that the long-term reason to be optimistic is that these technological challenges are going to create a bunch of bad stuff. I can't even imagine all of it. You can't either. If you had asked me in 2012 to predict cancel culture from Twitter, I definitely would not have. But we are also actually fundamentally decent to each other over and over and over again, and we look for ways to be decent to each other…I actually believe that enough people want the things that really matter–which are the people you love, and creating a better world and free inquiry and science and all of those amazing human values…And so, I think at the end of the day, that will probably win if the AIs don't turn us into paperclips.
Although this was a fascinating conversation, I finished the podcast unconvinced by McArdle that AI bias is a meaningful issue. At multiple points throughout the podcast, she would describe a response from Gemini that displayed clear left-wing bias, and then go on to state that Google fixed the issue very quickly, sometimes the same day. For example, she mentioned that Gemini no longer says mastectomies are partially reversible.
The AI that told me that mastectomies were reversible now doesn't say that. It's actually interesting how fast Google is patching these holes.
Drawing from this, it seems that Google has a set of values it wants its AI to embody and is simply working out the kinks. Furthermore, the leap McArdle takes from this is drastic, to say the least: "We are now saying that you can't have arguments about the most contentious and central issues that society is facing." The claim of social media bias against right-wing people has been shown to be unfounded and is far from the biggest threat to free speech. There is a far better argument that social media companies have failed to adequately regulate disinformation and false claims about vaccines and the 2020 election, and have been unable to take action against the harassment and right-wing extremism coming from their platforms.
Like cancel culture, this is an overblown concern. In the pursuit of preserving free speech and expression, the better place to focus is not on social media companies banning people for hate speech. It is on state legislatures banning forms of LGBTQ+ expression, such as drag, and on Project 2025's totalitarian and Christian nationalist aims to restrict speech contrary to conservative principles. These are far more important than Gemini refusing to write a love poem for Brian Kemp. Freedom is under attack in America, but the attack comes predominantly from the far right, not Silicon Valley.
McArdle's argument also raises the question of the extent to which corporations are responsible to entities other than their shareholders. Are fossil fuel companies obligated to shift their energy production to green sources in order to slow climate change? What about corporations' responsibility to pay their workers a living wage, even if it is above the market equilibrium? Are building developers, such as those of Grenfell Tower, responsible for installing sprinkler systems or building with safer materials, even if doing so is more expensive? If Silicon Valley is socially responsible for upholding the public square and the spirit of free speech, even when that negatively impacts its shareholders, then this principle of social responsibility should be extended to all areas of corporate activity.
Related EconTalk Episodes:
Megan McArdle on Internet Shaming and Online Mobs
Ian Leslie on Being Human in the Age of AI
Can Artificial Intelligence be Moral? With Paul Bloom
Zvi Mowshowitz on AI and the Dial of Progress
Marc Andreessen on Why AI Will Save the World
Related Content:
Megan McArdle on Catastrophes and the Pandemic, EconTalk
Megan McArdle on the Oedipus Trap, EconTalk
Megan McArdle on Belonging, Home, and National Identity, EconTalk
Akshaya Kamalnath’s Social Movements, Diversity, and Corporate Short-termism, at Econlib
Jonathan Rauch on Cancel Culture and Free Speech, The Great Antidote Podcast
Lilla Nora Kiss’ Monitoring Social Media at Law & Liberty