This is my archive

EconLog Price Theory: Let Them Eat Steak

We’re bringing back price theory with our series of Price Theory problems from Professor Bryan Cutsinger. You can see all of Cutsinger’s problems and solutions by subscribing to his EconLog RSS feed. Share your proposed solutions in the Comments. Professor Cutsinger will be present in the comments for the next couple of weeks, and we’ll post his proposed solution shortly thereafter. May the graphs be ever in your favor, and long live price theory!

Question: Russ buys 5 sirloins per week. True or false: If the price of sirloin rises by $5 apiece, and if Russ’ preferences and income remain constant, he will have $25 a year less to spend on other things.


In (Sort of) Defense of (Something Like) Property Taxes

A revolt is building across the United States against property taxes. From Florida to North Dakota, states have attempted or are attempting to abolish them. The anger driving this movement comes from two sources. One is the belief that you are being taxed for living in your house. “Is the property yours or are you just renting from the government?” Florida Governor Ron DeSantis asked. “Boiled down to its very essence, fulfilling the promise of personal liberty is impossible if you can’t actually own a piece of real property,” Pennsylvania state Rep. Russ Diamond argues. The second driving force is that property tax burdens are often tied to the notional market value of an asset—your house—rather than to the owner’s ability to pay or the cost of providing the services the tax finances. They function like a wealth tax, which isn’t good. “Seniors on Social Security in 2025 received a 2.5% cost of living adjustment,” a Minnesota resident notes, “yet my city property tax increased by 10% and 48% over the past five years.” The first of these points is based on a misapprehension (albeit an understandable one, given the second point).

Property taxes are payments for locally provided and consumed goods and services

Property taxes are not a fee for living in your house, but a payment for locally provided and consumed goods and services, like schools, police, parks, the fire department, etc. If advocates of property tax abolition are willing to forego these goods and services, then there is no problem. But few of them are. The question then becomes: how will these goods and services be paid for? The ideal would be to charge for a local park the same way we would charge for a water park, or for the fire department the same way we would for pest control. But “public goods” – though less ubiquitous than often claimed – do exist, so simply charging for services isn’t always possible. A squad car cruising the street deters criminals from burgling number 48 and number 50 (it is “nonrivalrous,” in the jargon), whether number 48 pays for it or not (it is “nonexcludable”)—and whether they are still paying their mortgage or not. In these cases, if you want the locally provided and consumed service, you must pay for it somehow.

Local service fee burdens should be based on the cost of their provision

The payment method commonly used to fund locally provided and consumed goods and services is called “property taxes,” and the charges are frequently driven by the value of your house. So the above misconceptions about property taxes are understandable. If we deal with these misconceptions and genuine problems with property taxes, we can construct something fairer that might garner more support, or at least tolerance. As a first step towards reforming the system of paying for locally provided and consumed goods and services, they ought to be renamed. When Margaret Thatcher abolished the “rates” system – which was essentially a property tax – she called its replacement the Community Charge. While this was hugely controversial in its application, it was an accurate reflection of what the payment actually was. A second step would be to break the link between changes in the burden of these payments and changes in the notional value of the payer’s property. The burden should change as the cost of providing the goods and services changes.
A local Taxpayer’s Bill of Rights (TABOR), which limits the growth of government spending to something like the growth rate of inflation plus population, for example, would help contain Community Charge burdens by containing local government spending. Finally, once the cost of these locally provided and consumed goods and services has been determined, there are a number of ways to apportion it between taxable units. One, closest to the current system, would be to allocate it according to each unit’s share of the total property value in the locality. Another, Thatcher’s idea, sought to approximate a private sector fee as closely as possible by apportioning the cost by the number of people in each unit (a small numeric sketch of both approaches appears at the end of this post).

Some taxes are better than others

Most people who want to abolish the property tax want to keep the locally provided and consumed goods and services that these taxes finance. There are several proposals for how to finance them, ranging from handouts from state governments to levies on migrants’ wire transfers to foreign countries. Those pushing these schemes often present themselves as “conservative” because they want to abolish a tax, but unless they also want to abolish the spending, they are, in reality, merely seeking that free lunch which a wise man told us does not exist. There are notably few takers among the abolitionist ranks for the hefty sales tax hikes that could fill the gap. Those who consume goods and services, as far as possible, ought to be those who pay for them.
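As an illustration only (not part of the original post), here is a minimal sketch in Python of the two apportionment rules described above, allocation by property-value share versus Thatcher-style per-capita allocation, applied to a total bill capped TABOR-style at inflation plus population growth. All names, figures, and growth rates are hypothetical:

# Hypothetical illustration: apportioning a TABOR-capped "Community Charge"
# either by each household's share of total property value or per capita.
# All numbers below are made up.

def tabor_cap(last_year_cost, inflation, population_growth):
    # Cap the growth of the local service bill at inflation plus population growth.
    return last_year_cost * (1 + inflation + population_growth)

def by_property_value(total_cost, property_values):
    # Allocate the bill according to each unit's share of total property value.
    total_value = sum(property_values.values())
    return {unit: total_cost * value / total_value
            for unit, value in property_values.items()}

def per_capita(total_cost, residents):
    # Allocate the bill by the number of people in each unit (Thatcher's approach).
    total_people = sum(residents.values())
    return {unit: total_cost * n / total_people
            for unit, n in residents.items()}

last_year_cost = 1_000_000  # hypothetical cost of local services last year
total_cost = tabor_cap(last_year_cost, inflation=0.03, population_growth=0.01)

property_values = {"No. 48": 250_000, "No. 50": 750_000}
residents = {"No. 48": 4, "No. 50": 2}

print("Capped bill:", round(total_cost))
print("By property value:", {k: round(v) for k, v in by_property_value(total_cost, property_values).items()})
print("Per capita:", {k: round(v) for k, v in per_capita(total_cost, residents).items()})

With these made-up numbers the same capped bill splits very differently under the two rules (No. 48 pays a quarter of it under the property-value rule but two-thirds under the per-capita rule), which is precisely the policy choice the post describes.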


The Liberal 19th Century

Many libertarians and classical liberals consider the 19th century in the West as the most liberal epoch in history. We can certainly see stains, notably slavery and later Jim Crow, as well as colonialism (think about the control of trade from the colonies, which Adam Smith criticized in his 1776 Wealth of Nations). In many countries, moreover, the liberal century started late (in France, for example) or ended early (in Germany). Even in the UK, the Corn Laws were only abolished in the middle of the century, and British libertarians were pessimistic as the end of the century approached (see Matt Zwolinski and John Tomasi’s The Individualists, which I reviewed in Regulation). Yet, for Anthony de Jasay, whose thought is strongly anchored in the “private fortresses” of private property, the 19th century was clearly the era of liberalism, even if fleeting. In his book Against Politics (see my Econlib review), he wrote:

It is to history taking its time that we owe thanks for the brilliant but passing nineteenth-century interlude in Western Civilization, with limited government and assured-looking private sovereignty of everybody’s own decisions over crucial domains of economic and social life.

The UK was among the countries where the advance of liberalism was most promising. In his English History 1914–1945 (Oxford University Press, 1965), historian, journalist and broadcaster A.J.P. Taylor described his country on the eve of World War I. Was he influenced by similar observations in John Maynard Keynes’s 1919 book The Economic Consequences of the Peace? In any event, the opening paragraph of Taylor’s book is memorable and worth quoting nearly in extenso; it suggests that the promises of liberalism have been seriously betrayed:

Until August 1914, a sensible, law-abiding Englishman could pass through life and hardly notice the existence of the state beyond the post office and the policeman. He could live where he liked and as he liked. He had no official number or identity card. He could travel abroad or leave his country for ever without a passport or any sort of official permission. He could exchange his money for any other currency without restriction or limit. He could buy goods from any country in the world on the same terms as he bought goods at home. For that matter, a foreigner could spend his life in this country without permit and without informing the police. Unlike the countries of the European continent, the state did not require its citizens to perform military service. … Only those helped the state who wished to do so. The Englishman paid taxes on a modest scale … rather less than 8 per cent. of national income.

The rest of the paragraph shows both the emergence of an interventionist trend and that the British were still generally freer than nearly everybody else in the West at the time—and even than everybody now. The interventionist trend was not so much apparent in elementary public education and in last-resort social assistance as in the fact that some adults (mainly women) were deemed incapable of liberty in certain areas of life:

The state intervened to prevent the citizen from eating adulterated food or contracting certain infectious diseases. It imposed safety rules in factories, and prevented women, and adult males in some industries, from working excessive hours. The state saw to it that children received education up to the age of 13. Since 1 January 1909, it provided a meagre pension for the needy over the age of 70. Since 1911, it helped to insure certain classes of workers against sickness and unemployment. This tendency towards more state action was increasing. Expenditure on social services had roughly doubled since the Liberals took office in 1905. Still, broadly speaking, the state acted only to help those who could not help themselves. It left the adult citizen alone.

Taylor is a controversial figure. He had briefly been a member of the British Communist Party in his youth and remained a lifelong socialist. But is it possible that the quote above mainly reflects something that we still observe? I mean that socialists don’t understand that individual liberty is impossible without economic freedom, just as conservatives have problems understanding that economic freedom is inseparable from individual liberty. According to David Pryce-Jones writing in The New Criterion, though, it’s worse than that: Taylor was also a fellow-traveler of the Soviet regime and a Nazi sympathizer—anything but a friend of individual sovereignty! He seems to have gone through a whole palette of collectivist ideologies. So, his description of English liberty before WWI was probably meant as an indictment. At any rate, we can read his description as a picture of what individual liberty should be, against all forms of authoritarianism on the right or on the left.

******************************

A London Underground station in the late 19th century, as viewed by ChatGPT


The Social Benefits of Iconoclasts

Years ago, my father offered me some advice. (Many such instances, but I have a specific case in mind.) When in class, he told me, never be afraid to raise your hand and ask questions or seek clarification on some point you don’t understand. People are often reluctant to do this, he said, because they’re afraid of seeming like they’re slower than their classmates. When a teacher pauses and asks “Are there any questions?” and nobody else around you has any, it’s easy to feel like everyone else is up to speed and you’ll stick out as falling behind. But, if everyone else in class also feels that way, then there can be lots of people with lots of questions, but nobody raising their hand.

Plus, there was an extra benefit, he told me. He asked, “Have you ever been in class and been confused by something, but someone else asked about it and you were glad that they did?” The answer, of course, was yes. And that was an extra reason to ask questions. Doing so would give me the chance to be that guy — by asking a question, I might also be helping other people who needed clarification but were too nervous to ask for it.

On that last point, my dad was speaking like an economist, albeit without the jargon. In that jargon, asking questions in class had the chance to create positive externalities. I might gain additional understanding for myself, but other people could benefit in the same way. Because of this, individually people might undervalue asking questions, leading to too few questions being asked in class. Pointing this out was a way to try to encourage me to internalize the externality — to consider that if I’m feeling confused on some point, it’s likely that at least a few others are as well, and that should increase my willingness to ask questions.

The other point ties back to my earlier posting on preference falsification. The hesitance to ask questions in a classroom setting for fear of seeming like you’re not keeping up with everyone is another case where people might falsify their preferences. Publicly, students will express that they are up to speed and need no additional information, while privately desiring extra clarification. If each individual thinks they are the only one who is feeling confused, and is worried about seeming foolish compared to everyone else, then we can end up in a scenario where everyone privately wants extra explanation but publicly expresses a desire to keep moving ahead.

An iconoclast is someone who loudly and boldly takes stances far outside of conventional (expressed) public opinion. Iconoclasts can attract a lot of criticism. On the other hand, in situations where there is widespread preference falsification, the only way to break out of that is for at least some people to be willing to noticeably make their private beliefs publicly known. Each person who does so makes it just a little bit easier for the next person to do so as well. The first people to do so may face heavy criticism — even attempts at cancelation — but iconoclasts often revel in the controversy rather than being deterred by it. There are upsides and downsides to this. In the worst case, we have trolls — people who say outrageous things simply for the purpose of causing outrage, and who revel in doing so. On the other hand, in at least some cases, people who are genuinely iconoclastic can start the process that breaks the spell of preference falsification. I have no doubt that trolls outnumber iconoclasts.
But despite this, the value of open and free expression is not diminished. Even though most new ideas are terrible, some will be real breakthroughs. We don’t have a way of identifying in advance which will be which — because doing so would require us to know in advance what future experience will show. As Yogi Berra once said, prediction is hard, especially about the future.

A parallel can be made with the work done by venture capitalists. They know that most of the ventures they support will turn out to be flops and will fail — but just a few here and there will turn out to be giant successes. There’s no way to know in advance which will be which — if they knew that, then they’d only invest their money in those rare few and not bother with all the rest. But because they don’t — and can’t — know which is which, they invest very broadly to make sure those few good ideas can be found and brought out.

The same is true in the marketplace of ideas. Of all the ideas put forth that are drastically outside the (apparent) social consensus, most will probably just be duds and the people who advocate them likely trolls who just want to get a rise out of people. But some few will be different — and have the potential to make a commonly held but commonly hidden belief more freely expressed. We don’t know which ideas will be which, and most will probably be the former, but the only way to find the latter is to let all ideas out into the open.


Steven Pinker on Common Knowledge

Why are Super Bowl ads so good for launching certain kinds of new products? Why do we all drive on the same side of the road? And why, despite laughing and crying together, do we often misread what others think? According to bestselling author and Harvard psychologist Steven Pinker, it all comes down to common knowledge, or the phenomenon that happens when everyone knows that everyone else knows […]


Evaluating We Have Never Been Woke Part 2: Bootleggers and Baptists

After spending ten posts (beginning here) outlining Musa al-Gharbi’s arguments in his book We Have Never Been Woke, it’s time to move on to my evaluation of those arguments. In my first post discussing this, I covered al-Gharbi’s claim that elite overproduction is an important cause of “Awokenings.” Today I want to explore how thinking about incentives and political coalitions might help us evaluate al-Gharbi’s explanations.

Bootleggers and Baptists

Another point in al-Gharbi’s argument is that, in the guise of social justice activism, woke activists promote policies that benefit themselves, but are harmful to the poor and vulnerable, as a means of protecting their own status. He shows that many of the policies associated with progressivism (or wokeism) today were first introduced during the first Great Awokening. These included welfare and social aid programs, education requirements, increased and more rigorously enforced regulations, licensing and certification laws, zoning and development regulations, and technocratic economic management. As al-Gharbi notes, the early progressive movement originally pursued these policies as a means of ensuring high-status social positions would be kept out of reach of the “wrong” kind of people (women and racial and religious minorities in particular) and as a means of bringing about eugenicist goals.

This creates an interesting situation. The goals and motivations of modern progressives are very different from the explicitly racist, classist, and eugenicist goals of the early 20th-century progressive movement. Yet in pursuit of outcomes that are the opposite of those intended by early progressives, modern progressives tend to advocate…basically the same set of policies. There are a few ways we might square this circle. The most uncharitable is to suggest that the goals of progressives never changed, and the movement is still intent on keeping the “deplorables” in their place. In other words, that modern progressives are deliberately dishonest about their goals. Another possible explanation is the bootleggers and Baptists approach: Some progressives are Baptists, and genuinely believe that, say, occupational licensing laws are beneficial on net and their absence would bring about all manner of terrible outcomes. Others, however, cynically use licensing laws to protect incumbents and shut people out of upward mobility, as in the case of Sandy Meadows, described here by George Will:

Meadows was a Baton Rouge widow who had little education and no resources but was skillful at creating flower arrangements, which a grocery store hired her to do. Then Louisiana’s Horticulture Commission pounced. It threatened to close the store as punishment for hiring an unlicensed flower arranger. Meadows failed to get a license, which required a written test and the making of four flower arrangements in four hours, arrangements judged by licensed florists functioning as gatekeepers to their own profession, restricting the entry of competitors. Meadows, denied reentry into the profession from which the government had expelled her, died in poverty, but Louisianans were protected by their government from the menace of unlicensed flower arrangers.

But Musa al-Gharbi’s explanation is that the proverbial bootlegger and Baptist are one and the same. The woke want to be upwardly socially mobile and protect their status — their inner bootlegger. But they also want to bring about egalitarian goals — their inner Baptist.
When there’s a conflict between their inner bootlegger and Baptist, the woke behave like bootleggers and speak like Baptists – and construct narratives to convince others, but mostly themselves, that their behavior is Baptist in its motivation as well. I think there is some truth to this analysis. But how much of the variance does it explain? I’m still skeptical that it explains much about why modern progressives support the policies they do.

Consider one particular policy that was originally, and for a long time, advocated for specifically on the grounds that it would serve as a barrier to entry to keep “undesirables” such as racial minorities and women unemployed: the minimum wage. As Thomas Leonard documented in his book Illiberal Reformers: Race, Eugenics, and American Economics in the Progressive Era, what many economists now cite as one of the most damaging results of the minimum wage – how it disproportionately drives the most vulnerable people out of work – was originally considered to be the minimum wage’s primary benefit by progressives. Progressives today continue to be particularly aggressive in their support for increasing the minimum wage – but it’s far from clear to me that their modern support for that policy is ultimately rooted in the initial justification. Though al-Gharbi isn’t quite explicit on this point, there are a handful of passages in the book that lead me to believe he’s in favor of increasing the minimum wage. Certainly, however, al-Gharbi does not desire to ensure the most vulnerable people be shut out of upward mobility. Supposing I’m right about al-Gharbi’s support for an increased minimum wage, it naturally raises the question – if al-Gharbi can support this particular policy today for reasons contrary to the initial gatekeeping purposes it was meant to serve, can’t the same be true today of progressives who favor, say, licensing, certification, and educational requirements? And even if I’m wrong about al-Gharbi’s support for minimum wage increases, surely it’s not hard to imagine why progressives today might support that policy even while opposing the goals for which it was originally instituted. Indeed, I suspect the vast majority of progressives simply have no idea that displacing the poor and vulnerable was the original goal of so many of the policies they support.

I can’t help but wonder if there is a potentially much simpler explanation underneath it. But first, a digression into a different Scott Alexander post. In the post I have in mind, Scott Alexander describes (without necessarily endorsing) “the theory that the fear of disease is the root of all conservativism.” This elaborate theory, he points out, actually has a lot of fancy research supporting it:

There has been a lot of really good evolutionary psychology done on the extent to which pathogen stress influences political opinions. Some of this is done on the societal level, and finds that societies with higher germ loads are more authoritarian and conservative. This research can be followed arbitrarily far – like, isn’t it interesting that the most liberal societies in the world are the Scandinavian countries in the very far north where disease burden is low, and the most traditionalist-authoritarian ones usually in Africa or somewhere where disease burden is high? One even sees a similar effect within countries, with northern US states being very liberal and southern states being very conservative.
Other studies have instead focused on differences between individuals within society – we know that religious conservatives are people with stronger disgust reactions and priming disgust reactions can increase self-reported conservative political beliefs – with most people agreeing disgust reactions are a measure of the “behavioral immune system” triggered by fear of germ contamination.

He also proposes the idea of another “Grand Narrative” underlying conservative thinking on social policy:

The Narrative is something like “We Americans are right-thinking folks with a perfectly nice culture. But there are also scary foreigners who hate our freedom and wish us ill. Unfortunately, there are also traitors in our ranks – in the form of the Blue Tribe – who in order to signal sophistication support foreigners over Americans and want to undermine our culture. They do this by supporting immigration, accusing anyone who is too pro-American and insufficiently pro-foreigner of “racism”, and demanding everyone conform to “multiculturalism” and “diversity”, as well as lionizing any group within America that tries to subvert the values of the dominant culture. Our goal is to minimize the subversive power of the Blue Tribe at home, then maintain isolation from foreigners abroad, enforced by a strong military if they refuse to stay isolated.”

Both of these grand and complex theories Alexander was proposing were meant to answer a particular question – specifically, the difference between Republicans and Democrats on the issue of how to handle the possibility of an Ebola outbreak in 2014. At that time, the position among Republicans was that the disease should be contained through travel restrictions and strict quarantines of those who might have been exposed. And the position among Democrats was that even suggesting the use of very limited quarantines or lockdowns to contain the spread of disease was an unconscionable violation of civil liberties, was harmful to the poor and vulnerable, and was intrinsically racist. As Alexander put it,

What’s more, everyone supporting the quarantine has been on the right, and everyone opposing on the left. Weird that so many people suddenly develop strong feelings about a complicated epidemiological issue, which can be exactly predicted by their feelings about everything else.

What’s interesting is this was written in 2014, which, dear reader, means it was written about a half-decade BC (Before Covid). And when Covid came around, suddenly the partisan divide flipped, with Democrats being overwhelmingly likely to embrace even widespread lockdowns and quarantines, and Republicans taking the opposite view. (Libertarians, by contrast, were consistently on the “oppose quarantines” side on both occasions.) This is pretty difficult to square with either of Alexander’s Grand Theories. However, in the same post, he does suggest there might be a simpler explanation:

Is it just random? A couple of Republicans were coincidentally the first people to support a quarantine, so other Republicans felt they had to stand by them, and then Democrats felt they had to oppose it, and then that spread to wider and wider circles? And if by chance a Democrat had proposed quarantines before a Republican, the situation would have reversed itself? Could be.

I think this is ultimately a much stronger explanation than the fancy theories.
And to put a bit more flesh on this – while there was a lot of screaming and yelling among the Extremely Online Crowd during 2014, the whole episode was fairly short-lived and had little impact on most people’s lives. (I suspect many people reading this post today forgot that there was ever an Ebola controversy in 2014.) As a result, neither position really “took” as being the “official position” for either party. However, Covid had an overwhelming social impact and left nobody’s life untouched. As a result, when that event occurred, many issues that were never politically valenced before became durably coded as the “conservative” or “progressive” view.

In the same way, it seems to me that a simpler explanation is that progressives initially recommended a variety of social and economic policies for particular reasons at the time. But over time, those policy positions themselves became durably coded as “progressive.” And, over decades, people who thought of themselves as progressive would simply adopt whatever policies were coded with the proper political valence. They weren’t progressive because they supported those policies – they supported those policies because they considered themselves to be progressive. As Arnold Kling would say, we choose what to believe based on who we believe. I think in most cases people support the policies that are coded as favorable to their political ideology, rather than supporting an ideology because they deeply understand the history and impact of various policies associated with that ideology, or even an understanding of how the policy would impact them personally.

To be clear, this is not to say I think al-Gharbi’s explanation is completely wrong. In fact, I think it does explain at least some of the variance, and it represents a genuine contribution to understanding how the world works. I’m just not sure I’m convinced that the desire to protect one’s social class is a dominating factor compared to a desire to defend policies favorably coded by one’s political ideology. In my next post, I’ll be examining some of al-Gharbi’s commentary on economics and economic policy.

As an Amazon Associate, Econlib earns from qualifying purchases.


AI Won’t Kill Work – It Will Reinvent It

It’s easy to doomscroll these days. AI, it appears, is coming for our jobs. Even occupations that were previously considered an easy path to a middle-class lifestyle, like lawyer and radiologist, may be subject to the AI chopping block. Yet these stories, despite their flashy headlines, are missing nuance. They examine the seen (and likely) consequences of the AI revolution, but are missing the unseen “what comes next” part of the story. Every historical episode of creative destruction involves both creativity and destruction. Yet current news stories are focusing only on the destruction. We might not know how AI will revolutionize the American workforce, but past episodes of similar technological upheaval suggest that the future will be brighter than we can imagine.

Recent headlines are, indeed, scary. Consider the following:

May 12, 2025: “For Silicon Valley, AI isn’t just about replacing some jobs. It’s about replacing all of them” – The Guardian
June 18, 2025: “AI Will Replace Amazon Jobs. CEO Andy Jassy Confirms Workers’ Worst Fears.” – Barron’s
July 3, 2025: “Ford’s CEO is the latest exec to warn that AI will wipe out half of white-collar jobs” – Business Insider
July 19, 2025: “AI will take your job in the next 18 months. Here’s your survival guide.” – MarketWatch

These headlines aren’t from some alarmist blogger, sheltering in a tin-hat corner of the internet. These are from reputable news sources with large readerships. And they’re causing an artificial panic. Consider the Amazon headline. Amazon has been an industry leader in automation, yet employment at the company has continued to grow unabated. Currently, Amazon employs more than 1.5 million people. That’s up from 17,000 in 2007, and nearly double its 2019 employment figure. This employment growth has happened despite the fact that the company currently has more than a million robots in its workplaces. The jobs those robots have replaced are primarily those involving menial work or repetitive tasks, freeing up labor for more valuable pursuits. While CEO Andy Jassy recently announced that AI will likely lead to future job cuts at the company, similar claims were made in 2012 when Amazon acquired robotics company Kiva Systems. Employment grew unabated after this acquisition.

These headlines also sound suspiciously like those circulating during a previous public conversation in which technology threatened to take all the jobs away. In the mid-1990s, the internet began to move from the plaything of tech hobbyists to a central part of work and education. Jobs that had previously been done by human processors were increasingly outsourced to data processors. In 1995, Jeremy Rifkin published his book The End of Work, which argued that the dawn of the information technology age would create a massive and structural decline in jobs. He suggested that as many as two-thirds of all existing jobs could eventually be eliminated by machines. Jobs in manufacturing, agriculture, and clerical work were particularly vulnerable to this type of technology-based outsourcing. To be fair, machines did take over many of those jobs. But we didn’t have massive, enduring, structural unemployment as a result. Instead, new jobs emerged.

Because I’m writing a piece on how AI won’t replace all our jobs, I asked ChatGPT to help me figure out how to identify some jobs that didn’t exist in 1990 and now have a significant number of employees. It very helpfully pointed me to the Bureau of Labor Statistics’ Occupational Employment and Wage Statistics.
Here are a handful of new job categories and their current employment figures from that database:

Software and Web Developers, Programmers, and Testers: 2,154,370 employees
Database and Network Administrators and Architects: 633,540 employees
Computer and Information Analysts: 677,230 employees

Indeed, the full set of “Computer and Mathematical Occupations” has exploded since internet adoption began accelerating in the late 1990s. The entire category of “Computer Occupations” currently has an employment figure of 4,786,660. These broad categories include a range of fulfilling jobs and occupations, including app developer, social media manager, cloud architect, cybersecurity analyst, and influencer. In past eras, many of the individuals pursuing these opportunities would have been good candidates for once-stable jobs in law, accounting, or manufacturing.

In 1897, Mark Twain heard a rumor that he’d died. He sent a letter to the New York Journal to clear up the matter, stating that “the report of my death was an exaggeration.” Not only are the reports of AI’s employment “death toll” an exaggeration, but they’re missing information about the critical second act of the play. After the destruction comes the creativity, and the story of the internet can give us clues about the future of work in this technological episode as well.

As an Amazon Associate, Econlib earns from qualifying purchases.


The Virtue of Dissent and Conversation

I have written a lot on dissent and how it serves in the truth-finding process (for selections, see my blog posts here and here, and some of my academic articles like the award-winning “Cascading Expert Failure” and “Expert Failure and Pandemics: On Adapting to Life with Pandemics,” coauthored with Abigail Devereaux of Wichita State University, Nathan Goodman of the Mercatus Center, and Roger Koppl of Syracuse University). Indeed, I believe dissent is so important that I teach it in my classes. I actively encourage my students to find information and challenge me. Being able to question is vital for revealing more information and for helping both the expert and nonexpert achieve their goals.

Of course, dissent forces each party to shore up their arguments and reveal more information. But dissent is also vital because it reveals the knowledge each party has. Nelson and Winter discuss this effect in their 1982 book An Evolutionary Theory of Economic Change. Parties must make assumptions and those assumptions may not even be known to them. For example, a forecast of commodity prices requires assumptions about the major factors affecting supply and demand, expected weather conditions, the likelihood of major tail events, and so on. Additionally, since models are generalizations of observed phenomena, even the choice of models entails assumptions about conditions, margins the actors can adjust along, and the like. Dissent and the conversation it spurs help reveal these assumptions, as well as any potential biases the experts may have; we are all human, after all. Additionally, questions from the nonexpert can reveal what information the nonexpert values, which in turn helps shape the expert advice. Given much information is tacit, the dialogue between expert and nonexpert can reveal additional information to both parties as well as shape the interpretative frameworks of both expert and nonexpert.

Democracy, at least in any reasonable sense of the term, is built on the idea of dissent and conversation: citizens discuss and dissent with each other. Conversations and debate happen. The expert in a democracy, therefore, must serve a similar role: as dissenter to and converser with the nonexpert (and with other experts). The expert’s ethical duty is to serve this role, not to be a Yes-Man who simply acts at the behest of the nonexpert. Dissenting may cause the nonexpert to reevaluate their desires, hopefully in a direction that is actually toward the nonexpert’s true goals. To that end, I propose a broader version of the Hippocratic Oath geared toward experts in general: the expert shall help, or at least do no harm. That will often involve telling the nonexpert what they don’t want to hear. But that conversation may ultimately lead to better outcomes.

As an Amazon Associate, Econlib earns from qualifying purchases.


Changing Opinions on America

I have a memory of reading, sometime in the 1980s, a story in a French magazine about the American border patrol along the Mexican border. They don’t use police dogs, the reporter explained approvingly, “because of a certain idea of the rights of man.” I have tried to trace this story, but alas, to no avail. Whether the details of my memory are exact or not, I believe that, in general, and for a long time, many of those in the world who were critical of American ideals and way of life, or even thought of themselves as anti-American, still had much respect and even admiration for the country and its traditions. Many secretly regretted not being American.

How this has changed! Just consider the experience of the South Korean personnel who were the victims of a police raid at an LG-Hyundai plant in Georgia. They were arrested, shackled, and jailed for one week until they were released and allowed to return to their country. The Wall Street Journal reports on the wife of an engineer arrested there (“Confusion, Anger, Relief: Korean Engineer Tells of Week in U.S. ICE Detention,” September 12, 2025):

Lee said she was heartbroken to hear her husband, an LG Energy employee, was in shackles. “Treating him like a felon—it made me so angry,” she said.

The husband was among the 330 workers who, last Friday, landed near Seoul on a flight chartered by the South Korean government. His wife, who waited for him at the airport, emotionally declared: “I don’t want him to go back there.” “There” is America. A report by the Financial Times is even more damning (“South Korea Denounces ‘Shocking’ US Treatment of Detained Workers,” September 12, 2025):

The workers’ flight was delayed on Wednesday after President Donald Trump made them a last-minute offer to remain in the US. But only one elected to stay, with many who returned to Korea vowing never to return to America. … Business groups and South Korean officials have admitted that Korean companies often used unsuitable visas for workers sent to the US to build multibillion-dollar advanced plants. But they insist Washington left them in an “impossible position” by refusing to facilitate short-term working visas that would allow projects to be completed on time.

Another returning worker said that “we should have followed the rules properly”. Seoul should negotiate the visa issue with Washington, the worker said, but added that “I don’t want to go back to the US”.

Just a few days ago, I found, on the website of a foreign university in a Western country, a list of countries with high cybersecurity risks, requiring faculty traveling there to borrow a specially configured device from the university. The countries listed (in this order):

United States
China
Russia
Iran
India
North Korea

I suspect that, lurking under this list, there is still some anti-américanisme primaire (“crude anti-Americanism”) as we (well, some of us) used to say in French.* Perhaps many would now have some reason not to laugh at the list. American border agents have the power to inspect electronic devices at ports of entry. America is going through dangerous times. Those who love her most should be the most worried.

——

* In 1984, Georges Suffert, Deputy Editor-in-Chief of the magazine Le Point in Paris, published his book Les nouveaux cow-boys. Essai sur l’anti-américanisme primaire (The New Cowboys: Essay on Crude Anti-Americanism). At Le Point, he was a colleague of Maurice Roy, another Deputy Editor-in-Chief and also economics editor, who had published Vive le Capitalisme!
(Long Live Capitalism!) a few years earlier. I was honored to count Roy among my friends. In France as in America, we seem to be living in another geological epoch.


The Problem with Government-Run Grocery Stores

In 1989, future Russian president Boris Yeltsin took a famous trip to a grocery store in Texas. The event lives on in popular history because of this famous photograph. Yeltsin was amazed by the food availability in the US, in contrast with the breadlines of the Soviet Union. Markets successfully catered to customers, whereas government central planning performed miserably by comparison. Despite this example, politicians in the US have begun to wonder whether centrally planned grocery stores are superior. New York City mayoral hopeful Zohran Mamdani has recently proposed a municipal grocery store. Previously, I wrote a story about Chicago’s plans to create a municipal grocery store. Luckily for residents of Chicago, the plan was scrapped, and the city has decided to focus on enabling private food vendors. Let’s examine why municipal grocery stores are a bad idea and consider the potential impact if Mamdani implements the system.

The Power of Profit

The major difference between a municipal grocery store and private grocery stores can be summarized in one word: profit. To make a profit, businesses must do two things: maximize revenue and minimize costs. Higher business revenue indicates customers are willing to spend more at the business. In other words, more revenue means more value provided. In order to minimize cost, businesses must cut back on the amount of scarce resources used, and this frees up the resources to be used elsewhere in the economy. Profit represents the value a business creates for customers. If businesses make losses, the resources being used are worth more than the value being created. In other words, the business is destroying the value of resources. Thankfully, if a business makes losses for long enough, it must shut down, preventing further destruction.

Government-run grocery stores, on the other hand, have no private owner. That means no individual or group collects profits. If a state-run store has revenues greater than the costs, those revenues must be spent on something. Why does this matter? Profit is a means to evaluate decisions. For example, should a grocery store buy a new software system for more efficiently managing inventory and deliveries, or should it invest in a physical warehouse? Without profit and loss calculation, there is no rational way to make the decision. A for-profit store can calculate profits and losses and evaluate if the chosen option creates more value than cost (a small numeric sketch at the end of this post illustrates the comparison). Without profit, there is no way of telling ex post if the decision was value-creating. This insight was pioneered by economist Ludwig von Mises and has been dubbed “the calculation problem.” This is the major problem with Mamdani’s proposal.

What Will Happen?

You might think that this would mean an NYC municipal grocery store would go out of business, but the result would be worse. In state-run enterprises, value can still be lost. If the costs of a grocery store are higher than its revenues, value has been destroyed, but the money to make up for the loss must come from somewhere. Private businesses can run out of money, but governments can tax their way out. In the Soviet Union, where the economy was centralized, there wasn’t enough wealth sitting around to tax its way to success. In New York City, most businesses are private. That means there is plenty of money for the government to seize via taxation to keep inefficient operations afloat. It gets worse.
Since politicians and bureaucrats are not personally responsible for the losses created by their policies, they have no incentive to ensure stores operate at reasonable prices. If all food at grocery stores were given away for free, there would be an obvious problem. The shelves would clear out, and there’d be no incentive to restock them. Charging money is necessary to incentivize the people associated with producing food. Politicians and bureaucrats, by contrast, will have an incentive to manipulate prices to suit their political ends. If political desires drive prices too low, this could mean actual value-producing grocery stores will be unable to compete.

I can already hear people ask, “Well, wouldn’t it be good if prices were being lowered?” No! Prices serve an important function. They compensate for work, they incentivize consumers to be conservative with consumption, and they communicate knowledge about the value of goods. Disturbing prices by government fiat ruins these functions and ultimately would require the city to increase taxes to make up for the losses. It’s possible to create a municipal grocery store that leeches off the healthy economy, but it comes at a cost to taxpayers. The larger the state-run program becomes, the smaller the value-producing economy becomes. You ultimately run into Margaret Thatcher’s final constraint on socialism: “The problem with socialism is that you eventually run out of other people’s money.”

Peter Jacobsen is an Assistant Professor of Economics at Ottawa University and the Gwartney Professor of Economic Education and Research at the Gwartney Institute. His research is at the intersection of political economy, development economics, and population economics.
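As an illustration only (not part of the original post), here is a minimal sketch in Python of the profit-and-loss comparison described above. The two options and all figures are hypothetical, and the point is simply that the comparison works only because revenues and costs are expressed in market prices:

# Hypothetical illustration of using profit-and-loss calculation to compare
# two investment options for a grocery store. All figures are made up.

def profit(extra_revenue, cost):
    # Profit is the value created for customers (extra revenue) minus the
    # value of the resources used up (cost).
    return extra_revenue - cost

options = {
    # option name: (expected extra revenue over the investment's life, cost)
    "inventory software": (450_000, 300_000),
    "physical warehouse": (900_000, 800_000),
}

for name, (extra_revenue, cost) in options.items():
    p = profit(extra_revenue, cost)
    verdict = "creates value" if p > 0 else "destroys value"
    print(f"{name}: profit = {p:,} ({verdict})")

# A store with no residual claimant, and with prices set politically rather than
# in the market, loses this yardstick. That is the "calculation problem" the
# post describes.

With these made-up numbers the software purchase yields the larger surplus, but the ranking could flip with different prices, which is exactly the information a market-based profit test provides and a politically run store forgoes.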
