This is my archive


Why is demand so strong?

Today, we received another jobs report showing that the labor market remains red hot. Unemployment fell to 3.4%, a 54-year low. Job growth was 253,000, which is well above trend and well above pre-report estimates. By far the most important data point, however, is the growth rate of average hourly earnings. Nominal wages grew at a 6% annual rate in April, well above expectations. (The 12-month growth rate ticked up from 4.3% to 4.4%.) For a Fed that is trying to slow the growth in aggregate demand, this is bad news. For the purposes of monetary policy, wage inflation is the only inflation rate that matters.

Why does the economy remain so hot, despite more than a year of “tight money”? Is it long and variable lags? No. A truly tight money policy reduces NGDP growth almost immediately. The actual problem is a misidentification of the stance of monetary policy.

I’ve discussed this issue on numerous occasions, but people don’t seem to be paying attention. So perhaps a picture would help. In the two graphs below I provide typical examples of a tight money policy and an easy money policy. Note that what really matters is the gap between the policy rate (fed funds rate) and the natural interest rate.

It’s not always true that a period of tight money is associated with falling interest rates, but that is usually the case. Does that mean the NeoFisherians are correct—that lower interest rates represent a tight money policy? No. For any given natural rate of interest, lowering the policy rate makes monetary policy more expansionary. That fact is clear from the way that asset markets respond to monetary policy surprises. But when the natural rate is falling (often due to a previous tight money policy), the policy rate usually falls more slowly. To use the lingo of Wall Street, the Fed “falls behind the curve.” The opposite happened during 2021-22, when the Fed raised rates more slowly than the increase in the natural interest rate.
In this case, it wasn’t so much the pace of rate increases, which was fairly robust, as the fact that they waited too long to raise rates, by which time the natural interest rate had already risen sharply. P.S. The natural rate cannot be directly measured; we infer its position by looking at NGDP growth. That’s why I ignore interest rates and focus on NGDP.
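The “gap” framing above can be made concrete with a toy calculation. All rates below are invented for illustration, not estimates: the point is simply that the policy rate can be falling while policy is getting tighter, because the stance is the gap between the policy rate and the (unobservable) natural rate.

```python
# A toy illustration of the claim that the stance of monetary policy is the
# gap between the policy rate and the natural rate.
# All rates below are hypothetical, not estimates of any actual period.
periods = [
    (5.0, 5.0),   # (policy_rate, natural_rate): neutral
    (4.0, 2.5),   # both rates fall, but the natural rate falls faster...
    (3.0, 1.0),   # ...so policy gets tighter even as the policy rate drops
]

stances = []
for policy, natural in periods:
    gap = policy - natural
    stance = "tight" if gap > 0 else ("easy" if gap < 0 else "neutral")
    stances.append(stance)
    print(f"policy={policy:.1f}%  natural={natural:.1f}%  gap={gap:+.1f}pp  -> {stance}")
```

In this sketch the policy rate falls from 5% to 3%, yet the stance moves from neutral to tight, which is the NeoFisherian confusion the post describes in reverse.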


Dan Klein on Hume on War

I am involved in a regular reading group, and at this time our text is David Hume’s Essays, which contains “Of the Balance of Power.” It ends with several paragraphs on Great Britain’s “imprudent vehemence” in its many wars against absolutist France. Those paragraphs are remarkably relevant to things today, as I see them. In entering into those paragraphs, one learns about Hume’s thoughts and a way to see events today. Hume presents France as a real threat to Britain. He speaks of it as “this ambitious power,” one that is “more formidable [than Charles V and the Habsburgs were] to the liberties of Europe.” He seemed to endorse Britain’s efforts to “guard us against universal monarchy, and preserve the world from so great an evil.” It is possible that those declarations were sincere, and it is possible that they were sound. But Hume was a cagey writer, and certainly wrote to persuade the ruling class. What is so notable about “Of the Balance of Power,” however, is how it concludes. Hume says that Britain has prosecuted war to “excess,” calls for “moderation,” and gives his reasons. In applying those paragraphs to today, we might think of the United States in place of Britain, and Russia or China (or both) in place of France. Today’s Ukraine, Germany, and other NATO countries would be in the place of the allies of Hume’s Britain. This is from Daniel Klein, “David Hume’s Warning on Our Future Wars,” Law and Liberty, May 3. Law and Liberty is our sister publication at Liberty Fund. In his article, Dan lays out how judicious David Hume was in his thinking about war with France. Dan suggests that we be as judicious in thinking about war with Russia. Another excerpt: Today, what is the realistic aim in Ukraine? Why put off negotiations and resolution? 
Hume writes that it is “owing more to our own imprudent vehemence, than to the ambition of our neighbours” that we have sustained “half of our wars with France, and all our public debts.” Second, Britain, being “so declared in our opposition to French power,” has displayed also that it is “so alert in defence of our allies.” How, then, do Britain’s allies respond? “[T]hey always reckon upon our force as upon their own; and expecting to carry on war at our expence, refuse all reasonable terms of accommodation.” I particularly like the part about government debt, which I think doesn’t get talked about enough with regard to U.S. wars and proxy wars. While the Ukraine war has been relatively cheap for the United States so far, the war against Afghans and Iraqis has cost us cumulatively trillions of dollars. Dan ends with the following: The White House and Pentagon are close to the Washington Monument but now far from George Washington. I recommend the whole thing.


Introducing Following Their Leaders

Democracy has become widespread enough as a form of social organization to be assumed legitimate by default. To say a country is “not democratic” is implicitly understood as a rebuke. Even patently authoritarian regimes often go through the motions of holding elections and claiming their leaders are democratically supported. Democracy is so strongly supported that there is a cottage industry of people who get upset over massive charitable donations to good and worthy causes, because the money was given at the discretion of the donor rather than taken through the tax system and therefore used in a way that is “democratically accountable.” Given this massive presumption that “democratic” equals “legitimate”, or that democracy makes the government accountable to the people, it’s all the more important to place this presumption under careful scrutiny. Many scholars have examined this issue and have raised important questions. A recent book by Randall Holcombe, Following Their Leaders: Political Preferences and Public Policy, has joined this body of research and will be the focus of a series of posts where I examine and give thoughts on its central claims. Of course, this is no substitute for actually reading the book – even spreading the discussion out through several posts, large chunks of the case will be either left out or only described in a bare-bones way. With that disclaimer aside, let’s get the lay of the land. How, ideally, is democracy supposed to work? Holcombe gives a generalized, mathematical account of the democratic process, represented as follows: C = f(P1, P2, P3,…Pn) In the above equation, P represents the preference of a given voter, with P1 being the preferences of the first voter, P2 those of the second, and so on through Pn for the nth and final voter. The method of aggregating these votes is represented by the function f.
Different vote aggregation methods may produce different outputs – for example, the output might be different when f is majority rule compared to when f is an electoral college system, and both could be different from an f that uses rank-order voting or single transferable votes. Running all the inputs through a given function produces the output C, which represents the collective choice produced by the voting system. Or, as Holcombe more tersely puts it, “Voters vote, the votes are aggregated through the algorithm represented by f, and a collective choice is made.” In the standard, Civics 101 model, voters have preferences about political and social policies. Through the act of voting, voters make their preferences known, and the totality of these preferences is aggregated into a social choice. This information is in turn taken by policymakers who, in response to the results of elections, craft policies that represent and reflect the preferences of citizens. There are a few obvious issues worth looking at more closely. For example, is there a way to coherently map the idea of “a collective choice” onto reality, in a way that is analogous to the kinds of choices individuals make? This is one concern someone could express about C. There are concerns about f as well. There are many different forms of vote aggregation out there. The same inputs, run through different fs, can produce different and even diametrically opposite results. Is there an aggregation method that is clearly superior to the others, in that it produces more consistently optimal results, or more accurately reflects the preferences of voters? Holcombe touches on these concerns, but they are not his primary focus. Instead, he is interested in P – that is, the preferences themselves. How do voters form the preferences that serve as the input to the democratic process?
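The claim that the same inputs run through different fs can produce opposite results is easy to demonstrate. The ballots below are hypothetical, chosen only to make the point; the sketch compares simple plurality rule with a Borda (rank-order) count:

```python
from collections import Counter

# Each ballot is a ranking of candidates, best first.
# The distribution of ballots is hypothetical.
ballots = [("A", "B", "C")] * 5 + [("B", "C", "A")] * 4 + [("C", "B", "A")] * 2

def plurality(ballots):
    """f = plurality rule: count only first-place votes."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def borda(ballots):
    """f = Borda count: a rank-order method awarding n-1, n-2, ... 0 points."""
    scores = Counter()
    for b in ballots:
        for points, candidate in enumerate(reversed(b)):
            scores[candidate] += points
    return scores.most_common(1)[0][0]

print(plurality(ballots))  # A: the most first-place votes (5 of 11)
print(borda(ballots))      # B: the highest total rank score (15 vs. A's 10)
```

The eleven voters never change their minds; only f changes, yet the collective choice C flips from A to B. This is exactly the concern about f raised above.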
As he explains it: Thinking about democratic institutions as a way of aggregating the policy preferences of individual citizens into some vision of the public interest requires an understanding of how those institutions aggregate individual preferences, which has been done extensively in the public choice analysis undertaken by political scientists and economists…But it also requires an understanding of how citizens form the preferences they express through democratic institutions, and this has seen much less development. Political preferences are often assumed as given and exogenous, and the primary interest of this volume is to examine in more detail how those preferences are formed, and as a result, the implications for public policy. Holcombe argues that preferences are neither given, nor static, nor equivalent. There are different kinds of preferences people have, which are formed in different ways, and some of these preferences can drive, dictate, or alter other preferences. Most importantly, Holcombe argues, economists and political scientists all too often treat preferences as exogenous to the system with which they interact. This gets things backwards. Our preferences emerge through our interaction with our available choices and our judgments about how to best pursue our ends. Pre-existing preferences don’t create our choices – the choices we make under the constraints we face define and create our preferences, says Holcombe, quoting James Buchanan in support: Economists tend to describe individuals as utility maximizers. They have utility functions that constitute their preferences, and they refer to those utility functions to make choices that maximize their utility. In fact, the process works in the other direction, as James Buchanan explains. People make choices and the choices they make define their preferences. “Individuals do not act so as to maximize utility, described in independently-existing functions.
They confront genuine choices, and the sequence of decisions taken may be conceptualized, ex post (after the choice), in terms of ‘as if’ functions that are maximized. But those ‘as if’ functions are, themselves, generated in the choosing process, not separately from such process.” So if preferences are generated in the choosing process, it’s worth examining what that process is and how it can influence preference formation. In the next post, I’ll describe the different kinds of preferences Holcombe identifies, and how they come to be formed.


Assume a Tesla

Comparing pollutants generated by EVs and gasoline-powered cars over the life cycle also leads to ambiguous results. Of course, EVs produce zero pollution but they do use electricity, and electricity production causes pollution. How does the EPA take account of this? It doesn’t. Go to page 203 of the EPA’s 728-page proposal for its new regulation and you will see this statement: EPA is proposing to make the current treatment of PEVs [plug-in electric vehicles] and FCEVs [fuel cell electric vehicles] through MY [model year] 2026 permanent. EPA proposes to include only emissions measured directly from the vehicle in the vehicle GHG [greenhouse gases] program for MYs 2027 and later (or until EPA changes the regulations through future rulemaking) consistent with the treatment of all other vehicles. Electric vehicle operation would therefore continue to be counted as 0 g/mile, based on tailpipe emissions only. In short, the EPA assumes something it knows to be false, namely that emissions from producing electricity to power EVs are zero. I’m tempted to call this the EPA’s “non-smoking gun.” How could the EPA justify such an extreme assumption? On the same page, it attempts to do so, writing, “The program has now been in place for a decade, since MY 2012, with no upstream accounting and has functioned as intended, encouraging the continued development and introduction of electric vehicle technology.” Did you catch that? The EPA justifies its explicit bias against gasoline-powered vehicles and in favor of EVs by arguing that doing so will encourage the continued development of EVs. Well, yes, just as ignoring the cost of anything will justify more of that thing. Call it the EPA’s new frontier in cost/benefit analysis. Or maybe call it the Bart Simpson justification: “I only lied because it was the easiest way to get what I wanted.”   The above is from David R. Henderson, “EV Mandates Are Taking Californians for a Ride,” Defining Ideas, May 4, 2023. 
The original title I gave the piece (the editor chose a different title) is “Assume a Tesla.” You’ll see why if you read the first few paragraphs of the piece. At the end, I give what I think are substantial grounds for hope, based in part on thoughts from a deep expert on regulation, Peter Van Doren. Read the whole thing.


About that 2022 “recession”

One of the worst depressions in American history began in mid-1937. At the time, Keynesian ideas were becoming increasingly prominent and many Keynesians blamed fiscal austerity. In fact, there wasn’t all that much fiscal austerity in 1937, certainly not enough to cause a major depression. Between 2020 and 2022 we had roughly twice as much “austerity” as in 1937, at least in terms of the reduction in the budget deficit. And yet not only did we not have a major depression, we saw some of the strongest job growth in American history. Yes, we began 2022 with employment still a bit below normal, but that was even more true in 1937. And yes, the austerity of 2022 mostly reflected the decision to end Covid relief programs, but much of the austerity of 1937 was the decision to not repeat the big 1936 “bonus” payments to WWI veterans. I’m confident that observers can spot a few more differences, but do they actually explain such a dramatic difference in outcome? Do they explain the difference between major depression and extraordinary job growth? And why didn’t the sharp fiscal tightening after WWII lead to the major depression predicted by Keynesian economists at the time? Why didn’t the big 1968 tax increase reduce inflation, as predicted by Keynesian economists? Why didn’t the 2013 austerity produce a recession, as predicted by Keynesian economists? The answer to all of these questions is quite simple: it’s monetary policy that drives aggregate spending, not fiscal policy. Tight money caused the 1937 depression. It’s time to give up on the theory that fiscal policy drives aggregate demand. The Fed takes fiscal policy into account when it makes its decisions. It tries (not always successfully) to offset the effects. The same concept applies to banking problems. It is very possible that we’ll have a recession in late 2023 (recessions are almost impossible to forecast). But if we do, it won’t be caused by banking turmoil.
If the Fed thought credit problems were likely to lead to a recession, they would not be raising interest rates this week. If there is a major recession it will be because the Fed raised rates too much—it misjudged the situation. In contrast, a very small recession might in some sense be intentional—the Fed’s way of reducing inflation. PS. Here’s the unemployment rate. Notice that recessions (grey vertical bars) are easy to spot. Do you see a recession in 2022? Neither do I.


Not Very Sophisticated Thinking About Inflation

A story in yesterday’s Wall Street Journal reminds us how even financial journalists may fail to go past common intuitions if not superstitions about inflation—or at least don’t ask all the questions that a familiarity with economic analysis suggests.  “Some economists,” we are told, think that businesses are using inflation to “opportunistically” boost their profits, thereby fueling inflation in return (“Why Is Inflation So Sticky? It Could Be Corporate Profits,” May 2, 2023). If inflation is caused by businesses raising their profits, why didn’t they do that before inflation? Because they did not expect their competitors to do the same, the story suggests. But if that is true, it means that businesses are not raising their profits now just because they suddenly want to (they were not greedy before!), but because it is increased market demand that is pushing up prices and short-term profits at the same time. Aren’t consumers as greedy as businesses? So why aren’t they forcing businesses to cut or cap their prices? Same answer: because markets don’t allow it, that is, consumers are the ones bidding up prices, just as employees are responding to the bidding up of wages on labor markets. But why are consumers suddenly bidding up prices? Why are businesses suddenly bidding up wages? Could it be that central banks (the Fed in the United States) have increased the money supply, in large part to finance the jump in government deficits? And why would a report in a financial newspaper not at least mention the existence of a respected monetary theory of inflation according to which the phenomenon is due to more money chasing the same quantity of goods? In early 2021, after three years during which the Fed had increased the money supply (M2) by about 50%, chairman Jerome Powell declared: Right now, I would say the growth of M2, which is quite substantial, does not really have important implications for the economic outlook. 
Both economic history and theory strongly suggest it was not just bad luck. (The Fed has since pushed down the money supply, partly repairing its error, at a cost.) On corporate profits and inflation, The Economist shows more sophistication than the Wall Street Journal. The venerable British magazine writes (“Are Greedy Corporations Causing Inflation,” April 30, 2023): People are looking for someone to blame—and corporations are often top of the list. According to a recent survey by Morning Consult, a pollster, some 35% of Americans believe that “companies’ attempts to maximise profits” have contributed “the most” to inflation, more than any other factor by far. … Arguments for “greedflation” rest on unsure theoretical ground. Companies did not suddenly become avaricious. … If you are fuming at paying $10 for a coffee, blame the barista serving it to you as much as the owner. According to the monetary theory of inflation, however, the barista is not to blame either.
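The monetary theory of inflation invoked above is usually summarized by the equation of exchange, MV = PY. A back-of-the-envelope sketch, using round hypothetical growth numbers (not Fed data) and the standard growth-rate approximation, shows how money growth in excess of output growth maps into inflation:

```python
# Equation of exchange: M * V = P * Y.
# In growth-rate form: %dM + %dV ~= %dP + %dY  (an approximation that is
# accurate for small rates). All numbers below are hypothetical.
m_growth = 0.40   # suppose the money supply grows 40%
y_growth = 0.05   # while real output grows 5%
v_growth = 0.00   # and velocity is unchanged

# Rearranging: the implied price-level growth is money growth plus velocity
# growth, minus the growth of real output ("more money chasing goods").
p_growth = m_growth + v_growth - y_growth
print(f"implied price-level growth: {p_growth:.0%}")  # 35%
```

The point is not the particular numbers but the accounting: if money grows far faster than output and velocity does not fall to offset it, the price level must rise.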


American Economic Association Reaches a New Low

The American Economic Association has awarded the prestigious John Bates Clark medal to University of California, Berkeley economist Gabriel Zucman. At the link you’ll find what the AEA decision makers thought made him deserving. What’s missing? The shoddy work he did to make the data fit his story that in 2018 the tax rate on the “super-rich” exceeded the tax rate on the bottom 50 percent. That contradicted one of his own findings in a previous academic article. Economic historian Phil Magness, who was one of a number of people who caught the problem at the time, explained the details in a February 25, 2020 article titled “Harvard Finally Stands Up to Academic Duplicity”: The issue with Zucman’s work revolves around a stunning statistical claim that he made last fall. According to his own proprietary calculations, the overall effective tax rate paid by the ultra-rich in the United States had dipped below that paid by the bottom 50 percent of earners for the first time in 2018. Zucman released these statistics to journalists with much fanfare, where they were quickly trumpeted as “fact” by outlets including the New York Times and Washington Post to bolster Elizabeth Warren’s wealth-tax proposal. In reality, Zucman’s numbers had not even undergone scholarly peer review, as is the norm for work in the economic arena. The weeks that followed their release also revealed something far worse than failing to adequately vet this seemingly stunning empirical claim. Instead of objectively reporting the latest findings from tax statistics, Zucman was placing his finger on the scale. He appeared to be bending his results to conform to the political narrative of Warren’s campaign, which he was also advising at the time. Through a series of highly opaque and empirically suspect adjustments, Zucman had artificially inflated the tax rate paid by the poorest earners while simultaneously suppressing the tax rate paid by the rich.
I was among the first economists to notice and call attention to the problems with Zucman’s new numbers. Shortly after his release to the New York Times, I noticed a strange discrepancy. The tax-rate estimates he provided for the ultra-rich – the top 0.001 percent of earners – did not match his own previously published academic work on the subject, including a 2018 article in the highly ranked Quarterly Journal of Economics. Whereas Zucman now claimed to show the ultra-wealthy paid just slightly north of 20 percent of their earnings in taxes, the most recently available year of his previously published numbers (2014) places the rate at 41 percent. I called attention to this discrepancy with a tweet, as did Columbia’s Wojtek Kopczuk and the University of Central Arkansas’s Jeremy Horpedahl. Then the floodgates of scrutiny opened. According to Magness, here’s how Zucman did it: At the bottom of the income ladder, he was artificially raising the depicted rate faced by the poorest earners. He did so by excluding federal tax programs that are intentionally designed to alleviate the tax burden on the poor, such as the Earned Income Tax Credit and the Child Tax Credit. By leaving out these programs, Zucman not only broke from decades of statistical conventions – he also created the illusion that the tax rate paid by the bottom quintile was nearly twice its actual level. Later investigation revealed that Zucman further tilted the scales through unconventional assumptions about the burdens of state and local consumption taxes on the poor. To avoid the empirical impossibility of infinite sales-tax rates that arise from accounting discrepancies between pre- and post-transfer income, Zucman essentially excluded the bottom decile of earners when assigning its tax incidence. This essentially causes him to misrepresent data from the second decile from the bottom as the poorest earners. 
Zucman’s handling of the very top of the distribution ventured even more aggressively into the territory of intentional data manipulation. The biggest discrepancy here came from his handling of how to assign corporate tax incidence across earnings. When economists examine corporate tax incidence, they usually distribute it across a variety of affected parties according to fairly standard assumptions about the portion that falls onto shareholders, onto other forms of capital, and onto the noncorporate sector of the economy due to various pass-through effects. Indeed, Zucman followed these conventional assumptions in his aforementioned academic article from 2018, coauthored with Saez and Thomas Piketty. In his new statistics, however, he jettisoned all conventional literature on corporate tax incidence and adopted his own heterodox approach that effectively assigns 100 percent of actual incidence to its statutory incidence, namely shareholders. This unconventional assumption not only conflicts with his prior work, but is sufficiently unrealistic to have caused a wave of jeers around the economics profession when it was discovered. In practical effect, however, it greatly augmented Zucman’s depicted tax rate on the top 0.001 percent in the mid-20th century and greatly reduced the same in the last few decades, mapping with the recent downward trend in corporate tax rates. As a result of this scrutiny, the president and provost of Harvard vetoed a job offer to Zucman. And it wasn’t just free-market types who were critical. Larry Summers, who appeared on a panel with Zucman’s co-author Emmanuel Saez, said that after examining the data that Zucman and Saez used to justify a wealth tax, he was “about 98.5% persuaded by their critics that their data are substantially inaccurate and substantially misleading” (at the 20:40 point in the above link). Notice, just following this part, how Summers, using his own data, cast doubt on the Zucman/Saez methodology.
John Bates Clark deserved better.


Fresh Air

My wife and I went to see the movie Air on Saturday and I highly recommend it. If you follow this blog closely and have read the post about my Wall Street Journal op/ed, co-authored with Don Boudreaux, on Air and ESG, you might wonder how I could write an op/ed without seeing the movie. The answer is that Don saw it and I took his word for it. The good news: he got it right. But I want to talk about something else: how good a movie this is. To review quickly, it follows Nike employee Sonny Vaccaro as he tries to make Nike a player in the basketball shoe market. Multiple spoilers ahead. Vaccaro makes some gutsy moves, going around Michael Jordan’s agent to talk directly with the real decision maker: Michael Jordan’s mom. Why do I like Air so much? It’s a good old-fashioned movie. There’s no sex or romance. It’s about one man’s determined moves to get Deloris Jordan to the bargaining table and to get his two bosses, played by Jason Bateman (Rob Strasser) and Ben Affleck (Nike CEO Phil Knight) to back him. Beyond that, my wife and I loved the 1984-era music. I always stay and watch the credits and there was a lot of music credited, more than the usual. Also, I’ve seen the movie Jerry Maguire at least 4 times and I swear that some of the background music in a couple of scenes was very close to the background music in a few Jerry Maguire scenes. Coincidentally, it was some of the music I liked most in Jerry Maguire. I also had a very personal reason for liking the movie. It all took place, as far as I could tell, in the summer of 1984, when Jordan was about to start playing with the Chicago Bulls. That summer was eventful in my wife’s and my lives. In August we moved from Arlington, VA, where we had lived when I was working with the Council of Economic Advisers and she was working as an editor with the Center for the Study of Market Processes (later the Mercatus Center) at George Mason University, to Monterey, CA.
There I began a job as a temporary member of the faculty at the Naval Postgraduate School. My wife was five months pregnant with our daughter and so she didn’t drive with me. She flew, and we rendezvoused at her sister’s place in Chicago, and then I drove on to San Francisco, where we rendezvoused again, picked up our cat Max, whom I had had a friend ship from Washington, and drove down to Monterey, where we had rented a house. Exciting times, and the movie’s music brought back my feelings of fear and excitement as we started a big new chapter in our lives.


Hayek’s Critique of Unlimited Democracy

I think the main interest of the third volume of Friedrich Hayek’s 1973-1978 trilogy Law, Legislation, and Liberty, titled The Political Order of a Free People, resides in its strong liberal critique of democracy as we know it. My review of this third volume is just out on Econlib. A few excerpts of my review (the quotes are of course from the book): The first broad argument of the book is that democracy has diverged from its original ideal and degenerated into an unlimited and totalitarian democracy. Unlimited democratic power can be traced back to the decline of Athenian democracy at the end of the 5th century BC when, as Aristotle noticed, “the emancipated people became a tyrant.” In a similar way, the British Parliament became sovereign, that is, theoretically omnipotent, in 1766, when it “explicitly rejected the idea that in its particular decisions it was bound to observe any general rules not of its own making.” Liberal democracy originally referred simply to “a method of procedure for determining government decisions” or, more practically, for getting rid of governments without bloodshed. Democracy was a protection against tyranny. It is an error to view democracy not as “a procedure for arriving at agreement on common action,” but instead “to give it a substantive content prescribing what the aim of those activities ought to be.” The current, unlimited democracy leads to rent-seeking (competition for government privileges), the triumph of special interest groups, and legal corruption. The cause is that a government with unlimited powers “cannot refuse to exercise them,” so everybody will rush to the public trough. I previously reviewed the first two volumes on Econlib: Rules and Order and The Mirage of Social Justice. As the reader of my reviews will realize, I try to provide a summary of Hayek’s theory, but I also draw a few parallels with other theories, and raise some questions or doubts.
For those who are not already familiar with Hayek’s thought, I would recommend reading my reviews in the same order as the books.


Why Scott Alexander is wrong

Scott Alexander pushes back against the argument that building more housing in a city will reduce housing prices in that city. He begins by noting that housing costs tend to be higher in places that are relatively dense, such as New York and San Francisco. He is aware that this argument is subject to the “reverse causality” issue, which I call “reasoning from a price change”. Consider the graph that he provides: He is aware that the pattern above may show an upward sloping supply curve, not an upward sloping demand curve. But he nonetheless suggests that it’s probably an upward sloping demand curve, and that building more housing in Oakland would make Oakland so much more desirable that prices actually rise, despite the greater supply of housing. I have two problems with this sort of argument. First, I doubt that it’s true. It is certainly the case that building more housing can make a city more desirable, and that this effect could be so strong that it overwhelms the price depressing impact of a greater quantity supplied. But studies suggest that this is not generally the case. Texas provides a nice case study. Among Texas’s big metro areas, Austin has the tightest restrictions on building and Houston is the most willing to allow dense infill development. Even though Houston is the larger city, house prices are far higher in Austin: Houston pretty much describes the “Oakland with more housing” outcome that Alexander views as somewhat far-fetched. Only in this case, it’s Austin with more housing. Alexander seems too quick to accept the “If you build it, they will come” idea—that you can build more housing and thereby boost demand so much that prices actually rise. Alexander relies on the following intuition: Matt Yglesias tries to debunk the claim that building more houses raises local house prices. He presents several studies showing that, at least on the marginal street-by-street level, this isn’t true.
I’m nervous disagreeing with him, and his studies seem good. But I find looking for tiny effects on the margin less convincing than looking for gigantic effects at the tails. When you do that, he has to be wrong, right? Here’s the problem with this argument. It mixes up population change due to economic effects such as the benefits of agglomeration, with population changes due to regulatory changes such as less strict zoning. If you look at things this way, then the stylized facts work against Alexander’s argument. Over the past 50 years, increasingly strict zoning has reduced housing construction in big cities like New York and San Francisco. As a result, their populations have increased by less than in cities with less strict zoning, such as Houston. If Alexander were correct, then the price gap between the tightly controlled cities on the coast and the more laissez-faire cities of Middle America should have shrunk over time. Instead, the price gap has widened. New York and San Francisco were always more expensive than other cities, but with tighter zoning and less new construction the gap has become far wider. Nonetheless, I suspect that there are at least a few cases where Alexander’s argument would be correct, especially in the case where the new housing was luxury homes that replaced slums. For instance, if 100,000 homes in the (poorer) eastern half of Washington DC were replaced with 120,000 luxury townhouses, then prices might rise (due to a lower crime rate). But even in that case, I believe Alexander would be drawing the wrong conclusion: And it doesn’t violate laws of supply and demand; if Oakland built more houses, this would lower the price of housing everywhere except Oakland: people who previously planned to move to NYC or SF would move to Oakland instead, lowering NYC/SF demand (and therefore prices). The overall effect would be that nationwide housing prices would go down, just like you would expect.
But the decline would be uneven, and one way it would be uneven would be that housing prices in Oakland would go up. This isn’t an argument against YIMBYism. The effect of building more houses everywhere would be that prices would go down everywhere. But the effect of only building new houses in one city might not be that prices go down in that city. This is a coordination problem: if every city upzones together, they can all get lower house prices, but each city can minimize its own prices by refusing to cooperate and hoping everyone else does the hard work. This theory is a good match for higher-level management like Gavin Newsom’s gubernatorial interventions in California. Tell me why I’m wrong! Alexander is implicitly viewing this outcome as a “problem” for the city that builds more housing.  They must sacrifice so that the rest of the country can gain.  But in his scenario, Oakland is better off.  Indeed if it were not better off, then why would more people choose to live in Oakland?  In order for it to be true that building more housing boosts housing prices, it must also be true that the quality of existing houses (including neighborhood effects) rises by more than enough to offset the increase in supply.  That means the new housing construction must make Oakland such a desirable place to live that the amenity effect overwhelms the quantity effect. You see the same fallacy with criticism of highway expansion projects.  People will complain, “They added two more lanes to the freeway, but the traffic is worse than ever.”  But that’s a wonderful result!  If the traffic is worse than ever, despite many more people driving on the highway due to the extra lanes, then the welfare of commuters has increased for two reasons.  First, more people benefit from using the highway.  Second, the fact that they are willing to use it despite a higher time cost means that they value the service much more than before the expansion.  Otherwise, the traffic would not be worse. 
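The amenity-versus-quantity condition can be sketched with a toy linear model. This is only an illustration of the logic, not an estimate: the function name, parameters, and all numbers below are made up for the example.

```python
# Toy linear model: does building more housing raise local prices?
# Inverse demand for a home in the city:
#     P = a - b*H + theta*H
# where H is the housing stock,
#   -b*H     is the quantity effect (more supply pushes prices down), and
#   +theta*H is the amenity effect (new construction makes the city
#            more desirable, shifting demand up).
# All parameter values are illustrative, not estimated from data.

def equilibrium_price(H, a=500_000, b=2.0, theta=0.5):
    """Price of a home when the city's housing stock is H units."""
    return a - b * H + theta * H

# With theta < b, the quantity effect dominates: adding homes lowers prices.
print(equilibrium_price(100_000))  # 350000.0
print(equilibrium_price(120_000))  # 320000.0

# Prices rise with more housing only if theta > b, i.e. only if the
# amenity effect overwhelms the quantity effect.
print(equilibrium_price(120_000, theta=3.0) > equilibrium_price(100_000, theta=3.0))  # True
```

The point of the sketch is that "building more housing raises prices" is not impossible, but it requires the amenity coefficient to exceed the ordinary supply effect, which the studies cited above suggest is rare.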
Of course, economic change always has winners and losers. Here's how I would describe the impact of allowing more housing construction in Oakland, in the unlikely event that this did raise housing prices:

1. America would benefit.
2. Oakland would benefit.
3. Poor people in America would benefit, in aggregate.
4. Affluent people in America would benefit, in aggregate.
5. Homeowners in Oakland would benefit.
6. Some renters in Oakland would benefit (from a more economically dynamic city).
7. Some renters in Oakland would suffer from higher rents.

In the much more likely case where new housing construction would lower prices, the impacts described in #5 and #7 might reverse. Either way, there is no defensible argument against building more housing in Oakland, regardless of the impact on price. If building more housing reduces its price, then there is a strong argument for allowing more housing construction. If building more housing raises its price, then the argument for more construction is even stronger.
