
Reporters Don’t Write Headlines

I’ve frequently argued that differences in state income tax rates help to explain interstate migration patterns, especially for high-income individuals. Places like Tennessee, Texas and Florida are drawing residents from more highly taxed areas. Bloomberg reporter Jonathan Levin recently made the following claim about Jeff Bezos’s decision to move to Florida:

Lastly, I’d be remiss to ignore taxes completely. Washington, where Bezos founded Amazon in 1994, recently approved a new 7% capital gains tax targeting investment profits over $250,000, and that always stood to have a big impact on Bezos, who has sold down billions in Amazon.com stock over the years. In March, after the state Supreme Court upheld the new tax, his fellow Washington billionaire Ken Fisher announced (with characteristic grandstanding) that he was moving his money-management firm to zero-state-tax Texas. Bezos didn’t mention taxes explicitly, but the math must have crossed his mind.

Perhaps Bloomberg editors were unhappy with his article, as they added the following headline:

Bezos’ Miami Move Is Not About Washington’s Taxes
The billionaire is returning to a city where he went to high school and where his parents live — it’s as simple as that.

Really, that simple? (Most people probably never look beyond the headline.)

PS. On September 24, Bloomberg published an article full of nonsensical made-up figures. Nearly six weeks later, they have yet to correct the article. Do they have an editor?


Who’s Driving the Future?

In this episode of EconTalk from the archives, Russ Roberts hosts Rodney Brooks for a conversation on the current state of AI and projections for its future. Rodney Brooks is a professor of Robotics Emeritus at MIT. Brooks, also a robotics entrepreneur, is the co-founder of Robust AI. How soon will there be driverless cars? Will AI technology take over the workforce? Roberts and Brooks discuss these questions and more. Brooks finds his position on the future of technology in accordance with Roy Amara’s law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

Speaking of driverless cars, Roberts and Brooks predict several challenges that may come about regarding their acceptance and use. Russ points to tech, regulation, and culture as the three key challenges in the development of driverless cars. In a world of driverless cars, questions would arise if a child were in the driverless car alone; Brooks raises the overall question: who is in charge? What issues do you think will come to the surface with the widespread development of driverless cars? What should be the balance between technological excellence and human-like capability in AI? Given that this podcast was recorded in 2018, did you expect that driverless cars would be up and running by now in 2023?

Is there an issue with the majority of people falling into Amara’s law? Could this norm affect the development of groundbreaking tech? Amara’s law also reminds us of the danger of treating technology as magical. For example, Brooks shares the common theme of pundits mistaking performance for competence in technology. How should regulation put a check on focusing solely on the capability of technology as opposed to its shortcomings? To what extent might President Biden’s recent executive order accomplish this? What is a prospective technology that concerns you in terms of its competence, and why?

Another example of our underestimation of technology in the future is illustrated by the story of showing Isaac Newton an iPhone. Newton is one of the most brilliant minds ever, and yet Brooks believes that Newton would be dumbfounded by the capabilities of the iPhone, while also drawing conclusions about its abilities. In terms of its capabilities, to what extent would you consider the current iPhone magic if you took the mindset of the time before it was created? What are your expectations for the future of personal devices?

Brooks, having worked in AI for forty years, believes in the ‘baby steps’ and ‘ladders’ that have been developed on the road to groundbreaking technological advancements, as opposed to the trend of impatience for those achievements from the general public. Given the prevalence of ChatGPT and AI this year, what are some of the conclusions people might reach as they overlook the ladders yet to be built for its advancement? Why is the hypothesis of robots and technology developing into some sort of human demise so popular? To what extent do you think humans can stay ahead of AI?

Brennan Beausir is a student at Wabash College studying Philosophy, Politics, and Economics and was a 2023 Summer Scholar at Liberty Fund.


Moral Principles as a Good Strategy

Committing to moral principles can be a good strategy if a sufficient number of other people in your social environment share these principles: it will reduce your transaction costs in social cooperation. Commitment adds a recognition sign to the paradigmatic Tit-for-Tat model in game theory: if you can commit to cooperate, others will incur less risk in cooperating with you, and everybody will be better off. (Robert Axelrod’s 1984 book The Evolution of Cooperation, which developed a simple model without the possibility of communication, generated a voluminous literature.) In other words, if you are virtuous and enough others in your environment are too, virtue signaling is a good strategy. If others know that you are in the habit of honesty and decency, your life will be easier.

With respect to a free society, committing to principles of reciprocity among natural equals (to use James Buchanan’s terms) will also contribute to maintaining or developing that society. This is a major idea in Buchanan’s Why I, Too, Am Not a Conservative; I think it is approximately translatable into Hayekian terms. This small book by Buchanan is a good and non-technical introduction to his ethical theory but (trigger warning!) you may be challenged.

The idea of committing to moral principles even has implications for the conduct of war. Trying to occupy the high moral ground in wartime may not lead the enemy to compete for the top of the barrel: after all, if your group is waging a just war, moral principles are probably not the enemy’s strong point. However, the moral principles that political leaders signal may help keep some human decency in their soldiers, which will be useful after demobilization. Furthermore, this strategy will certainly economize on the capital of support from other people with moral principles in the world. (Buchanan and Hayek were probably less radical than the ideas I am expressing in matters of war, but I would be surprised if they wouldn’t have agreed with this paragraph.)
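To make the Axelrod point above concrete, here is a minimal iterated prisoner’s dilemma sketch. It is my own illustration, not from the post, and the payoff numbers are the standard textbook values chosen only for the example: a player committed to a reciprocating rule does far better against fellow cooperators than mutual defectors do against each other, while losing only a little when it happens to meet a defector.

```python
# Minimal sketch (illustrative only): an iterated prisoner's dilemma in the
# spirit of Axelrod (1984). Payoff numbers are standard textbook values.

def play(strat_a, strat_b, rounds=50):
    """Play a repeated prisoner's dilemma; return total payoffs (a, b)."""
    payoff = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_b)   # each strategy sees the opponent's history
        move_b = strat_b(hist_a)
        pa, pb = payoff[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

print(play(tit_for_tat, tit_for_tat))      # (150, 150): committed cooperators prosper
print(play(always_defect, always_defect))  # (50, 50): mutual defection is poor
print(play(tit_for_tat, always_defect))    # (49, 54): the cooperator loses only a little
```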


U.S. Petroleum Policy Remembered: Decontrol and Regret

In my previous post, I described the effects of the oil reseller boom to explain why U.S. consumers paid record-high prices—even approximating the price of world oil—despite maximum price regulations at every transaction point intended to ensure the opposite result during the oil shortages of the 1970s. Despite years of regulatory tailoring and thousands of committed administrators (consolidated in the U.S. Department of Energy in 1977), the gasoline lines reappeared in 1979 with oil cutoffs from the Iranian Revolution. Enough was enough even for President Carter, whose National Energy Plan of two years earlier was in shambles. With support from Democrats, a phased decontrol bill, coupled with a windfall profit tax, was enacted; decontrol was then accelerated to completion by President Reagan upon taking office in early 1981.

It had been planned chaos, to use a term of Ludwig von Mises. The initial EPAA regulations covering 27 pages in the Federal Register would be supplemented by more than 5,000 pages of amendments in its first two years. Under this law, there would be “no fewer than six different regulatory agencies and seven distinct price control regimes, each successively more complicated and pervasive.”[1]

The unprecedented peacetime exercise in cumulative intervention went far beyond the EPAA. Between 1977 and 1980, more than three hundred energy bills were considered in Congress. State legislatures considered many more. Before it was over, even the most anti-oil politicians had regrets. Senator Edward Kennedy (D-MA) complained about the “outrageous weed garden of regulation.”[2] James Schlesinger, the first head of the U.S. Department of Energy (created in 1977), called the experience “the political equivalent of Chinese water torture.”[3]

The regulatory tsunami included dozens of state and federal mandates for energy conservation. In the name of energy security, the Strategic Petroleum Reserve (1975) and the Synthetic Fuels Corporation (1980) were born, each introducing its own issues and challenges. “Gapism”—regulation, taxation, and subsidization intended to either increase supply or decrease demand—threw good money after bad in the price controls–shortage program.[4]

Conclusion

In reference to the 1970s oil crisis, Milton and Rose Friedman wrote:

Economists may not know much. But we know one thing very well: how to produce surpluses and shortages. Do you want a surplus? Have the government legislate a minimum price that is above the price that would otherwise prevail…. Do you want a shortage? Have the government legislate a maximum price that is below the price that would otherwise prevail.[5]

The peacetime experience of oil shortages is a case study of public policy gone wrong, and predictably so. The good news is that maximum price controls as an energy policy were driven from the debate and are a political nonstarter today. The bad news is that so much damage was done in a futile quest to plan petroleum from the center. Additionally, a policy of “energy security” (in wait for a third oil shock, whose gasoline lines would not come without price controls) created new bureaucracies and new programs that remain to this day. History matters, and regulatory history informs public policy toward business. Never again.

Robert L. Bradley is the founder and CEO of the Institute for Energy Research.

[1] Joseph P. Kalt, “The Creation, Growth, and Entrenchment of Special Interests in Oil Price Policy,” in The Political Economy of Deregulation: Interest Groups in the Regulatory Process, ed. Roger G. Noll and Bruce Owen (Washington, DC: American Enterprise Institute, 1983), p. 98.

[2] 124 Cong. Rec. S17071 (daily ed. June 9, 1978) (statement of Sen. Kennedy).

[3] Quoted in Daniel Yergin, The Prize: The Epic Quest for Oil, Money, and Power (New York: Simon & Schuster, 1991), p. 659.

[4] Edward Mitchell, U.S. Energy Policy: A Primer (Washington, DC: American Enterprise Institute, 1974), pp. 17–26.

[5] Milton and Rose Friedman, Free to Choose (New York: Harcourt Brace Jovanovich, 1979), p. 219.


Ticketmaster and Taylor Swift

What’s the deal with Ticketmaster and concert fans? Taylor Swift fans were upset at Ticketmaster when the company canceled the general sale of tickets for the first leg of her U.S. tour. And then they were equally frustrated with the inability to get tickets. Fans seemed to have two major complaints. One, people are upset they could not get tickets to the event. Two, people feel the service fees charged are too high. The assigned culprit? The greed of Ticketmaster and parent company Live Nation.

Does this culprit make sense? At least for Taylor Swift, the demand far outstripped the supply of tickets. Far more individuals signed up for the pre-sale than there were tickets available. For the first U.S. leg of Taylor Swift’s tour, 3.5 million individuals registered for the pre-sale while 2.4 million tickets were sold. And in many locations hundreds of fans gathered in the parking lot simply to hear the concert. But this explanation doesn’t seem to satisfy hungry fans upset that they didn’t get tickets to the show. It is the greed and monopoly of Live Nation that seems to be the issue. Fans argue Live Nation owns the venues and the ticketing system, thus creating a monopoly. Who then has the power in this market? Live Nation argues it owns only 5% of venues, but others estimate it controls up to 70% of ticketing.

I believe the bigger issue is one that is unavoidable due to the nature of the market. Let’s consider for a moment the fast food market. If I really want a McDonald’s hamburger, but they are sold out or unavailable for some reason, I can simply have a Wendy’s or Burger King sandwich. Are they completely interchangeable? No, but they are close enough substitutes that it is reasonable to interchange them. Now back to Taylor Swift. If I live in Denver and all the tickets are sold out for the night she comes to Denver, what are my options for that night? There is no comparable substitute for Taylor Swift. There is no ‘Wendy’s’ equivalent to Taylor Swift. By the nature of the market, Taylor Swift has created a monopoly for her concert that night. Even if the best singer in the area came to a competing venue saying she was going to sing all the Taylor Swift songs in a concert for everyone that didn’t get tickets, this is not a comparable substitute. So who really has the power? Is it Live Nation or Taylor Swift? I would argue it is Taylor Swift. And also her fans.

So what is the solution? Suing Ticketmaster? Getting the government to come in and break up Ticketmaster? Twenty-six fans are suing Ticketmaster and Live Nation for anti-competitive practices, saying the companies use their power to charge above-market prices. Yet they must not be charging above-market prices, because people are reselling their tickets for well above the sticker price. This indicates to me that the initial price of the ticket was ‘too low’ according to supply and demand. In fact, artists including Taylor Swift price “below market” to try to make their concerts accessible to all fans. Pre-sales and verified-fan systems are put in place to try to get tickets into the hands of actual fans planning to attend the concert and not scalpers.

But selling concert tickets below market value creates other problems. We have established that demand far exceeds supply. There has to be some way to allocate this scarce resource. One way is allowing prices to dictate who values the tickets the most monetarily; another is to have a verified-fan process and lottery system. But whatever the system, some set of fans will be disappointed in not being able to buy tickets.

Perhaps we can have the government come in and allow other sellers to sell tickets. The government can create competition in the selling of tickets. But what makes us think multiple sellers would make the market any different? If I know I am selling a ticket to a unique event with excess demand, I have no incentive to try to undercut my competitor because I know the tickets will sell regardless.

What about service fees? What is the purpose of fees? I previously would have had to stand in line to get tickets. For a massive concert, I might have had to sleep overnight or stand in line all day for a ticket at the actual box office. Online services allow me to save that time in line. I may have to ‘wait in the queue’, but it is virtual. I could continue working, watch a show, or tweet my favorite Taylor Swift lyrics while waiting. I no longer have to brave the elements to get my tickets. The service fee is me paying for the convenience of purchasing my tickets at home. Service fees are relatively unavoidable. Typically, box office fees are lower than online fees, but not always. So, what is the solution to this? Well, I have the power in this case. I can simply stop giving the online providers my business. I can buy my tickets at the box office. Occasionally, I will have to forgo a show if I feel the service fees are too much. Taylor Swift still has all the power as the monopoly artist. She could negotiate different fees with Ticketmaster.

And if I genuinely feel prices and fees are too much, there is an alternative. It doesn’t feature the live Taylor Swift, but it does feature other dedicated fans. Some companies are hosting Taylor Swift–themed parties, dances, and sing-alongs. Even in a market with a monopolist artist, individuals have found ways to create alternative experiences for fans to connect with one another and the artist’s music.

This situation is representative of broader implications of how markets work. People are quick to assign greed as the reason for various issues, but greed is ever-present. Issues with access to products in the market always come back to scarcity, which is also ever-present. There are various ways to deal with scarcity. Individuals can demonstrate their willingness to pay through money, time, or other means. The government coming in to dictate the way to run a particular market ignores the underlying issues with that particular market and the desires of the individuals involved.

Amy Crockett is a PhD Candidate in the Department of Economics and a Graduate Fellow in the F.A. Hayek Program at the Mercatus Center, both at George Mason University.


Old Calabria: The Benefits of Emigration

While there’s a lot of recent discussion about the impact of immigration, Americans often overlook the effects of emigration. Norman Douglas traveled extensively in southern Italy during the early 1900s and wrote a book entitled Old Calabria. At one point he discussed how emigration was transforming Italy, which at the time was quite poor:

What is shattering family life is the speculative spirit born of emigration. A continual coming and going; two-thirds of the adolescent and adult male population are at this moment in Argentina or the United States—some as far afield as New Zealand. Men who formerly reckoned in sous now talk of thousands of francs; parental authority over boys is relaxed, and the girls, ever quick to grasp the advantages of money, lose all discipline and steadiness. . . . These emigrants generally stay away three or four years at a stretch, and then return, spend their money, and go out again to make more. Others remain for longer periods, coming back with huge incomes—twenty to a hundred francs a day. . . . It is nothing short of a social revolution, depopulating the country of its most laborious elements. 788,000 emigrants left in one year alone (1906); in the province of Basilicata the exodus exceeds the birthrate. I do not know the percentage of those who depart never to return, but it must be considerable; the land is full of chronic grass-widows.

Things will doubtless right themselves in due course; it stands to reason that in this acute transitional stage the demoralizing effects of the new system should be more apparent than its inevitable benefits. Already these are not unseen; houses are springing up round villages, and the emigrants return home with a disrespect for many of their country’s institutions which, under the circumstances, is neither deplorable nor unjustifiable. A large family of boy-children, once a dire calamity, is now the soundest of investments. Soon after their arrival in America they begin sending home rations of money to their parents; the old farm prospers once more, the daughters receive decent dowries. I know farmers who receive over three pounds a month from their sons in America—all under military age. . . .

Previous to this wholesale emigration, things had come to such a pass that the landed proprietor could procure a labourer at a franc a day, out of which he had to feed and clothe himself; it was little short of slavery. The roles are now reversed, and while landlords are impoverished, the rich emigrant buys up the farms or makes his own terms for work to be done, wages being trebled. A new type of peasant is being evolved, independent of family, fatherland or traditions—with a sure haven of refuge across the water when life at home becomes intolerable.

When people emigrate to a more successful place, there is a flow of information back to the home country. People learn that things don’t have to be this way, and there is pressure for change. Like trade, migration is not a zero sum game; it tends to improve cultures in both the sending and the receiving country.


Misunderstanding Economic Profit

Small misunderstandings can snowball into major confusions. This is as true in economics as in any other field. Very often one finds a well-educated person build up a sophisticated analysis that ultimately rests on a misunderstanding of basic economics. Marx wrote thousands of pages of economic prophecy that rested on the false foundation of the labor theory of value. Modern observers are no less vulnerable.

I was reminded of this when reading a book review by Scott Alexander of Peter Thiel’s Zero to One. Peter Thiel spends a lot of intellectual effort trying to explain something which, to him, cries out for an explanation, but seems to rest on a fundamental misunderstanding of what economists mean when talking about profit. According to Scott Alexander’s review, “the basic economic argument goes like this: In a normal industry (eg restaurant ownership) competition should drive profit margins close to zero.” But this leads to the following mystery: “Neither the promise nor the warning has been borne out: business owners are often comfortable and sometimes rich.” To Thiel, this is a contradiction between theory and reality that must be explained. Thiel attempts to explain it by suggesting that wealthy businesses have “escaped competition and become at least a little monopoly-like.”

But Thiel is attempting to resolve a contradiction that doesn’t exist. Here’s where the misunderstanding lies. Economic theory does not predict that competitive markets will drive profit margins close to zero. What economic theory tells us is that competitive markets will drive the rate of economic profit towards zero. This may sound like two slightly different ways of saying the same thing, but there is a big difference between them.

When most people think of profits, they think of accounting profits – income minus expenses, in the simplest formulation. And this isn’t unreasonable – it describes what most people care about in their day-to-day life. Am I bringing in more money than I’m spending? If so, I’m profitable, and if not, I’m taking losses. But economic profits also consider the opportunity cost – that is, they factor in what else you could be doing. To put it another way, economic profits are the difference between your current choice and the best available alternative. Because of this, your economic profits can be low, zero, or even negative while you are making large accounting profits. If your next available option is just as good as your current situation, then you’re making zero economic profits – even if you have a very favorable cash flow. If your best alternative is only slightly worse than the status quo, you’re making a small economic profit. If there’s a better option for you out there, then you’re sustaining an economic loss, even if your bank account is very impressive.

Consider this example. Suppose I can assign some square footage in a building I own to gambling. Let’s say I put in a bunch of nickel slot machines. Imagine that these machines are very popular – all day, every day, there are people sitting at the slot machines, putting in coins and pulling the handles. The money these machines bring in for me exceeds their expenses by $1 million a year. My accounting profits, therefore, are $1 million a year. But that doesn’t mean I’m making $1 million a year in economic profits. Instead of putting in nickel slots, I could have used that same square footage to put in blackjack tables. If those blackjack tables could have generated accounting profits of $5 million a year, that means the nickel slots carry an annual opportunity cost of $5 million. So even though I’m making accounting profits of $1 million a year with the slot machines, the opportunity cost of not setting up blackjack tables means I’m taking an economic loss of $4 million a year.

In almost all cases, whenever a non-economist decides they’ve made some new, cutting-edge observation that upends standard economic theory, an observation that economists have somehow overlooked, what’s usually going on is that the non-economist is just misunderstanding an elementary point. This is one such case. Thiel seems to believe that “the rate of economic profit tending towards zero” implies that in competitive markets, every business should be operating on the brink of bankruptcy. He expends a great deal of intellectual effort trying to explain why things haven’t worked out that way. But all his efforts ultimately rest on a misunderstanding of basic economics, and he’s trying to solve a mystery that doesn’t exist. The rate of economic profit tending towards zero just means that your next available option will tend to be nearly as good as your current option. This can be true whether you’re bankrupt, just barely scraping by, comfortably middle class, or a billionaire.
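For readers who want the arithmetic of the slot-machine example spelled out, here is a minimal sketch of my own; the dollar figures are simply the ones used above, and the second call is a hypothetical variation to show the zero-economic-profit case.

```python
# Minimal sketch of the accounting-vs-economic profit distinction,
# using the figures from the slot-machine example above.

def economic_profit(accounting_profit, best_alternative_profit):
    """Economic profit = accounting profit minus the accounting profit
    of the best forgone alternative (the opportunity cost)."""
    return accounting_profit - best_alternative_profit

slots = 1_000_000      # nickel slots: revenue minus expenses
blackjack = 5_000_000  # best alternative use of the same floor space

print(economic_profit(slots, blackjack))   # -4000000: a $4M economic loss
print(economic_profit(slots, 1_000_000))   # 0: zero economic profit, yet a
                                           # very comfortable accounting profit
```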


Planned Chaos: U.S. Petroleum Policy Remembered

A half-century ago this October, the Arab members of the Organization of Petroleum Exporting Countries (OPEC) announced a production cut and embargo against the United States. The consequences represent a case study in the perils of economic intervention by government. Wrong-headed public policy can turn market challenges into a full-blown crisis—and did so with petroleum in the 1970s. Worse, a false narrative emerged about energy security that would plague U.S. policy for decades.

The crisis did not begin with the five percent production cut and embargo. It began with the passage of the Economic Stabilization Act of 1970, which gave the President authority to enact wage and price controls. Richard Nixon invoked this power on August 15, 1971, setting a 90-day freeze on all wages and prices in the U.S. economy. Monetary expansion, the real culprit behind price inflation, was conveniently ignored.

The controls shocked Milton Friedman and other free-market economists but attracted wide business support. The “temporary” program, it was said, would quell inflationary expectations to check rising prices. But the first peacetime price control program in U.S. history would go through five phases over the next 33 months—Phase I (Freeze I), Phase II, Phase III, Phase III 1/2 (Freeze II), and Phase IV—and it would distort the petroleum market more than any other major industrial sector.

Oil shortages at the wholesale level and spot gasoline lines by late 1972/early 1973 resulted in Congressional hearings on energy-use conservation, another peacetime first. Growing petroleum problems led Nixon to create three successive bureaucracies over oil policy. On the legislative front, a major energy bill was working its way through Congress to deal with petroleum prices and allocation. All this was before the October 1973 announcement by OPEC (itself created in 1960 in retaliation for U.S. oil import quotas).

The Arab OPEC actions against the United States in fourth-quarter 1973 worsened Nixon’s oil crisis. But it was pre-existing federal regulation that fathered the panic at the pumps. As Ayn Rand noted at the time:

The Arab oil embargo was not the cause of the energy crisis in this country: it was merely the straw that showed that the camel’s back was broken. There is no “natural” or geological crisis; there is an enormous political one.

The U.S. on-and-off oil crisis persisted until decontrol and market adjustment set in during 1981.

EPAA of 1973

The Emergency Petroleum Allocation Act of 1973 (EPAA), which Nixon had opposed for months, was enacted the month after the Embargo. EPAA linked price and allocation controls. “The creation of Part 210,” stated the Federal Energy Office, “recognizes the compelling necessity of viewing both allocation and price problems within the context of a single regulatory framework.”[1]

Intervention-begetting-intervention marked the seven-year reign of EPAA. A higher price cap for “new oil” than for (physically identical) “old oil” was introduced, a price schema that grew to three categories in 1976, five in 1977, and eight and finally eleven in 1979. With downstream parties differentially impacted by wellhead price categories, distortions reigned on the distribution side. Two regulatory programs, the supplier/purchaser rule and the buy/sell program, continually amended, tried to address price inequities between independents and integrated majors.

Another distortion was U.S. refinery purchases given multi-tiered domestic price ceilings and unregulated import prices. Specifically, inland refineries tied to domestic oil capped at $5.25 per barrel in 1974 were greatly advantaged over coastal refineries paying a world price approaching $10 per barrel. The result was the Old Oil Entitlements Program of 1975, which required refiners with an average crude acquisition cost under the national average to write a monthly check to an oppositely situated refiner.

Entitlements “equalization” was quickly politicized. The “small refiner bias” awarded bonus entitlements to refine low-cost oil without obligation to subsidize inefficient “tea kettles,” some of which suddenly entered the market. Exemptions also rewarded the politically astute at the expense of their more efficient rivals—and consumers.

Oil Reseller Boom

The refiner-entitlements program was the most visible and criticized program under the EPAA. But an almost invisible regulatory episode grew up alongside oil price and allocation controls—the oil reselling boom—that ranks as one of the most bizarre in U.S. history.

The nation suffered through several major petroleum shortages during the 1970s. But for most of the price-controlled period, supply and demand meshed at retail without queues. Why did U.S. consumers pay record-high prices—even approximating the price of world oil—despite maximum price regulations at every transaction point intended to ensure the opposite result?

Part of the answer was that domestic refiners purchased uncontrolled imports to price-blend with (underpriced) domestic regulated crude, increasing the cost of imported oil by an estimated 10–20 percent.[2] Second, a swarm of nouveau oil resellers profitably bought and sold price-regulated (underpriced) oil—a regulatory gap that energy planners could not plug despite regulating margins per transaction. While physical transportation, refining, and retailing involved a limited number of markups, resellers could buy and sell the oil repeatedly with the quantity and location physically undisturbed. Back-to-back trading (“daisy chaining”) became commonplace to capture the margins and prices that, by law, were denied at the wellhead. So long as the refiner could buy crude and make its maximum profit, and so long as the retailer could sell the churned product at full margin, the opportunists could bid up the price to “market” levels. Hundreds of resellers consummated hundreds of thousands of transactions in this way.

The good news was that the resulting price increases kept motorists out of gasoline lines for most of the price-control period; the bad news was that domestic oil producers were prevented from producing an estimated one million (additional) barrels per day.[3] The revenue that would have gone to oil producers (and royalty owners), in other words, went to foreign petro-states and to fly-by-night resellers, a number of which became “regulatory millionaires.” This example of superfluous entrepreneurship was an unintended consequence of intervention.

Robert L. Bradley is the founder and CEO of the Institute for Energy Research.

[1] The summary to follow is taken from Robert L. Bradley, Jr., Oil, Gas, and Government: The U.S. Experience (Lanham, MD: Rowman & Littlefield, 1996), chapter 9, chapter 12 (pp. 667–710), chapter 20 (pp. 1194–1228), and chapter 27.

[2] Joseph P. Kalt, The Economics and Politics of Oil Price Regulation: Federal Policy in the Post-Embargo Era (Cambridge, MA: MIT Press, 1981), pp. 286–87. This represented a regulatory subsidy to OPEC and other exporters to the U.S.

[3] Kalt, The Economics and Politics of Oil Price Regulation, p. 287.


Don’t confuse supply with quantity supplied

Other things equal, a reduction in price leads to a lower quantity supplied. But other things equal, price never changes. Price always changes because other things are not equal.

This tweet caught my eye:

This is an example of what I call “reasoning from a price change”. Don’t do it! Most advocates of increased housing construction are proposing measures that would shift the supply curve to the right, resulting in both lower prices and higher output.

Reasoning from a price change is a very common mistake. You see Fed officials doing this when they speculate that higher bond yields might slow the economy. Not if the higher bond yields reflect higher demand for credit.
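A stylized numerical example makes the distinction concrete. This is my own sketch; the linear curves and all the numbers are invented for illustration. Shifting the supply curve to the right lowers the equilibrium price and raises the quantity traded, which is the opposite of what you would conclude by reasoning from the price change alone.

```python
# Minimal sketch (illustrative numbers only): linear demand and supply.
# A rightward shift of supply lowers price AND raises quantity traded.

def equilibrium(a, b, c, d):
    """Demand: Qd = a - b*P.  Supply: Qs = c + d*P.  Solve Qd = Qs."""
    price = (a - c) / (b + d)
    quantity = a - b * price
    return price, quantity

# Before: Qd = 100 - 2P, Qs = 20 + 2P  ->  P = 20, Q = 60
print(equilibrium(100, 2, 20, 2))

# After a rightward supply shift (e.g., more permitted construction):
# Qs = 40 + 2P  ->  P = 15, Q = 70. Price falls while quantity supplied rises,
# because the curve itself moved; nothing here is a movement along a fixed curve.
print(equilibrium(100, 2, 40, 2))
```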


Grey’s Law and Universal Solutions

On a recent post, a commenter suggested something that struck me as a perfect example of what I will call Grey’s Law. Grey’s Law comes from an offhanded comment made by the YouTuber CGP Grey in one of his videos, where he said:

There’s almost a law of the universe that solutions which are the first thing you’d think of and look sensible and are easy to implement are often terrible, ineffective solutions, once implemented will drag on civilization forever.

The specific comment that brought Grey’s Law to mind was remarking on the education system, where a commenter said “if the goal was better education we would just do what Massachusetts and New Jersey do and avoid what Oklahoma does.” And this seems to make sense at first! After all, if you want to make any system better, wouldn’t you look at examples of systems that are performing well, find out what they are doing, and then simply use their system everywhere else? Certainly, that’s the first thing you’d think of, and it looks sensible, and it would be easy to implement so…oh wait, right, Grey’s Law.

So, what’s wrong with this seemingly sensible solution? At the highest level, it falls into one of the major pitfalls of High Modernist thought. I’ll summarize this pitfall by cribbing from Scott Alexander’s review of Seeing Like a State, where Alexander describes one of the tenets of High Modernism as believing “the solution is the solution. It is universal. The rational design for Moscow is the same as the rational design for Paris is the same as the rational design for Chandigarh, India.” Or, as this commenter would contend, the optimal education system for students in Massachusetts or New Jersey is the optimal education system for all students, everywhere. All we need to do is find out what they’re doing in Massachusetts and just do that everywhere.

But why on earth would we assume this is true? Students aren’t a Standardized Product Unit who all respond to a given education system in the same way – nor are groups of students in different states, towns, or districts. A system that works extremely well in New Jersey might be only moderately successful in Pennsylvania, and completely ineffective for students in the Appalachian region. Even a system that works well in one particular school district might be terribly suited for students in the next district over, or even from one classroom to the next in the very same school. Simply assuming that “we” (whatever “we” is supposed to mean) can just decide what the “right” system is and implement it everywhere handwaves away the enormous variety and complexity of circumstances that exist in different areas and among different students.

As an aside, I see exactly this sort of thinking a lot in my work as a healthcare analytics consultant. Most doctors I’ve worked with (thankfully, not all) often invoke the term “Best Practices” in a manner that almost makes you expect it to be followed by the sign of the cross. They think that “we” just need to Determine Best Practices, often by looking at a particular institution that’s having strong success in a given area. From there, we need only Implement Best Practices at their own institution, and they’ll get the same results. And it never, ever works out that way. I wish it did – it would make my work so much easier!
It would mean that for any given issue a hospital needs help with, my team and I would just need to find an effective solution once, and then for every future job we could simply implement the proper tool or system at the new institution and get equally good results. Alas, reality is not that simple or that simplistic. Every institution has different constraints, different patient populations, different resources – even the personalities of the medical staff can make a huge difference in how effective solutions are from place to place. It takes a good deal of work getting familiar with all the local circumstances to work out an effective solution – and that solution, sadly, won’t be applicable on the next job.

I also saw this iteration of Grey’s Law play out frequently in my time in the military. Most of the time, any given servicemember is at a given unit for about three years before getting orders to the next unit. During your time at a given unit, you’d almost certainly see the commanding officer replaced, along with the Sergeant Major and other lower-level officers and enlisted leaders. You always hoped that the new bigwigs would be one of the good ones. And one of the most reliable signals we learned for predicting which ones would be good or bad was their attitude on how their experience at their previous unit should inform what they do at the current unit. A universally bad sign was when they said something to the effect of “Back at my last unit, we did things in such and such a way, and everything worked great there. So going forward, we’re going to do it in such and such a way here too.” Commanders with that attitude were, in practice, terrible, ineffective commanders whose methods, once implemented, were a drag on unit performance.

Luckily, in practice, there was a way around such people, to prevent them from hampering mission effectiveness too much – a way described by James C. Scott in his book Two Cheers for Anarchism:

Workers have seized on the inadequacy of the rules to explain how things are actually run and have exploited it to their advantage. Thus, the taxi drivers of Paris have, when they were frustrated with the municipal authorities over fees or new regulations, resorted to what is known as a grève du zèle. They would all, by agreement and on cue, suddenly begin to follow all the regulations in the code routier, and, as intended, this would bring traffic in Paris to a grinding halt. Knowing that traffic circulated in Paris only by a practiced and judicious disregard of many regulations, they could, merely by following the rules meticulously, bring it to a standstill. The English language version of this procedure is often known as a ‘work-to-rule’ strike. In an extended work-to-rule strike against the Caterpillar Corporation, workers reverted to following the inefficient procedures specified by engineers, knowing that it would cost the company valuable time and quality, rather than continuing the more expeditious practices they had long ago devised on the job. The actual work process in any office, on any construction site, or on any factory floor cannot be adequately explained by the rules, however elaborate, governing it; the work gets done only because of the effective informal understandings and improvisations outside those rules.
In the same way, when we had a new commander who was drunk on the idea that what worked well at his last unit must also work well here, the result, in practice, was “a practiced and judicious disregard of many regulations” set out by that commander, in favor of “the effective informal understandings and improvisations outside those rules.” This was not done flagrantly, of course – it was a subtle understanding. But there was a sort of satisfied amusement among us in seeing a planner utterly convinced their plan was working, and equally unaware that the plan only appeared to be working because people knew better than to actually follow it.
