Comparative Advantage

When asked by mathematician Stanislaw Ulam whether he could name an idea in economics that was both universally true and not obvious, economist Paul Samuelson's example was the principle of comparative advantage. That principle was derived by David Ricardo in his 1817 book, Principles of Political Economy and Taxation. Ricardo's result, which still holds up today, is that what matters is not absolute production ability but ability in producing one good relative to another. That is, a producer's efficiency at growing bananas is reckoned not in physical output—for example, bunches of bananas produced per day—but in the amounts of other goods and services he sacrifices by producing bananas (instead of other goods and services), compared with the amounts of other goods and services sacrificed by others who do, or who might, grow bananas.

Here is a straightforward example. Ann and Bob are the only two people on an island. They use only two goods: bananas and fish. (The assumption of two persons and two goods is made only to make the example as clear as possible; it is not essential to the outcome. The same holds for all subsequent assumptions that I make using this example.) If Ann spends all of her working time gathering bananas, she gathers one hundred bunches per month but catches no fish. If, instead, she spends all of her working time fishing, she catches two hundred fish per month and gathers no bananas. If she divides her work time evenly between these two tasks, each month she gathers fifty bunches of bananas and catches one hundred fish. If Bob spends all of his working time gathering bananas, he gathers fifty bunches. If he spends all of his time fishing, he catches fifty fish. Table 1 shows the maximum quantities of bananas and fish that each can produce.

Table 1. Production Possibilities

            Bob   Ann
Bananas      50   100
Fish         50   200

If Ann and Bob do not trade, then the amounts that each can consume are strictly limited to the amounts that each can produce. Trade allows specialization based on comparative advantage and thus undoes this constraint, enabling each person to consume more than each person can produce. Suppose Ann and Bob divide their work time evenly between fishing and banana gathering. Table 2 shows the amounts that Ann and Bob each produce and consume every month.

Table 2. Amounts Produced and Consumed before Specialization and Trade

            Bob   Ann
Bananas      25    50
Fish         25   100

Now Ann meets Bob and, after observing Bob's work habits, offers Bob the following deal: "I'll give you thirty-seven of my fish," says Ann, "in exchange for twenty-five of your bananas." Bob accepts.

Purely for expositional simplicity, assume that both Ann and Bob want to consume the same number of bananas with trade that each consumed before trade. Table 3 shows the amounts of bananas and fish that Ann and Bob produce in anticipation of trading with each other.

Table 3. Amounts Produced with Specialization and Trade

            Bob   Ann
Bananas      50    25
Fish          0   150

On trading day, true to their word, Ann gives Bob thirty-seven fish and Bob gives Ann twenty-five bananas. Table 4 shows the amounts of bananas and fish that Ann and Bob each consume with trade.

Table 4. Amounts Consumed with Specialization and Trade

            Bob   Ann
Bananas      25    50
Fish         37   113

Note that Ann and Bob are both better off than they were before trade. Each has the same number of bananas to consume as before, but Ann now has thirteen more fish and Bob has twelve more fish to consume. This small society—let's call it Annbobia—is wealthier by a total of twenty-five fish. This increase in total output is not the result of any of the factors Adam Smith identified. It is the result exclusively of Ann specializing more in fishing and Bob specializing more in gathering bananas.
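The arithmetic of the example is easy to verify. The following is a minimal Python sketch (mine, not the article's; the names and numbers simply restate the example above). It computes each person's output, applies the 37-fish-for-25-bananas trade, and confirms the gains. It also checks a claim made later in this entry: any price between one-half banana and one banana per fish benefits both parties.

```python
# Monthly production possibilities (maximum output if fully specialized).
MAX = {"Ann": {"bananas": 100, "fish": 200},
       "Bob": {"bananas": 50, "fish": 50}}

def output(person, share_fishing):
    """Output when a person devotes `share_fishing` of work time to fishing."""
    m = MAX[person]
    return {"bananas": (1 - share_fishing) * m["bananas"],
            "fish": share_fishing * m["fish"]}

# Without trade: each splits time evenly (Table 2).
for p in ("Ann", "Bob"):
    print(p, "autarky:", output(p, 0.5))

# With trade (Table 3): Ann spends a quarter of her time on bananas
# and fishes the rest; Bob specializes completely in bananas.
ann = output("Ann", 0.75)   # 25 bananas, 150 fish
bob = output("Bob", 0.0)    # 50 bananas, 0 fish

# The agreed trade: Ann gives Bob 37 fish for 25 of Bob's bananas.
ann["fish"] -= 37; ann["bananas"] += 25
bob["fish"] += 37; bob["bananas"] -= 25
print("Ann consumes:", ann)  # 50 bananas, 113 fish (13 more fish than autarky)
print("Bob consumes:", bob)  # 25 bananas, 37 fish (12 more fish than autarky)

# Any price strictly between the two opportunity costs of fish
# (1/2 banana for Ann, 1 banana for Bob) leaves both better off.
for price in (0.55, 25 / 37, 0.95):        # bananas per fish
    fish_sold = 25 / price                 # fish Ann sells to buy 25 bananas
    print(f"price {price:.2f}: Ann gains {150 - fish_sold - 100:.1f} fish,"
          f" Bob gains {fish_sold - 25:.1f} fish")
```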
This happy outcome occurs because in this society (here, just two people), each person concentrates more fully on producing those goods that each produces comparatively efficiently—that is, efficiently compared with others. For each fish she catches, Ann sacrifices one-half of a banana; that is, for each fish she catches, she produces one-half fewer bananas than otherwise. For each banana she gathers, she sacrifices two fish. Standing alone, these numbers are meaningless. But when compared with the analogous numbers for Bob, the results tell where each person's comparative advantage lies.

For each fish Bob catches, he sacrifices one banana. So Ann's cost of producing fish is lower than Bob's—one-half of a banana per fish for Ann compared with one banana per fish for Bob. Ann should specialize in fishing. But if Ann catches fish at a lower cost than does Bob, then Bob produces bananas at a lower cost than does Ann. While Ann's cost of producing a banana is two fish, Bob's cost is only one fish. Bob should specialize in gathering bananas.

Viewed from each individual's perspective, Ann knows that each fish she catches costs her half a banana; so she is willing to sell each of her fish at any price higher than one-half of a banana. (In our example, she sold thirty-seven fish to Bob at a price of roughly two-thirds of a banana per fish.) Bob knows that each banana costs him one fish to produce, so he will sell bananas at any price higher than one fish per banana. (In our example, he sold twenty-five bananas at a price of about one and one-half fish per banana.) There is nothing special about this particular price. Any price of fish between one-half of a banana and one full banana will generate gains from trade for both Ann and Bob. What is important is the existence of at least one price that is mutually advantageous for both persons. And such a price (or range of prices) will exist if comparative advantage exists—which is to say, if each person has a different cost of producing each good.

When the lower-cost fisherman (Ann) produces more fish than she herself plans to consume—that is, catches fish that she intends to trade—Bob taps into her greater efficiency at fishing. He cannot produce fish himself at a cost lower than one banana per fish, but by trading with Ann he acquires fish at a cost of two-thirds of a banana. Likewise, by trading with Bob, Ann taps into Bob's greater efficiency at gathering bananas.

The above example, though simple, reveals comparative advantage's essential feature. Making the example more realistic by adding millions of people and millions of goods and services only increases the applicability and power of the principle, because larger numbers of people and products mean greater scope for mutually advantageous specialization and exchange. Also, while the principle of comparative advantage is typically introduced to explain international trade, it is the root reason for all specialization and trade. Nothing about the presence or absence of a geopolitical border separating two trading parties is essential. But study of this principle does make clear that foreigners are willing to export only because they want to import. It is the desire for profitable exchange of goods and services that motivates all specialization and exchange.

About the Author
Donald J. Boudreaux is chairman of the economics department at George Mason University in Fairfax, Virginia. He was previously president of the Foundation for Economic Education. He blogs with Russell Roberts at http://www.cafehayek.com.

Further Reading

Boudreaux, Donald J. "Does Increased International Mobility of Factors of Production Weaken the Case for Free Trade?" Cato Journal 23 (Winter 2004): 373–379. Available online at: http://www.cato.org/pubs/journal/cj23n3/cj23n3-6.pdf.
Buchanan, James M., and Yong J. Yoon. "Globalization as Framed by the Two Logics of Trade." Independent Review 6 (Winter 2002): 399–405. Available online at: http://www.independent.org/pdf/tir/tir_06_3_buchanan.pdf.
Irwin, Douglas. Against the Tide. Princeton: Princeton University Press, 1996.
Jones, Ronald W. "Comparative Advantage and the Theory of Tariffs." Review of Economic Studies 28 (June 1961): 161–175.
Krugman, Paul. "Ricardo's Difficult Idea." Available online at: http://web.mit.edu/krugman/www/ricardo.htm.
Machlup, Fritz. A History of Thought on Economic Integration. New York: Columbia University Press, 1977.
Roberts, Russell D. The Choice. 3rd ed. Englewood Cliffs, N.J.: Prentice Hall, 2006.
Ruby, Douglas. "Comparative Advantage as a Basis for Specialization and Trade." Available online at: http://www.digitaleconomist.com/ca_4010.html.
Suranovic, Steven. "The Theory of Comparative Advantage—Overview." Available online at: http://internationalecon.com/v1.0/ch40/40c000.html.

Related Links

Division of Labor, from the Concise Encyclopedia of Economics.
Free Trade, from the Concise Encyclopedia of Economics.
Don Boudreaux on Globalization and Trade Deficits. EconTalk, January 2008.
Roberts on Smith, Ricardo, and Trade. EconTalk, February 2010.
Kling on Patterns of Sustainable Specialization and Trade. EconTalk, February 2011.
Russ Roberts, The Power of Trade. Part 1: The Seemingly Simple Theory of Comparative Advantage. November 2006.
Morgan Rose, A Brief History of Comparative Advantage. August 2001.
Douglas A. Irwin, A Brief History of International Trade Policy. November 2001.
Lauren Landsburg, Comparative Advantage: An Economics Topics Detail.


Campaign Finance

Conventional wisdom holds that money plays a central and nefarious role in American politics. Underlying this belief are two fundamental assumptions: (1) elective offices are effectively sold to the highest bidder, and (2) campaign contributions are the functional equivalent of bribes. Campaign finance regulations are thus an attempt to hinder the operation of this political marketplace. Of course, the scope of such regulation is itself limited by the constitutional protection of political speech, association, and the right to petition. Nevertheless, many Americans are willing to sacrifice their, and others', free-speech rights in an attempt to limit the influence of moneyed interests in politics.

One might think that the existence of a political marketplace would produce efficient policy outcomes, even if at the cost of the democratic ideals of equal representation and participation. However, public choice economists have shown that if favors are bought and sold, those who buy them often gain much per person, but their gains are more than offset by the smaller losses per person sustained by the large number of losers. So a political marketplace does not ensure efficient policies. Interestingly, though, scholarly research on the economics of campaign finance suggests that the political marketplace analogy is not a fair description of American democracy.

Electoral Effects of Campaign Spending

Every two years, public-interest groups and media pundits lament the fact that winning candidates typically far outspend their rivals. They infer from this that campaign spending drives electoral results. Most systematic studies, however, find no effect of marginal campaign spending on the electoral success of candidates.[1] How can this be so? The best explanation to date is that competent candidates are adept both at convincing contributors to give money and at convincing voters to give their vote. Consequently, the finding that campaign spending and electoral success are highly correlated exaggerates the importance of money to a candidate's chances of winning.

To gauge the causal relationship between campaign spending and electoral success, it is necessary to isolate the effects of increases in campaign spending that are unrelated to a candidate's direct appeal to voters. For example, wealthy candidates are able to spend more money on their campaigns for reasons that have little to do with their popularity among voters. Consider the experience of Senator Jon Corzine (D-N.J.), who defeated a weak Republican opponent to gain election to the Senate in 2000. Corzine spent sixty million dollars, mostly from his personal fortune, on his Senate campaign. Many observers pointed to this episode as an example of how a wealthy individual can buy elective office. Despite his record spending, however, Corzine's vote total ran behind that of the average House Democrat in New Jersey and behind the Democratic nominee for president, Al Gore, even though Gore did very little campaigning in strongly Democratic New Jersey. There is even some evidence that Corzine's wealth was a liability, given that many yard signs urged his Republican opponent to "make him spend it all!" A more systematic analysis of the electoral fortunes of wealthy candidates found no significant association between electoral or fund-raising success and personal wealth.[2]

Related findings abound. For example, large campaign war chests carried over from the previous election do not deter challengers and confer no electoral advantage on incumbents.
Similarly, large fund-raising windfalls attributable to changes in campaign finance laws have been shown to be unrelated to candidates' subsequent electoral fortunes.[3]

Nevertheless, no serious scholar would argue that campaign spending is unimportant. These findings do not imply that anyone running for elective office would do as well (in terms of vote share) by not spending several million dollars. Instead, the appropriate conclusion is that in the vast majority of political contests, the identity of the victor would not be different had any one candidate spent a few hundred thousand dollars more (or less).

Policy Consequences of Campaign Contributions

Are campaign contributions the functional equivalent of bribes? The conventional wisdom is that donors must get something for their money, but decades of academic research on Congress have failed to uncover any systematic evidence that this is so. Legislators do tend to act in accordance with the interests of their donors, but this is not because of some quid pro quo. Instead, donors tend to give to like-minded candidates.[4] Of course, if candidates chose their policy positions in anticipation of a subsequent payoff in campaign contributions, there would be no real distinction between accepting bribes and accepting contributions from like-minded voters. However, studies of legislative behavior indicate that the most important determinants of an incumbent's voting record are constituent interests, party, and personal ideology. In election years, constituent interests become more important than in nonelection years, but overall, these three factors explain nearly all of the variation in incumbents' voting records.[5]

Most informed citizens react to these findings with incredulity. If campaign contributions do not buy favors, then why is so much money spent on politics? In fact, scholars of American politics have long noted how little is spent on politics. Consider that large firms spend ten times as much on lobbying as their employees spend on campaign contributions through PACs, as individuals, or in the form of unregulated contributions to political parties (i.e., soft money).[6] I mention employee contributions because, contrary to the sloppy reporting that appears regularly in U.S. newspapers, corporations in the United States do not contribute to political campaigns: they are prohibited from doing so and have been so prohibited since 1907. When you read that Enron has given X million dollars to candidates, what that really means is that people who identify themselves as Enron employees have given X million dollars of their own money. In addition, political expenditures by employees of firms tend to be a fixed proportion of net revenues and do not rise and fall as relevant issues move on or off the policy agenda.[7]

Neither of these facts is easily reconciled with the notion that campaign contributions are the functional equivalent of bribes. Of course, neither does this imply that campaign contributions are completely inconsequential, only that the conventional wisdom overstates their importance. It is possible that evidence of the effect of campaign contributions may not be manifest in the roll-call votes of legislators. Scholars have long recognized that the relevant action may take place behind closed doors, where the content of legislation is determined.
This is a much more difficult proposition to test, but at least one recent study has found no relationship between campaign contributions and the activities of legislators within committees.[8] More convincing would be evidence that states with more laissez-faire campaign finance regulations adopt substantively different policies. Unfortunately, to date, no such study has been conducted.

So, why are campaign contributions not like bribes? There are several reasons: (1) federal law limits contribution amounts to federal candidates (as do most states); (2) bribery and influence peddling are illegal, so exchanges of money for campaign promises are unenforceable; (3) legislation is a collective activity, so it would be necessary to bribe a large number of legislators in order to influence policy; (4) the existence of competing interests raises the cost of trying to buy a legislative majority; (5) the existence of a muckraking press and political competition means that candidates try to avoid even the appearance of impropriety; and (6) the diminishing marginal productivity of campaign spending discussed above reduces the value of any individual contribution to almost nil.

This last point is perhaps the most important. In 2000, total political spending in federal elections was about $3 billion. Contributions from individuals to candidates or parties accounted for nearly 80 percent of this total. The primary motivation for individual contributors is to support ideologically like-minded candidates, not to influence candidate positions. Further, the existence of these individual contributions drives down the marginal value of contributions from special-interest groups and hampers their ability to influence politicians.

Lessons for Reform

Political and legal decision makers have for too long considered the role of money in politics to be self-evident; this has led to a widespread and pervasive misunderstanding of the likely costs and benefits of campaign finance reform proposals. But political institutions are no less subject to scientific inquiry than are social or economic institutions. The consensus among academic researchers is that money is far less important in determining either election or policy outcomes than conventional wisdom holds it to be. Consequently, the benefits of campaign finance reforms have also been exaggerated.

There is even some reason to be concerned that ill-considered reforms will have important unintended consequences. For example, analyses of the different regulatory regimes across states reveal that limits on individual contributions are associated with reduced political competition, which is in turn associated with reduced turnout. Further, exposure to campaign advertising makes voters more knowledgeable about candidates' positions, which is not only desirable in itself but is also associated with increased voter turnout. Therefore, one unintended consequence of restrictive campaign finance reforms is to reduce voter awareness and participation. Another possibility is that reforms may reduce political accountability, since incumbents can tailor reform legislation to effectively insulate themselves from viable competition.

About the Author

Jeffrey Milyo is an associate professor of economics at the University of Missouri in Columbia.

Further Reading

Ansolabehere, Stephen, John M. de Figueiredo, and James M. Snyder Jr. "Why Is There So Little Money in U.S. Politics?" Journal of Economic Perspectives 17, no. 1 (2003): 105–130.
Levitt, Steven. "Congressional Campaign Reform." Journal of Economic Perspectives 9, no. 1 (1995): 183–193.
Milyo, Jeffrey. "The Political Economics of Campaign Finance." Independent Review 3, no. 4 (1999): 537–547.
Milyo, Jeffrey, David Primo, and Timothy Groseclose. "Corporate PAC Campaign Contributions in Perspective." Business and Politics 2, no. 1 (2000): 75–88.

Footnotes

1. Steven Levitt, "Using Repeat Challengers to Estimate the Effects of Campaign Spending on Electoral Outcomes in the U.S. House," Journal of Political Economy 102 (1994): 777–798.
2. Jeffrey Milyo and Timothy Groseclose, "The Electoral Effects of Incumbent Wealth," Journal of Law and Economics 42 (1999): 699–722.
3. Jeffrey Milyo, "The Electoral Effects of Campaign Spending in House Elections," Citizens' Research Foundation, Los Angeles, 1998.
4. Steven Levitt, "Who Are PACs Trying to Influence with Contributions: Politicians or Voters?" Economics and Politics 10, no. 1 (1998): 19–36.
5. Steven Levitt, "How Do Senators Vote? Disentangling the Role of Party Affiliation, Voter Preferences and Senator Ideology," American Economic Review 86 (1996): 425–441.
6. Jeffrey Milyo, David Primo, and Timothy Groseclose, "Corporate PAC Contributions in Perspective," Business and Politics 2, no. 1 (2000): 75–88.
7. Stephen Ansolabehere, John M. de Figueiredo, and James M. Snyder Jr., "Why Is There So Little Money in U.S. Politics?" Journal of Economic Perspectives 17, no. 1 (2003): 105–130.
8. Gregory Wawro, Legislative Entrepreneurship in the United States House of Representatives (Ann Arbor: University of Michigan Press, 2000).

Related Links

Jeffrey A. Miron, Campaign Finance Regulation. January 2001.
Brink Lindsey and Steven Teles on the Captured Economy. EconTalk, December 2017.
Stiglitz on Inequality. EconTalk, July 2012.


Benefit-Cost Analysis

Whenever people decide whether the advantages of a particular action are likely to outweigh its drawbacks, they engage in a form of benefit-cost analysis (BCA). In the public arena, formal BCA is a sometimes controversial technique for thoroughly and consistently evaluating the pros and cons associated with prospective policy changes. Specifically, it is an attempt to identify and express in dollar terms all of the effects of proposed government policies or projects. While not intended to be the only basis for decision making, BCA can be a valuable aid to policymakers.

Although conceived more than 150 years ago by the French engineer Jules Dupuit, BCA saw its first widespread use in the evaluation of federal water projects in the United States in the late 1930s. Since then, it has also been used to analyze policies affecting transportation, public health, criminal justice, defense, education, and the environment. Because some of BCA's most important and controversial applications have been in environmental policy, this discussion of key issues in BCA is illustrated with examples from the environmental arena.

To ascertain the net effect of a proposed policy change on social well-being, we must first have a way of measuring the gains to the gainers and the losses to the losers. Implicit in this statement is a central tenet of BCA: the effects of a policy change on society are no more and no less than the aggregate of the effects on the individuals who constitute society. Thus, if no individual would be made better off by a policy change, there are no benefits associated with it; nor are there costs if no one is made worse off. In other words, BCA counts no values other than those held by the individual members of society.

It is equally important to note that benefits and costs, even though they are almost always expressed in dollar terms in BCA, go well beyond changes in individuals' incomes. If someone's well-being is improved because of cleaner air—through improved visibility, for instance—he experiences a benefit even though his income may not change. Similarly, an increase in pollution that puts people at higher risk of disease imposes a cost on them even though their incomes may not fall. Indeed, a person would bear a cost (be made worse off) if the pollution posed a threat to an exotic and little-known species of animal that he cared about.

Some criticize BCA on the grounds that it supposedly enshrines the free market and discourages government intervention. However, BCA exists precisely because economists recognize that free markets sometimes allocate resources inefficiently, causing problems such as dirty air and water.

How, then, are benefits and costs estimated? While it is generally assumed that they are measured differently, benefits and costs are actually flip sides of the same coin. Benefits are measured by the willingness of individuals to pay for the outputs of the policy or project in question. The proper calculation of costs is the amount of compensation required to exactly offset negative consequences. Willingness to pay and compensation required should each be the dollar amount that would leave every individual just as well off following the implementation of the policy as before it.

Suppose, for example, we wished to evaluate the benefits and costs of a proposal to control air pollution emissions from a large factory.
On the positive side, pollution abatement will mean reduced damage to exposed materials, diminished health risks to people living nearby, improved visibility, and even new jobs for those who manufacture pollution control equipment. On the negative side, the required investments in pollution control may cause the firm to raise the price of its products, close down several marginal operations at its plant and lay off workers, and put off other planned investments designed to modernize its production facilities.

How do we determine the willingness to pay for the favorable effects? First, it is relatively easy to value the reduced damage to materials. If, say, awnings will now last ten years rather than five years, it is straightforward to multiply the number of awnings times their price to get an idea of savings to consumers—so long as the price of awnings is not affected by the policy. If reduced pollution meant more agricultural output, it would be similarly easy to value because crops have well-defined market prices. In other words, when benefits involve marketed outputs, valuing them is not difficult.

But what about reduced health risks or improved visibility? Because these are not things that people buy and sell directly, it is much less clear how to estimate the willingness to pay (the value of the benefits). Two major techniques are available. One, called the contingent valuation method, involves asking people directly, via sophisticated questionnaires, how much they would pay for reduced health risks or improved visibility. This approach makes it possible to estimate the benefits of programs—for example, the preservation of a remote wilderness area—for which other techniques generally are inapplicable. However, this approach has its limitations. One is that it often requires individuals to place dollar values on things they are unused to viewing in economic terms. As a result, their responses may not be as reliable as we would like. Also, responses to surveys are hypothetical; economists prefer values revealed in actual market transactions.

Another approach is to observe how much people are willing to pay for goods that have an environmental quality component. For example, houses in unpolluted neighborhoods sell for more than those in polluted areas. Using statistical techniques to hold constant the other characteristics of houses and the neighborhoods in which they are located, it is possible to identify a "clean air premium." This provides important information on the value to individuals of air quality improvements. A similar approach for estimating how much people value pollution control and other public policies that reduce health risks is to estimate how much of a wage premium workers are paid to take jobs that pose health risks. Yet other techniques infer values from such things as the time and money people spend traveling to and from desirable recreation sites.

It is generally assumed that cost estimation involves a mere toting up of the expenditures that affected parties must make, as in our example of the firm controlling air pollution. As suggested above, however, matters are more complicated than this. Some firms not initially affected by regulation will incur higher costs—those purchasing the product of the regulated firm, for example. These "ripple" effects must be taken into account. Or if the polluting firm closes down some operations rather than purchase pollution control devices, its expenditures will be zero but the social costs are still positive.
In such cases the costs are borne by employees, shareholders, and purchasers of its output. Unfortunately, techniques for making these more sophisticated cost estimates are still in their infancy; for this reason, virtually all BCAs still use direct expenditures as rough measures of true social costs.

Three additional issues in BCA bear mention. First, government policies or projects typically produce streams of benefits and costs over time rather than in one-shot increments. Commonly, in fact, a substantial portion of the costs is incurred early in the life of a project, while benefits may extend for many years (perhaps beginning only after some delay). Yet, because people prefer a dollar today to one ten years from now (see interest rates), BCA typically discounts future benefits and costs back to present values. Not only are there technical disagreements among economists about the interest rate (or rates) at which these future impacts should be discounted, but discounting raises ethical problems as well. At a discount rate of 10 percent, for instance, $1 million in benefits to people fifty years from now has a present value of only about $8,500. This powerful effect of discounting is of concern when BCA is applied to the evaluation of policies with significant intergenerational effects, such as those pertaining to the prevention of global climate change or the disposal of high-level radioactive wastes (which will be lethal for hundreds of thousands of years).
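Discounting is simple to compute. Here is a minimal sketch of the calculation (my illustration, not the article's; the 10 percent rate and fifty-year horizon restate the example above, and the project stream is a made-up hypothetical). It applies the present-value formula PV = FV / (1 + r)^t to a single future amount and then to a stream of net benefits:

```python
def present_value(amount, rate, years):
    """Discount a future amount back to today at a constant annual rate."""
    return amount / (1 + rate) ** years

# The example above: $1 million received fifty years from now, at 10 percent.
print(round(present_value(1_000_000, 0.10, 50)))  # ~8,519, i.e., about $8,500

# A project appraisal discounts every year's net benefit the same way.
# Hypothetical stream: costs up front (years 0-1), benefits in years 2-16.
net_benefits = [-100_000, -50_000] + [20_000] * 15
npv = sum(present_value(nb, 0.05, t) for t, nb in enumerate(net_benefits))
print(f"NPV at 5%: {npv:,.0f}")  # positive, so discounted benefits exceed costs
```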
A second sticking point in BCA is the fact that the willingness to pay for the favorable effects of a project or policy depends on the distribution of income: a billionaire would be able—and therefore willing—to pay more than a pauper for the same improvement in environmental quality, even though both cared about it with equal intensity. Some critics dislike BCA because it reduces benefits to pure dollar amounts. But BCA analysts use dollars to estimate benefits because there simply is no other way to directly measure the intensity with which people desire something.

Third, suppose that the aforementioned problems were to disappear, and that benefits and costs could be easily expressed in dollar terms and converted to present values. According to modern BCA, a project or policy would be attractive if the benefits it would produce exceed the costs. This is because, in theory, those gaining from the project could compensate those made worse off and still be better off themselves. In our factory example, for instance, those enjoying the benefits of cleaner air gain more than the losses to consumers who must pay more for the factory's output and to workers whose jobs are eliminated. Thus, the winners could compensate the losers and still come out ahead. In practice, of course, this compensation is seldom paid. Therefore, even the most efficient projects create some losers. This can undermine support for BCA in general and often makes it politically difficult to enact efficient policies—or, conversely, to block very inefficient projects, whose costs exceed benefits.

In spite of these sticking points, BCA seems to be playing an increasingly important role in government decision making. One reason may be that shunning a comprehensive, analytical approach to decision making simply because it has flaws inevitably pushes decisions back into the realm of the ad hoc and purely political. While BCA does have very real shortcomings, it appears preferable to smoke-filled rooms.

About the Author

Paul R. Portney is dean of the Eller College of Management at the University of Arizona. He was previously president and senior fellow at Resources for the Future, an environmental think tank in Washington, D.C.

Further Reading

Boardman, Anthony E., David H. Greenberg, Aidan R. Vining, and David L. Weimer. Cost-Benefit Analysis: Concepts and Practice. 2nd ed. Upper Saddle River, N.J.: Prentice Hall, 2001.
Gramlich, Edward M. Benefit-Cost Analysis of Government Programs. Englewood Cliffs, N.J.: Prentice Hall, 1981.
Hammond, P. Brett, and Rob Coppock, eds. Valuing Health Risks, Costs, and Benefits for Environmental Decision Making: Report of a Conference. Washington, D.C.: National Academy Press, 1990.
Kneese, Allen V. Measuring the Benefits of Clean Air and Water. Washington, D.C.: Resources for the Future, 1984.
Kopp, Raymond, and Michael Hazilla. "Social Cost of Environmental Quality Regulations." Journal of Political Economy 98 (1990): 853–873.

Related Links

Bjorn Lomborg on the Costs and Benefits of Attacking Climate Change. EconTalk, June 2019.
Charles L. Hooper, NSA Surveillance: A Cost/Benefit Analysis. January 2014.
Donald Cox, The Economics of "Believe-It-Or-Not." August 2003.
Lauren Heller, It's Not Just About the Money. August 2013.


Bonds

Bond markets are important components of capital markets. Bonds are fixed-income financial assets—essentially IOUs that promise the holder a specified set of payments. The value of a bond, like the value of any other asset, is the present value of the income stream one expects to receive from holding the bond. This has several implications:

1. Bond prices vary inversely with market interest rates. Because the stream of promised payments usually is fixed no matter what subsequently happens to interest rates, higher rates reduce the present value of these promised payments, and thus the bond price.
2. The value of bonds falls when people come to expect higher inflation. The reason is that higher expected inflation raises market interest rates and therefore reduces the present value of the fixed stream of promised payments.
3. The greater the uncertainty about whether the promised payments will be made (the risk that the issuer will default on the promised payments), the lower the expected payments to bondholders and the lower the value of the bond.
4. Bonds whose payments are subject to lower taxation provide investors with higher expected after-tax payments. Because investors are interested in after-tax income, such bonds sell for higher prices.

The major classes of bond issuers are the U.S. government, corporations, and municipal governments. The default risk and tax status differ from one kind of bond to another.

U.S. Government Bonds

The U.S. government is highly unlikely to default on promised payments to its bondholders because the government has the right to tax as well as the authority to print money. Thus, virtually all of the variation in the value of its bonds is due to changes in market interest rates. That is why most securities analysts use prices of U.S. government bonds to compute market interest rates.

Because the U.S. government's tax revenues rarely cover expenditures, it relies on debt financing for the balance. Moreover, on the occasions when the government does not have a budget deficit, it still sells new debt to refinance the old debt as it matures. Most of the debt sold by the U.S. government is marketable, meaning that it can be resold by its original purchaser. Marketable issues include treasury bills, treasury notes, and treasury bonds. The major nonmarketable federal debt sold to individuals is U.S. savings bonds.

Treasury bills have maturities of up to one year and are generally issued in denominations of $10,000. They do not have a stated coupon; that is, the government does not write a separate interest check to the owner. Instead, the U.S. Treasury sells these bills at a discount to their redemption value. The size of the discount determines the effective interest rate on the bill. For instance, a dealer might offer a bill with 120 days left until maturity at a yield of 7.48 percent. To translate this quoted yield into the price, one must "undo" this discount computation. Multiply the 7.48 by 120/360 (the fraction of the conventional 360-day year employed in this market) to obtain 2.493, and subtract that from 100 to get 97.507. The dealer is thus offering to sell the bill for $97.507 per $100 of face value.
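A minimal sketch of this discount calculation in Python (mine, not the entry's; the function name is arbitrary), using the quoted 7.48 percent yield and 120 days to maturity from the example above:

```python
def discount_price(quoted_yield_pct, days_to_maturity):
    """Price per $100 face value of a T-bill quoted on a bank-discount basis.

    The bank-discount convention scales the quoted yield by days/360 and
    subtracts the resulting discount from the face value.
    """
    discount = quoted_yield_pct * days_to_maturity / 360
    return 100 - discount

print(discount_price(7.48, 120))  # 97.5066... -> about $97.507 per $100 face
```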
Treasury notes and treasury bonds differ from treasury bills in several ways. First, their maturities generally are greater than one year. Notes have maturities of one to seven years, while bonds can be sold with any maturity, but their maturities at issue typically exceed five years. Second, bonds and notes specify periodic interest (coupon) payments as well as a principal repayment. Third, they normally are registered, meaning that the government records the name and address of the current owner. When treasury notes or bonds are sold initially, their coupon rate is typically set so that they will sell at close to their face (par) value.

Yields on bills, notes, or bonds of different maturities usually differ. (The array of rates associated with bonds of different maturities is referred to as the term structure of interest rates.) Because investors can invest either in a long-term note or in a sequence of short-term bills, expectations about future short-term rates affect current long-term rates. Thus, if the market expects future short-term rates to exceed current short-term rates, then current long-term rates would exceed current short-term rates—the term structure would have a positive slope (see Figure 1). If, for example, the current rate on a one-year T-bill is 5 percent, and the market expects the rate on a one-year T-bill sold one year from now to be 6 percent, then the current two-year rate must exceed 5 percent. If it did not, investors would expect to do better by buying one-year bills today and rolling them over into new one-year bills a year from now.
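One simple way to formalize this rollover argument is the pure-expectations calculation below (my sketch, not the entry's; the entry claims only that the two-year rate must exceed 5 percent). A two-year note and a rolled-over sequence of one-year bills must offer the same expected total return:

```python
# Expected one-year rates: 5% today, 6% expected a year from now.
r1_now, r1_next = 0.05, 0.06

# A two-year note must match the expected return from rolling over
# one-year bills: (1 + r2)^2 = (1 + r1_now) * (1 + r1_next).
r2 = ((1 + r1_now) * (1 + r1_next)) ** 0.5 - 1
print(f"implied two-year rate: {r2:.4%}")  # ~5.4988%, above the 5% one-year rate
```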
Savings bonds are offered only to individuals. Two types have been offered, both registered. Series E bonds are essentially discount bonds; investors receive no interest until the bonds are redeemed. Series H bonds pay interest semiannually. Unlike marketable government bonds, which have fixed interest rates, rates received by savings bond holders normally are revised when market rates change.

Some bonds—for instance, U.S. Treasury Inflation-Protected Securities (TIPS)—are indexed for inflation. If, for example, inflation were 10 percent per year, then the value of the bond would be adjusted to compensate for this inflation. If indexation were perfect, the change in expected payments due to inflation would exactly offset the inflation-caused change in market interest rates.

Figure 1. The term structure of interest rates (figure not reproduced).

Corporate Bonds

Corporate bonds promise specified payments at specified dates. In general, the interest the bondholder receives is taxed as ordinary income. An issue of corporate bonds generally is covered by a trust indenture, a contract in which the issuing firm promises a trustee (typically a bank or trust company) that it will comply with the indenture's provisions (or covenants). These include a promise of payment of principal and interest at stated dates, as well as other provisions such as limitations on the firm's right to sell pledged property, limitations on future financing activities, and limitations on dividend payments.

Potential lenders forecast the likelihood of default on a bond and require higher promised interest rates for higher forecasted default rates. (This difference in promised interest rates between low- and high-risk bonds of the same maturity is called a credit spread.) Bond-rating agencies (Moody's and Standard and Poor's, for example) provide an indication of the relative default risk of bonds with ratings that range from Aaa (the best quality) to C (the lowest). Bonds rated Baa and above typically are referred to as "investment grade." Below-investment-grade bonds are sometimes referred to as "junk bonds." Junk bonds can carry promised yields that are three to six percentage points higher than those of Aaa bonds; that is, they have a credit spread of three hundred to six hundred basis points, a basis point being one one-hundredth of a percentage point.

One way that corporate borrowers can influence the forecasted default rate is to agree to restrictive provisions or covenants that limit the firm's future financing, dividend, and investment activities—making it more certain that cash will be available to pay interest and principal. With a lower anticipated probability of default, buyers are willing to offer higher prices for the bonds. Corporate officers, thus, must weigh the costs of the reduced flexibility from including the covenants against the benefits of lower interest rates.

Describing all the types of corporate bonds that have been issued would be difficult. Sometimes different names are employed to describe the same type of bond, and, infrequently, the same name will be applied to two quite different bonds. Standard types include the following:

• Mortgage bonds are secured by the pledge of specific property. If default occurs, the bondholders are entitled to sell the pledged property to satisfy their claims. If the sale proceeds are insufficient to cover their claims, they have an unsecured claim on the corporation's other assets.
• Debentures are unsecured general obligations of the issuing corporation. The indenture will regularly limit issuance of additional secured and unsecured debt.
• Collateral trust bonds are backed by other securities (typically held by a trustee). Such bonds are frequently issued by a parent corporation pledging securities owned by a subsidiary.
• Equipment obligations (or equipment trust certificates) are backed by specific pieces of equipment (railroad rolling stock, aircraft, etc.).
• Subordinated debentures have a lower priority in bankruptcy than ordinary (unsubordinated) debentures. Junior claims are generally paid only after senior claims have been satisfied but rank ahead of preferred and common stock.
• Convertible bonds give the owner the option either to be repaid in cash or to exchange the bonds for a specified number of shares in the corporation.

Municipal Bonds

Historically, interest paid on bonds issued by state and local governments has been exempt from federal income taxes. Such interest may be exempt from state income taxes as well. For instance, the New York tax code exempts interest from bonds issued by New York and Puerto Rico municipalities. Because investors are interested in returns net of tax, municipal bonds generally have promised lower interest rates than other government bonds that have similar risk but that lack this attractive tax treatment. In 2003, the percentage difference (not the percentage point difference) between the yield on long-term U.S. government bonds and the yield on long-term municipals was about 10 percent. Thus, if an individual's marginal tax rate were higher than 10 percent, the after-tax promised return would be higher from municipal bonds than from taxable government bonds. (Although this difference might appear small, there is a credit spread in municipals just as in corporates.)
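The after-tax comparison is a one-line calculation. Here is a small sketch (my own; the taxable yield is a hypothetical round number, and the 10 percent yield gap restates the 2003 figure above) comparing a taxable and a tax-exempt bond across marginal tax brackets:

```python
taxable_yield = 0.050                 # hypothetical long-term government yield
muni_yield = taxable_yield * 0.90     # municipals yielded ~10 percent less in 2003

for tax_rate in (0.05, 0.10, 0.25, 0.35):
    after_tax = taxable_yield * (1 - tax_rate)
    better = "municipal" if muni_yield > after_tax else "taxable"
    print(f"marginal rate {tax_rate:.0%}: after-tax taxable {after_tax:.3%},"
          f" muni {muni_yield:.3%} -> {better} wins")

# The break-even marginal tax rate equals the percentage yield gap: 10 percent.
```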
Municipal bonds typically are designated as either general obligation bonds or revenue bonds. General obligation bonds are backed by the "full faith and credit" (and thus the taxing authority) of the issuing entity. Revenue bonds are backed by a specifically designated revenue stream, such as the revenues from a designated project, authority, or agency, or by the proceeds from a specific tax. Frequently, such bonds are issued by agencies that plan to sell their services at prices that cover their expenses, including the promised payments on the debt. In such cases, the bonds are only as good as the enterprise that backs them. In 1983, for example, the Washington Public Power Supply System (WPPSS), which Wall Street quickly nicknamed "Whoops," defaulted on $2.25 billion of bonds issued to finance its number four and number five nuclear power plants, leaving bondholders with much less than they had been promised.

Industrial development bonds are used to finance the purchase or construction of facilities to be leased to private firms. Municipalities have used such bonds to subsidize businesses choosing to locate in their area by, in effect, giving them the benefit of loans at tax-exempt rates.

Some municipal bonds are still sold in bearer form; that is, possession of the bond itself constitutes proof of ownership. Historically in the United States, most public bonds (government, corporate, and municipal) were bearer bonds. Now, the Internal Revenue Service requires bonds that pay taxable interest to be sold in registered form.

About the Author

Clifford W. Smith is the Epstein Professor of Finance at the William E. Simon Graduate School of Business Administration, University of Rochester. He is an advisory editor of the Journal of Financial Economics and an associate editor of the Journal of Derivatives, the Journal of Risk and Insurance, and the Journal of Financial Services Research.

Further Reading

Brealey, Richard A., and Stewart C. Myers. Principles of Corporate Finance. 7th ed. Boston: McGraw-Hill/Irwin, 2003.
Peavy, John W., and George H. Hempel. "The Effect of the WPPSS Crisis on the Tax-Exempt Bond Market." Journal of Financial Research 10, no. 3 (1987): 239–247.
Sharpe, William F., Gordon J. Alexander, and Jeffrey V. Bailey. Investments. Upper Saddle River, N.J.: Prentice Hall, 1999.
Smith, Clifford W. Jr., and Jerold B. Warner. "On Financial Contracting: An Analysis of Bond Covenants." Journal of Financial Economics 7, no. 3 (1979): 117–161.

Related Links

Robert P. Murphy, Government Debt and Future Generations. June 2015.

From the Web:
http://www.Moodys.com
http://www.Investinginbonds.com


Austrian School of Economics

The Austrian school of economics was founded in 1871 with the publication of Carl Menger's Principles of Economics. Menger, along with William Stanley Jevons and Leon Walras, developed the marginalist revolution in economic analysis.

Menger dedicated Principles of Economics to his German colleague Wilhelm Roscher, the leading figure in the German historical school, which dominated economic thinking in German-language countries. In his book, Menger argued that economic analysis is universally applicable and that the appropriate unit of analysis is man and his choices. These choices, he wrote, are determined by individual subjective preferences and the margin on which decisions are made (see marginalism). The logic of choice, he believed, is the essential building block to the development of a universally valid economic theory.

The historical school, on the other hand, had argued that economic science is incapable of generating universal principles and that scientific research should instead be focused on detailed historical examination. The historical school thought the English classical economists mistaken in believing in economic laws that transcended time and national boundaries. Menger's Principles of Economics restated the classical political economy view of universal laws and did so using marginal analysis. Roscher's students, especially Gustav Schmoller, took great exception to Menger's defense of "theory" and gave the work of Menger and his followers, Eugen Böhm-Bawerk and Friedrich von Wieser, the derogatory name "Austrian school" because of their faculty positions at the University of Vienna. The term stuck.

Since the 1930s, no economists from the University of Vienna or any other Austrian university have become leading figures in the so-called Austrian school of economics. In the 1930s and 1940s, the Austrian school moved to Britain and the United States, and scholars associated with this approach to economic science were located primarily at the London School of Economics (1931–1950), New York University (1944–), Auburn University (1983–), and George Mason University (1981–). Many of the ideas of the leading mid-twentieth-century Austrian economists, such as Ludwig von Mises and F. A. Hayek, are rooted in the ideas of classical economists such as Adam Smith and David Hume, or early-twentieth-century figures such as Knut Wicksell, as well as Menger, Böhm-Bawerk, and Friedrich von Wieser. This diverse mix of intellectual traditions in economic science is even more obvious in contemporary Austrian school economists, who have been influenced by modern figures in economics. These include Armen Alchian, James Buchanan, Ronald Coase, Harold Demsetz, Axel Leijonhufvud, Douglass North, Mancur Olson, Vernon Smith, Gordon Tullock, Leland Yeager, and Oliver Williamson, as well as Israel Kirzner and Murray Rothbard. While one could argue that a unique Austrian school of economics operates within the economic profession today, one could also sensibly argue that the label "Austrian" no longer possesses any substantive meaning. In this article I concentrate on the main propositions about economics that so-called Austrians believe.

The Science of Economics

Proposition 1: Only individuals choose.

Man, with his purposes and plans, is the beginning of all economic analysis. Only individuals make choices; collective entities do not choose.
The primary task of economic analysis is to make economic phenomena intelligible by basing them on individual purposes and plans; the secondary task of economic analysis is to trace out the unintended consequences of individual choices.

Proposition 2: The study of the market order is fundamentally about exchange behavior and the institutions within which exchanges take place.

The price system and the market economy are best understood as a "catallaxy," and thus the science that studies the market order falls under the domain of "catallactics." These terms derive from the original Greek meanings of the word "katallaxy"—exchange and bringing a stranger into friendship through exchange. Catallactics focuses analytical attention on the exchange relationships that emerge in the market, the bargaining that characterizes the exchange process, and the institutions within which exchange takes place.

Proposition 3: The "facts" of the social sciences are what people believe and think.

Unlike the physical sciences, the human sciences begin with the purposes and plans of individuals. Where the purging of purposes and plans in the physical sciences led to advances by overcoming the problem of anthropomorphism, in the human sciences the elimination of purposes and plans purges the science of human action of its subject matter. In the human sciences, the "facts" of the world are what the actors think and believe. The meaning that individuals place on things, practices, places, and people determines how they will orient themselves in making decisions.

The goal of the sciences of human action is intelligibility, not prediction. The human sciences can achieve this goal because we are what we study, or because we possess knowledge from within, whereas the natural sciences cannot pursue a goal of intelligibility because they rely on knowledge from without. We can understand the purposes and plans of other human actors because we ourselves are human actors.

The classic thought experiment invoked to convey this essential difference between the sciences of human action and the physical sciences is a Martian observing the "data" at Grand Central Station in New York. Our Martian could observe that when the little hand on the clock points to eight, there is a bustle of movement as bodies leave these boxes, and that when the little hand hits five, there is a bustle of movement as bodies reenter the boxes and leave. The Martian may even develop a prediction about the little hand and the movement of bodies and boxes. But unless the Martian comes to understand the purposes and plans (the commuting to and from work), his "scientific" understanding of the data from Grand Central Station would be limited. The sciences of human action are different from the natural sciences, and we impoverish the human sciences when we try to force them into the philosophical/scientific mold of the natural sciences.

Microeconomics

Proposition 4: Utility and costs are subjective.

All economic phenomena are filtered through the human mind. Since the 1870s, economists have agreed that value is subjective, but, following Alfred Marshall, many argued that the cost side of the equation is determined by objective conditions. Marshall insisted that just as both blades of a scissors cut a piece of paper, so subjective value and objective costs determine price (see microeconomics). But Marshall failed to appreciate that costs are also subjective because they are themselves determined by the value of alternative uses of scarce resources.
Both blades of the scissors do indeed cut the paper, but the blade of supply is itself determined by individuals' subjective valuations.

In deciding on courses of action, one must choose; that is, one must pursue one path and not others. The focus on alternatives in choices leads to one of the defining concepts of the economic way of thinking: opportunity cost. The cost of any action is the value of the highest-valued alternative forgone in taking that action. Since the forgone action is, by definition, never taken, when one decides, one weighs the expected benefits of an activity against the expected benefits of alternative activities.

Proposition 5: The price system economizes on the information that people need to process in making their decisions.

Prices summarize the terms of exchange on the market. The price system signals to market participants the relevant information, helping them realize mutual gains from exchange. In Hayek's famous example, when people notice that the price of tin has risen, they do not need to know whether the cause was an increase in demand for tin or a decrease in supply. Either way, the increase in the price of tin leads them to economize on its use. Market prices change quickly when underlying conditions change, which leads people to adjust quickly.

Proposition 6: Private property in the means of production is a necessary condition for rational economic calculation.

Economists and social thinkers had long recognized that private ownership provides powerful incentives for the efficient allocation of scarce resources. But those sympathetic to socialism believed that socialism could transcend these incentive problems by changing human nature. Ludwig von Mises demonstrated that even if the assumed change in human nature took place, socialism would fail because of economic planners' inability to rationally calculate the alternative uses of resources. Without private ownership of the means of production, Mises reasoned, there would be no market for the means of production, and therefore no money prices for the means of production. And without money prices reflecting the relative scarcities of the means of production, economic planners would be unable to rationally calculate the alternative uses of the means of production.

Proposition 7: The competitive market is a process of entrepreneurial discovery.

Many economists see competition as a state of affairs. But the term "competition" invokes an activity. If competition were a state of affairs, the entrepreneur would have no role. But because competition is an activity, the entrepreneur has a huge role as the agent of change who prods and pulls markets in new directions. The entrepreneur is alert to unrecognized opportunities for mutual gain. By recognizing opportunities, the entrepreneur earns a profit. The mutual learning from the discovery of gains from exchange moves the market system to a more efficient allocation of resources. Entrepreneurial discovery ensures that a free market moves toward the most efficient use of resources. In addition, the lure of profit continually prods entrepreneurs to seek innovations that increase productive capacity. For the entrepreneur who recognizes the opportunity, today's imperfections represent tomorrow's profit.[1] The price system and the market economy are learning devices that guide individuals to discover mutual gains and use scarce resources efficiently.

Macroeconomics

Proposition 8: Money is nonneutral.

Money is defined as the commonly accepted medium of exchange.
If government policy distorts the monetary unit, exchange is distorted as well. The goal of monetary policy should be to minimize these distortions.

Any increase in the money supply not offset by an increase in money demand will lead to an increase in prices. But prices do not adjust instantaneously throughout the economy. Some price adjustments occur faster than others, which means that relative prices change. Each of these changes exerts its influence on the pattern of exchange and production. Money, by its nature, thus cannot be neutral.

This proposition's importance becomes evident in discussing the costs of inflation. The quantity theory of money stated, correctly, that printing money does not increase wealth. Thus, if the government doubles the money supply, money holders' apparent gain in ability to buy goods is prevented by the doubling of prices. But while the quantity theory of money represented an important advance in economic thinking, a mechanical interpretation of the quantity theory underestimated the costs of inflationary policy. If prices simply doubled when the government doubled the money supply, then economic actors would anticipate this price adjustment by closely following money supply figures and would adjust their behavior accordingly. The cost of inflation would thus be minimal.

But inflation is socially destructive on several levels. First, even anticipated inflation breaches a basic trust between the government and its citizens, because government is using inflation to confiscate people's wealth. Second, unanticipated inflation is redistributive, as debtors gain at the expense of creditors. Third, because people cannot perfectly anticipate inflation and because the money is added somewhere in the system—say, through government purchase of bonds—some prices (the price of bonds, for example) adjust before other prices, which means that inflation distorts the pattern of exchange and production. Since money is the link for almost all transactions in a modern economy, monetary distortions affect those transactions. The goal of monetary policy, therefore, should be to minimize these monetary distortions, precisely because money is nonneutral.[2]
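The mechanical version of the quantity theory criticized above can be stated in one line. In the equation of exchange, MV = PQ, doubling the money supply M with velocity V and real output Q held fixed simply doubles the price level P; the Austrian point is that the transition to that end state is not this clean. A minimal sketch (my own illustration, with made-up round numbers):

```python
# Equation of exchange: M * V = P * Q, so P = M * V / Q.
M, V, Q = 1000.0, 5.0, 500.0   # hypothetical money stock, velocity, real output
P = M * V / Q
print(P)              # 10.0

# Mechanical quantity theory: doubling M (V, Q unchanged) doubles P ...
print(2 * M * V / Q)  # 20.0
# ... but proposition 8 stresses that the new money enters somewhere in
# particular, so individual prices adjust at different speeds and relative
# prices change along the way; the end-state formula hides that process.
```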
If the price system is distorted, investors will make mistakes in aligning their capital goods. Once the error is revealed, economic actors will reshuffle their investments, but in the meantime resources will be lost.3

Proposition 10: Social institutions often are the result of human action, but not of human design.

Many of the most important institutions and practices are not the result of direct design but are the by-product of actions taken to achieve other goals. A student in the Midwest in January trying to get to class quickly while avoiding the cold may cut across the quad rather than walk the long way around. Cutting across the quad in the snow leaves footprints; as other students follow these, they make the path bigger. Although their goal is merely to get to class quickly and avoid the cold weather, in the process they create a path in the snow that actually helps students who come later to achieve this goal more easily. The “path in the snow” story is a simple example of a “product of human action, but not of human design” (Hayek 1948, p. 7). The market economy and its price system are examples of a similar process. People do not intend to create the complex array of exchanges and price signals that constitute a market economy. Their intention is simply to improve their own lot in life, but their behavior results in the market system. Money, law, language, science, and so on are all social phenomena that can trace their origins not to human design, but rather to people striving to achieve their own betterment, and in the process producing an outcome that benefits the public.4

The implications of these ten propositions are rather radical. If they hold true, economic theory would be grounded in verbal logic and empirical work focused on historical narratives. With regard to public policy, severe doubt would be raised about the ability of government officials to intervene optimally within the economic system, let alone to rationally manage the economy. Perhaps economists should adopt the doctors’ creed: “First do no harm.” The market economy develops out of people’s natural inclination to better their situation and, in so doing, to discover the mutually beneficial exchanges that will accomplish that goal. Adam Smith first systematized this message in The Wealth of Nations. In the twentieth century, economists of the Austrian school of economics were the most uncompromising proponents of this message, not because of a prior ideological commitment, but because of the logic of their arguments.

About the Author

Peter J. Boettke is a professor of economics at George Mason University, where he is also the deputy director of the James M. Buchanan Center for Political Economy and a senior fellow at the Mercatus Center. He is the editor of the Review of Austrian Economics.

Further Reading

General Reading

Boettke, P., ed. The Elgar Companion to Austrian Economics. Brookfield, Vt.: Edward Elgar, 1994.
Dolan, E., ed. The Foundations of Modern Austrian Economics. Mission, Kans.: Sheed and Ward, 1976. Available online at: http://www.econlib.org/library/NPDBooks/Dolan/dlnFMA.html

Classic Readings

Böhm-Bawerk, E. Capital and Interest. 3 vols. 1883. South Holland, Ill.: Libertarian Press, 1956. Available online at: http://www.econlib.org/library/BohmBawerk/bbCI.html
Hayek, F. A. Individualism and Economic Order. Chicago: University of Chicago Press, 1948.
Kirzner, I. Competition and Entrepreneurship. Chicago: University of Chicago Press, 1973.
Menger, C. Principles of Economics. 1871. New York: New York University Press, 1976.
Mises, L. von. Human Action: A Treatise on Economics. New Haven: Yale University Press, 1949. Available online at: http://www.econlib.org/library/Mises/HmA/msHmA.html
O’Driscoll, G., and M. Rizzo. The Economics of Time and Ignorance. Oxford: Basil Blackwell, 1985.
Rothbard, M. Man, Economy, and State. 2 vols. New York: Van Nostrand Press, 1962.
Vaughn, K. Austrian Economics in America. Cambridge: Cambridge University Press, 1994.

History of the Austrian School of Economics

Boettke, P., and P. Leeson. “The Austrian School of Economics: 1950–2000.” In Jeff Biddle and Warren Samuels, eds., The Blackwell Companion to the History of Economic Thought. London: Blackwell, 2003.
Hayek, F. A. “Economic Thought VI: The Austrian School.” In International Encyclopedia of the Social Sciences. New York: Macmillan, 1968.
Machlup, F. “Austrian Economics.” In Encyclopedia of Economics. New York: McGraw-Hill, 1982.

Footnotes

1. Entrepreneurship can be characterized by three distinct moments: serendipity (discovery), search (conscious deliberation), and seizing the opportunity for profit.
2. The search for solutions to this elusive goal generated some of the most innovative work of the Austrian economists and led to the development in the 1970s and 1980s of the literature on free banking by F. A. Hayek, Lawrence White, George Selgin, Kevin Dowd, Kurt Schuler, and Steven Horwitz.
3. Propositions 8 and 9 form the core of the Austrian theory of the business cycle, which explains how credit expansion by the government generates a malinvestment in the capital structure during the boom period that must be corrected in the bust phase. In contemporary economics, Roger Garrison is the leading expositor of this theory.
4. Not all spontaneous orders are beneficial and, thus, this proposition should not be read as an example of a Panglossian fallacy. Whether individuals pursuing their own self-interest generate public benefits depends on the institutional conditions within which they pursue their interests. Both the invisible hand of market efficiency and the tragedy of the commons are results of individuals striving to pursue their individual interests; but in one social setting this generates social benefits, whereas in the other it generates losses. New institutional economics has refocused professional attention on how sensitive social outcomes are to the institutional setting within which individuals interact. It is important, however, to realize that classical political economists and the early neoclassical economists all recognized the basic point of new institutional economists, and that it was only the mid-twentieth-century fascination with formal proofs of general competitive equilibrium, on the one hand, and the Keynesian preoccupation with aggregate variables, on the other, that tended to cloud the institutional preconditions required for social cooperation.

Related Links

Steven Horwitz, The Five Best Introductory Books in Austrian Economics. EconLog, December 2019.
Steven Horwitz, The Five (okay, ten) Essential Books in Austrian Economics. EconLog, December 2019.
Boettke on Austrian Economics. EconTalk, December 2007.
Steven Horwitz, Ludwig von Mises’s Socialism: A Still Timely Case Against Marx. October, 2018.
Don Boudreaux on Macroeconomics and Austrian Business Cycle Theory. EconTalk, April 2009.
Boettke on the Austrian Perspective on Business Cycles and Monetary Policy. EconTalk, January 2009.
Edwin G. Dolan (ed.), The Foundations of Modern Austrian Economics.
Norman Barry, The Tradition of Spontaneous Order.
Laurence S. Moss (ed.), The Economics of Ludwig von Mises: Toward a Critical Reappraisal.
Boettke on Mises. EconTalk, December 2010.
Caldwell on Hayek. EconTalk, January 2011.
Boudreaux on Reading Hayek. EconTalk, December 2012.


Bubbles

What Are Bubbles?

In 1996, the fledgling Internet portal Yahoo.com made its stock-market debut. This was during a time of great excitement—as well as uncertainty—about the prosperous “new economy” that the rapidly expanding Internet promised. By the beginning of the year 2000, Yahoo shares were trading at $240 each.1 Exactly one year later, however, Yahoo’s stock sold for only $30 per share. A similar story could be told for many of Yahoo’s “dot-com” contemporaries—a substantial period of market-value growth during the late 1990s followed by a rapid decline as the twenty-first century approached. With the benefit of hindsight, many concluded that dot-com stocks were overvalued in the late 1990s, which created an “Internet bubble” that was doomed to burst. Thus, as this account implies, the definition of a bubble involves some characterization of the extent to which an asset is overvalued. Let us define the “fundamental value” of an asset as the present value of the stream of cash flows that its holder expects to receive. These cash flows include the series of dividends that the asset is expected to generate and the expected price of the asset when sold.2 In an efficient market, the price of an asset is equal to its fundamental value. For instance, if a stock is trading at a price below its fundamental value, savvy investors in the market will pounce on the profit opportunity by purchasing more shares of the stock. This will bid up the stock’s price until no further profits can be achieved—that is, until its price equals its fundamental value; the same mechanism works to correct stocks that are trading above their fundamental values. So, if an asset is persistently trading at a price higher than its fundamental value, we would say that its price exhibits a bubble and that the asset is overvalued by an amount equal to the bubble—the difference between the asset’s trading price and its fundamental value. This definition implies that if such bubbles persist, investors are irrational in their failure to profit from the “overpriced” asset. Thus, we refer to this type of bubble as an “irrational bubble.” Over the past few decades, economists have generated a compelling body of evidence to suggest that asset markets are remarkably efficient. These markets comprise thousands of traders who constantly seek to exploit even the smallest profit opportunities. If irrational bubbles appear, investors can use a variety of market instruments (such as options and short positions) to quickly burst them and achieve profits by doing so. Yet episodes like those of the dot-com era suggest at least the possibility that asset prices might persistently deviate from their fundamental values. Is it, then, possible that the market may at any time succumb to the “madness of crowds”? To see how prices might persistently deviate from traditional market fundamentals, imagine that you are considering an investment in the publicly held firm Bootstrap Microdevices (BM), which is trading at fifty dollars per share. You know that BM will not declare any dividends and have ample reason to believe that one year from now BM will be trading at only ten dollars per share. Yet you also firmly believe that you can sell your BM shares in six months for one hundred dollars each. It would be entirely rational for you to purchase BM shares now and plan to sell them in six months.3 If you did so, you and those who shared your beliefs would be “riding a bubble” and would bid up the price of BM shares in the process.
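In symbols (the notation r, E[D_t], and E[P_T] is introduced here for illustration; the article states the definition only in words), the fundamental value of an asset held for T periods is

\[ \mathrm{FV} \;=\; \sum_{t=1}^{T} \frac{E[D_t]}{(1+r)^t} \;+\; \frac{E[P_T]}{(1+r)^T}, \]

and a bubble, if present, is the gap B = P − FV between the asset’s trading price P and this fundamental value. On these terms the BM purchase is easy to rationalize: with no dividends and an expected sale price of one hundred dollars in six months, any six-month discount rate below 100 percent puts the fundamental value you perceive above the fifty-dollar asking price.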
This example illustrates that if bubbles exist, they might be perpetuated in a manner that would be difficult to call irrational. The key to understanding this is in recalling that an asset’s fundamental value includes its expected price when sold. If investors rationally expect an asset’s selling price to increase, then including this in their assessment of the asset’s fundamental value would be justified. It is possible, then, that the price of such an asset could grow and persist even if the viability of its issuing company is unlikely to support these prices indefinitely. This situation can be called a “rational bubble.”4 Because market fundamentals are based on expectations of future events, bubbles can be identified only after the fact. For instance, it will be several years before we truly understand the impact of the Internet on our economy. It is possible that future innovations based on Internet technologies will fundamentally justify people’s decision to buy and hold Yahoo shares at $240 each. In this light, it would be difficult to condemn those who paid such a price for Yahoo shares at a time when Internet usage was growing exponentially. Can bubbles, rational or otherwise, exist? An ex post examination of history’s so-called famous first bubbles helps to answer this question. Famous First Bubbles The Tulip Bubble Tulip bulb speculation in seventeenth-century Holland is widely recounted as a classic example of how bubbles can be generated by the “madness of crowds.”5 In 1593, tulip bulbs arrived in Holland and subsequently became fashionable accessories for elite households. A handful of bulbs were infected with a virus known as mosaic, so named for the brilliant mosaic of colors exhibited by flowers from infected bulbs. These rare bulbs soon became symbols of their owners’ prominence and vehicles for speculation. In 1625, an especially rare type of infected bulb called Semper Augustus sold for two thousand guilders—about $23,000 in 2003 dollars. By 1627, at least one of these bulbs was known to have sold at today’s equivalent of $70,000. The growth in value of the Semper Augustus continued until a dramatic decline in early 1637, when they could not be sold for more than 10 percent of their peak value. The dramatic rise and fall of Semper Augustus prices, and the fortunes made and lost on them, exhibited the symptoms of a classic bubble. Yet economist Peter Garber provided compelling evidence that “tulipmania” did not generate a bubble. He argued that the dynamics of bulb prices during the tulip episode were typical of even today’s market for rare bulbs. It is important to note that the mosaic virus could not be systematically introduced to common bulb types. The only way to cultivate a prized Semper Augustus was to raise it from the offshoot bud of an infected mother bulb. Just as the fundamental value of a stock includes its expected stream of dividends, the fundamental value of a Semper Augustus included its expected stream of rare offspring. As the rare bulbs were introduced to the public, their growing popularity, combined with their limited supply, commanded a high price. This price was pushed up by speculators who hoped to profit from the bulb’s popularity by cultivating its valuable offspring. These offspring expanded the supply of bulbs, making them less rare, and thus less valuable. Perhaps tulips’ decreased popularity accelerated this downward trend in bulb prices. 
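Garber’s argument can be made concrete with a small present-value sketch. The function and every number below are hypothetical, chosen only to show the mechanism: a bulb whose offshoots initially command high prices is itself very valuable, and its value falls rapidly as cultivation expands the supply.

```python
# A minimal sketch of Garber's argument, with entirely hypothetical numbers:
# a rare bulb's fundamental value is the present value of the offspring
# it is expected to produce, and that value falls as cultivation expands
# the supply of once-rare bulbs.

def bulb_value(offspring_price, decline, buds_per_year, rate, years):
    """Present value of a mother bulb's expected offspring sales.

    offspring_price: price an offshoot bud fetches in the first year
    decline: fraction by which offspring prices fall each year
    buds_per_year: offshoot buds the bulb yields per year
    rate: annual discount rate
    years: horizon over which offspring remain saleable
    """
    value, price = 0.0, offspring_price
    for t in range(1, years + 1):
        value += buds_per_year * price / (1 + rate) ** t
        price *= 1 - decline  # a growing supply erodes the offspring price
    return value

# The same formula that supports a very high price early on...
print(round(bulb_value(10_000, 0.5, 2, 0.05, 8)))
# ...yields a much lower value once offspring prices have already halved.
print(round(bulb_value(5_000, 0.5, 2, 0.05, 8)))
```

On this logic, a steep decline in bulb prices needs no mania: the same formula that justifies a high price early on yields ever-lower values as offspring multiply.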
Interestingly, a small quantity of prototype lily bulbs sold at a 1987 Netherlands flower auction for more than $900,000 in 2003 dollars, and their offspring now sell at a tiny fraction of this price; yet no one mentions “lilymania.”

The Mississippi and South Sea Bubbles

In 1717, John Law organized the Compagnie d’Occident to take over the French government’s trade monopolies in Louisiana and on Canadian beaver pelts. The company was later renamed the Compagnie des Indes following a series of mergers and acquisitions, including France’s Banque Royale, whose notes were guaranteed by the crown. Eventually the company acquired the right to collect all taxes and mint new coinage, and it funded these enterprises with a series of share issues at successively higher prices. Shares sold for five hundred livres each at the company’s onset, but their price increased to nearly ten thousand livres in October 1719 after these expansive moves. By September 1721, however, shares of the Compagnie des Indes fell back to their original value of five hundred livres. Meanwhile, in England, the South Sea Company, whose only notable asset at the time was a defunct trade monopoly with the Spanish colonies in South America, had its own expansion plans. The company’s goals were not as well defined as those of its French counterpart, but it managed to gain broad parliamentary support through a series of bribes and generous share allowances. In January 1720, South Sea shares sold for 120 pounds each. This price rose to 1,000 pounds by June of that year through a series of new issues. By October, however, prices fell to 290 pounds. Were Compagnie des Indes and South Sea Company shareholders riding bubbles? Peter Garber provided a detailed account of how market fundamentals, not irrational speculation or rational bubble dynamics, might have driven these price movements. The companies started with similar plans to finance their ventures by acquiring government debt in exchange for shares. This generated streams of government cash payments, at reduced interest rates, that could be used as leverage to finance each company’s commercial enterprises. With this came an extraordinary degree of visible privilege and support from their governments, extending all the way up to their royal families. The remarkable credibility of each company’s potential for profit and growth may well have justified their peak share prices based on market fundamentals. The decline of South Sea share prices began with Parliament’s passage of the Bubble Act in June 1720—an act that was intended to limit the expansion of the South Sea Company’s competitors. This placed downward price pressure on competitors’ shares, many of which had been bought on margin. A wave of selling, including South Sea shares, ensued in a scramble for liquidity to meet these margin calls. As prices continued to drop, Parliament turned against the company and liquidated its assets. In France, the fall of the Compagnie des Indes was more complex. At the peak of its market value, many investors wanted to convert their capital gains into the more tangible asset of gold. Of course, there was not enough gold in France at the time to satisfy all of these desires, just as there is not enough gold in the United States today to back each dollar. The Banque Royale intervened by fixing Compagnie share prices at nine thousand livres and exchanging its notes for Compagnie stock.
Within a few months, France’s money supply was effectively doubled since Banque Royale notes were considered legal tender. A period of hyperinflation ensued, followed by the company’s stopgap deflationary efforts of reducing the fixed price of Compagnie shares to five thousand livres. Confidence in the company dissolved, and John Law was eventually removed from power. This brief account shows how each company’s rise and fall is traceable to events that were likely to change how investors fundamentally valued South Sea and Compagnie shares, contrary to what a bubble hypothesis would suggest. These companies were essentially performing large-scale financial experiments based on prospects for long-term growth. They ultimately failed, but they could well have shown enough promise to convince even the most incredulous investors of their potential for success. It would be difficult to characterize what may have been rational behavior ex ante as evidence of bubble formation. Indeed, John Law’s operations with the Banque Royale essentially attempted to expand French commerce by expanding France’s money supply. This monetary policy is one that an entire generation of Keynesian economists promoted more than two hundred years after the Mississippi and South Sea “bubbles.” Yet few economists, even those highly dismissive of Keynesian economics, are willing to call Keynesians irrational.

The Modern Bubble Debate

The jury is still out on whether bubbles can persist in modern asset markets. Debates continue among economists even on the existence of irrational or rational bubbles. And there is often confusion in trying to distinguish irrational bubbles from rational bubbles that might be generated by investors’ rational but flawed perceptions of market fundamentals. Most modern efforts focus on developing sophisticated statistical methods to detect bubbles, but none has enjoyed a consensus of support among economists.

About the Author

Seiji Steimetz is an economics professor at California State University at Long Beach. He was previously a senior consultant at Bates White LLC, an economic consulting firm.

Further Reading

Introductory

Garber, Peter. Famous First Bubbles: The Fundamentals of Early Manias. Cambridge: MIT Press, 2000.
Mackay, Charles. Memoirs of Extraordinary Popular Delusions and the Madness of Crowds. London: Office of National Illustrated Library, 1852. Available online at: http://www.econlib.org/library/Mackay/macEx.html
Malkiel, Burton. A Random Walk Down Wall Street: The Time-Tested Strategy for Successful Investing. New York: Norton, 2003.
Shiller, Robert. Irrational Exuberance. Princeton: Princeton University Press, 2000.
Smant, David. “Famous First Bubbles or Bubble Myths Explained?” Available online at: http://www.few.eur.nl/few/people/smant/m-economics/bubbles.htm.

Advanced

Abreu, Dilip, and Markus Brunnermeier. “Bubbles and Crashes.” Econometrica 71 (2003): 173–204.
Evans, George. “Pitfalls in Testing for Explosive Bubbles in Asset Prices.” American Economic Review 81, no. 4 (1991): 922–930.
Flood, Robert, and Robert Hodrick. “On Testing for Speculative Bubbles.” Journal of Economic Perspectives 4 (1990): 85–101.
Garber, Peter. “Famous First Bubbles.” Journal of Economic Perspectives 4 (1990): 35–54.
Garber, Peter. “Tulipmania.” Journal of Political Economy 97, no. 3 (1989): 535–560.
Shiller, Robert. “Speculative Prices and Popular Models.” Journal of Economic Perspectives 4 (1990): 55–65.
Stiglitz, Joseph. “Symposium on Bubbles.” Journal of Economic Perspectives 4 (1990): 13–18.
Footnotes

1. This is a split-adjusted figure. The actual trading price at the time was $475 per share.
2. If the asset is to be held forever, its fundamental value is just the present value of its expected dividend stream since the present value of any dollar amount to be received an infinite number of years from now is zero.
3. In doing so, one might say that you were applying the “greater fool theory” to your investment decision, thereby building a “castle in the air.”
4. Economists often refer to these types of bubble conditions as “bootstrap equilibria.” High prices are thought to be held high by self-fulfilling prophecies, just as one might attempt to hold himself high off the ground by pulling up on his bootstraps.
5. This section is based primarily on the influential work of economist Peter Garber.

Related Links

Eugene Fama, from the Concise Encyclopedia of Economics.
Fama on Finance. EconTalk, January 2012.
Stock Market, from the Concise Encyclopedia of Economics.
Shiller on Housing and Bubbles. EconTalk, September 2008.
Pedro Schwartz, Housing Bubbles…and the Laboratory. April 2015.


Behavioral Economics

How Behavioral Economics Differs from Traditional Economics

All of economics is meant to be about people’s behavior. So, what is behavioral economics, and how does it differ from the rest of economics? Economics traditionally conceptualizes a world populated by calculating, unemotional maximizers that have been dubbed Homo economicus. The standard economic framework ignores or rules out virtually all the behavior studied by cognitive and social psychologists. This “unbehavioral” economic agent was once defended on numerous grounds: some claimed that the model was “right”; most others simply argued that the standard model was easier to formalize and more practically relevant. Behavioral economics blossomed from the realization that neither point of view was correct. The standard economic model of human behavior includes three unrealistic traits—unbounded rationality, unbounded willpower, and unbounded selfishness—all of which behavioral economics modifies. Nobel Memorial Prize recipient Herbert Simon (1955) was an early critic of the idea that people have unlimited information-processing capabilities. He suggested the term “bounded rationality” to describe a more realistic conception of human problem-solving ability. The failure to incorporate bounded rationality into economic models is just bad economics—the equivalent of presuming the existence of a free lunch. Since we have only so much brainpower and only so much time, we cannot be expected to solve difficult problems optimally. It is eminently rational for people to adopt rules of thumb as a way to economize on cognitive faculties. Yet the standard model ignores these bounds. Departures from rationality emerge both in judgments (beliefs) and in choices. The ways in which judgment diverges from rationality are extensive (see Kahneman et al. 1982). Some illustrative examples include overconfidence, optimism, and extrapolation. An example of suboptimal behavior involving two important behavioral concepts, loss aversion and mental accounting, is a mid-1990s study of New York City taxicab drivers (Camerer et al. 1997). These drivers pay a fixed fee to rent their cabs for twelve hours and then keep all their revenues. They must decide how long to drive each day. The profit-maximizing strategy is to work longer hours on good days—rainy days or days with a big convention in town—and to quit early on bad days. Suppose, however, that cabbies set a target earnings level for each day and treat shortfalls relative to that target as a loss. Then they will end up quitting early on good days and working longer on bad days. The authors of the study found that this is precisely what they do. Consider the second vulnerable tenet of standard economics, the assumption of complete self-control. Humans, even when we know what is best, sometimes lack self-control. Most of us, at some point, have eaten, drunk, or spent too much, and exercised, saved, or worked too little. Though people have these self-control problems, they are at least somewhat aware of them: they join diet plans and buy cigarettes by the pack (because having an entire carton around is too tempting). They also pay more withholding taxes than they need to in order to assure themselves a refund; in 1997, nearly ninety million tax returns claimed refunds averaging around $1,300. Finally, people are boundedly selfish. Although economic theory does not rule out altruism, as a practical matter economists stress self-interest as people’s primary motive.
For example, the free-rider problems widely discussed in economics are predicted to occur because individuals cannot be expected to contribute to the public good unless doing so improves their private welfare. But people do, in fact, often act selflessly. In 1998, for example, 70.1 percent of all households gave some money to charity, the average dollar amount being 2.1 percent of household income.1 Likewise, 55.5 percent of the population age eighteen or more did volunteer work in 1998, with 3.5 hours per week being the average hours volunteered.2 Similar selfless behavior has been observed in controlled laboratory experiments. People often cooperate in prisoners’ dilemma games and turn down unfair offers in “ultimatum” games. (In an ultimatum game, the experimenter gives one player, the proposer, some money, say ten dollars. The proposer then makes an offer of x, equal to or less than ten dollars, to the other player, the responder. If the responder accepts the offer, he gets x and the proposer gets 10 − x. If the responder rejects the offer, then both players get nothing.) Standard economic theory predicts that proposers will offer a token amount (say twenty-five cents) and responders will accept, because twenty-five cents is better than nothing. But experiments have found that responders typically reject offers of less than 20 percent (two dollars in this example).

Behavioral Finance

If economists had been asked in the mid-1980s to name a discipline within economics to which bounded rationality was least likely to apply, finance would probably have been the one most often named. One leading economist called the efficient markets hypothesis (see definition below), which follows from traditional economic thinking, the best-established fact in economics. Yet finance is perhaps the branch of economics where behavioral economics has made the greatest contributions. How has this happened? Two factors contributed to the surprising success of behavioral finance. First, financial economics in general, and the efficient market hypothesis (see efficient capital markets) in particular, generated sharp, testable predictions about observable phenomena. Second, high-quality data are readily available to test these sharp predictions. The rational efficient markets hypothesis states that stock prices are “correct” in the sense that asset prices reflect the true or rational value of the security. In many cases, this tenet of the efficient market hypothesis is untestable because intrinsic values are not observable. In some special cases, however, the hypothesis can be tested by comparing two assets whose relative intrinsic values are known. Consider closed-end mutual funds (Lee et al. 1991). These funds are much like typical (open-end) mutual funds, except that to cash out of the fund, investors must sell their shares on the open market. This means that the market prices of closed-end funds are determined by supply and demand rather than set equal to the value of their assets by the fund managers, as in open-end funds. Because closed-end funds’ holdings are public, market efficiency would mean that the price of the fund should match the price of the underlying securities they hold (the net asset value, or NAV). Instead, closed-end funds typically trade at substantial discounts relative to their NAV, and occasionally at substantial premia. Most interesting from a behavioral perspective is that closed-end fund discounts are correlated with one another and appear to reflect individual investor sentiment.
(Individual investors rather than institutions are the primary owners of closed-end funds.) Lee and his colleagues found that discounts shrank in months when shares of small companies (also owned primarily by individuals) did well and in months when there was a lot of initial public offering (IPO) activity, indicating a “hot” market. Since these findings were predicted by behavioral finance theory, they move the research beyond the demonstration of an embarrassing fact (price not equal to NAV) toward a constructive understanding of how markets work. The second principle of the efficient market hypothesis is unpredictability. In an efficient market, it is not possible to predict future stock price movements based on publicly available information. Many early violations of this principle had no explicit link to behavior. Thus it was reported that small firms and “value firms” (firms with low price-to-earnings ratios) earned higher returns than other stocks with the same risk. Also, stocks in general, but especially stocks of small companies, have done well in January and on Fridays (but poorly on Mondays). An early study by Werner De Bondt and Richard Thaler (1985) was explicitly motivated by the psychological finding that individuals tend to overreact to new information. For example, experimental evidence suggested that people tended to underweight base rate data (or prior information) in incorporating new data. De Bondt and Thaler hypothesized that if investors behave this way, then stocks that perform quite well over a period of years will eventually have prices that are too high because people overreacting to the good news will drive up their prices. Similarly, poor performers will eventually have prices that are too low. This yields a prediction about future returns: past “winners” ought to underperform, while past “losers” ought to outperform the market. Using data for stocks traded on the New York Stock Exchange, De Bondt and Thaler found that the thirty-five stocks that had performed the worst over the past five years (the losers) outperformed the market over the next five years, while the thirty-five biggest winners over the past five years subsequently underperformed. Follow-up studies showed that these early results cannot be attributed to risk; by some measures the portfolio of losers was actually less risky than the portfolio of winners. More recent studies have found other violations of unpredictability that have the opposite pattern from that found by De Bondt and Thaler, namely underreaction rather than overreaction. Over short periods—for example, six months to one year—stocks display momentum: the stocks that have risen fastest over the past six months tend to keep going up. Also, after many corporate announcements such as large earnings changes, dividend initiations and omissions, share repurchases, and splits, the price jumps initially on the day of the announcement and then drifts slowly upward for a year or longer (see Shleifer 2000 for a nice introduction to the field). Behavioral economists have also hypothesized that investors are reluctant to realize capital losses because doing so would mean that they would have to “declare” the loss to themselves. Hersh Shefrin and Meir Statman (1985) dubbed this hypothesis the “disposition effect.” Interestingly, the tax law encourages just the opposite behavior, since realized losses can offset taxable gains.
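A standard way to test this hypothesis, in the spirit of the study described next, is to compare the proportion of paper gains an investor realizes (PGR) with the proportion of paper losses realized (PLR). A minimal sketch, with a toy portfolio invented purely for illustration:

```python
# Sketch of a disposition-effect test: compare the proportion of gains
# realized (PGR) with the proportion of losses realized (PLR).
# The holdings below are invented purely for illustration.

def realization_rates(holdings):
    """holdings: (is_gain, was_sold) pairs, one per position observed
    on days the investor sold something."""
    pgr = (sum(1 for gain, sold in holdings if gain and sold)
           / sum(1 for gain, sold in holdings if gain))
    plr = (sum(1 for gain, sold in holdings if not gain and sold)
           / sum(1 for gain, sold in holdings if not gain))
    return pgr, plr

holdings = [
    (True, True), (True, False), (True, False),                     # 1 of 3 winners sold
    (False, True), (False, False), (False, False), (False, False),  # 1 of 4 losers sold
]
pgr, plr = realization_rates(holdings)
print(f"PGR = {pgr:.2f}, PLR = {plr:.2f}")  # PGR > PLR is the disposition pattern
```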
Despite the tax incentive, Terrance Odean (1998) found that in a sample of customers of a discount brokerage firm, investors were more likely to sell a stock that had increased in value than one that had decreased. While around 15 percent of all gains were realized, only 10 percent of all losses were realized. Odean showed, moreover, that the loser stocks that were held underperformed the gainer stocks that were sold.

Saving

If finance was held to be the field in which a behavioral approach was least likely, a priori, to succeed, saving had to be one of the most promising. Although the standard life-cycle model of savings abstracts from both bounded rationality and bounded willpower, saving for retirement is both a difficult cognitive problem and a difficult self-control problem. It is thus perhaps less surprising that a behavioral approach has been fruitful here. As in finance, progress has been helped by the combination of a refined standard theory with testable predictions and abundant data sources on household saving behavior. Suppose that Tom is a basketball player and therefore earns most of his income early in his life, while Ray is a manager who earns most of his income late in life. The life-cycle model predicts that Tom would save his early income to increase consumption later in life, while Ray would borrow against future income to increase consumption earlier in life. The data do not support this prediction. Instead, they show that consumption tracks income over individuals’ life cycles much more closely than the standard life-cycle model predicts. Furthermore, the departures from predicted behavior cannot be explained merely by people’s inability to borrow. James Banks, Richard Blundell, and Sarah Tanner (1998) showed, for example, that consumption drops sharply when individuals retire and their incomes fall, because they have not saved enough for retirement. Indeed, many low- to middle-income families have essentially no savings. The primary cause of this lack of saving appears to be lack of self-control. One bit of evidence supporting this conclusion is that virtually all of Americans’ saving takes place in forms that are often called “forced savings”—for example, accumulating home equity by paying the mortgage and participating in pension plans. Coming full circle, individuals may impose another type of “forced” savings on themselves—high tax withholding—so that when the refund comes, they can buy something they might not have had the willpower to save up for. One of the most interesting research areas has been devoted to measuring the effectiveness of tax-advantaged savings programs such as individual retirement accounts (IRAs) and 401(k) plans. Consider the original IRA program of the early 1980s. This program provided tax subsidies for savings up to a threshold, often two thousand dollars per year. Because there was no tax incentive to save more than two thousand dollars per year, those saving more than the threshold should not have increased their total saving, but instead should have merely switched some money from a taxable account to the IRA. Yet, by some accounts, these programs appear to have generated substantial new savings. Some researchers argue that almost every dollar of savings in IRAs appears to represent new savings. In other words, people are not simply shifting their savings into IRAs and leaving their total behavior unchanged. Similar results are found for 401(k) plans.
The behavioral explanation for these findings is that IRAs and 401(k) plans help solve self-control problems by setting up special mental accounts that are devoted to retirement savings. Households tend to respect the designated use of these accounts, and the tax penalty that must be paid if funds are removed prematurely bolsters people’s self-control.3 An interesting flip side to IRA and 401(k) programs is that these programs have generated far less than the full participation expected. Many eligible people do not participate, forgoing, in effect, a cash transfer from the government (and in some cases from their employer). Ted O’Donoghue and Matthew Rabin (1999) presented an explanation based on procrastination and hyperbolic discounting. Individuals typically show very sharp impatience for short-horizon decisions, but much more patience at long horizons. This behavior is often referred to as hyperbolic discounting, in contrast to the standard assumption of exponential discounting, in which patience is independent of horizon: exponential discounters are equally patient at long and short horizons. O’Donoghue and Rabin argued that hyperbolic individuals will show exactly the low IRA participation that we observe. Though hyperbolic people will eventually want to participate in IRAs (because they are patient in the long run), something always comes up in the short run (where they are very impatient) that provides greater immediate reward. Consequently, they may indefinitely delay starting an IRA. If people procrastinate about joining the savings plan, then it should be possible to increase participation rates simply by lowering the psychic costs of joining. One simple way of accomplishing this is to switch the default option for new workers. In most companies, employees who become eligible for the 401(k) plan receive a form inviting them to join; to join, they have to send the form back and make some choices. The default option, therefore, is not to join. Several firms have made the seemingly inconsequential change of switching the default: employees are enrolled into the plan unless they explicitly opt out. This change often produces dramatic increases in savings rates. For example, in one company studied by Brigitte C. Madrian and Dennis F. Shea (2000), the employees who joined after the default option was switched were 50 percent more likely to participate than the workers in the year prior to the change. The authors also found that the default asset allocation—that is, the allocation the firm made among stocks, bonds, and so on if the employee made no explicit choice—had a strong effect on workers’ choices. The firm had made the default asset allocation 100 percent in a money market account, and the proportion of workers “selecting” this allocation soared. It is possible to go further and design institutions that help people make better choices, as defined by the people who choose. One successful effort along these lines is Richard Thaler and Shlomo Benartzi’s (2004) “Save More Tomorrow” program (SMarT). Under the SMarT plan, employers invite their employees to join a plan in which employees’ contribution rates to their 401(k) plan increase automatically every year (say, by two percentage points). The increases are timed to coincide with annual raises, so the employee never sees a reduction in take-home pay, thus avoiding loss aversion (at least in nominal terms).
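A sketch with assumed numbers (a 3 percent annual raise and a two-percentage-point escalation, neither taken from any actual plan) shows why the design never reduces nominal take-home pay:

```python
# Sketch of the SMarT escalation logic with assumed numbers: a 3 percent
# annual raise and a 2-percentage-point contribution increase per raise.
pay, contrib_rate = 50_000.0, 0.035      # hypothetical salary, 3.5% contribution
take_home = pay * (1 - contrib_rate)
for year in range(1, 5):                 # four annual raises
    pay *= 1.03                          # the raise arrives...
    contrib_rate += 0.02                 # ...and the contribution rate steps up
    new_take_home = pay * (1 - contrib_rate)
    assert new_take_home >= take_home    # nominal take-home pay never falls
    take_home = new_take_home
    print(f"year {year}: saving {contrib_rate:.1%}, take-home ${take_home:,.0f}")
```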
In the first company that adopted the SMarT plan, the participants who joined the plan increased their savings rates from 3.5 percent to 13.6 percent after four pay raises (Thaler and Benartzi 2004).

About the Authors

Richard H. Thaler is the Ralph and Dorothy Keller Distinguished Service Professor of Economics and Behavioral Science at the University of Chicago’s Graduate School of Business, where he is director of the Center for Decision Research. He is also a research associate at the National Bureau of Economic Research (NBER), where he codirects the behavioral economics project. Sendhil Mullainathan is a professor of economics at Harvard University and a research associate with the NBER. In 2002, he was awarded a grant from the MacArthur Fellows Program.

Further Reading

Banks, James, Richard Blundell, and Sarah Tanner. “Is There a Retirement-Savings Puzzle?” American Economic Review 88, no. 4 (1998): 769–788.
Camerer, Colin, Linda Babcock, George Loewenstein, and Richard H. Thaler. “Labor Supply of New York City Cabdrivers: One Day at a Time.” Quarterly Journal of Economics 112, no. 2 (1997): 407–441.
Conlisk, John. “Why Bounded Rationality?” Journal of Economic Literature 34, no. 2 (1996): 669–700.
De Bondt, Werner F. M., and Richard H. Thaler. “Does the Stock Market Overreact?” Journal of Finance 40, no. 3 (1985): 793–805.
DeLong, Brad, Andrei Shleifer, Lawrence Summers, and Robert Waldman. “Noise Trader Risk in Financial Markets.” Journal of Political Economy 98, no. 4 (1990): 703–738.
Kahneman, Daniel, and Amos Tversky. “Prospect Theory: An Analysis of Decision Under Risk.” Econometrica 47, no. 2 (1979): 263–291.
Kahneman, Daniel, Paul Slovic, and Amos Tversky. Judgment Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press, 1982.
Laibson, David. “Golden Eggs and Hyperbolic Discounting.” Quarterly Journal of Economics 112, no. 2 (1997): 443–477.
Lee, Charles M. C., Andrei Shleifer, and Richard H. Thaler. “Investor Sentiment and the Closed-End Fund Puzzle.” Journal of Finance 46, no. 1 (1991): 75–109.
Madrian, Brigitte C., and Dennis F. Shea. “The Power of Suggestion: Inertia in 401(k) Participation and Savings Behavior.” Quarterly Journal of Economics 116, no. 4 (2000): 1149–1187.
Odean, Terrance. “Are Investors Reluctant to Realize Their Losses?” Journal of Finance 53, no. 5 (1998): 1775–1798.
O’Donoghue, Ted, and Matthew Rabin. “Procrastination in Preparing for Retirement.” In Henry Aaron, ed., Behavioral Dimensions of Retirement Economics. Washington, D.C.: Brookings Institution, 1999.
Shefrin, Hersh, and Meir Statman. “The Disposition to Sell Winners Too Early and Ride Losers Too Long: Theory and Evidence.” Journal of Finance 40, no. 3 (1985): 777–790.
Shleifer, Andrei. Inefficient Markets: An Introduction to Behavioral Finance. Clarendon Lectures. Oxford: Oxford University Press, 2000.
Shleifer, Andrei, and Robert Vishny. “The Limits of Arbitrage.” Journal of Finance 52, no. 1 (1997): 35–55.
Simon, Herbert A. “A Behavioral Model of Rational Choice.” Quarterly Journal of Economics 69 (February 1955): 99–118.
Thaler, Richard H. “Mental Accounting and Consumer Choice.” Marketing Science 4, no. 3 (1985): 199–214.
Thaler, Richard H., and Shlomo Benartzi. “Save More Tomorrow: Using Behavioral Economics to Increase Employee Saving.” Journal of Political Economy 112 (February 2004): S164–S187.
Tversky, Amos, and Daniel Kahneman. “Judgment Under Uncertainty: Heuristics and Biases.” Science 185 (1974): 1124–1131.
Footnotes

* This article is a revision of a manuscript originally written as an entry in the International Encyclopedia of the Social and Behavioral Sciences.
1. Data are from the Chronicle of Philanthropy (1999), available online at: http://philanthropy.com/free/articles/v12/i01/1201whodonated.htm.
2. Data are from Independent Sector (2004), available online at: http://www.independentsector.org/programs/research/volunteer_time.html.
3. Some issues remain controversial. See the debate in the fall 1996 issue of the Journal of Economic Perspectives.

Related Links

Richard Thaler on Libertarian Paternalism. EconTalk, November 2006.
Phil Rosenzweig on Leadership, Decisions, and Behavioral Economics. EconTalk, April 2015.
Rubinstein on Game Theory and Behavioral Economics. EconTalk, April 2011.
Richard Epstein on Happiness, Inequality, and Envy. EconTalk, November 2008.
Rosenberg on the Nature of Economics. EconTalk, September 2011.
The Economics of Paternalism. EconTalk, September 2006.
More EconTalk episodes on Behavioral Economics.
Richard McKenzie, Market Competitiveness and Rationality: A Brain-Focused Perspective. October, 2019.
Richard McKenzie, Of Diet Cokes and Brain-Focused Economics. March, 2018.
Richard McKenzie, Predictably Rational or Predictably Irrational? January, 2010.
Arnold Kling, Phools and Their Money. October, 2015.


Business Cycles

The United States and all other modern industrial economies experience significant swings in economic activity. In some years, most industries are booming and unemployment is low; in other years, most industries are operating well below capacity and unemployment is high. Periods of economic prosperity are typically called expansions or booms; periods of economic decline are called recessions or depressions. The combination of expansions and recessions, the ebb and flow of economic activity, is called the business cycle. Business cycles as we know them today were codified and analyzed by Arthur Burns and Wesley Mitchell in their 1946 book Measuring Business Cycles. One of Burns and Mitchell’s key insights was that many economic indicators move together. During an expansion, not only does output rise, but also employment rises and unemployment falls. New construction also typically increases, and inflation may rise if the expansion is particularly brisk. Conversely, during a recession, the output of goods and services declines, employment falls, and unemployment rises; new construction also declines. In the era before World War II, prices also typically fell during a recession (i.e., inflation was negative); since the 1950s prices have continued to rise during downturns, though more slowly than during expansions (i.e., the rate of inflation falls). Burns and Mitchell defined a recession as a period when a broad range of economic indicators falls for a sustained period, roughly at least half a year. Business cycles are dated according to when the direction of economic activity changes. The peak of the cycle refers to the last month before several key economic indicators—such as employment, output, and retail sales—begin to fall. The trough of the cycle refers to the last month before the same economic indicators begin to rise. Because key economic indicators often change direction at slightly different times, the dating of peaks and troughs is necessarily somewhat subjective. The National Bureau of Economic Research (NBER) is an independent research institution that dates the peaks and troughs of U.S. business cycles. Table 1 shows the NBER monthly dates for peaks and troughs of U.S. business cycles since 1890. Recent research has shown that the NBER’s reference dates for the period before World War I are not truly comparable with those for the modern era because they were determined using different methods and data. Figure 1 shows the unemployment rate since 1948, with periods that the NBER classifies as recessions shaded in gray. Clearly, a key feature of recessions is that they are times of rising unemployment. In many ways, the term “business cycle” is misleading. “Cycle” seems to imply that there is some regularity in the timing and duration of upswings and downswings in economic activity. Most economists, however, do not think there is. As Figure 1 shows, expansions and recessions occur at irregular intervals and last for varying lengths of time. For example, there were three recessions between 1973 and 1982, but then the 1982 trough was followed by eight years of uninterrupted expansion. The 1980 recession lasted just six months, while the 1981 recession lasted sixteen months. For describing the swings in economic activity, therefore, many modern economists prefer the term “short-run economic fluctuations” to “business cycle.”

Table 1. Business Cycle Peaks and Troughs in the United States, 1890–2004

Peak         Trough        Peak         Trough
July 1890    May 1891      May 1937     June 1938
Jan. 1893    June 1894     Feb. 1945    Oct. 1945
Dec. 1895    June 1897     Nov. 1948    Oct. 1949
June 1899    Dec. 1900     July 1953    May 1954
Sep. 1902    Aug. 1904     Aug. 1957    Apr. 1958
May 1907     June 1908     Apr. 1960    Feb. 1961
Jan. 1910    Jan. 1912     Dec. 1969    Nov. 1970
Jan. 1913    Dec. 1914     Nov. 1973    Mar. 1975
Aug. 1918    Mar. 1919     Jan. 1980    July 1980
Jan. 1920    July 1921     July 1981    Nov. 1982
May 1923     July 1924     July 1990    Mar. 1991
Oct. 1926    Nov. 1927     Mar. 2001    Nov. 2001
Aug. 1929    Mar. 1933

Causes of Business Cycles

Just as there is no regularity in the timing of business cycles, there is no reason why cycles have to occur at all. The prevailing view among economists is that there is a level of economic activity, often referred to as full employment, at which the economy could stay forever. Full employment refers to a level of production in which all the inputs to the production process are being used, but not so intensively that they wear out, break down, or insist on higher wages and more vacations. When the economy is at full employment, inflation tends to remain constant; only if output moves above or below normal does the rate of inflation systematically tend to rise or fall. If nothing disturbs the economy, the full-employment level of output, which naturally tends to grow as the population increases and new technologies are discovered, can be maintained forever. There is no reason why a time of full employment has to give way to either an inflationary boom or a recession. Business cycles do occur, however, because disturbances to the economy of one sort or another push the economy above or below full employment. Inflationary booms can be generated by surges in private or public spending. For example, if the government spends a lot to fight a war but does not raise taxes, the increased demand will cause not only an increase in the output of war matériel, but also an increase in the take-home pay of defense workers. The output of all the goods and services that these workers want to buy with their wages will also increase, and total production may surge above its normal, comfortable level. Similarly, a wave of optimism that causes consumers to spend more than usual and firms to build new factories may cause the economy to expand more rapidly than normal. Recessions or depressions can be caused by these same forces working in reverse. A substantial cut in government spending or a wave of pessimism among consumers and firms may cause the output of all types of goods to fall. Another possible cause of recessions and booms is monetary policy. The Federal Reserve System strongly influences the size and growth rate of the money stock, and thus the level of interest rates in the economy. Interest rates, in turn, are a crucial determinant of how much firms and consumers want to spend. A firm faced with high interest rates may decide to postpone building a new factory because the cost of borrowing is so high. Conversely, a consumer may be lured into buying a new home if interest rates are low and mortgage payments are therefore more affordable. Thus, by raising or lowering interest rates, the Federal Reserve is able to generate recessions or booms.

Figure 1. Unemployment Rate and Recessions
Source: The data are from the Bureau of Labor Statistics. Note: The series graphed is the seasonally adjusted civilian unemployment rate for those age sixteen and over. The shaded areas indicate recessions.

This description of what causes business cycles reflects the Keynesian or new Keynesian view that cycles are the result of nominal rigidities.
Only when prices and inflationary expectations are not fully flexible can fluctuations in overall demand cause large swings in real output. An alternative view, referred to as the new classical framework, holds that modern industrial economies are quite flexible. As a result, a change in spending does not necessarily affect real output and employment. For example, in the new classical view a change in the stock of money will change only prices; it will have no effect on real interest rates and thus on people’s willingness to invest. In this alternative framework, business cycles are largely the result of disturbances in productivity and tastes, not of changes in aggregate demand. The empirical evidence is strongly on the side of the view that deviations from full employment are often the result of spending shocks. Monetary policy, in particular, appears to have played a crucial role in causing business cycles in the United States since World War II. For example, the severe recessions of both the early 1970s and the early 1980s were directly attributable to decisions by the Federal Reserve to raise interest rates. On the expansionary side, the inflationary booms of the mid-1960s and the late 1970s were both at least partly due to monetary ease and low interest rates. The role of money in causing business cycles is even stronger if one considers the era before World War II. Many of the worst prewar depressions, including the recessions of 1908, 1921, and the Great Depression of the 1930s, were to a large extent the result of monetary contraction and high real interest rates. In this earlier era, however, most monetary swings were engendered not by deliberate monetary policy but by financial panics, policy mistakes, and international monetary developments.

Historical Record of Business Cycles

Table 2 shows the peak-to-trough decline in industrial production, a broad monthly measure of manufacturing and mining activity, in each recession since 1890. The industrial production series used was constructed to be comparable over time. Many other conventional macroeconomic indicators, such as the unemployment rate and real GDP, are not consistent over time. The prewar versions of these series were constructed using methods and data sources that tended to exaggerate cyclical swings. As a result, these conventional indicators yield misleading estimates of the degree to which business cycles have moderated over time.

Table 2. Peak-to-Trough Decline in Industrial Production

Year of NBER Peak   % Decline     Year of NBER Peak   % Decline
1890                −5.3          1937                −32.5
1893                −17.3         1945                −35.5
1895                −10.8         1948                −10.1
1899                −10.0         1953                −9.5
1902                −9.5          1957                −13.6
1907                −20.1         1960                −8.6
1910                −9.1          1969                −7.0
1913                −12.1         1973                −13.1
1918                −6.2          1980                −6.6
1920                −32.5         1981                −9.4
1923                −18.0         1990                −4.1
1926                −6.0          2001                −6.2
1929                −53.6

Source: The industrial production data for 1919–2004 are from the Board of Governors of the Federal Reserve System. The series before 1919 is an adjusted and smoothed version of the Miron-Romer index of industrial production. This series is described in the appendix to “Remeasuring Business Cycles” by Christina D. Romer. Note: The peak-to-trough decline is calculated using the actual peaks and troughs in the industrial production series. These turning points often differ from the NBER dates by a few months, and occasionally by as much as a year.

The empirical record on the duration and severity of recessions over time reflects the evolution of economic policy.
The recessions of the pre–World War I era were relatively frequent and quite variable in size. This is consistent with the fact that before World War I, the government had little influence on the economy. Prewar recessions stemmed from a wide range of private-sector-induced fluctuations in spending, such as investment busts and financial panics, that were left to run their course. As a result, recessions occurred frequently, and some were large and some were small. After World War I the government became much more involved in managing the economy. Government spending and taxes as a fraction of GDP rose substantially in the 1920s and 1930s, and the Federal Reserve was established in 1914. Table 2 makes clear that the period between the two world wars was one of extreme volatility. The declines in industrial production in the recessions of 1920, 1929, and 1937 were larger than in any recessions in the pre–World War I and post–World War II periods. A key factor in these extreme fluctuations was the replacement, by the 1920s, of some of the private-sector institutions that had helped the U.S. economy weather prewar fluctuations with government institutions that were not yet fully functional. The history of the interwar era is perhaps best described as a painful learning period for the Federal Reserve. The downturn of the mid-1940s obviously reflects the effect of World War II. The war generated an incredible boom in economic activity, as production surged in response to massive government spending. The end of wartime spending led to an equally spectacular drop in industrial production as the economy returned to more normal levels of labor and capital utilization. Recessions in the early postwar era were of roughly the same average severity as those before World War I, although they were somewhat less frequent than in the earlier period and were more consistently of moderate size. The decreasing frequency of downturns reflects progress in economic policymaking. The Great Depression brought about large strides in the understanding of the economy and the capacity of government to moderate cycles. The Employment Act of 1946 mandated that the government use the tools at its disposal to stabilize output and employment. And indeed, economic policy since World War II has almost certainly counteracted some shocks and hence prevented some recessions. In the early postwar era, however, policymakers tended to carry expansionary policy too far, and in the process caused inflation to rise. As a result, policymakers, particularly the Federal Reserve, felt compelled to adopt contractionary policies that led to moderate recessions in order to bring inflation down. This boom-bust cycle was a common feature of the 1950s, 1960s, and 1970s. Recessions in the United States have become noticeably less frequent and severe since the mid-1980s. The nearly decade-long expansions of the 1980s and 1990s were interrupted by only very mild recessions in 1990 and 2001. Economists attribute this moderation of cycles to a number of factors, including the increasing importance of services (a traditionally stable sector of the economy) and a decline in adverse shocks, such as oil price increases and fluctuations in consumer and investor sentiment. Most economists believe that improvements in monetary policy, particularly the end of the pattern of overexpansion followed by deliberate contraction, have been a significant factor as well.
In addition to reductions in the frequency and severity of downturns over time, the effects of recessions on individuals in the United States and other industrialized countries almost surely have been lessened in recent decades. The advent of unemployment insurance and other social welfare programs means that recessions no longer wreak the havoc on individuals' standards of living that they once did.

About the Author

Christina D. Romer is a professor of economics at the University of California, Berkeley, and co-director of the Program in Monetary Economics at the National Bureau of Economic Research.

Further Reading

Burns, Arthur F., and Wesley C. Mitchell. Measuring Business Cycles. New York: National Bureau of Economic Research, 1946.
Friedman, Milton, and Anna Jacobson Schwartz. A Monetary History of the United States, 1867–1960. Princeton: Princeton University Press for NBER, 1963.
Romer, Christina D. "Changes in Business Cycles: Evidence and Explanations." Journal of Economic Perspectives 13 (Spring 1999): 23–44.
Romer, Christina D. "Remeasuring Business Cycles." Journal of Economic History 54 (September 1994): 573–609.

Related Links

Financial Crisis of 2008, from the Concise Encyclopedia of Economics.
Bubbles, from the Concise Encyclopedia of Economics.
Stock Market, from the Concise Encyclopedia of Economics.
Edward C. Prescott biography, from the Concise Encyclopedia of Economics.
Jan Tinbergen biography, from the Concise Encyclopedia of Economics.
Ludwig von Mises biography, from the Concise Encyclopedia of Economics.
Robert P. Murphy, The Importance of Capital in Economic Theory. May 2014.
Boettke on the Austrian Perspective on Business Cycles and Monetary Policy. EconTalk, January 2009.
Don Boudreaux on Macroeconomics and Austrian Business Cycle Theory. EconTalk, April 2009.
Shlaes on the Great Depression. EconTalk, June 2007.
Lucas on Growth, Poverty, and Business Cycles. EconTalk, February 2007.
Ramey on Stimulus and Multipliers. EconTalk, October 2011.
Gene Epstein on Gold, the Fed, and Money. EconTalk, June 2008.
Robert Solow on Growth and the State of Economics. EconTalk, October 2014.


Auctions

When most people hear the word "auction," they think of the open-outcry, ascending-bid (or English) auction. But this kind of auction is only one of many. Fundamentally, an auction is an economic mechanism whose purpose is the allocation of goods and the formation of prices for those goods via a process known as bidding. Depending on the properties of the bidders and the nature of the items to be auctioned, various auction structures may be either more efficient or more profitable to the seller than others. As with any well-designed economic mechanism, the auction designer assumes that individuals will act strategically and may hold private information relevant to the decision at hand. Auction design is a careful balance of encouraging bidders to reveal valuations, discouraging cheating or collusion, and maximizing revenues.

William Vickrey first established the taxonomy of auctions based on the order in which the auctioneer quotes prices and the bidders tender their bids. He established four major (one-sided) auction types: (1) the ascending-bid (open, oral, or English) auction; (2) the descending-bid (Dutch) auction; (3) the first-price, sealed-bid auction; and (4) the second-price, sealed-bid (Vickrey) auction.

The Four Basic Auction Types

The most common type of auction, the English auction, is often used to sell art, wine, antiques, and other goods. In it, the auctioneer opens the bidding at a reserve price (which may be zero), the lowest price he is willing to accept for the item. Once a bidder has announced interest at that price, the auctioneer solicits further bids, usually raising the price by a predetermined bid increment. This continues until no one is willing to increase the bid any further, at which point the auction is closed and the final bidder receives the item at his bid price. Because the winner pays his bid, this type of auction is known as a first-price auction.

The Dutch auction, also a first-price auction, is descending. That is, the auctioneer begins at a high price, higher than he believes the item will fetch, then decreases the price until a bidder finally calls out, "Mine!" The bidder then receives the item at the price at which he made the call. If multiple items are offered, the process continues until all items are sold. One of the primary advantages of Dutch auctions is speed. Since there are never more bids than there are items being auctioned, the process takes relatively little time. This is one reason they are used in places such as flower markets in Holland (hence the name "Dutch").

In the English and Dutch auctions, bidders receive information as others bid (or refrain from bidding). However, in the third type of auction, known as the first-price, sealed-bid auction, this is not the case. In this mechanism, each bidder submits a single bid in a sealed envelope. Then all of the envelopes are opened, the highest bidder is announced, and he receives the item at his bid price. This type of auction is most often used for refinancing credit and foreign exchange, among other (primarily financial) venues.

The fourth type is the second-price, sealed-bid auction, otherwise known as the Vickrey auction. As in the first-price, sealed-bid auction, bidders submit sealed envelopes in one round of bid submission. The highest bidder wins the item, but at the price offered by the second-highest bidder (or, in a multiple-item case, the highest unsuccessful bid). This type of auction is rarely used aside from setting the foreign exchange rates in some African countries.
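The difference between the two sealed-bid formats comes down to the payment rule. Here is a minimal sketch in Python, with hypothetical bids, of who wins and what he pays under each rule.

```python
# Minimal sketch of the two sealed-bid payment rules described above.
# The bids are hypothetical illustrative numbers.

def sealed_bid_outcome(bids, rule="first-price"):
    """Return (winning bidder index, price paid) for a sealed-bid auction.

    "first-price": the highest bidder pays his own bid.
    "second-price": the highest bidder pays the second-highest bid (Vickrey).
    """
    ranked = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    price = bids[winner] if rule == "first-price" else bids[runner_up]
    return winner, price

bids = [72, 85, 64, 90]
print(sealed_bid_outcome(bids, "first-price"))   # (3, 90): winner pays own bid
print(sealed_bid_outcome(bids, "second-price"))  # (3, 85): winner pays runner-up's bid
```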
Why So Many Auction Forms?

One might think that so many canonical auction forms are unnecessary, and that there is always a single best choice that will yield the most surplus to the seller. In fact, under some strict assumptions, the revenue equivalence theorem (also due to Vickrey) states that all four auction types will result in an identical level of revenue to the seller. However, these assumptions regarding the nature of the item's value and the risk attitudes of the bidders are very restrictive and rarely hold.

The first assumption of the theorem is that the asset being auctioned has an independent, private value to all bidders. This assumption tends to hold when the item is for personal consumption, without thought toward resale, as might be the case for furniture, art, or wine. In this case, the value of the item is considered to be personal and independent of the value others might place on it (independent, private values). The assumption does not hold when bidders perceive a value of resale, either of the item itself or of a by-product of the item. Buying land for the rights to the oil that lies beneath it would be a good example. In this case, the value is common; that is, individual bids are predicated not only on personal valuation, but also on the valuation of prospective buyers. Each bidder tries to estimate the value of an object using the same known measurements (common values), but their conclusions may vary widely.

In common-value environments, bidders may face the "winner's curse." If all of the bidders will eventually realize the same value from the item, then the primary differentiator between the bidders is their perception of that value. Absent special information about the item being purchased, the winner is the person with the largest positive error in his valuation, and, unless he is lucky, he will wind up losing money.

The second assumption of the revenue equivalence theorem is that all bidders are risk-neutral. The strict definition of risk neutrality is: given the choice between a guaranteed return r and a gamble with expected return also equal to r, the bidder is completely indifferent. The bidder who prefers the guaranteed return is said to be "risk-averse," while the bidder who prefers the gamble is said to be "risk-loving."

The style of auction a seller chooses depends on his judgment about which of these assumptions holds. If values are common rather than independent, the English auction yields higher seller revenue than the second-price, sealed-bid auction, which in turn yields higher revenues than the Dutch and first-price, sealed-bid auctions (which are tied). The rankings illustrate the strategic advantages of increased information. Because the English auction reveals all bids to all bidders, it permits dynamic updating of personal valuation. (If I see that others believe the real estate is worth more, I too may decide it is worth more.) In comparison, bidders, recognizing the winner's curse, bid less aggressively in first-price, sealed-bid auctions and shade their bids downward. Similar reasoning applies to Dutch (descending) auctions. Although no information is updated in a second-price, sealed-bid format, the winner pays only the bid of the next-highest bidder, and so bidders can raise their bids, secure that they will not be disadvantaged if rival bids are lower. In fact, in both the first-price, sealed-bid auction and the Dutch auction, no information is revealed and the bidder pays the value of his bid. Therefore, in terms of revenue maximization, it does not matter which of these two auctions a seller chooses; nor does it matter whether the bidders have private or common values.
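The winner's curse described above is easy to reproduce by simulation. The Python sketch below uses assumed parameters (a common value of 100, normally distributed signal noise, and bidders who naively bid their own estimates); it is an illustration, not a model of any particular market.

```python
# Monte Carlo sketch of the winner's curse in a common-value,
# first-price auction. All parameters are assumed for illustration.
import random

def average_winner_profit(n_bidders=5, true_value=100.0, noise_sd=20.0,
                          trials=10_000):
    total = 0.0
    for _ in range(trials):
        # Each bidder sees the common value plus independent noise and
        # naively bids his own estimate.
        estimates = [true_value + random.gauss(0.0, noise_sd)
                     for _ in range(n_bidders)]
        winning_bid = max(estimates)       # first-price: winner pays his bid
        total += true_value - winning_bid  # winner's realized profit
    return total / trials

print(f"{average_winner_profit():.1f}")
# Typically around -23: the winner is the bidder with the largest
# positive estimation error, so naive bidding loses money on average.
```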
What about the role of risk aversion? In first-price, sealed-bid and Dutch auctions, risk aversion causes bidders to bid slightly higher than they might otherwise. Since they have only one chance to bid, fear of losing the item induces overbidding. In the English and Vickrey auctions, however, bidders are induced to bid their true valuation, regardless of risk attitudes.

Once a seller has decided on which of the four basic auction forms to use, he can use many variations within the auction to further manipulate the outcome to maximize revenue. These mechanisms can have profound, and often counterintuitive, effects on bidding behavior—and therefore on outcomes. Among the available mechanisms are reserve prices, entry fees, invited bidders only, closing rules, lot sizes, proxy bidding, bidding increment rules, and postwin payment rules.

Auction Success and Failure—An Example

The 1994 U.S. Federal Communications Commission (FCC) auctions of wireless bandwidth provide a useful example of both the successes and the failures of auction design. The auction to allocate Personal Communications Service (PCS) spectrum had four primary goals: (1) to attain efficient allocation of spectrum, (2) to encourage rapid deployment and network buildout, (3) to attain diversity of ownership, and (4) to raise revenue. Goals 1, 2, and 4 are met by any well-designed auction, as the winner is the one who values the item most. PCS licenses are a classic common-values good, in that they have a common, large, but uncertain value, triggering the winner's curse. The FCC developed an elaborate network of rules to ensure the desired outcomes.

To encourage price discovery, the auction was a multiround, ascending-bid, first-price auction. The many licenses available covered the entire United States, allowing major complementarities and substitutes in this market. To allow bidding that took this into account, the auctions were simultaneous, and no auction ended until they all did (every license was open until there were no more bids on any of them). Further, because the FCC wanted to discourage bidders from sitting on the sidelines until the very end, an activity rule was imposed. These and many other rules were carefully balanced to ensure the desired outcome. The result was great success in maximizing revenues.

The 1994 FCC auction stumbled, however, in its goal of diversifying ownership. To achieve this goal, the FCC set aside two blocks (C and F) for entrepreneurs, female- and minority-owned firms, and regional companies. To that end, the FCC took the carefully designed auction and changed it just a little bit. Bidders in these two special blocks received a 25 percent bid credit. That is, if they bid eighty dollars for an item, the bid was treated as if they had bid one hundred dollars. Further, their deposit requirement was just one-fifth of what the other winning bidders paid. Lastly, "diversity bids" were offered a generous installment payment plan. Bidders had a month to furnish 10 percent of the bid and owed no more principal until seven years later. The interest for this loan was charged at the T-bill rate. Unfortunately, this seemingly small change had disastrous effects.
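To see why, consider a back-of-the-envelope sketch of the C/F-block terms, using hypothetical dollar figures; it anticipates the moral-hazard problem discussed next.

```python
# Back-of-the-envelope sketch of the C/F-block rules, with hypothetical
# figures. (Applying the 10% down payment to the raw bid is an
# assumption made for illustration.)
bid = 80.0                      # dollars the bidder offers
credited_bid = bid * 1.25       # 25% bid credit: an $80 bid counts as $100
down_payment = 0.10 * bid       # 10% due within a month
balance = bid - down_payment    # no further principal owed for seven years

# Seven years later the bidder can pay the balance or simply walk away,
# so the license is worth keeping only if its realized value covers the
# remaining 90% of the bid.
for realized_value in (60.0, 75.0, 90.0):
    choice = "pay the balance" if realized_value >= balance else "default"
    print(f"license worth ${realized_value:.0f}: {choice}")
```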
The payment policy created moral hazard (see insurance) by, in effect, providing bidders with low-cost insurance against big misestimates or drops in the value of the bandwidth. Since winning bidders had to make a down payment of only 10 percent, if, after seven years, the item turned out to be worth less than 90 percent of the bid price, then the purchaser could simply default. This is precisely what happened. Companies bought the licenses and invested 10 percent, and then declared bankruptcy when the license turned out to be worth less than 90 percent of the bid. Nearly every company that won a license in the C or F blocks in the 1994 auction either went bankrupt or was bought by a larger firm. In the end, the FCC's ham-fisted pursuit of a noble goal destroyed this segment of the auction entirely. PCS auctions continue today, though they have been massively restructured.

About the Author

Leslie R. Fine is a scientist in the Information Dynamics Lab at HP Labs in Palo Alto, California.

Further Reading

Ashenfelter, Orley. "How Auctions Work for Wine and Art." Journal of Economic Perspectives 3 (1989): 23–26.
Kagel, J. H. "Auctions: A Survey of Experimental Research." In John H. Kagel and Alvin E. Roth, eds., The Handbook of Experimental Economics. Princeton: Princeton University Press, 1995. Pp. 1–86.
Klemperer, P. D., ed. The Economic Theory of Auctions. Cheltenham, U.K.: Edward Elgar, 1999.
McAfee, R. P., and J. McMillan. "Auctions and Bidding." Journal of Economic Literature 25 (1987): 699–738.
Milgrom, P. R. "Auctions and Bidding: A Primer." Journal of Economic Perspectives 3 (1989): 3–22.
Milgrom, P. R. "Putting Auction Theory to Work: The Simultaneous Ascending Auction." Journal of Political Economy 108, no. 2 (2000): 245–272.

Related Links

Vernon Smith on Markets and Experimental Economics. EconTalk, May 2007.


Brand Names

Consumers pay a higher price for brand-name products than for products that do not carry an established brand name. Because this involves paying extra for what some consider an identical product that merely has been advertised and promoted, brand names may appear to be economically wasteful. This argument was behind the decision to eliminate all brand names on goods produced in the Soviet Union immediately after the 1917 Communist revolution. The problems this experiment caused—problems described by economist Marshall Goldman—suggest that brand names serve an important economic function.

When the producers of products are not identified with brand names, a crucial element of the market mechanism cannot operate because consumers cannot use their past experience to know which products to buy and which not to buy. In particular, consumers can neither punish companies that supply low-quality products by stopping their purchases nor reward companies that supply high-quality products by increasing their purchases. Thus, when all brand names, including factory production marks, were eliminated in the Soviet Union, unidentified producers manufacturing indistinguishable products each had an incentive to supply lower-quality goods. And the inability to punish these producers created significant problems for consumers.

Consumer reliance on brand names gives companies the incentive to supply high-quality products because they can take advantage of superior past performance to charge higher prices. Benjamin Klein and Keith Leffler (1981) showed that this price premium paid for brand-name products facilitates market exchange. A company that creates an established brand for which it can charge higher prices knows that if it supplies poor products and its future demand declines, it will lose the stream of income from the future price premium it would otherwise have earned on its sales. This decrease in future income amounts to a depreciation in the market value of the company's brand name. A company's brand-name capital, therefore, is a form of collateral that ensures company performance. Companies without valuable brand names that are not earning price premiums on their products, on the other hand, have less to lose when they supply low-quality products and their demand falls. Therefore, while consumers may receive a direct benefit for the extra price they pay for brand-name products, such as the status of driving a BMW, the higher price also creates market incentives for companies with valuable brand names to maintain and improve product quality because they have something to lose if they perform poorly.

Brand-name quality assurance is especially important when consumers lack complete information about product quality at the time of purchase. Companies may take advantage of this lack of information by shaving product quality, thereby lowering costs and increasing short-term profits. A company that takes such actions, however, will experience a decrease in its future demand, and therefore in its long-term profits. The greater the value of a company's brand name—that is, the greater the present value of the extra profit a company earns on its sales—the more likely it is that this long-term negative effect on profits will outweigh any short-term positive effect and deter a policy of intentional quality deterioration. Moreover, the greater the value of a company's brand name, the more likely the company is to take quality-control precautions.
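As a minimal sketch of this trade-off, with hypothetical numbers not drawn from the Klein-Leffler paper itself: a firm shaves quality only if the one-time gain exceeds the present value of the premium stream it would forfeit.

```python
# Minimal sketch of the brand-name-as-collateral logic, with
# hypothetical numbers (not from Klein and Leffler 1981).

def brand_name_capital(premium_per_period, discount_rate):
    """Present value of a perpetual per-period price premium."""
    return premium_per_period / discount_rate

one_shot_gain = 50.0   # one-time cost saving from shaving quality
premium = 8.0          # extra per-period profit the brand name earns
rate = 0.05            # per-period discount rate

capital = brand_name_capital(premium, rate)  # 160.0
print("shave quality" if one_shot_gain > capital else "maintain quality")
# Forfeiting a premium stream worth 160 in present value to save 50
# once is a bad trade, so the firm maintains quality.
```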
To protect its brand name, a company will want to make sure its consumers are satisfied. When it is difficult to determine the quality of a product before purchase and the consequences of poor quality are significant, it makes economic sense for consumers to rely on brand names and the company reputations associated with them. By paying more for a brand-name product in those circumstances, consumers are not acting irrationally. Consumers know that companies with established reputations for consistent high quality have more to lose if they do not perform well—namely, the loss of the ability to continue to charge higher prices. A company's high reputation indicates not only that the company has performed well in the past, but also that it will perform well in the future because it has an economic incentive to maintain and improve the quality of its products. A consumer who pays a high price for a brand-name product is paying for the assurance of increased quality.

When a company performs poorly, the brand-name, market-enforced sanction it faces is usually much greater than any court-enforced legal sanction it might face. Consider, for example, the case of defective Firestone tires on Ford Explorer sport-utility vehicles in 2000. Because consumers cannot ascertain the quality of tires by direct examination, they rely largely on the tire supplier's brand name, which was badly damaged in this case. One day after Bridgestone (Firestone's Japan-based parent company) announced the recall of the defective tires, Bridgestone's stock price dropped nearly 20 percent; it continued to fall over the next three weeks as additional information about the problem was disclosed. Overall, this amounted to a decline of nearly 40 percent in Bridgestone's stock-market value relative to the Nikkei general market index. Ford's stock price did not drop initially, but eventually it fell about 18 percent relative to the S&P 500 index over the same period as information was revealed that Ford was aware of the possibility of tire failure more than a year before the tire recall.

These stock-market declines amounted to losses of about $7 billion in Bridgestone's market value and nearly $10 billion in Ford Motor Company's market value—market measures of each company's future lost profit caused by these events. These costs were substantially greater than the direct costs associated with the recall and liability litigation, estimated by Bridgestone at $754 million and by Ford at $590 million. Although these direct costs clearly were substantial, they were dwarfed by the brand-name market costs borne by Bridgestone and Ford, which were between some nine and seventeen times as large.

Similar market effects occurred in 1993 when E. coli bacteria in the hamburger meat purchased by Jack in the Box killed four people and sickened about five hundred. Although Jack in the Box reacted quickly to the food poisoning and took actions to prevent its recurrence, its stock-market value fell by more than 30 percent when this information was disclosed, or more than double the direct litigation and recall costs. Even in cases where the problem is not strictly the company's "fault," such as the 1982 Tylenol tampering cases that led to seven poisoning deaths, the $2 billion (or more than 20 percent) decline in stock-market value borne by the producer, Johnson & Johnson, was almost ten times as great as the company's direct recall and litigation costs.
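The "nine and seventeen times" comparison follows directly from the figures quoted above, as this small check shows.

```python
# Checking the "nine to seventeen times" comparison using the figures
# quoted in the text (all amounts in millions of dollars).
bridgestone_market_loss = 7_000   # about $7 billion
bridgestone_direct_cost = 754     # Bridgestone's recall/litigation estimate
ford_market_loss = 10_000         # nearly $10 billion
ford_direct_cost = 590            # Ford's recall/litigation estimate

print(round(bridgestone_market_loss / bridgestone_direct_cost, 1))  # 9.3
print(round(ford_market_loss / ford_direct_cost, 1))                # 16.9
```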
While the government regulates the quality of products, the regulatory cost that can be imposed on companies is generally a small fraction of the economic cost that the market imposes on poor-performing companies with established brand names. If those companies had lacked brand names, the economic punishment they suffered would have been much smaller.

Because brand-name companies have a greater incentive to ensure high quality, consumers who buy brand-name products are necessarily paying for something: the added assurance that the company has taken the necessary measures to protect its reputation for quality. Therefore, even for purchases of a "standardized" product such as aspirin, where most suppliers purchase the basic ingredient, acetylsalicylic acid, from the same manufacturer, it may make sense for consumers to purchase a higher-priced brand-name product. Consumers are not ignorant or irrational when they buy an advertised brand-name aspirin rather than a non-brand-name product at a lower price. Bottled aspirin supplied by brand-name and non-brand-name producers may differ technologically in dissolve rate, shelf life, and other factors. But more important, the products differ economically. A lower-priced non-brand-name aspirin is not economically equivalent to higher-priced brand-name aspirin, because a company selling aspirin under a valuable brand name has more to lose if something goes wrong. The brand-name aspirin supplier, therefore, has a greater economic incentive to take added precautions in producing the product.

Similar economic forces are at work when multiple generic drug companies produce the same drug. Because pharmacies generally have an incentive to purchase the lowest-cost generic variant, each generic company has the incentive to lower costs, including reducing its quality-control efforts, subject only to imperfect FDA audits. When companies do not earn a large price premium on their products, the potential sanction the companies face for poor quality control is much lower than the economic cost borne by brand-name companies.

Seen in this light, the question is not whether consumers are ignorant or irrational when they pay a higher price for a brand-name product, but whether they are paying too much for the additional quality assurance brand names necessarily provide. Even people who assume that all aspirin is alike spend some money on brand-name assurance, since they do not buy non-brand-name aspirin off the back of a pickup truck at a swap meet. Instead, they may buy "lower-brand-name" aspirin, such as aspirin carrying the brand name of a chain drugstore. It is significant, however, that consumers buy a much smaller share of such "lower-brand-name" aspirin when purchasing children's aspirin than when buying adult-dosage aspirin. Many people decide, as evidenced by their behavior, that although they are willing to purchase less brand-name assurance for themselves, they want the higher-quality assurance for their children, for whom quality-control considerations may be more important.

About the Author

Benjamin Klein is professor emeritus of economics at UCLA and director, LECG, LLC.

Further Reading

Goldman, Marshall. "Product Differentiation and Advertising: Some Lessons from the Soviet Experience." Journal of Political Economy 68 (1960): 346–357.
Klein, Benjamin, and Keith Leffler. "The Role of Market Forces in Assuring Contractual Performance." Journal of Political Economy 89 (1981): 615–641.
Mitchell, Mark. "The Impact of External Parties on Brand-Name Capital: The 1982 Tylenol Poisonings and Subsequent Cases." Economic Inquiry 27, no. 4 (1989): 601–618.

Related Links

Advertising, from the Concise Encyclopedia of Economics.
Consumer Protection, from the Concise Encyclopedia of Economics.
Rory Sutherland on Alchemy. EconTalk, November 2019.
Postrel on Style. EconTalk, November 2006.
Morgan Rose, Shedding Light on Market Power. December 2002.
