
Balance of Payments

Few subjects in economics have caused so much confusion—and so much groundless fear—in the past four hundred years as the thought that a country might have a deficit in its balance of payments. This fear is groundless for two reasons: (1) there never is a deficit, and (2) it would not necessarily hurt anything if there were one.

The balance-of-payments accounts of a country record the payments and receipts of the residents of the country in their transactions with residents of other countries. If all transactions are included, the payments and receipts of each country are, and must be, equal. Any apparent inequality simply leaves one country acquiring assets in the others. For example, if Americans buy automobiles from Japan, and have no other transactions with Japan, the Japanese must end up holding dollars, which they may hold in the form of bank deposits in the United States or in some other U.S. investment. The payments Americans make to Japan for automobiles are balanced by the payments Japanese make to U.S. individuals and institutions, including banks, for the acquisition of dollar assets. Put another way, Japan sold the United States automobiles, and the United States sold Japan dollars or dollar-denominated assets such as treasury bills and New York office buildings.

Although the totals of payments and receipts are necessarily equal, there will be inequalities—excesses of payments or receipts, called deficits or surpluses—in particular kinds of transactions. Thus, there can be a deficit or surplus in any of the following: merchandise trade (goods), services trade, foreign investment income, unilateral transfers (foreign aid), private investment, the flow of gold and money between central banks and treasuries, or any combination of these or other international transactions. The statement that a country has a deficit or surplus in its “balance of payments” must refer to some particular class of transactions.
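The offsetting entries in the automobile example can be sketched numerically. This is a minimal illustration only; the $20 billion figure is invented for the sketch, not taken from the text:

```python
# Hypothetical sketch of the U.S.-Japan automobile example.
# The $20 billion figure is invented for illustration; the point is
# that every payment abroad is matched by an offsetting receipt.

auto_imports = 20.0  # billions of dollars paid to Japan for cars

# U.S. payments to foreigners are recorded as negative entries,
# U.S. receipts from foreigners as positive entries.
us_ledger = {
    "payments for imported automobiles": -auto_imports,
    # Japan ends up holding dollars, i.e., it buys dollar assets
    # (bank deposits, Treasury bills, office buildings) from Americans:
    "receipts from selling dollar assets to Japan": +auto_imports,
}

# Including all transactions, payments and receipts must balance.
total = sum(us_ledger.values())
assert total == 0.0
```

The design point is simply double-entry accounting: the apparent "deficit" in goods is the mirror image of a "surplus" in asset sales, so the overall ledger always sums to zero.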
As Table 1 shows, in 2004 the United States had a deficit in goods of $665.4 billion but a surplus in services of $48.8 billion.

Many different definitions of the balance-of-payments deficit or surplus have been used in the past. Each definition has different implications and purposes. Until about 1973 attention was focused on a definition of the balance of payments intended to measure a country’s ability to meet its obligation to exchange its currency for other currencies or for gold at fixed exchange rates. To meet this obligation, countries maintained a stock of official reserves, in the form of gold or foreign currencies, that they could use to support their own currencies. A decline in this stock was considered an important balance-of-payments deficit because it threatened the ability of the country to meet its obligations. But that particular kind of deficit, by itself, was never a good indicator of the country’s financial position. The reason is that it ignored the likelihood that the country would be called on to meet its obligation and the willingness of foreign or international monetary institutions to provide support.

After 1973, interest in official reserve positions as a measure of balance of payments greatly diminished as the major countries gave up their commitment to convert their currencies at fixed exchange rates. This reduced the need for reserves and lessened concern about changes in the size of reserves. Since 1973, discussions of “the” balance-of-payments deficit or surplus usually refer to what is called the current account. This account contains trade in goods and services, investment income earned abroad, and unilateral transfers. It excludes the capital account, which includes the acquisition or sale of securities or other property.
Because the current account and the capital account add up to the total account, which is necessarily balanced, a deficit in the current account is always accompanied by an equal surplus in the capital account, and vice versa. A deficit or surplus in the current account cannot be explained or evaluated without simultaneous explanation and evaluation of an equal surplus or deficit in the capital account. A country is more likely to have a deficit in its current account the higher its price level, the higher its gross national product, the higher its interest rates, the lower its barriers to imports, and the more attractive its investment opportunities—all compared with conditions in other countries—and the higher its exchange rate. The effects of a change in one of these factors on the current account balance cannot be predicted without considering the effect on the other causal factors. For example, if the U.S. government increases tariffs, Americans will buy fewer imports, thus reducing the current account deficit. But this reduction will occur only if one of the other factors changes to bring about a decrease in the capital account surplus. If none of these other factors changes, the reduced imports from the tariff increase will cause a decline in the demand for foreign currency (yen, deutsche marks, etc.), which in turn will raise the value of the U.S. dollar (see foreign exchange). The increase in the value of the dollar will make U.S. exports more expensive and imports cheaper, offsetting the effect of the tariff increase. The net result is that the tariff increase brings no change in the current account balance.

Table 1. The U.S. Balance of Payments, 2004
(dollar amounts in billions; + = surplus; − = deficit)

Goods                                     −665.4
Services                                   +48.8
Investment income                          +30.4
Balance on goods, services, and income    −587.2
Unilateral transfers                       −80.9
Balance on current account                −668.1
Nonofficial capital*                      +270.6
Official reserve assets                   +397.5
Balance on capital account                +668.1
Total balance                                  0

* Includes statistical discrepancy.
Source: U.S. Department of Commerce, Survey of Current Business.

Contrary to the general perception, the existence of a current account deficit is not in itself a sign of bad economic policy or bad economic conditions. If the United States has a current account deficit, all this means is that the United States is importing capital. And importing capital is no more unnatural or dangerous than importing coffee. The deficit is a response to conditions in the country. It may be a response to excessive inflation, to low productivity, or to inadequate saving. It may just as easily occur because investments in the United States are secure and profitable. Furthermore, the conditions to which the deficit responds may be good or bad and may be the results of good or bad policy; but if there is a problem, it is in the underlying conditions and not in the deficit per se.

During the 1980s there was a great deal of concern about the shift of the U.S. current account balance from a surplus of $5 billion in 1981 to a deficit of $161 billion in 1987. This shift was accompanied by an increase of about the same amount in the U.S. deficit in goods. Claims that this shift in the international position was causing a loss of employment in the United States were common, but that was not true. In fact, between 1981 and 1987, the number of people employed rose by more than twelve million, and employment as a percentage of population rose from 60 percent to 62.5 percent.
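Since a current account deficit must be matched by an equal capital account surplus, the identity can be checked directly against the Table 1 balances. A minimal sketch, using the figures as published:

```python
# Checking the balance-of-payments identity with the 2004 figures
# from Table 1 (billions of dollars, as published).

balance_on_current_account = -668.1   # goods, services, income, transfers
balance_on_capital_account = +668.1   # nonofficial capital + official reserves

# Because every payment is matched by a receipt, the total balance is zero.
total_balance = balance_on_current_account + balance_on_capital_account
assert total_balance == 0.0

# The capital-account balance can also be built up from its components:
nonofficial_capital = 270.6     # includes the statistical discrepancy
official_reserve_assets = 397.5
assert round(nonofficial_capital + official_reserve_assets, 1) == 668.1
```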
Many people were also anxious about the other side of the accounts—the inflow of foreign capital that accompanied the current account deficit—fearing that the United States was becoming owned by foreigners. The inflow of foreign capital did not, however, reduce the assets owned by Americans. Instead, it added to the capital within the country. In any event, the amount was small relative to the U.S. capital stock. Measurement of the net amount of foreign-owned assets in the United States (the excess of foreign assets in the United States over U.S. assets abroad) is very uncertain. At the end of 1988, however, it was surely much less than 4 percent of the U.S. capital stock and possibly even zero. Later, there was fear of what would happen when the capital inflow slowed down or stopped. But after 1987 it did slow down and the economy adjusted, just as it had adjusted to the big capital inflow earlier, by a decline in the current account and trade deficits.

These same concerns surfaced again in the late 1990s and early 2000s as the current account went from a surplus of $4 billion in 1991 to a deficit of $666 billion in 2004. The increase in the current account deficit, just as in the 1980s, was accompanied by an almost equal increase in the deficit in goods. Interestingly, the current account surpluses of 1981 and 1991 both occurred in the midst of a U.S. recession, and the large deficits occurred during U.S. economic expansions. This makes sense because U.S. imports are highly sensitive to U.S. economic conditions, falling more than proportionally when U.S. GDP falls and rising more than proportionally when U.S. GDP rises. Just as in the 1980s, U.S. employment expanded, with the U.S. economy adding more than twenty-one million jobs between 1991 and 2004. Also, employment as a percentage of population rose from 61.7 percent in 1991 to 64.4 percent in 2000 and, although it fell to 62.3 percent in 2004, was still modestly above its 1991 level.
How about the issue of foreign ownership? By the end of 2003, Americans owned assets abroad valued at market prices of $7.86 trillion, while foreigners owned U.S. assets valued at market prices of $10.52 trillion. The net international investment position of the United States, therefore, was −$2.66 trillion. This was only 8.5 percent of the U.S. capital stock.1

About the Author

Herbert Stein, who died in 1999, was a senior fellow at the American Enterprise Institute in Washington, D.C., and was on the board of contributors of the Wall Street Journal. He was chairman of the Council of Economic Advisers under Presidents Richard Nixon and Gerald Ford. The editor, David R. Henderson, with the help of Kevin Hoover and Mack Ott, updated the data and added the last two paragraphs.

Further Reading

Dornbusch, Rudiger, Stanley Fischer, and Richard Startz. Macroeconomics. 9th ed. New York: McGraw-Hill Irwin, 2003. For general concepts and theory, see pp. 298–332.
Economic Report of the President. 2004. For good, clear reasoning about balance of payments, see pp. 239–264.
Survey of Current Business. Online at: http://www.bea.gov/bea/pubs.htm (for current data).

Footnotes

1. If by capital stock we mean the net value of U.S. fixed reproducible assets, which was $31.4 trillion in 2003. See Survey of Current Business, September 2004, online at: http://www.bea.gov/bea/ARTICLES/2004/09September/Fixed_Assets.pdf.

Related Links

Pedro Schwartz, Commercial Reprisals are a Mistake. July 2018.
Don Boudreaux on Globalization and Trade Deficits. EconTalk, January 2008.
Jacob Viner, Studies in the Theory of International Trade.
Ludwig von Mises, The Theory of Money and Credit.


Bankruptcy

Bankruptcy is common in America today. Notwithstanding two decades of largely uninterrupted economic growth, the annual bankruptcy filing rate has quintupled, topping 1.5 million individuals annually. Recent years also have seen several of the largest and most expensive corporate bankruptcies in history. This confluence of skyrocketing personal bankruptcies in a period of prosperity, an increasingly expensive and dysfunctional Chapter 11 reorganization system, and the macroeconomic competitive pressures of globalization has spurred legislative efforts to reform the bankruptcy code.

History of Bankruptcy

Early English bankruptcy laws were designed to assist creditors in collecting the debtor’s assets, not to protect the debtor or discharge (forgive) his debts. The Bankruptcy Clause of the U.S. Constitution also reflects this procreditor purpose of early bankruptcy law. Under the Articles of Confederation, the states alone governed debtor-creditor relations. This situation led to diverse and contradictory state laws, many of which were prodebtor laws designed to favor farmers (see regulation). Like other provisions of the Constitution, the enumeration of the bankruptcy power in article I, section 8 was designed to encourage the development of a commercial republic and to temper the excesses of prodebtor state legislation that proliferated under the Articles of Confederation. As James Madison observed in Federalist number 42:

    The power of establishing uniform laws of bankruptcy is so intimately connected with the regulation of commerce, and will prevent so many frauds where the parties or their property may lie or be removed into different States that the expediency of it [i.e., Congress’s exclusive power to enact bankruptcy laws] seems not likely to be drawn into question.

The primary purpose of the Bankruptcy Clause was to protect creditors, not debtors, and in fact, debtors’ prisons persisted in many states well into the nineteenth century.
During the nineteenth century, the federal government exercised its bankruptcy powers only sporadically and in response to major economic downturns. The first bankruptcy law lasted from 1800 to 1803, the second from 1841 to 1843, and the third from 1867 to 1878. During the periods without a federal bankruptcy law, debtor-creditor relations were governed solely by the states. The first permanent federal bankruptcy law was enacted in 1898 and remained in effect, with amendments, until it was replaced with a comprehensive new law in 1978, the essential structure of which remains in place today. Because bankruptcy law intervenes only when a debtor is insolvent, nonbankruptcy and state law govern most issues relating to standard debtor-creditor relations, such as contracts, real estate mortgages, secured transactions, and collection of judgments. Federal bankruptcy law is thus a hybrid system of federal law layered on top of this foundation of state law, leading to variety in debtor-creditor regimes. Bankruptcy law is generally procedural in nature and therefore attempts to preserve nonbankruptcy substantive rights, such as whether a creditor has a valid claim to collect against the debtor in bankruptcy, unless modification is necessary to advance an overriding bankruptcy policy.

Bankruptcy Policies

Bankruptcy law serves three basic purposes: (1) to solve a collective action problem among creditors in dealing with an insolvent debtor, (2) to provide a “fresh start” to individual debtors overburdened by debt, and (3) to save and preserve the going-concern value of firms in financial distress by reorganizing rather than liquidating.

First, bankruptcy law solves a collective action problem among creditors. Nonbankruptcy debt collection law is an individualized process grounded in bilateral transactions between debtors and creditors. Outside bankruptcy, debt collection is essentially a race of diligence.
Creditors able to translate their claims against the debtor into claims against the debtor’s property are entitled to do so, subject to state laws that declare some of the debtor’s property, such as the debtor’s homestead, to be “exempt” from creditors’ claims. When a debtor is insolvent and there are not enough assets to satisfy all creditors, however, a common-pool problem arises (see tragedy of the commons). Each creditor has an incentive to try to seize assets of the debtor, even if this prematurely depletes the common pool of assets for creditors as a whole. Although creditors as a group may be better off by cooperating and working together to distribute the debtor’s assets in an orderly fashion, each individual creditor has an incentive to race to grab his share. If he waits and others do not, there may not be enough assets available to satisfy his claim. Bankruptcy stops this race of diligence in favor of an orderly distribution of the debtor’s assets through a collective proceeding that jointly involves anyone with a claim against the debtor. Once the debtor files for bankruptcy, all creditor collection actions are automatically “stayed,” prohibiting further collection actions without permission of the bankruptcy court. In addition, any collections by creditors from an insolvent debtor in the period preceding the debtor’s bankruptcy filing can be prohibited as a “preference.” One interesting policy option, not currently permitted, would be to let parties solve the common-pool problem in advance through contract and corporate law, making bankruptcy unnecessary. The second bankruptcy policy is the provision of a fresh start for individual debtors through a cancellation, or “discharge,” of his debts in bankruptcy. Although many rationales have been offered for the fresh start, none is wholly persuasive, and none provides a compelling rationale for the current American rule that the debtor’s right to a discharge is mandatory and nonwaiveable.
This requirement increases the risk of lending to the debtor, raising the cost of credit for all debtors and leading to the rationing and denial of credit to high-risk borrowers. Allowing debtors to waive or modify their discharge right in some or all situations might be more efficient and better for debtors because by modifying their discharge rights, debtors could get lower interest rates or other more favorable credit terms. Indeed, the American system is unique in providing a mandatory fresh-start policy.

Personal bankruptcy filing rates have risen dramatically over the past twenty-five years, from fewer than 200,000 annual filings in 1979 to more than 1.6 million in 2004. Personal bankruptcy filings were traditionally attributed to factors such as high personal debt, divorce, and unemployment. But given the unprecedented prosperity during the past twenty-five years—a period of generally low unemployment, declining divorce rate, low interest rates and rapid accumulation of household wealth due to a booming stock market and residential real estate market—this traditional model of the causes of consumer bankruptcy filings has become increasingly untenable (Zywicki 2005b). Scholars have suggested that the decline in the stigma associated with bankruptcy, changes in the relative economic benefits and costs of filing bankruptcy (especially the relaxation of the bankruptcy laws in the 1978 Bankruptcy Code), and changes in the consumer credit system itself have made individuals more willing to file bankruptcy than in the past (Zywicki 2005b). In response to this unprecedented rise in personal bankruptcies and the underlying reasons for it, Congress has proposed reforms to reduce abuse and fraud in the current system. One suggested reform is to require high-income filers to repay some of their debts out of their future income as a condition for filing bankruptcy (Jones and Zywicki 1999).
The third bankruptcy policy is the promotion of the reorganization of firms in financial distress. A firm confronting financial problems might be worth more as a going concern than it would be if it were closed and sold piecemeal to satisfy creditors’ claims. A firm’s assets may be more valuable when kept together and owned by that firm than if they are liquidated and sold to a third party. Such assets could include physical assets (e.g., custom-made machinery), human capital assets (such as management or a specially skilled workforce), or particular synergies between various assets of the company (such as knowledge of how best to exploit intellectual property). Thus, maintaining the existing combination of assets as a going concern, rather than liquidating the firm, could make creditors better off. The railroads at the turn of the twentieth century exemplify this principle. Rather than liquidating them and selling off the various pieces for scrap (e.g., tearing up the tracks and selling them as scrap steel), reorganization kept the rail network in place and the trains rolling, and creditors were paid out of the operating revenues of the reorganized firm. Other firms, however, may not be merely in financial distress. Some may be economically failed enterprises generating a value less than the opportunity costs of their assets. Economic efficiency, and concern for creditors, would require such firms to be liquidated and their assets redeployed to higher-valued uses. For instance, given the ubiquity and dominance of computers, it was obviously efficient to liquidate the venerable Smith-Corona typewriter company and allow its workers to retrain and its physical assets to be reallocated in the economy. It is difficult to distinguish a firm in financial distress from an economically failed enterprise, and it is doubtful that the current reorganization system is very accurate at making the distinction.
First, the decision whether to reorganize is made by a bankruptcy judge rather than by the market. The reorganization decision, therefore, is essentially a form of mini–central planning, with the bankruptcy judge making the planner’s decision whether to allow the business to continue operating or to shut it down. As such, the decision is subject to the standard knowledge and incentive problems that plague central planning generally (see friedrich august hayek). Second, the decision whether to file and with which court is made by the debtor himself and the debtor’s management staff, which will have obvious incentives to file in friendly courts and to push for reorganization and the preservation of their jobs. Third, the beneficiaries of reorganization efforts (incumbent management, workers, suppliers, etc.) have great incentives to participate in the bankruptcy case and to make their interests known to the judge. Secured creditors will accept a reorganization only if the company is worth more alive than dead. But unsecured creditors, who have no hope of recovering their investment if the company is killed, have an incentive to favor reorganization even if there is only a tiny probability that reorganization will work: a small probability of something is better than a certainty of nothing. Given the errors and inefficiencies inherent in the current system, some scholars have proposed replacing the current judicial-centered system or at least supplementing it with various market mechanisms. One such mechanism would be an auction of the assets of the company as a going concern (Baird 1986). Another would be ex ante collective contracts (such as provisions in a firm’s corporate charter) that would apply if the firm became insolvent and would put creditors on notice about the risks of dealing with a particular company, causing them to tailor their interest rates and other credit terms accordingly. The economic costs of inefficient reorganizations can be substantial.
First, in large reorganization cases, the direct costs of bankruptcy reorganization routinely exceed several hundred million dollars in professional and other fees. Second, there is an opportunity cost associated with retaining the current allocation of assets, even if temporarily. For instance, a failing business continues to occupy its current location and to retain its workers and assets, not only slowing the reallocation of these assets to higher-valued uses in other firms and industries, but also injuring consumers, suppliers, and others.

The Future of Bankruptcy Law

The past several years have seen concerted efforts to reform the bankruptcy laws to address many of the above concerns. The anomaly of skyrocketing consumer bankruptcy filings during an era of economic prosperity has spurred widespread support for efforts to reform the consumer bankruptcy system. A few such reforms would include requiring high-income debtors who can repay a substantial portion of their debts to do so by entering a Chapter 13 repayment plan rather than filing for Chapter 7 bankruptcy, limiting repeat filings, and limiting some property exemptions. The proposed bankruptcy reform legislation would also attempt to streamline and reduce the cost and delay of corporate Chapter 11 bankruptcy proceedings, especially as they apply to small business bankruptcies. Comprehensive bankruptcy reform legislation has been proposed in every Congress since the late 1990s but, notwithstanding overwhelming bipartisan support in both houses, has not yet been enacted. One reason is that various politicians introduced extraneous but controversial political issues; another reason is that bankruptcy professionals oppose reforms that would reduce the number of bankruptcies filed and the expense of bankruptcy proceedings.
On the other hand, the increasing pressure of economic globalization and the increasing challenges of bankruptcies involving multinational corporations have created incentives for bankruptcy reform. As investment capital increasingly flows worldwide, globalization creates strong incentives for national economies to adopt efficient economic policies, including bankruptcy policies. The current American bankruptcy system rests on investors’ willingness to voluntarily continue to invest in American firms despite the danger that capital investment will be trapped in an expensive and inefficient reorganization regime if the firm fails. By contrast, some major economies, such as Germany and Japan, have introduced more flexibility into their bankruptcy systems. Although many commentators have advocated establishing a uniform transnational bankruptcy system by treaty, devising a scheme that would gain assent from member countries would be difficult. Also, such a regime would likely be subject to many of the same interest-group pressures that characterize the American regime. The competitive forces of globalization may generate, instead of a “top-down” global bankruptcy system, an efficient and spontaneous convergence of bankruptcy systems throughout the world.

About the Author

Todd J. Zywicki is a professor of law at George Mason University’s School of Law and a senior research fellow of the James Buchanan Center, Program on Economics, Politics, and Philosophy. He was previously the director of the Office of Policy Planning at the Federal Trade Commission.

Further Reading

Baird, Douglas G. Elements of Bankruptcy. 3d ed. New York: Foundation Press, 2001.
Baird, Douglas G. “The Uneasy Case for Corporate Reorganization.” Journal of Legal Studies 15 (1986): 127–147.
Jackson, Thomas H. The Logic and Limits of Bankruptcy Law. Cambridge: Harvard University Press, 1986.
Jones, Edith H., and Todd J. Zywicki. “It’s Time for Means-Testing.” Brigham Young University Law Review 1999 (1999): 177–250.
Rasmussen, Robert K. “A Menu Approach to Corporate Bankruptcy.” Texas Law Review 71 (1992): 51–121.
Skeel, David A. Jr. Debt’s Dominion: A History of Bankruptcy Law in America. Princeton: Princeton University Press, 2001.
White, Michelle J. “Economic Versus Sociological Approaches to Legal Research: The Case of Bankruptcy.” Law and Society Review 25 (1991): 685–709.
Zywicki, Todd J. “The Bankruptcy Clause.” In Edwin Meese et al., ed., The Heritage Guide to the Constitution. Washington, D.C.: Heritage Foundation, 2005a. Pp. 112–114.
Zywicki, Todd J. “An Economic Analysis of the Consumer Bankruptcy Crisis.” Northwestern University Law Review 99, no. 4 (2005b): 1463–1541.
Zywicki, Todd J. “The Past, Present, and Future of Bankruptcy Law in America.” Michigan Law Review 101, no. 6 (2003): 2016–2036.

Related Links

Zywicki on Debt and Bankruptcy. EconTalk, March 2009.
Skeel on Bankruptcy and the Auto Industry Bailout. EconTalk, July 2011.
Zingales on Capitalism and Crony Capitalism. EconTalk, July 2012.
Michael D. Thomas, Does Economics Need More Than One Lesson? October 2019.
John J. Lalor, Cyclopaedia of Political Science, Political Economy, and the Political History of the United States.


Bank Runs

A run on a bank occurs when a large number of depositors, fearing that their bank will be unable to repay their deposits in full and on time, simultaneously try to withdraw their funds immediately. This may create a problem because banks keep only a small fraction of deposits on hand in cash; they lend out the majority of deposits to borrowers or use the funds to purchase other interest-bearing assets such as government securities. When a run comes, a bank must quickly increase its cash to meet depositors’ demands. It does so primarily by selling assets, often hastily and at fire-sale prices. As banks hold little capital and are highly leveraged, losses on these sales can drive a bank into insolvency.

The danger of bank runs has been frequently overstated. For one thing, a bank run is unlikely to cause insolvency. Suppose that depositors, worried about their bank’s solvency, start a run and switch their deposits to other banks. If their concerns about the bank’s solvency are unjustified, other banks in the same market area will generally gain from recycling funds they receive back to the bank experiencing the run. They would do this by making loans to the bank or by purchasing the bank’s assets at non-fire-sale prices. Thus, a run is highly unlikely to make a solvent bank insolvent. Of course, if the depositors’ fears are justified and the bank is economically insolvent, other banks will be unlikely to throw good money after bad by recycling their funds to the insolvent bank. As a result, the bank cannot replenish its liquidity and will be forced into default. But the run would not have caused the insolvency; rather, the recognition of the existing insolvency caused the run.

A more serious potential problem is spillover to other banks. The likelihood of this happening depends on what the “running” depositors do with their funds. They have three choices:

1. They can redeposit the money in banks that they think are safe, known as direct redeposit.
2. If they perceive no bank to be safe, they can buy treasury securities in a “flight to quality.” But what do the sellers of the securities do? If they deposit the proceeds in banks they believe are safe, as is likely, this is an indirect redeposit.
3. If neither the depositors nor the sellers of the treasury securities believe that any bank is safe, they hold the funds as currency outside the banking system. A run on individual banks would then be transformed into a run on the banking system as a whole.

If the run is either type 1 or type 2, no great harm is done. The deposits and reserves are reshuffled among the banks, possibly including overseas banks, but do not leave the banking system. Temporary loan disruptions may occur because borrowers have to transfer from deposit-losing to deposit-gaining banks, and interest rates and exchange rates (see foreign exchange) may change. But these costs are not the calamities that people often associate with bank runs. Higher costs could occur in a type 3 run, because currency (an important component of bank reserves) would be removed from the banking system. Banks operate on a fractional reserve basis, which means that they hold only a fraction of their deposits as reserves. When people try to convert their deposits into currency, the money supply shrinks, dampening economic activity in other sectors. In addition, almost all banks would sell assets to replenish their liquidity, but few banks would be buying. Losses would be large, and the number of bank failures would increase.

In practice, bank failures have been relatively infrequent. From the end of the Civil War through 1920 (after the Federal Reserve was established in 1913 but before the Federal Deposit Insurance Corporation was formed in 1933), the bank failure rate was lower, on average, than that of nonbanking firms. The failure rate increased sharply in the 1920s and again between 1929 and 1933, when nearly 40 percent of U.S. banks failed.
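The fractional-reserve arithmetic behind a type 3 run can be sketched with hypothetical numbers; the 10 percent reserve ratio and the dollar amounts below are assumptions for illustration, not figures from the text:

```python
# Hedged sketch: why currency withdrawals in a type 3 run shrink the
# money supply. The 10% reserve ratio and amounts are hypothetical.

reserve_ratio = 0.10
deposit_multiplier = 1 / reserve_ratio  # each reserve dollar supports $10 of deposits

currency_withdrawn = 5.0  # billions withdrawn and held outside the banking system

# Reserves leaving the system force a multiple contraction of deposits
# as banks shrink their balance sheets to restore their reserve ratios.
max_deposit_contraction = currency_withdrawn * deposit_multiplier
assert max_deposit_contraction == 50.0
```

In a type 1 or type 2 run, by contrast, the same reserves are merely reshuffled among banks, so no such contraction occurs.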
Yet, from 1875 through 1933, losses from failures averaged only 0.2 percent of total deposits in the banking system annually. Losses to depositors at failed banks averaged only a fraction of the annual losses suffered by bondholders of failed nonbanking firms. A survey of all failures of national banks from 1865 through 1936 by J. F. T. O’Connor, comptroller of the currency from 1933 through 1938, concluded that runs were a contributing cause in less than 15 percent of the three thousand failures. The fact that the number of runs on individual banks was far greater than this means that most runs did not lead to failures. The evidence suggests that most bank runs were then and are today type 1 or 2, and few were of the contagious type 3. Because a type 3 run—a run on the banking system—causes an outflow of currency, such a run can be identified by an increase in the ratio of currency to the money supply (most of the various measures of the money supply consist of currency in the hands of the public plus different types of bank deposits). Increases in this ratio have occurred in only four periods since the Civil War, and in only two—1893 and 1929–1933—did an unusually large number of banks fail. Thus, market forces and the banking system on its own successfully insulated runs on individual banks in most periods. Moreover, even in the 1893 and 1929–1933 incidents, the evidence is unclear whether the increase in bank failures caused the economic downturn or the economic downturn caused the bank failures. As a result of the introduction of deposit insurance in 1933, runs into currency are even less likely today. The threat of runs from perceived troubled large banks, which have sizable uninsured deposits, to perceived safe banks serves as a form of market discipline that may reduce the likelihood of runs on all banks by giving them an incentive to strengthen their financial positions. About the Author George G. Kaufman is the John F. 
Smith Professor of Finance and Economics at Loyola University in Chicago. He is also cochair of the Shadow Financial Regulatory Committee. Further Reading   Allen, Franklin, and Douglas Gale. “Optimal Financial Crises.” Journal of Finance 53, no. 4 (1998): 1245–1284. Benston, George J., Robert A. Eisenbeis, Paul M. Horvitz, Edward J. Kane, and George G. Kaufman. Perspectives on Safe and Sound Banking. Cambridge: MIT Press, 1986. Carlstrom, Charles T. “Bank Runs, Deposit Insurance, and Bank Regulation.” Parts 1 and 2. Federal Reserve Bank of Cleveland Economic Commentary, February 1 and 15, 1988. Diamond, Douglas W., and Philip H. Dybvig. “Bank Runs, Deposit Insurance and Liquidity.” Journal of Political Economy 91, no. 3 (1983): 401–419. Gorton, Gary. “Banking Panics and Business Cycles.” Oxford Economic Papers 40 (December 1988): 751–781. Kaufman, George G. “Bank Runs: Causes, Benefits and Costs.” Cato Journal 2, no. 3 (1988): 559–588. Kaufman, George G. “Banking Risk in Historical Perspective.” Research in Financial Services 1 (1989): 151–164. Neuberger, Jonathan A. “Depositor Discipline and Bank Runs.” Federal Reserve Bank of San Francisco, weekly letter, April 12, 1991. Schumacher, Liliana. “Bank Runs and Currency Runs in a System Without a Safety Net.” Journal of Monetary Economics 46, no. 1 (2000): 257–277. Tallman, Ellis. “Some Unanswered Questions About Bank Panics.” Federal Reserve Bank of Atlanta Economic Review, November/December 1988.   Related Links Lawrence H. White, Competing Money Supplies. Concise Encyclopedia of Economics. Eugene White on Bank Regulation. EconTalk, April 2012. Calomiris and Haber on Fragile by Design. EconTalk, February 2014. Selgin on the Fed. EconTalk, December 2010. Selgin on Free Banking. EconTalk, November 2008. Rustici on Smoot-Hawley and the Great Depression. EconTalk, January 2010.


Antitrust

Origins Before 1890, the only “antitrust” law was the common law. Contracts that allegedly restrained trade (e.g., price-fixing agreements) often were not legally enforceable, but they did not subject the parties to any legal sanctions, either. Nor were monopolies illegal. Economists generally believe that monopolies and other restraints of trade are bad because they usually reduce total output, and therefore the overall economic well-being for producers and consumers (see monopoly). Indeed, the term “restraint of trade” indicates exactly why economists dislike monopolies and cartels. But the law itself did not penalize monopolies. The Sherman Act of 1890 changed all that by outlawing cartelization (every “contract, combination . . . or conspiracy” that was “in restraint of trade”) and monopolization (including attempts to monopolize). The Sherman Act defines neither the practices that constitute restraints of trade nor monopolization. The second important antitrust statute, the Clayton Act, passed in 1914, is somewhat more specific. It outlaws, for example, certain types of price discrimination (charging different prices to different buyers), “tying” (making someone who wants to buy good A buy good B as well), and mergers—but only when the effects of these practices “may be substantially to lessen competition or to tend to create a monopoly.” The Clayton Act also authorizes private antitrust suits and triple damages, and exempts labor organizations from the antitrust laws. Economists did not lobby for, or even support, the antitrust statutes. Rather, the passage of such laws is generally ascribed to the influence of populist “muckrakers” such as Ida Tarbell, who frequently decried the supposed ability of emerging corporate giants (“the trusts”) to increase prices and exploit customers by reducing production. 
One reason most economists were indifferent to the law was their belief that any higher prices achieved by the supposed anticompetitive acts were more than outweighed by the price-reducing effects of greater operating efficiency and lower costs. Interestingly, Tarbell herself conceded, as did “trustbuster” Teddy Roosevelt, that the trusts might be more efficient producers. Only recently have economists looked at the empirical evidence (what has happened in the real world) to see whether the antitrust laws were needed. The popular view that cartels and monopolies were rampant at the turn of the century now seems incorrect to most economists. Thomas DiLorenzo (1985) has shown that the trusts against which the Sherman Act supposedly was directed were, in fact, expanding output many times faster than overall production was increasing nationwide; likewise, the trusts’ prices were falling faster than those of all enterprises nationally. In other words, the trusts were doing exactly the opposite of what economic theory says a monopoly or cartel must do to reap monopoly profits. Anticompetitive Practices In referring to contracts “in restraint of trade,” or to arrangements whose effects “may be substantially to lessen competition or to tend to create a monopoly,” the principal antitrust statutes are relatively vague. There is little statutory guidance for distinguishing benign from malign practices. Thus, judges have been left to decide which practices run afoul of the antitrust laws. An important judicial question has been whether a practice should be treated as “per se illegal” (i.e., devoid of redeeming justification, and thus automatically outlawed) or whether it should be judged by a “rule of reason” (its legality depends on how it is used and on its effects in particular situations). To answer such questions, judges sometimes have turned to economists for guidance. In the early years of antitrust, though, economists were of little help. 
They had not extensively analyzed arrangements such as tying, information sharing, resale price maintenance, and other commercial practices challenged in antitrust suits. But as the cases exposed areas of economic ignorance or confusion about different commercial arrangements, economists turned to solving the various puzzles. Indeed, analyzing the efficiency rationale for practices attacked in antitrust litigation has dominated the intellectual agenda of economists who study what is called industrial organization. Initially, economists concluded that unfamiliar commercial arrangements that were not explicable in a model of perfect competition must be anticompetitive. In the past forty years, however, economic evaluations of various practices have changed. Economists now see that the perfect competition model relies on assumptions—such as everyone having perfect information and zero transaction costs—that are inappropriate for analyzing real-world production and distribution problems. The use of more sophisticated assumptions in their models has led economists to conclude that many practices previously deemed suspect are not typically anticompetitive. This change in evaluations has been reflected in the courts. Per se liability has increasingly been superseded by rule-of-reason analysis reflecting the procompetitive potential of a given practice. Under the rule of reason, courts have become increasingly sophisticated in analyzing information and transaction costs and the ways that contested commercial practices can reduce them. Economists and judges alike are more sophisticated in several important areas. Vertical Contracts Most antitrust practitioners once believed that vertical mergers (i.e., one company acquiring another that is either a supplier or a customer) reduced competition. Today, most antitrust experts believe that vertical integration usually is not anticompetitive. 
Progress in this area began in the 1950s with work by Aaron Director and the Antitrust Project at the University of Chicago. Robert Bork, a scholar involved with this project (and later the federal judge whose unsuccessful nomination to the U.S. Supreme Court caused much controversy), showed that if firm A has monopoly power, vertically integrating with firm B (or acquiring B) does not increase A’s monopoly power in its own industry. Nor does it give A monopoly power in B’s industry if that industry was competitive in the first place. Lester Telser, also of the University of Chicago, showed in a famous 1960 article that manufacturers used resale price maintenance (“fair trade”) not to create monopoly at the retail level, but to stimulate nonprice competition among retailers. Since retailers operating under fair trade agreements could not compete by cutting price, noted Telser, they instead competed by demonstrating the product to uninformed buyers. If the product is a sophisticated one that requires explaining to prospective buyers, resale price maintenance can be a rational—and competitive—action by a manufacturer. The same rationale can account for manufacturers’ use of exclusive sales territories. This new knowledge about vertical contracts has had a large impact on judicial antitrust rulings. Horizontal Contracts Changes in the assessment of horizontal contracts (agreements among competing sellers in the same industry) have come more slowly. Economists remain almost unanimous in condemning all horizontal price-fixing. Many, however (e.g., Donald Dewey), have indicated that price-fixing may actually be procompetitive in some situations, a conclusion bolstered by Michael Sproul’s empirical finding that in industries where the government successfully sues against price-fixing, prices increase, rather than decrease, after the suit. At a minimum, Peter Asch and Joseph Seneca have shown empirically, price-fixers have not earned higher than normal profits. 
Other practices that some people believed made it easier for competitors to fix prices have been shown to have procompetitive explanations. Sharing of information among competitors, for example, may not necessarily be a prelude to price-fixing; it can instead have an independent efficiency rationale. Perhaps the most important change in economists’ understanding has occurred in the area of mergers. Particularly with the work of Joe Bain and George Stigler in the 1950s, economists (and courts) inferred a lack of competition in markets simply from the fact that an industry had a high four-firm concentration ratio (the percentage of sales accounted for by the four largest firms in the industry). But later work by economists such as Yale Brozen and Harold Demsetz demonstrated that correlations between concentration and profits either were transitory or were due more to superior efficiency than to anticompetitive conduct. Their work followed that of Oliver Williamson, who showed that even if a merger caused a large increase in monopoly power, it would be efficient if it produced only slight cost reductions. As a result of this new evidence and new thinking, economists and judges no longer assume that concentration alone indicates monopoly. The various versions of the Department of Justice/Federal Trade Commission Merger Guidelines promulgated in the 1980s and revised in the 1990s have deemphasized concentration as a factor inviting government challenge of a merger. Nonmerger Monopolization Perhaps the most publicized monopolization case of recent years is the government’s case against Microsoft, which (see Liebowitz and Margolis 2001) rested on questionable empirical claims and resulted ultimately in victory for Microsoft on most of the government’s allegations. The failure of the government’s case reflects a general recent decline in the importance of monopolization cases. 
Worries about monopoly have progressively diminished with the realization that various practices traditionally thought to be monopolizing devices (including vertical contracts, as discussed above) actually have procompetitive explanations. Likewise, belief in the efficacy of predatory pricing—cutting price below cost—as a monopolization device has diminished. Work begun by John McGee in the late 1950s (also an outgrowth of the Chicago Antitrust Project) showed that firms are highly unlikely to use predatory pricing to create monopoly. That work is reflected in several recent Supreme Court opinions, such as that in Matsushita Electric Industrial Co. v. Zenith Radio Corp., where the Court wrote, “There is a consensus among commentators that predatory pricing schemes are rarely tried, and even more rarely successful.” As older theories of monopolization have died, newer ones have been hatched. In the 1980s, economists began to lay out new monopolization models based on strategic behavior, often relying on game-theory constructs. They postulated that companies could monopolize markets by raising rivals’ costs (sometimes called “cost predation”). For example, if firm A competes with firm B and supplies inputs to both itself and to B, A could raise B’s costs by charging B a higher price. It remains to be seen whether economists will ultimately accept the proposition that raising a rival’s costs can be a viable monopolizing strategy, or how the practice will be treated in the courts. But courts have sometimes imposed antitrust liability on firms possessing supposedly “essential facilities” when they deny competitors access to those facilities. The recent era of antitrust reassessment has resulted in general agreement among economists that the most successful instances of cartelization and monopoly pricing have involved companies that enjoy the protection of government regulation of prices and government control of entry by new competitors. 
Occupational licensing and trucking regulation, for example, have allowed competitors to alter terms of competition and legally prevent entry into the market. Unfortunately, monopolies created by the federal government are almost always exempt from antitrust laws, and those created by state governments frequently are exempt as well. Municipal monopolies (e.g., taxicabs, utilities) may be subject to antitrust action but often are protected by statute. The Effects of Antitrust With the hindsight of better economic understanding, economists now realize that one undeniable effect of antitrust has been to penalize numerous economically benign practices. Horizontal and especially vertical agreements that are clearly useful, particularly in reducing transaction costs, have been (or for many years were) effectively banned. A leading example is the continued per se illegality of resale price maintenance. Antitrust also increases transaction costs because firms must hire lawyers and often must litigate to avoid antitrust liability. One of the most worrisome statistics in antitrust is that for every case brought by government, private plaintiffs bring ten. The majority of cases are filed to hinder, not help, competition. According to Steven Salop, formerly an antitrust official in the Carter administration, and Lawrence J. White, an economist at New York University, most private antitrust actions are filed by members of one of two groups. The most numerous private actions are brought by parties who are in a vertical arrangement with the defendant (e.g., dealers or franchisees) and who therefore are unlikely to have suffered from any truly anticompetitive offense. Usually, such cases are attempts to convert simple contract disputes (compensable by ordinary damages) into triple-damage payoffs under the Clayton Act. The second most frequent private case is that brought by competitors. 
Because competitors are hurt only when a rival is acting procompetitively by increasing its sales and decreasing its price, the desire to hobble the defendant’s efficient practices must motivate at least some antitrust suits by competitors. Thus, case statistics suggest that the anticompetitive costs from “abuse of antitrust,” as New York University economists William Baumol and Janusz Ordover (1985) referred to it, may actually exceed any procompetitive benefits of antitrust laws. The case for antitrust gets no stronger when economists examine the kinds of antitrust cases brought by government. As George Stigler (1982, p. 7), often a strong defender of antitrust, summarized, “Economists have their glories, but I do not believe that antitrust law is one of them.” In a series of studies done in the early 1970s, economists assumed that important losses to consumers from limits on competition existed, and constructed models to identify the markets where these losses would be greatest. Then they compared the markets where government was enforcing antitrust laws with the markets where government should have enforced the laws if consumer well-being were its paramount concern. The studies concluded unanimously that the size of consumer losses from monopoly played little or no role in government enforcement of the law. Economists have also examined particular kinds of antitrust cases brought by the government to see whether anticompetitive acts in these cases were likely. The empirical answer usually is no. This is true even in price-fixing cases, where the evidence indicates that the companies targeted by the government either were not fixing prices or were doing so unsuccessfully. Similar conclusions arise from studies of merger cases and of various antitrust remedies obtained by government; in both instances, results are inconsistent with antitrust’s supposed goal of consumer well-being. If public-interest rationales do not explain antitrust, what does? 
A final set of studies has shown empirically that patterns of antitrust enforcement are motivated at least in part by political pressures unrelated to aggregate economic welfare. For example, antitrust is useful to politicians in stopping mergers that would result in plant closings or job transfers in their home districts. As Paul Rubin documented, economists do not see antitrust cases as driven by a search for economic improvement. Rubin reviewed all articles written by economists that were cited in a leading industrial organization textbook (Scherer and Ross 1990) generally favorable to antitrust law. By economists’ own evaluations, more bad cases were brought than good ones. “In other words,” wrote Rubin, “it is highly unlikely that the net effect of actual antitrust policy is to deter inefficient behavior. . . . Factors other than a search for efficiency must be driving antitrust policy” (Rubin 1995, p. 61). What might those factors be? Pursuing a point suggested by Nobel laureate Ronald Coase (1972, 1988), William Shughart argued that economists’ support for antitrust derives considerably from their ability to profit personally, in the form of full-time jobs and lucrative part-time work as experts in antitrust matters: “Far from contributing to improved antitrust enforcement, economists have for reasons of self-interest actively aided and abetted the public law enforcement bureaus and private plaintiffs in using the Sherman, Clayton and FTC Acts to subvert competitive market forces” (Shughart 1998, p. 151). About the Author Fred S. McChesney is the Class of 1967 James B. Haddad Professor of Law at Northwestern University School of Law and a professor in the Kellogg School of Management at Northwestern. Further Reading   Asch, Peter, and J. J. Seneca. “Is Collusion Profitable?” Review of Economics and Statistics 53 (February 1976): 1–12. Baumol, William J., and Janusz A. Ordover. 
“Use of Antitrust to Subvert Competition.” Journal of Law and Economics 28 (May 1985): 247–265. Bittlingmayer, George. “Decreasing Average Cost and Competition: A New Look at the Addyston Pipe Case.” Journal of Law and Economics 25 (October 1982): 201–229. Bork, Robert H. The Antitrust Paradox: A Policy at War with Itself. New York: Basic Books, 1978. Bork, Robert H. “Vertical Integration and the Sherman Act: The Legal History of an Economic Misconception.” University of Chicago Law Review 22 (Autumn 1954): 157–201. Brozen, Yale. “The Antitrust Task Force Deconcentration Recommendation.” Journal of Law and Economics 13 (October 1970): 279–292. Coase, R. H. “Industrial Organization: A Proposal for Research.” In V. Fuchs, ed., Economic Research: Retrospect and Prospect. Vol. 3. Cambridge, Mass.: National Bureau of Economic Research, 1972. Reprinted in R. H. Coase, The Firm, the Market and the Law. Chicago: University of Chicago Press, 1988. Coate, Malcolm B., Richard S. Higgins, and Fred S. McChesney. “Bureaucracy and Politics in FTC Merger Challenges.” Journal of Law and Economics 33 (October 1990): 463–482. Crandall, Robert W., and Clifford Winston. “Does Antitrust Policy Improve Consumer Welfare? Assessing the Evidence.” Journal of Economic Perspectives 17, no. 4 (2003): 3–26. Demsetz, Harold. “Industry Structure, Market Rivalry, and Public Policy.” Journal of Law and Economics 16 (April 1973): 1–9. Dewey, Donald. “Information, Entry and Welfare: The Case for Collusion.” American Economic Review 69 (September 1979): 588–593. DiLorenzo, Thomas J. “The Origins of Antitrust: An Interest-Group Perspective.” International Review of Law and Economics 5 (June 1985): 73–90. Liebowitz, Stan J., and Stephen E. Margolis. Winners, Losers and Microsoft. Rev. ed. Oakland, Calif.: Independent Institute, 2001. McGee, John S. “Predatory Price Cutting: The Standard Oil (N.J.) Case.” Journal of Law and Economics 1 (1958): 137–169. Rubin, Paul H. 
“What Do Economists Think About Antitrust? A Random Walk down Pennsylvania Avenue.” In Fred S. McChesney and William F. Shughart II, eds., The Causes and Consequences of Antitrust: The Public-Choice Perspective. Chicago: University of Chicago Press, 1995. Scherer, F. M., and David Ross. Industrial Market Structure and Economic Performance. 3d ed. Boston: Houghton Mifflin, 1990. Shughart, William F. II. “Monopoly and the Problem of the Economists.” In Fred S. McChesney, ed., Economic Inputs, Legal Outputs: The Role of Economists in Modern Antitrust. New York: Wiley, 1998. Shughart, William F. II, and Robert D. Tollison. “The Positive Economics of Antitrust Policy: A Survey Article.” International Review of Law and Economics 5 (June 1985): 39–57. Sproul, Michael F. “Antitrust and Prices.” Journal of Political Economy 101 (1993): 741–754. Stigler, George J. “The Economists and the Problem of Monopoly.” In Stigler, The Economist as Preacher and Other Essays. Chicago: University of Chicago Press, 1982. Pp. 38–54. Stigler, George J. “The Economists and the Problem of Monopoly.” American Economic Review Papers and Proceedings 72 (May 1982): 1–11. Telser, Lester G. “Why Should Manufacturers Want Fair Trade?” Journal of Law and Economics 3 (October 1960): 86–105. Williamson, Oliver E. “Economies as an Antitrust Defense: The Welfare Tradeoffs.” American Economic Review 58 (March 1968): 18–35.   Related Links Capitalism. Concise Encyclopedia of Economics. David Henderson, Why Predatory Pricing is Highly Unlikely. Econlib, May 2017. Pierre Lemieux, In Defense of Google. Econlib, May 2015. Richard McKenzie, In Defense of Apple. Econlib, July 2012. Roger Noll on the Economics of Sports. EconTalk, August 2012. Boudreaux on Market Failure, Government Failure, and the Economics of Antitrust Regulation. EconTalk, October 2007.


Airline Deregulation

The 1978 Airline Deregulation Act partially shifted control over air travel from the political to the market sphere. The Civil Aeronautics Board (CAB), which had previously controlled entry, exit, and the pricing of airline services, as well as intercarrier agreements, mergers, and consumer issues, was phased out under the CAB Sunset Act and expired officially on December 31, 1984. The economic liberalization of air travel was part of a series of “deregulation” moves based on the growing realization that a politically controlled economy served no continuing public interest. U.S. deregulation has been part of a greater global airline liberalization trend, especially in Asia, Latin America, and the European Union. Network industries, which are critical to a modern economy, include air travel, railroads, electrical power, and telecommunications. The air travel sector is an example of a network industry involving both flows and a grid. The flows are the mobile system elements: the airplanes, the trains, the power, the messages, and so on. The grid is the infrastructure over which these flows move: the airports and air traffic control system, the tracks and stations, the wires and cables, the electromagnetic spectrum, and so on. Network efficiency depends critically on the close coordination of grid and flow operating and investment decisions. Under CAB regulation, investment and operating decisions were highly constrained. CAB rules limiting routes and entry and controlling prices meant that airlines could compete only on food, cabin crew quality, and frequency. As a result, both prices and frequency were high, and load factors—the percentage of the seats that were filled—were low. Indeed, in the early 1970s load factors were only about 50 percent. The air transport market today is remarkably different. Because airlines compete on price, fares are much lower. 
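The link between the load factors just described and fares is simple seat arithmetic: if the cost of flying a plane is roughly fixed per trip, the cost per passenger falls as more seats are filled. A minimal sketch with hypothetical numbers (the plane size and trip cost below are illustrative, not industry data):

```python
# Illustrative effect of load factor on unit cost (hypothetical numbers).

def cost_per_passenger(trip_cost, seats, load_factor):
    """Trip cost spread over the seats actually filled."""
    passengers = seats * load_factor
    return trip_cost / passengers

seats, trip_cost = 150, 15_000.0  # hypothetical 150-seat plane, $15,000 per trip

regulated = cost_per_passenger(trip_cost, seats, 0.50)    # ~50% load factor (early 1970s)
deregulated = cost_per_passenger(trip_cost, seats, 0.74)  # 74% load factor (2003)

print(f"${regulated:.0f} vs ${deregulated:.0f} per passenger")
```

Under these made-up figures, raising the load factor from 50 to 74 percent cuts the cost per passenger by roughly a third, which is the mechanism through which fuller planes permit lower fares.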
Many more people fly, allowing airlines to offer high flight frequency along with much higher load factors—74 percent in 2003, for example. Airline deregulation was a monumental event. Its effects are still being felt today, as low-cost carriers (LCCs) challenge the “legacy” airlines that were in existence before deregulation (American, United, Continental, Northwest, US Air, and Delta). Indeed, the airline industry is experiencing a paradigm shift that reflects the ongoing effects of deregulation. Although deregulation affected the flows of air travel, the infrastructure grid remains subject to government control and economic distortions. Thus, airlines were only partially deregulated. Benefits of Partial Deregulation Even the partial freeing of the air travel sector has had overwhelmingly positive results. Air travel has dramatically increased and prices have fallen. After deregulation, airlines reconfigured their routes and equipment, making possible improvements in capacity utilization. These efficiency effects democratized air travel, making it more accessible to the general public. Airfares, when adjusted for inflation, have fallen 25 percent since 1991, and, according to Clifford Winston and Steven Morrison of the Brookings Institution, are 22 percent lower than they would have been had regulation continued (Morrison and Winston 2000). Since passenger deregulation in 1978, airline prices have fallen 44.9 percent in real terms according to the Air Transport Association. Robert Crandall and Jerry Ellig (1997) estimated that when figures are adjusted for changes in quality and amenities, passengers save $19.4 billion per year from airline deregulation. These savings have been passed on to 80 percent of passengers, accounting for 85 percent of passenger miles. The real benefits of airline deregulation are being felt today as never before, with LCCs increasingly gaining market share. 
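Comparisons “in real terms,” like the 44.9 percent figure above, deflate nominal fares by a price index before comparing them across years. A minimal sketch of that adjustment, using made-up fare and CPI values rather than the Air Transport Association’s actual data:

```python
# Adjusting a nominal fare change for inflation (hypothetical numbers).
# A fare that rises in dollar terms can still fall "in real terms"
# if the overall price level rises faster.

def real_change(fare_then, fare_now, cpi_then, cpi_now):
    """Percentage change in the inflation-adjusted (real) fare."""
    real_then = fare_then / cpi_then
    real_now = fare_now / cpi_now
    return (real_now - real_then) / real_then * 100

# Hypothetical: the average fare rises from $200 to $220 while the CPI doubles.
print(f"{real_change(200, 220, 100, 200):.1f}%")  # prints -45.0%
```

The nominal fare rose 10 percent, but because the price level doubled, the real fare fell 45 percent; this is the same deflation arithmetic behind the historical fare statistics quoted above.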
The dollar savings are a direct result of allowing airlines the freedom to innovate in routes and pricing. After deregulation, the airlines quickly moved to a hub-and-spoke system, whereby an airline selected some airport (the hub) as the destination point for flights from a number of origination cities (the spokes). Because the size of the planes used could be matched to the traffic on each spoke, and because hubs allowed passenger travel to be consolidated in “transfer stations,” capacity utilization (“load factors”) increased, allowing fare reductions. The hub-and-spoke model survives among the legacy carriers, but the LCCs—now 30 percent of the market—typically fly point to point. The network hub model offers consumers more convenient routings, but point-to-point routes have proven less costly for airlines to operate. Over time, the legacy carriers and the LCCs will likely use some combination of point-to-point and network hubs to capture both economies of scope and pricing advantages. The rigid fares of the regulatory era have given way to today’s competitive price market. After deregulation, the airlines created highly complex pricing models that include the service quality/price sensitivity of various air travelers and offer differential fare/service quality packages designed for each. The new LCCs, however, have far simpler price structures—the product of consumers’ (especially business travelers’) demand for low prices, increased price transparency from online Web sites, and decreased reliance on travel agencies. As prices have decreased, air travel has exploded. The total number of passengers who fly annually has more than doubled since 1978. Travelers now have more convenient travel options with greater flight frequency and more nonstop flights. Fewer passengers must change airlines to make a connection, resulting in better travel coordination and higher customer satisfaction. 
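The route economics behind the hub-and-spoke system described above can be sketched by simply counting routes: serving every pair of cities with nonstop flights requires a route for every city pair, while funneling traffic through a single hub needs only one spoke per city. The city counts below are illustrative:

```python
# Stylized route-count comparison behind hub-and-spoke economics.

def point_to_point_routes(n_cities):
    """Routes needed to connect every pair of cities nonstop."""
    return n_cities * (n_cities - 1) // 2  # one route per unordered city pair

def hub_routes(n_cities):
    """Routes needed if all traffic connects through one hub city."""
    return n_cities - 1  # one spoke from each non-hub city to the hub

for n in (5, 20, 50):
    print(f"{n} cities: {point_to_point_routes(n)} nonstop routes "
          f"vs {hub_routes(n)} spokes")
```

With 50 cities, full point-to-point service needs 1,225 routes against 49 spokes; the hub concentrates traffic onto far fewer, fuller flights, at the cost of requiring many passengers to connect rather than fly nonstop.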
Industry Problems after Deregulation Although the gains of economic liberalization have been substantial, fundamental problems plague the industry. Some of these problems are transitional, the massive adjustments required by the end of a half century of strict regulation. The regulated airline monopolies received returns on capital that were supposed to be “reasonable” (comparable to what a company might expect to receive in a competitive market), but these returns factored in high costs that often would not exist in a competitive market. For example, the airlines’ unionized workforce, established and strengthened under regulation and held in place by the Railway Labor Act, gained generous salaries and inefficient work rules compared with what would be expected in a competitive market. Problems remain in today’s market, especially with the legacy airlines. Health of the Industry The airlines have not found it easy to maintain profitability. The industry as a whole was profitable through most of the economic boom of the 1990s. As the national economy slowed in 2000, so did profitability for the legacy airlines. Consumers became more price-sensitive and gravitated toward the lower-cost carriers. High labor costs and the network hub business model hurt legacy airlines’ competitiveness. Hub-and-spoke systems decreased unit costs but created high fixed costs that required larger terminals, investments in information technology systems, and intricate revenue management systems. The LCCs have thus far successfully competed on price due to lower hourly employee wages, higher productivity, and no pension deficits. It remains to be seen whether the LCC cost and labor structures will change over time. The Air Transport Association reports that the U.S. airline industry experienced net losses of $23.2 billion from 2001 through 2003, though the LCCs largely remained profitable. 
While the September 11, 2001, terrorist attack and its aftermath are a major factor in the industry’s hardships, they only accelerated an already developing trend within the industry. The industry was experiencing net operating losses for many reasons, including the mild recession, severe acute respiratory syndrome (SARS), the growth of LCC service, and the decline in the business fares on which legacy carriers relied. Higher fuel prices, residual labor union problems, fears of terrorism, and the intrusive measures that government now uses to clear travelers through security checkpoints are further drags on the industry. Remaining Domestic Economic Controls As a form of regulation, antitrust laws inhibit post-deregulation restructuring efforts, making it harder to bring salaries and work rules into line with the realities of a competitive marketplace. The antitrust regulatory laws inhibit the restructuring of corporations and block needed consolidation; the antitrust authorities view with suspicion efforts to retain higher prices. Historically, the CAB had antitrust jurisdiction over airline mergers. When Congress disbanded the CAB in 1985, it temporarily transferred merger review authority to the Department of Transportation (DOT). In 1989, the Justice Department assumed merger review jurisdiction from the DOT; combined with its antitrust authority under the Sherman Act, this makes it the primary antitrust regulator of the airline industry. The Justice Department has contested past merger proposals, including Northwest’s attempt to gain a controlling interest in Continental and the merger of United Airlines and US Airways. Antitrust law also applies to international alliances, arrangements that attempt to ameliorate restrictive foreign ownership and competition laws. While labor contracts, airport asset management, and other business practices are themselves high barriers to restructuring, these difficulties are magnified by antitrust regulatory hurdles. 
Cabotage restrictions, discussed below, also limit competition.

Reservation Systems

During the regulatory era, rates were determined politically and changed infrequently. The CAB had to approve every fare, limiting the airlines’ ability to react to demand changes and to experiment with discount fares. After deregulation, airlines were free to set prices and to change them frequently. That was possible only because the airlines had earlier created computer reservation systems (CRSs) capable of keeping track of the massive inventory of seats on flights over a several-month period. The early CRSs allowed the travel agent to designate an origin-destination pair and call up all available flights. The computer screen could show only a limited number of flights at one time, of course; thus, some rule was essential to rank-order the flights shown. CRSs were available only to travel agents and, beginning in 1984, were highly regulated to ensure open access for airlines that had not developed their own CRS. The DOT regulations restricted private agreements for guaranteeing access. However, the growth of Internet travel sites and direct access to airline Web sites created new forms of competition to the airline reservation systems. Therefore, the DOT allowed the CRS regulations to expire in 2004.

Problems with Political Control of the Grid

A network can be efficient only if the flows and the grid interact smoothly. The massive expansion of air travel should have resulted in comparable expansions—either in the physical infrastructure or in more sophisticated grid management. Instead, government management of the air travel grid has produced political compromises that impede the smooth flow of traffic across the grid. Flight delays are increasing due to a lack of aviation infrastructure and the failure to allocate air capacity efficiently.
The Air Transport Association estimates that delays cost airlines and passengers more than five billion dollars per year due to the increased costs for aircraft operation and ground personnel and the loss of passengers’ time. The FAA predicts that the number of passengers will increase by 60 percent and that cargo volume will double by 2010.

Airports

Airport construction and expansion face almost insurmountable political and regulatory hurdles. The number of federal requirements associated with airport finances has grown considerably in recent years and is tied to the awarding of grants from the federal Airport Improvement Program (AIP). Since 1978, only one major airport has been constructed (in Denver), and only a few runways have been added at congested airports. Airport construction also faces significant nonpolitical barriers, such as vocal “not in my back yard” (NIMBY) opposition and environmental noise and emissions considerations. Federal law restricts the fees airports charge air carriers to amounts that are “fair and reasonable.” These fee restrictions, although promoted as a way to provide nondiscriminatory access to all aircraft, limit an airport’s ability to recover costs for air carriers’ use of airfield and terminal facilities. Allowing airports more flexibility to price takeoffs and landings based on supply and demand would also help ease congestion at overburdened airports.

Air Traffic Control

Air traffic control involves the allocation of capacity and has a complex history of government management. Unfortunately, the Federal Aviation Administration (FAA), which manages air traffic control, made bad upgrading decisions. The advanced system funded by the FAA was more than a decade late and never performed as hoped. The result was that airline expansion was not met by an expanded grid, and congestion occurred. Better technology for air traffic control will help efficient navigation and routing.
Global Positioning System (GPS) navigation technology holds great promise for more precise flight paths, allowing for increased airplane traffic. Ultimately, however, a privately managed system that allows for better coordination of airline investment and operation decisions will be necessary to ease congestion. Air traffic control operation is a business function distinct from the regulation of air traffic safety. Using pricing mechanisms to allocate the scarce resource of air traffic capacity would reduce congestion and more efficiently allocate resources. Implementing cost-based structures by privatizing air traffic control is a controversial and politically daunting issue in the United States, but twenty-nine nations—including Canada—have already separated their traffic systems from their regulating agencies. Air traffic control privatization will likely be driven by the decreasing ability of the Airport and Airways Trust Fund to deliver the necessary financial support. Currently, the FAA rations flights by delay on a first-come, first-served basis—a system that creates overcrowding during peak hours. A system based on pricing at rates determined by voluntary contractual arrangements of market participants, not government regulators, would reduce this overcrowding. One result would be the use of “congestion pricing,” such as rush hour surcharges or early bird discounts.

Airport Access

FAA rules that limit the number of hourly takeoffs and landings—called “slot” controls—were adopted in 1968 as a temporary measure to deal with congestion and delays at major airports. These artificial capacity limitations—known as the high density rule—still exist at JFK, LaGuardia, and Reagan National. However, limiting supply through governmental fiat is a crude form of demand management. Allowing increased capacity and congestion pricing, and allowing major airports to use their slots to favor larger aircraft, would lead to better results.
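The “congestion pricing” mechanism discussed above can be sketched as a simple fee schedule. This is an illustrative toy, not an actual airport tariff: the dollar amounts, rush-hour windows, and early-bird cutoff are all hypothetical assumptions.

```python
# Toy illustration of airport congestion pricing: a base per-landing fee
# plus a rush-hour surcharge and an early-bird discount. All fee levels
# and time windows are hypothetical, chosen only for illustration.

def landing_fee(hour, base_fee=500, peak_surcharge=800, early_bird_discount=200):
    """Return the landing fee (in dollars) for a landing at the given hour (0-23)."""
    if 7 <= hour < 10 or 16 <= hour < 19:      # hypothetical rush-hour windows
        return base_fee + peak_surcharge        # rush hour surcharge
    if hour < 6:                                # hypothetical early-bird window
        return base_fee - early_bird_discount   # early bird discount
    return base_fee

# A carrier choosing between an 8 a.m. and an 11 a.m. arrival now faces a
# price difference that reflects the scarcity of peak capacity:
print(landing_fee(8))   # 1300 (peak)
print(landing_fee(11))  # 500  (off-peak)
print(landing_fee(5))   # 300  (early bird)
```

Under such a schedule, a flight that can shift out of the peak window saves money, which is exactly the rescheduling incentive that first-come, first-served rationing by delay fails to provide.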
Remaining International and Economic Rules

International Competition

“Open Skies” agreements are bilateral agreements between the United States and other countries to open the aviation market to foreign access and remove barriers to competition. They give airlines the right to operate air services from any point in the United States to any point in the other country, as well as to and from third countries. The United States has Open Skies agreements with more than sixty countries, including fifteen of the twenty-five European Union nations. Open Skies agreements have been successful at removing many of the barriers to competition and allowing airlines to have foreign partners, access to international routes to and from their home countries, and freedom from many traditional forms of economic regulation. A global industry would work better with a globally minded set of rules that would allow airlines from one country (or investors of any sort) to establish airlines in another country (the right of establishment) and to operate domestic services in the territory of another country (cabotage). However, these agreements still fail to approximate the freedoms that most industries have when competing in other global markets.

National Ownership

National ownership laws are an archaic barrier to a more competitive air travel sector. These rules seem to reflect a concern for national security, even though many industries as strategic as the airline industry do not have such restrictions. Federal law restricts the percentage of foreign ownership in air transportation. Only U.S.-registered aircraft can transport passengers and freight domestically. Registration as a U.S. airline is limited to U.S. citizens or permanent residents, partnerships in which all partners are U.S. citizens, or corporations registered in the United States in which the chief executive officer and two-thirds of the directors are U.S. citizens and in which U.S. citizens hold or control 75 percent of the capital stock. Only U.S. citizens are able to obtain a certificate of public convenience and necessity, a prerequisite for operation as a domestic carrier.

Additional Problems Resulting from the 9/11 Response

After 9/11, safety and security regulation responsibilities were given to the new Transportation Security Administration (TSA) within the Department of Homeland Security. Created just months after 9/11, the TSA is an outgrowth of the belief that only the government can be entrusted to perform certain duties, especially those related to security. No one has clearly established that a government whose employees are difficult to fire, even for incompetence, will do better than a private employer who can more easily fire incompetent workers. In September 2001, Congress passed the Air Transportation Safety and System Stabilization Act, which authorized payments of up to five billion dollars in assistance to reimburse airlines for the postattack four-day shutdown of air traffic and attributable losses through the end of 2001. It also created and authorized the Air Transportation Stabilization Board (ATSB) to provide up to ten billion dollars in loan guarantees for airlines in need of emergency capital. While the ATSB risked the kind of mission creep that is inevitable in an industry subsidy program, the deadline for applications to the ATSB has passed. Of the ten billion dollars authorized by Congress for these loan guarantees, the board actually committed less than two billion.

Conclusion

Air travel is a network industry, but only its flow element—the airlines—is economically liberalized. The industry is still structurally adjusting to a more competitive situation and remains subject to a large number of regulations. The capital, work rules, and compensation practices of the airline industry still reflect almost fifty years of political protection and control.
We are finally seeing the kinds of internal restructuring among airlines that were expected from deregulation. Yet government still has much to do to ensure that the airline market will thrive in the future. The FAA is a command-and-control government agency ill-suited to providing air traffic control services to a dynamic industry. Landing slots and airport space should be allocated using market prices instead of through administrative fiat. International competition will increase, and rules regarding national ownership need to change accordingly. If the government deregulates the grid and transitions toward a market solution, the benefits of flow deregulation will increase, and costs for air travelers will fall even more.

About the Authors

Fred L. Smith Jr. is the president of, and Braden Cox is the technology counsel with, the Competitive Enterprise Institute, a free-market public policy group based in Washington, D.C.

Further Reading

Bailey, Elizabeth E. “Airline Deregulation: Confronting the Paradoxes.” Regulation: The Cato Review of Business and Government 15, no. 3. Available online at: http://www.cato.org/pubs/regulation/regv15n3/reg15n3-bailey.html.
Button, Kenneth, and Roger Stough. Air Transport Networks: Theory and Policy Implications. Northampton, Mass.: Edward Elgar, 2000.
Crandall, Robert, and Jerry Ellig. Economic Deregulation and Customer Choice. Fairfax, Va.: Center for Market Processes, George Mason University, 1997. Available online at: http://www.mercatus.org/repository/docLib/MC_RSP_RP-Dregulation_970101.pdf.
Doganis, Rigas. The Airport Business. New York: Routledge, 1992.
Havel, Brian F. In Search of Open Skies: Law and Policy for a New Era in International Aviation. A Comparative Study of Airline Deregulation in the United States and the European Union. Boston: Kluwer Law International, 1997.
Morrison, Steven A., and Clifford Winston. “The Remaining Role for Government Policy in the Deregulated Airline Industry.” In Sam Peltzman and Clifford Winston, eds., Deregulation of Network Industries: What’s Next? Washington, D.C.: AEI Brookings Joint Center for Regulatory Studies, 2000.
Poole, Robert W. Jr., and Viggo Butler. Airline Deregulation: The Unfinished Revolution. December 1998. Available online at: http://cei.org/pdf/1451.pdf.
Poole, Robert W. Jr., and Viggo Butler. How to Commercialize Air Traffic Control. Policy Study No. 278. Los Angeles: Reason Public Policy Institute, 2001.
U.S. GAO. Airline Deregulation: Changes in Airfares, Service, and Safety at Small, Medium-Sized, and Large Communities. April 1996. Report online at: http://www.gao.gov/archive/1996/rc96079.pdf.

Related Links

Antitrust. Concise Encyclopedia of Economics.
Price Controls. Concise Encyclopedia of Economics.
Robert P. Murphy, Ensuring- and Insuring- Airline Safety. Econlib, February 2011.


Agricultural Subsidy Programs

Government intervention in food and fiber commodity markets began long ago. The classic case of farm subsidy through trade barriers is the English Corn Laws, which for centuries regulated the import and export of grain in Great Britain and Ireland. They were repealed in 1846. Modern agricultural subsidy programs in the United States began with the New Deal and the Agricultural Adjustment Act of 1933. With trade barriers already in place for agricultural commodities and everything else, this law gave the government the power to set minimum prices and included government stock acquisition, land idling, and schemes to cut supplies by destroying livestock. Land idling and livestock destruction were sometimes mandatory and sometimes induced by compensation (Benedict 1953). Since the early 1930s, governments of wealthier countries around the world have used a dizzying array of schemes to support and subsidize farmers. In poor countries, where a large fraction of the population is engaged in farming, governments have tended to tax and regulate agriculture. As incomes grew and the population on farms dwindled in such countries as South Korea and Taiwan, those countries’ governments shifted from penalizing farmers to subsidizing them and protecting them from imports. These countries, along with Japan, now have among the highest subsidy and protection rates in the world. Forms of farm support also differ by country and commodity, and different forms have different impacts on agriculture and the rest of the economy. This article reviews some of the major support forms and outlines their impacts. Although I often use the terms “support” and “subsidy” interchangeably, much government support of agriculture is not in the form of direct subsidy for farmer incomes or direct subsidy for production, but is indirect. Economists have criticized farm subsidies on several counts. 
First, farm subsidies typically transfer income from consumers and taxpayers to relatively wealthy farmland owners and farm operators. Second, they impose net losses on society, often called deadweight losses, and have no clear broad social benefit (Alston and James 2002). Third, they impede movements toward more open international trade in commodities and thus impose net costs on the global economy (Johnson 1991; Sumner 2003). Supporters of farm subsidies have argued that such programs stabilize agricultural commodity markets, aid low-income farmers, raise unduly low returns to farm investments, aid rural development, compensate for monopoly in farm input supply and farm marketing industries, help ensure national food security, offset farm subsidies provided by other countries, and provide various other services. However, economists who have tried to substantiate any of these benefits have been unable to do so (Gardner 1992; Johnson 1991; Wright 1995). The U.S. government heavily subsidizes grains, oilseeds, cotton, sugar, and dairy products. Most other agriculture—including beef, pork, poultry, hay, fruits, tree nuts, and vegetables (accounting for about half of the total value of production)—receives only minimal government support. U.S. farm programs have cost about $20 billion per year in government budget outlays in recent years. But budget costs are not a particularly useful measure of the degree of support or subsidy. Some subsidy programs, such as import tariffs, actually generate tax revenue for the government but also impose costs on consumers that exceed the government’s revenue gain. According to Organization for Economic Cooperation and Development (OECD) figures, the average rate of “producer support estimate” for the heavily supported commodities in the United States ranges from about 55 percent of the value of production for sugar to about 22 percent for oilseeds. For the less-supported commodities the rate is typically below 5 percent. 
Among OECD members (a group of high-income countries), “producer support estimate” rates average about 31 percent of total revenue for the main grain, oilseed, sugar, and livestock products. These estimates aggregate into a single index a large range of government programs, including price supports and trade barriers, that transfer benefits to farm producers and landlords. This index measures the size of the transfer in money terms but does not attempt to assess the programs’ effects on production or net income. The highest average rates of support are for rice (about 80 percent), where most of the support derives from trade barriers and direct payments. Support to farmers by Japan’s and Korea’s governments is a large part of the total world subsidy for rice. The highest national average support equivalent rates, across all major commodities, are offered in Norway, Switzerland, and Iceland, with average subsidies of about 65–75 percent of the value of production, and in Japan and Korea, with support rates of 60–65 percent. The lowest subsidy rates (less than 4 percent) are found in Australia and New Zealand. The average support rate in the European Union is about 35 percent of the value of production. The forms of subsidy vary by country and commodity as well. The main forms of subsidy include: (1) direct payments to farmers and landlords; (2) price supports implemented with government purchases and storage; (3) regulations that set minimum prices by location, end use, or some other characteristic; (4) subsidies for such items as crop insurance, disaster response, credit, marketing, and irrigation water; (5) export subsidies; and (6) import barriers in the form of quotas, tariffs, or regulations. Often, supply control programs such as land-idling requirements, production quotas, or similar schemes accompany price supports or other programs. 
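As a rough illustration of how a producer support estimate aggregates different transfers into a single rate, the calculation can be sketched as follows. The formula here is a deliberate simplification of the OECD’s actual methodology, and the dollar figures are hypothetical, not drawn from OECD data:

```python
# Simplified producer support estimate (PSE) rate: transfers to producers
# (budgetary payments plus market price support delivered through trade
# barriers and price regulations) as a share of gross farm receipts.
# Stylized version of the OECD calculation; all figures are hypothetical.

def pse_rate(budget_payments, market_price_support, value_of_production):
    """Percentage PSE: total transfers over (production value + budgetary payments)."""
    transfers = budget_payments + market_price_support
    return 100 * transfers / (value_of_production + budget_payments)

# A commodity with $2 billion in direct payments, $3 billion in market
# price support, and $15 billion of production valued at domestic prices:
print(round(pse_rate(2.0, 3.0, 15.0), 1))  # 29.4
```

Note that price support delivered through trade barriers raises the measured rate even though it generates no budget outlay, which is why budget cost alone understates the degree of support.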
In addition, the governments of most wealthier nations provide aid for agricultural research and development, promotion, and some agricultural and rural infrastructure. The impacts of the subsidies depend on their form. Farm subsidy programs typically transfer income from consumers and taxpayers to farm operators, especially to owners of farmland and other resources used in farm production. Evidence shows clearly, for example, that farm subsidies increase the rental rate on land to which rights to receive those payments are attached. In other words, subsidies to farming are often simply subsidies to landowners. For government-created assets such as production or marketing quotas or allotments, the market value is due entirely to government program benefits. When quotas limit production, commodity prices rise and raise the value of production rights assigned to quota owners. Such quota programs have often continued for decades (six decades in the case of the U.S. tobacco quota and more than three decades in the case of the California and Canadian dairy quotas). Interestingly, though, the asset price of the marketable quota is typically only four times the annual gain from owning the quota (Johnson 1991). This means that quota owners evidently are not confident that the program benefits will continue. Farm subsidies stimulate additional production of government-favored commodities by raising incentives to use scarce land and farmer talent on some products rather than on others. The specifics of the government program determine the degree of production stimulus; real farm programs are usually much more complex than the per unit production subsidies or price supports described in textbooks. Eliminating a subsidy for just one crop would cause production of that crop to fall much more than if all crop subsidies were eliminated simultaneously. Because most farmland would remain in use, economists would expect relatively small adjustments in total U.S. 
agricultural production if all farm subsidies were eliminated together, although some shifts in the mix among commodities would occur. Partly to limit the increased production caused by subsidies, the United States once required farmers to idle a part of their farmland in return for the subsidy. That practice is still used in the European Union and Japan. Recently, the United States has used three complex payment schemes simultaneously for grains, oilseeds, and cotton. First, farmers receive “direct payments,” which are independent of current market prices and are based primarily on a farm’s history of production of a specific supported crop. There are a number of restrictions on the use of the land that receives these payments, but farmers receive the payments even if they plant crops other than the payment crop or leave the land idle. Second, farmers receive “countercyclical payments,” which are tied, inversely, to the market price of the payment crop, but which also allow planting flexibility. These two forms of payment do not require a farmer to plant a specific crop currently. However, because the continuation or increase of payments may depend on current production of that crop, they do provide an incentive to overproduce (Sumner 2003). The third form of subsidy payments, “marketing loan benefits,” are inversely proportional to current market prices and are tied directly to current production of a specific crop. It is difficult to measure the degree of production inducement tied to this complex array of payments; nonetheless, economists agree that without them the production of subsidized crops would decline. Among the most controversial aspects of farm subsidy programs in recent decades have been their impacts on international trade. D. Gale Johnson (1950) raised the issue more than fifty years ago. 
As globalization has increased, farm trade barriers and subsidies that block pursuit of agricultural comparative advantage have become more disruptive to normal trade relations and trade negotiations. Farm subsidy programs, which are used by most wealthy countries, have made multilateral trade negotiations more complex and have threatened broad-based market opening. In the early years of the General Agreement on Tariffs and Trade (GATT) (the 1940s and early 1950s), the U.S. government placed its farm subsidy programs out of reach of trade negotiations and thereby thwarted liberalization in agriculture for three decades. In the 1980s, the U.S. government began to reduce the production stimulus of its own farm programs. In trade negotiations, it advocated freer trade in agriculture and stated its willingness to eliminate its own import barriers and trade-distorting farm subsidies if other nations would do the same. European nations, Japan, and Korea resisted. Nevertheless, the GATT agreement of 1994, which created the World Trade Organization (WTO), began modest progress toward liberalization. U.S. farm subsidy legislation in 1996 was consistent with gradual reform of farm subsidies. With the passage of new and more distorting farm subsidy programs in 2002, however, the United States has been a less credible bargainer in WTO negotiations, and reductions in subsidies and trade barriers have been delayed. After the 2002 Farm Bill in the United States and initiation of the Doha round of WTO negotiations, farm subsidies became a high-profile issue for many less-developed-country participants in trade negotiations. They pointed out that the price-depressing effects of wealthy countries’ farm subsidies disadvantaged their farmers. U.S. cotton subsidies are a clear example. Some of the poorest countries in West Africa have traditionally been cotton exporters. In 2001 and 2002, they faced a world price of cotton ranging from thirty-five cents to forty-five cents per pound. 
Meanwhile, cotton growers in the United States, the world’s largest exporter, received seventy cents or more per pound from the subsidies plus the market price. Economists have estimated that U.S. exports of cotton would have been substantially lower, and the world price of cotton 10 to 15 percent higher, if U.S. cotton subsidies had been unavailable during this period. Reducing farm subsidies in the United States and other rich countries would help poor cotton growers and other farmers in poor countries, and, moreover, would begin a process of relying more on trade rather than aid for economic growth. Taxpayers in rich countries would gain in two ways: by paying lower subsidies to their farmers and by paying lower subsidies to people in poor countries. The WTO is the key forum for nations to pursue reforms of global agricultural policies, but this forum may not be sufficient. In wealthy nations such as the United States, farm subsidies, though large in total, are relatively minor political issues for most voters. The reason is that the cost per voter, in higher taxes and higher food prices, is small. For farmers, though, the gain per person is large. Hence, the domestic political stage is set for continued transfers from a broad constituency of voters, who pay little attention to the issue, to a much smaller group, for whom farm subsidies are vital to their short-run economic well-being. This dilemma is not unique to farm subsidies, and in fact is a central concern of political economy (see political behavior). Nonetheless, with widespread attention currently being drawn to the issue, more people are open to understanding the damaging effects of farm subsidies.

About the Author

Daniel A. Sumner is the Frank H. Buck Jr. Chair Professor in the Department of Agricultural and Resource Economics at the University of California, Davis, and the director of the University of California Agricultural Issues Center.
He was previously the assistant secretary for economics at the U.S. Department of Agriculture.

Further Reading

Alston, Julian M., and Jennifer S. James. “The Incidence of Agricultural Policy.” Chapter 33 in B. L. Gardner and G. C. Rausser, eds., Handbook of Agricultural Economics. Vol. 2. Amsterdam: Elsevier, 2002. Pp. 1689–1749.
Benedict, Murray R. Farm Policies of the United States, 1790–1950: A Study of Their Origins and Development. New York: Twentieth Century Fund, 1953.
Gardner, Bruce L. “Changing Economic Perspectives on the Farm Problem.” Journal of Economic Literature 30 (March 1992): 62–101.
Johnson, D. Gale. Agriculture and Trade: A Study of Inconsistent Policies. New York: John Wiley and Son, 1950.
Johnson, D. Gale. World Agriculture in Disarray. 2d ed. London: Macmillan, 1991.
Organization for Economic Cooperation and Development. Agricultural Policies in OECD Countries: Monitoring and Evaluation. Paris: OECD, 2003.
Sumner, Daniel A. “Implications of the USA Farm Bill of 2002 for Agricultural Trade and Trade Negotiations.” Australian Journal of Agricultural and Resource Economics 47, no. 1 (2003): 117–140.
Wright, B. D. “Goals and Realities for Farm Policy.” In D. A. Sumner, ed., Agricultural Policy Reform in the United States. Washington, D.C.: AEI Press, 1995. Pp. 9–44.

Related Links

Comparative Advantage. Concise Encyclopedia of Economics.
Redistribution. Concise Encyclopedia of Economics.
David Ricardo. Concise Encyclopedia of Economics.
Daniel Sumner on the Political Economy of Agriculture. EconTalk, February 2015.
Frank William Taussig, Some Aspects of the Tariff Question. Harvard University Press, 1915.


Advertising

Economic analysis of advertising dates to the 1930s and 1940s, when critics attacked it as a monopolistic and wasteful practice. Defenders soon emerged who argued that advertising promotes competition and lowers the costs of providing information to consumers and distributing goods. Today, most economists side with the defenders most of the time. Advertising comes in many different forms: grocery ads that feature weekly specials, “feel-good” advertising that merely displays a corporate logo, ads with detailed technical information, and those that promise “the best.” Critics and defenders have often adopted extreme positions, attacking or defending any and all advertising. But, at the very least, it seems safe to say that the information firms convey in advertising is not systematically worse than the information volunteered in political campaigns or used car ads. Modern economics views advertising as a type of promotion, in the same vein as direct selling by salespersons and promotional price discounts. If we focus on the problems firms face in promoting their wares, rather than on advertising as an isolated phenomenon, it is easier to understand why advertising is used in some circumstances and not in others.

Scope

While advertising has its roots in the advance of literacy and the advent of inexpensive mass newspapers in the nineteenth century, modern advertising as we know it began early in the twentieth century with two new products, Kellogg cereals and Camel cigarettes. What is generally credited as the first product endorsement also stems from this period: Honus Wagner’s autograph was imprinted on the Louisville Slugger in 1905. Advertising as a percentage of GDP has stayed relatively constant since the 1920s, at roughly 2 percent. About 60 percent of advertising is national rather than local. Table 1 shows national and local expenditures since 1940.
In 2002, newspapers accounted for some 19 percent of total advertising expenditures; magazines for 5 percent; broadcast and cable television for 23 percent; radio for 8 percent; direct mail for 19 percent; and miscellaneous techniques such as yellow pages, billboards, and the Goodyear blimp for the remaining 27 percent. Internet advertising accounted for 2 percent of total advertising expenditures. One popular argument in favor of advertising is that it provides financial support for newspapers, radio, and television. In reply, critics remark that advertiser-supported radio and television programming is of low quality because it appeals to those who are easily influenced by advertising. They also charge that advertiser-supported newspapers and magazines are too reluctant to criticize products of firms that are actual or potential advertisers.

Table 1 Advertising Expenditures (billions $)

        National   Local    Total   % of GDP
1940        1.2      0.9      2.1      2.11
1950        3.3      2.4      5.7      1.98
1960        7.3      4.7     12.0      2.28
1970       11.4      8.2     19.6      1.89
1980       29.8     23.7     53.5      1.91
1990       73.6     56.3    130.0      2.24
2000      151.7     95.8    247.5      2.52
2002      145.7     91.8    237.4      2.27

Sources: Statistical Abstract of the United States, 1987, 537; and 2002, 438 and 772; U.S. Historical Statistics, Colonial Times to 1970, Series T444; and Advertising Age, May 6, 1991, p. 16. Numbers may not add up due to rounding.

While aggregate expenditures on advertising have remained steady as a percentage of GDP, the intensity of spending varies greatly across firms and industries (see Table 2). Many inexpensive consumer items, such as over-the-counter drugs, cosmetics, and razor blades, are heavily advertised. Advertising-to-sales ratios also are high for food products such as soft drinks, breakfast cereals, and beer. And there is remarkable stability in this pattern from country to country. A type of product that is heavily advertised in the United States tends to be heavily advertised in Europe, as well.
Even within an industry, however, some firms will advertise more than others. Among pharmaceutical manufacturers, for example, Merck and Bayer spend less than 5 percent of sales on advertising, while Pfizer spends in excess of 12 percent. The differences among industries, while stable, are deceptive. For example, automakers typically spend only 1 to 2 percent of sales on advertising, but their products are heavily promoted by the sales staffs in dealer showrooms. Similarly, industrial products are not heavily advertised because trade fairs and point-of-sale promotion are often more cost-effective than advertising. Products with relatively few customers may not be advertised at all or advertised solely in specialized publications.

Economic Function

While discussions of advertising often emphasize persuasion and the creation of brand loyalty, economists tend to emphasize other, perhaps more important, functions. The rise of the self-service store, for example, was aided by consumer knowledge of branded goods. Before the advent of advertising, customers relied on knowledgeable shopkeepers for help in selecting products, which often were unbranded. Today, consumer familiarity with branded products is one factor making it possible for far fewer retail employees to serve the same number of customers.
Table 2. Advertising-to-Sales Ratios (%), Top Twenty Industries, 2003

Loan brokers                                      38.4
Health services                                   32.5
Distilled and blended liquor                      14.9
Miscellaneous publishing                          12.9
Sugar and confectionery products                  11.7
Soap, detergent, and toilet preparations          11.3
Amusement parks                                   10.7
Food and kindred products                         10.2
Special cleaning and polishing preparations        9.7
Knitting mills                                     9.6
Television broadcast stations                      9.3
Beverages                                          9.2
Water transportation                               8.8
Malt beverages                                     8.5
Heating equipment and plumbing fixtures            8.4
Motion picture and video tape production           8.4
Rubber and plastic footwear                        8.4
Games, toys, children’s vehicles, except dolls     8.2
Dolls and stuffed toys                             7.8
Cable and other pay TV services                    7.7

Source: Advertising Age, online at: http://www.adage.com/page.cms?pageId=1013. Note: Top twenty industries among the two hundred industries spending the most on advertising.

Newly introduced products are typically advertised more heavily than established ones, as are products whose customers are constantly changing. For example, cosmetics, mouthwash, and toothpaste are marked by high rates of new product introductions because customers are willing to abandon existing products and try new ones. Viewed this way, consumer demand generates new products and the advertising that accompanies them, not the other way around. In a similar vein, “noninformative,” or image, advertising can be usefully thought of as something that customers demand along with the product. Customers often want to see themselves as athletic, adventuresome, or spontaneous, and vendors of beer, cars, and cell phones bundle the image and the physical product. When some customers are unwilling to pay for image, producers that choose not to advertise can supply them with a cheaper product.
Often, the same manufacturer will respond to these differences in customer demands by producing both a high-priced, labeled, heavily advertised version of a product and a second, low-priced line as an unadvertised house brand or generic product. In baked goods, canned goods, and dairy products, for example, some manufacturers sell one version under their own nationally known label and another slightly different version under a particular grocery chain’s private label. Advertising messages obviously can be used to mislead, but a heavily advertised brand name limits the scope for deception and poor quality. A firm with a well-known brand suffers serious damage to an image that it has paid dearly to establish when a defective product reaches the consumer (see brand names). Interestingly, even under central planning, officials in the Soviet Union encouraged the use of brand names and trademarks in order to monitor which factories produced defective merchandise and to allow consumers to inform themselves about products available from various sources.

Monopoly

Early opinion among many economists was summarized by Henry Simons, who wrote in 1948 that “a major barrier to really competitive enterprise and efficient service to consumers is to be found in advertising—in national advertising especially, and in sales organizations which cover great national and regional areas.” Economic debate in the 1950s focused on whether advertising promotes monopoly by creating a “barrier to entry.” Heavy advertising of existing brands, many economists thought, might make consumers less likely to try new brands, thus raising the cost of entry for newcomers. Other economists speculated that advertising makes consumers less sensitive to price, allowing firms that advertise to raise their prices above competitive levels.
Economic researchers addressed this issue by examining whether industries marked by heavy advertising were also more concentrated (see industrial concentration) or had higher profits. The correlation between advertising intensity and industry concentration turned out to be very low and varied from sample to sample, and it is largely ignored today. What is more, early research found that high levels of advertising in an industry were associated with unstable market shares, consistent with the idea that advertising promoted competition rather than monopoly. The idea that advertising creates monopoly was supported by studies that found high rates of return in industries with high levels of advertising. As other economists pointed out, however, the accounting rates of return used to measure profits do not treat advertising as an asset. Consequently, measured rates of return—income divided by measured assets—will often overstate profit rates for firms and industries with heavy advertising. Subsequent work showed that when attention is restricted to industries with relatively small bias in the accounting numbers, the correlation between rates of return and amount of advertising disappears. A lucky by-product of the advertising-and-profits dispute was a set of studies that estimated depreciation rates of advertising—the rates at which advertising loses its effect. Typically, estimated rates are about 33 percent per year, though some authors have found rates as low as 5 percent. Contrary to the monopoly explanation (and to the assertion that advertising is a wasteful expense), advertising often lowers prices. In a classic study of advertising restrictions on optometrists, Lee Benham found that eyeglass prices were twenty dollars higher (in 1963 dollars) in states banning advertising than in those that did not. Bans on price advertising, but not on other kinds of advertising, resulted in prices nearly as low as in the states without any restrictions at all.
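The accounting bias described above can be sketched with a few hypothetical numbers (none of the dollar figures below come from the article; only the 33 percent depreciation rate echoes the estimates cited). In a steady state, a firm's unmeasured advertising capital equals annual spending divided by the depreciation rate, and adding that stock to the asset base pulls the measured rate of return down:

```python
# Illustrative (hypothetical numbers): expensing advertising instead of
# capitalizing it inflates the measured rate of return.

income = 15.0           # annual accounting income, $ millions
tangible_assets = 100.0 # measured (tangible) assets, $ millions
ad_spend = 20.0         # annual advertising outlay, expensed under accounting rules
delta = 0.33            # assumed annual depreciation rate of advertising's effect

# In steady state, the unmeasured advertising capital stock is spend / delta.
ad_stock = ad_spend / delta

measured_ror = income / tangible_assets
# Treating advertising as an asset adds the stock to the asset base.
# (In steady state, expensed spending equals true depreciation, so income is unchanged.)
corrected_ror = income / (tangible_assets + ad_stock)

print(f"measured  rate of return: {measured_ror:.1%}")   # 15.0%
print(f"corrected rate of return: {corrected_ror:.1%}")  # lower
```

The heavier the advertising relative to tangible assets, the larger the gap, which is why the bias matters most in exactly the industries the monopoly studies focused on.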
Benham argued that advertising allows high-volume, low-cost retailers to communicate effectively with potential customers, even if they cannot mention price explicitly. The importance of price advertising, however, apparently varies with the way consumers typically obtain price information and make purchase decisions. An unpublished study by Al Ehrbar found gasoline prices to be significantly higher (about 6 percent, net of excise taxes) in communities that prohibited large price signs in gas stations.

Regulation

In the past, many professionals, such as doctors, lawyers, and pharmacists, succeeded in getting state legislatures to implement complete or partial bans on advertising in their professions, preventing either all advertising or advertising of prices. In recent decades court decisions have overturned these restrictions. At the federal level, the U.S. Federal Trade Commission has jurisdiction over advertising by virtue of its ability to regulate “deceptive” acts or practices. It can issue cease-and-desist orders, require corrective advertising, and mandate disclosure of certain information in ads. The regulation of cigarette advertising has been particularly controversial. The Federal Trade Commission has required cigarette manufacturers to disclose tar and nicotine content since 1970, although it had prohibited precisely the same disclosure before that. Beginning January 1, 1971, the federal government also banned all radio and television advertising of cigarettes. While overall cigarette advertising expenditures dropped by more than 20 percent, per capita cigarette consumption remained unchanged for many years. Critics of the regulations maintain that it was the growing evidence of the harmful effects of smoking, rather than the reduction in advertising, that ultimately led to the smaller percentage of smokers in society. The critics also contend that the advertising ban may have slowed the rate at which low-tar cigarettes were adopted.
Governmental Advertising

Governments have funded or mandated advertising to reduce harmful behaviors such as smoking, drunk driving, and drug use. Researchers have not devoted much attention to such efforts. One exception is California’s Proposition 99, passed in 1988, which increased taxes on cigarettes from ten to thirty-five cents per package and earmarked 20 percent for educational programs, including an antismoking campaign. The one study that looked at this measure found that increased expenditures on antismoking measures were associated with declines in per capita cigarette sales.

About the Author

George Bittlingmayer is the Wagnon Distinguished Professor of Finance at the University of Kansas School of Business. He was previously an economist with the Federal Trade Commission.

Further Reading

Becker, Gary S., and Kevin M. Murphy. “A Simple Theory of Advertising as Good or Bad.” Quarterly Journal of Economics 108 (November 1993): 941-964.
Benham, Lee. “The Effect of Advertising on the Price of Eyeglasses.” Journal of Law and Economics 15 (October 1972): 337-352.
Borden, Neil H. The Economic Effects of Advertising. Chicago: Irwin, 1942.
Chaloupka, Frank J. “Public Policies and Private Anti-health Behavior.” American Economic Review 85 (May 1995): 45-49.
Comanor, William S., and Thomas A. Wilson. “Advertising and Competition: A Survey.” Journal of Economic Literature 17 (June 1979): 453-476.
Ekelund, Robert B. Jr., and David S. Saurman. Advertising and the Market Process. San Francisco: Pacific Research Institute for Public Policy, 1988.
Hu, The-Wei, Hai-Yen Sung, and Theodore E. Keeler. “The State Antismoking Campaign and the Industry Response: The Effects of Advertising on Cigarette Consumption in California.” American Economic Review 85 (May 1995): 85-90.
Landes, Elisabeth M., and Andrew M. Rosenfield. “The Durability of Advertising Revisited.” Journal of Industrial Economics 42 (September 1994): 263-276.
Rubin, Paul. “Regulation of Information and Advertising.” In Barry Keating, ed., A Companion to the Economics of Regulation. London: Blackwell, 2004.
Schmalensee, Richard. The Economics of Advertising. Amsterdam: North-Holland, 1972.
Telser, Lester. “Advertising and Competition.” Journal of Political Economy 72 (December 1964): 537-562.
Telser, Lester. “Some Aspects of the Economics of Advertising.” Journal of Business 41 (April 1968): 166-173.

Related Links

Capitalism. Concise Encyclopedia of Economics.
O’Donohoe on Potato Chips and Salty Snacks. EconTalk, August 2011.
Rory Sutherland on Alchemy. EconTalk, November 2019.


Arts

General economic principles govern the arts. Most important, artists use scarce means to achieve ends—and therefore recognize trade-offs, the defining aspect of economic behavior. In many other respects, too, the arts resemble the more typical goods and services that economists analyze. As in other economic sectors, marketplace exchange provides more choice for both consumers and producers. Adam Smith’s famous maxim that the division of labor is limited by the extent of the market applies no less to the arts. Larger markets support more diverse and numerous artistic styles. The advent of musical recording, for instance, expanded the market and enabled jazz, blues, country, ragtime, gospel, and rhythm and blues to find larger audiences. Each genre then split into diverse branches, as when rhythm and blues evolved into rock and roll, Motown, rap, soul, and so on. We find the same trends in literature. The book superstore and Amazon.com help many niche writers—not just authors of bestsellers—market their works to readers. The identification and marketing techniques of mass culture help artists to reach smaller groups of buyers, thereby giving artists a better chance to make a living from their work. In short, mass culture and niche culture are complements, not substitutes. The arts also illustrate the more general benefits of wealth, a common theme in economics. The richer the society, the more options artists have. As wealth increases, so does the number of potential buyers, allowing artists to pick and choose their projects and to walk away if they do not like the terms of the commission. The pope had to beg Michelangelo to come back to finish the Sistine Chapel because the artist had many other potential customers at the time. Artistic freedom, while rarely absolute, is a product of prosperity. Family wealth has also helped on the supply side: many artists lived off it for much of their careers.
In France, for instance, Delacroix, Degas, Manet, Monet, Cézanne, Toulouse-Lautrec, Proust, Baudelaire, and Flaubert all relied on parental wealth to some extent. Some of these creators attacked the bourgeoisie of their time, in spite of the fact that a bourgeois society with its widespread wealth gave them their artistic freedom. Wealth also gives rise to charity, one source of funding for the arts. In the United States, from 1965 to 1990, the number of symphony orchestras rose from 58 to nearly 300, the number of opera companies rose from 27 to more than 150, and the number of nonprofit regional theaters rose from 22 to 500. Charitable donations are key to all these artistic forms. Individual, corporate, and foundation donors make up about 45 percent of the budget for nonprofit arts institutions. Twelve percent of their income comes from foundation grants alone, two and a half times as much as from the National Endowment for the Arts and state arts councils combined. Contrary to common opinion, the commercial incentives brought by wealth are not typically corrupting. Many artists chase profits, but the commercial and artistic impulses are not always at war. The letters of Bach, Mozart, Haydn, and Beethoven reveal that all were obsessed with earning money. Mozart wrote in one of his letters: “Believe me, my sole purpose is to make as much money as possible; for after good health it is the best thing to have.” Charlie Chaplin once noted: “I went into the business for money and the art grew out of it.” Many talented artists are motivated not just by narrow self-interest, but also by a desire to make money to help friends or to finance their creative urges. The idea that the great artists in history were starving has been overplayed. No doubt many artists earn low incomes, in part because a market economy gives so many people the chance to shoot for an artistic career: the large number of would-be artists depresses wages. 
But many artists have earned a good living by selling their products to audiences or winning the loyalty of patrons. Michelangelo and Raphael were wealthy men in their time; indeed, most of the Italian Renaissance artists were commercially successful. More generally, most famous artists commanded high prices in their lifetimes. Shakespeare worked in the for-profit theater world and did not need patronage. Even when artists cannot afford to make art for a living, a capitalist economy gives them the best chance of moonlighting. T. S. Eliot worked in Lloyd’s bank, James Joyce taught languages, Charles Ives and Wallace Stevens were insurance executives, and William Faulkner worked in a power plant. They all managed to create, either on the job or in their spare time. Just as technological progress has helped create new industries and more options for consumers in other areas of the economy, it does so in art also. We take cheap paper for granted, but the Renaissance arts blossomed only when paper became cheap enough for most artists to afford. The French Impressionists used new colors, based on new scientific research on chemicals, that came from the industrial revolution. Rock and roll required the electric guitar and the advanced recording studio. Whereas John Keats, Mozart, and Schubert all died young of illnesses such as tuberculosis, medical advances allow modern artists to live longer and produce more. On the other side, economist William Baumol, one of the first to write on the economics of the arts, suggested that the performing arts are “technologically stagnant,” not allowing significant productivity improvements. He noted that it still takes thirty minutes for four people to play a Mozart string quartet. As real wages rise in a growing economy, therefore, argued Baumol, the wage cost of a Mozart string quartet will rise. But Baumol ignored other aspects of technology that make such quartets more economically feasible. 
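Baumol's arithmetic, and the broadcasting rejoinder, can be sketched with hypothetical numbers (the wages and audience sizes below are invented for illustration): the labor input of a live quartet performance is fixed, so its cost tracks wages, while a broadcast spreads that cost over an arbitrarily large audience.

```python
# A sketch of Baumol's "cost disease" with made-up numbers.

players = 4
hours = 0.5  # a Mozart string quartet takes about thirty minutes

def cost_per_listener(wage, audience):
    # Labor cost of one performance, divided among the audience.
    return players * hours * wage / audience

# Wages double as the economy grows; the concert-hall audience does not.
hall = cost_per_listener(wage=20.0, audience=200)
hall_later = cost_per_listener(wage=40.0, audience=200)

# A broadcast of the same performance reaches a million listeners.
broadcast_later = cost_per_listener(wage=40.0, audience=1_000_000)

print(f"hall, before wage growth:  ${hall:.4f} per listener")
print(f"hall, after wages double:  ${hall_later:.4f} per listener")
print(f"broadcast, after doubling: ${broadcast_later:.6f} per listener")
```

The hall cost per listener doubles with wages, exactly Baumol's point, while the broadcast cost per listener stays negligible, which is the technological escape hatch the next paragraph describes.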
Modern broadcasting, for example, makes such quartets available not just to hundreds of people at a time, but to millions. And one reason this is a “golden age” for first-rate string quartets is that air travel allows them to perform around the globe and recruit members from different countries and continents. The contemporary Kronos Quartet plays both Bartok and Jimi Hendrix, relying on new ideas made possible by growing markets and technology. The issue of free trade has long been resolved in economics: trade is good (see free trade). But only recently has the globalization of art attracted much attention. Contrary to the principles of Adam Smith, many critics, such as Benjamin Barber, argue that cultural trade brings us a culture of the least common denominator—Reebok, McDonald’s, and bad TV shows. The evidence, though, supports free trade in the arts. Trade has done much to stimulate international diversity, just as it supports diversity within the borders of a single country. The Third World has produced many notable authors and moviemakers over the last several decades. Gabriel García Márquez, Naguib Mahfouz, V. S. Naipaul, and John Woo are all products of globalized culture whose art could not have existed without significant international trade. The metal carving knife—a product of the West—is a boon for poor carvers and sculptors around the world, as acrylic paints and canvas are for new forms of the visual arts. World music blossomed in the twentieth century, most of all in open, cosmopolitan cities such as Lagos, Rio de Janeiro, and pre-Castro Havana. Many Third World creators earn their living by selling their products to wealthy Western consumers. Western purchasing power has been central to Haitian naïve art, Jamaican reggae music, Navajo weaving, and the Persian carpet boom of the late nineteenth century, among many other artistic movements. Because trade tends to make countries more similar, nations may appear less diverse.
But countries become more similar in a largely beneficial way by developing a varied menu of cultural choice. One can now buy sushi in Germany and France, but this hardly counts as cultural destruction. Individuals have greater opportunity to pursue cultural paths of their choosing. So, although differences across societies decrease, diversity within a given society increases. Just as trade improves the arts, restrictions on trade damage them. French cultural protectionism, originally imposed by the Vichy government and the Nazis, was retained by the French after the war. Before that time, French culture, including French cinema, flourished under largely free trade conditions. As subsidies and protectionism have grown, French cinema has suffered increasing economic difficulties. Subsidy defenders argue either that supporting the arts is intrinsically valuable or that subsidies to art bring additional social or economic benefits. Subsidy critics challenge both presumptions: relying on the market, they say, may give us a better menu of choices. Arts in the United States receive much smaller direct subsidies than do those in Western Europe. The budget for the National Endowment for the Arts has never exceeded $170 million (and was only $115 million in 2003), an amount far less than the budget of many Hollywood movies. Yet the American arts are economically healthier in many regards because of their stronger commercial roots. Just as government regulation slowed innovation in industries such as airlines and trucking, government regulators tried to slow innovation in the arts. Fortunately, the government failed. Many of America’s most significant cultural innovations—such as jazz, Hollywood, and rock and roll—flourished in the face of government opposition. At times the American government tried to censor these art forms or meted out especially harsh legal treatment to prominent creators (e.g., Alan Freed, Chuck Berry, and James Brown). 
In sum, the economics of the arts reflects more general economic truths. The arts are often “special” in our hearts, but economic analysis suggests that increases in wealth, commercialization, and globalization are good both for the arts and for those who enjoy them.

About the Author

Tyler Cowen is a professor of economics at George Mason University and the director of both the James Buchanan Center and the Mercatus Center.

Further Reading

Cowen, Tyler. Creative Destruction: How Globalization Is Shaping the World’s Cultures. Princeton: Princeton University Press, 2002.
Cowen, Tyler. In Praise of Commercial Culture. Cambridge: Harvard University Press, 1998.
Heilbrun, James, and Charles M. Gray. The Economics of Art and Culture: An American Perspective. Cambridge: Cambridge University Press, 1993.

Related Links

Tyler Cowen on Liberty, Art, Food, and Everything in Between. EconTalk, March 2007.


Apartheid

The now-defunct apartheid system of South Africa presented a fascinating instance of interest-group competition for political advantage. In light of the extreme human rights abuses stemming from apartheid, it is remarkable


Eugene Fama

Eugene Fama shared the 2013 Nobel Prize in Economic Sciences with Robert Shiller and Lars Peter Hansen. The three received the prize “for their empirical analysis of asset prices.” Fama has played a key role in the development of modern finance, with major contributions to a broad range of topics within the field, beginning with his seminal work on the efficient market hypothesis (EMH) and stock market behavior, and continuing with work on financial decision making under uncertainty, capital structure and payout policy, agency costs, the determinants of expected returns, and even banking. His major early contribution was to show that stock markets are efficient (see efficient capital markets). The term “efficient” here does not mean what it normally means in economics—namely, that benefits minus costs are maximized. Instead, it means that prices of stocks rapidly incorporate information that is publicly available. That happens because markets are so competitive: prices now move on earnings news within milliseconds. If someone were certain that a given asset’s price would rise in the future, he would buy the asset now; when a number of people try to buy the stock now, the price rises now. The result is that asset prices immediately reflect current expectations of future value. One implication of market efficiency is that trading rules, such as “buy when the price fell yesterday,” do not work. As financial economist John H. Cochrane has written, many empirical studies have shown that “trading rules, technical systems, market newsletters and so on have essentially no power beyond that of luck to forecast stock prices.” Indeed, Fama’s insight led to the development of index funds by investment management firms. Index funds do away with experts picking stocks in favor of a passive basket of the largest public companies’ stocks. Fama’s insight also has implications for bubbles—that is, asset prices that are higher than justified by market fundamentals.
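Cochrane's point about trading rules can be illustrated with a small simulation (a hypothetical sketch, not from the article): on a simulated series of independent daily returns, the rule "buy when the price fell yesterday" earns no more on average than simply staying in the market.

```python
# Illustrative simulation: a trading rule has no edge on a random walk,
# consistent with weak-form market efficiency.

import random

random.seed(0)
n = 200_000
# i.i.d. daily returns: yesterday's return carries no information about today's.
returns = [random.gauss(0.0, 0.01) for _ in range(n)]

# The rule: hold the asset only on days that follow a down day.
rule_returns = [r for prev, r in zip(returns, returns[1:]) if prev < 0]

always_in = sum(returns) / len(returns)
rule_avg = sum(rule_returns) / len(rule_returns)

print(f"always-in mean daily return:   {always_in:+.5f}")
print(f"rule-based mean daily return:  {rule_avg:+.5f}")
# Both hover near zero, and the gap shrinks toward zero as n grows.
```

Real price series are not exactly i.i.d., which is why the empirical studies Cochrane summarizes had to test such rules on actual data; the simulation only shows what "no forecasting power" looks like when it holds.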
As Fama said in a 2010 interview, “It’s easy to say prices went down, it must have been a bubble, after the fact. I think most bubbles are twenty-twenty hindsight. . . . People are always saying that prices are too high. When they turn out to be right, we anoint them. When they turn out to be wrong we ignore them.” To determine how fully the asset market reflects available information in the real world, one must compare the expected return of an asset with the asset’s risk (both of which must be estimated). Fama called this the “joint hypothesis problem.” Testing the EMH directly is also difficult because the researcher would need to stop the flow of information while allowing trading to continue. Surprisingly, soccer betting allows a simplified form of the EMH to be tested in a way that bypasses the joint hypothesis problem: during halftime, play ceases, so no new information arrives, yet betting continues. In an Economic Journal article, Karen Croxson and J. James Reade1 studied the reaction of soccer betting prices to goals scored moments before halftime. They found that betting continued heavily throughout halftime, but the betting prices did not change—consistent with the EMH. Fama does not claim that real-world financial markets are perfectly efficient. Under perfect efficiency, prices incorporate all information all the time. Fama studied the correlation between a stock’s long-term returns and its dividend-to-stock-price ratio. If stock price changes followed a truly “random walk,” there would be no correlation. This was not the case: there was a positive correlation between the dividend-to-stock-price ratio and long-term expected returns. Some financial economists, such as Shiller, do not believe that markets are efficient. But certainly markets are at least somewhat efficient. If markets were perfectly inefficient, a firm’s characteristics would have no relation to its stock price. This is unrealistic.
A firm that is on the brink of going out of business could have a capital value higher than that of Apple or Exxon. In the early 1990s, Fama and co-author Kenneth R. French developed a three-factor model of stock prices in response to the “anomaly” literature of the 1980s, which some economists saw as evidence against the EMH. The three-factor model introduces two new factors—company size and value—in addition to beta, as determinants of expected returns. Beta is a measure of risk from the well-known Capital Asset Pricing Model (CAPM). They found that, on the assumption that assets are priced efficiently, the evidence is consistent with the idea that “value stocks”—those whose share prices appear low relative to the book value of equity—and small-company stocks are riskier and, thus, earn higher returns. Their interpretation is subject to the above-mentioned joint hypothesis problem, however. Financial economists Josef Lakonishok, Andrei Shleifer, and Robert Vishny2 give evidence that value stocks are not riskier. They argue against the idea that assets are priced efficiently; their view is that the reason value strategies yield higher returns is that such strategies “exploit the suboptimal behavior of the typical investor.” Fama strongly opposed the 2008 selective bailout of Wall Street firms, arguing that, without it, financial markets would have sorted themselves out within “a week or two.” He also argued that “if it becomes the accepted norm that the government steps in every time things go bad, we’ve got a terrible adverse selection problem.” Eugene Fama earned his B.A. in Romance Languages from Tufts University in 1960. Shifting gears, he earned both an M.B.A. and a Ph.D. from the University of Chicago Graduate School of Business in 1963. He then joined the faculty of the University of Chicago Business School, which later became the Booth School of Business, where he is currently the Robert R. McCormick Distinguished Service Professor of Finance. 
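The three-factor regression described above can be sketched on synthetic data (a minimal illustration; the loadings 1.1, 0.5, and 0.3 and all distributional parameters below are invented for the example). The model explains a stock's excess return as alpha plus loadings on the market excess return, the size factor (SMB, small minus big), and the value factor (HML, high minus low):

```python
# Fama-French three-factor regression on synthetic data:
#   R_i - R_f = a + b*(R_m - R_f) + s*SMB + h*HML + e

import numpy as np

rng = np.random.default_rng(0)
n = 5_000

mkt = rng.normal(0.005, 0.04, n)   # market excess return
smb = rng.normal(0.002, 0.03, n)   # small-minus-big size factor
hml = rng.normal(0.003, 0.03, n)   # high-minus-low value factor

# Simulate a stock's excess return with known (made-up) loadings plus noise.
excess = 1.1 * mkt + 0.5 * smb + 0.3 * hml + rng.normal(0.0, 0.02, n)

# Ordinary least squares recovers the loadings.
X = np.column_stack([np.ones(n), mkt, smb, hml])
coefs, *_ = np.linalg.lstsq(X, excess, rcond=None)
alpha, b, s, h = coefs
print(f"alpha={alpha:+.4f}  beta={b:.2f}  s={s:.2f}  h={h:.2f}")
```

In the empirical literature the factors are built from actual portfolio returns rather than drawn from a random number generator; the sketch only shows the regression's form and how the loadings are estimated.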
Selected Works

1965. “The Behavior of Stock Market Prices.” Journal of Business 38, no. 1 (January 1965): 34-105.
1970. “Efficient Capital Markets: A Review of Theory and Empirical Work.” Journal of Finance 25, no. 2 (May 1970): 383-417.
1971. “Risk, Return, and Equilibrium.” Journal of Political Economy 79, no. 1 (January-February 1971): 30-55.
1976. Foundations of Finance. New York: Basic Books.
1988 (with Kenneth R. French). “Permanent and Temporary Components of Stock Prices.” Journal of Political Economy 96, no. 2 (April 1988): 246-273.
1992 (with Kenneth R. French). “The Cross-Section of Expected Stock Returns.” Journal of Finance 47, no. 2 (June 1992): 427-465.

Footnotes

1. Croxson, Karen, and J. James Reade. “Information and Efficiency: Goal Arrival in Soccer Betting.” Economic Journal 124, no. 575 (March 2014): 62-91.
2. Lakonishok, Josef, Andrei Shleifer, and Robert W. Vishny. “Contrarian Investment, Extrapolation, and Risk.” Journal of Finance 49, no. 5 (December 1994): 1541-1578.
