International Trade Agreements

Ever since Adam Smith published The Wealth of Nations in 1776, the vast majority of economists have accepted the proposition that free trade among nations improves overall economic welfare. Free trade, usually defined as the absence of tariffs, quotas, or other governmental impediments to international trade, allows each country to specialize in the goods it can produce cheaply and efficiently relative to other countries. Such specialization enables all countries to achieve higher real incomes.

Although free trade provides overall benefits, removing a trade barrier on a particular good hurts the shareholders and employees of the domestic industry that produces that good. Some of the groups that are hurt by foreign competition wield enough political power to obtain protection against imports. Consequently, barriers to trade continue to exist despite their sizable economic costs. According to the U.S. International Trade Commission, for example, the U.S. gain from removing trade restrictions on textiles and apparel would have been almost twelve billion dollars in 2002 alone. This is a net economic gain after deducting the losses to firms and workers in the domestic industry. Yet, domestic textile producers have been able to persuade Congress to maintain tight restrictions on imports.

While virtually all economists think free trade is desirable, they differ on how best to make the transition from tariffs and quotas to free trade. The three basic approaches to trade reform are unilateral, multilateral, and bilateral. Some countries, such as Britain in the nineteenth century and Chile and China in recent decades, have undertaken unilateral tariff reductions—reductions made independently and without reciprocal action by other countries. The advantage of unilateral free trade is that a country can reap the benefits of free trade immediately.
Countries that lower trade barriers by themselves do not have to postpone reform while they try to persuade other nations to follow suit. The gains from such trade liberalization are substantial: several studies have shown that income grows more rapidly in countries open to international trade than in those more closed to trade. Dramatic illustrations of this phenomenon include China’s rapid growth after 1978 and India’s after 1991, those dates indicating when major trade reforms took place. For many countries, unilateral reforms are the only effective way to reduce domestic trade barriers.

However, multilateral and bilateral approaches—dismantling trade barriers in concert with other countries—have two advantages over unilateral approaches. First, the economic gains from international trade are reinforced and enhanced when many countries or regions agree to a mutual reduction in trade barriers. By broadening markets, concerted liberalization of trade increases competition and specialization among countries, thus giving a bigger boost to efficiency and consumer incomes. Second, multilateral reductions in trade barriers may reduce political opposition to free trade in each of the countries involved. That is because groups that otherwise would oppose or be indifferent to trade reform might join the campaign for free trade if they see opportunities for exporting to the other countries in the trade agreement. Consequently, free trade agreements between countries or regions are a useful strategy for liberalizing world trade.

The best possible outcome of trade negotiations is a multilateral agreement that includes all major trading countries. Then, free trade is widened to allow many participants to achieve the greatest possible gains from trade. After World War II, the United States helped found the General Agreement on Tariffs and Trade (GATT), which quickly became the world’s most important multilateral trade arrangement.
The major countries of the world set up the GATT in reaction to the waves of protectionism that crippled world trade during—and helped extend—the Great Depression of the 1930s. In successive negotiating “rounds,” the GATT substantially reduced the tariff barriers on manufactured goods in the industrial countries. Since the GATT began in 1947, average tariffs set by industrial countries have fallen from about 40 percent to about 5 percent today. These tariff reductions helped promote the tremendous expansion of world trade after World War II and the concomitant rise in real per capita incomes among developed and developing nations alike. The annual gain from removal of tariff and nontariff barriers to trade as a result of the Uruguay Round Agreement (negotiated under the auspices of the GATT between 1986 and 1993) has been put at about $96 billion, or 0.4 percent of world GDP.

In 1995, the GATT became the World Trade Organization (WTO), which now has more than 140 member countries. The WTO oversees four international trade agreements: the GATT, the General Agreement on Trade in Services (GATS), and agreements on trade-related intellectual property rights and trade-related investment (TRIPS and TRIMS, respectively). The WTO is now the forum for members to negotiate reductions in trade barriers; the most recent round of negotiations is the Doha Development Round, launched in 2001. The WTO also mediates disputes between member countries over trade matters. If one country’s government accuses another country’s government of violating world trade rules, a WTO panel rules on the dispute. (The panel’s ruling can be appealed to an appellate body.) If the WTO finds that a member country’s government has not complied with the agreements it signed, the member is obligated to change its policy and bring it into conformity with the rules.
If the member finds it politically impossible to change its policy, it can offer compensation to other countries in the form of lower trade barriers on other goods. If it chooses not to do this, then other countries can receive authorization from the WTO to impose higher duties (i.e., to “retaliate”) on goods coming from the offending member country for its failure to comply.

As a multilateral trade agreement, the GATT requires its signatories to extend most-favored-nation (MFN) status to other trading partners participating in the WTO. MFN status means that each WTO member receives the same tariff treatment for its goods in foreign markets as that extended to the “most-favored” country competing in the same market, thereby ruling out preferences for, or discrimination against, any member country.

Although the WTO embodies the principle of nondiscrimination in international trade, Article 24 of the GATT permits the formation of free-trade areas and “customs unions” among WTO members. A free-trade area is a group of countries that eliminate all tariffs on trade with each other but retain autonomy in determining their tariffs with nonmembers. A customs union is a group of countries that eliminate all tariffs on trade among themselves but maintain a common external tariff on trade with countries outside the union (thus technically violating MFN). The customs union exception was designed, in part, to accommodate the formation of the European Economic Community (EC) in 1958. The EC, originally formed by six European countries, is now known as the European Union (EU) and includes twenty-seven European countries. The EU has gone beyond simply reducing barriers to trade among member states and forming a customs union. It has moved toward even greater economic integration by becoming a common market—an arrangement that eliminates impediments to the mobility of factors of production, such as capital and labor, between participating countries.
As a common market, the EU also coordinates and harmonizes each country’s tax, industrial, and agricultural policies. In addition, many members of the EU have formed a single currency area by replacing their domestic currencies with the euro.

The GATT also permits free-trade areas (FTAs), such as the European Free Trade Association, which is composed primarily of Scandinavian countries. Members of FTAs eliminate tariffs on trade with each other but retain autonomy in determining their tariffs with nonmembers.

In recent years, one difficulty with the WTO system has been maintaining and extending the liberal world trading system. Multilateral negotiations over trade liberalization move very slowly, and the requirement for consensus among the WTO’s many members limits how far agreements on trade reform can go. As Mike Moore, a recent director-general of the WTO, put it, the organization is like a car with one accelerator and 140 hand brakes. While multilateral efforts have successfully reduced tariffs on industrial goods, they have had much less success in liberalizing trade in agriculture, textiles, and apparel, and in other areas of international commerce. Recent negotiations, such as the Doha Development Round, have run into problems, and their ultimate success is uncertain. As a result, many countries have turned away from the multilateral process toward bilateral or regional trade agreements. One such agreement is the North American Free Trade Agreement (NAFTA), which went into effect in January 1994. Under the terms of NAFTA, the United States, Canada, and Mexico agreed to phase out all tariffs on merchandise trade and to reduce restrictions on trade in services and foreign investment over a decade. The United States also has bilateral agreements with Israel, Jordan, Singapore, and Australia and is negotiating bilateral or regional trade agreements with countries in Latin America, Asia, and the Pacific.
The European Union also has free-trade agreements with other countries around the world. The advantage of such bilateral or regional arrangements is that they promote greater trade among the parties to the agreement. They may also hasten global trade liberalization if multilateral negotiations run into difficulties. Recalcitrant countries excluded from bilateral agreements, and hence not sharing in the increased trade these bring, may then be induced to join and reduce their own barriers to trade. Proponents of these agreements have called this process “competitive liberalization,” wherein countries are challenged to reduce trade barriers to keep up with other countries.

For example, shortly after NAFTA was implemented, the EU sought and eventually signed a free-trade agreement with Mexico to ensure that European goods would not be at a competitive disadvantage in the Mexican market as a result of NAFTA.

But these advantages must be offset against a disadvantage: by excluding certain countries, these agreements may shift the composition of trade from low-cost countries that are not party to the agreement to high-cost countries that are. Suppose, for example, that Japan sells bicycles for fifty dollars, Mexico sells them for sixty dollars, and both face a twenty-dollar U.S. tariff. If tariffs are eliminated on Mexican goods, U.S. consumers will shift their purchases from Japanese to Mexican bicycles. The result is that Americans will purchase from a higher-cost source, and the U.S. government receives no tariff revenue. Consumers save ten dollars per bicycle, but the government loses twenty dollars. Economists have shown that if a country enters such a “trade-diverting” customs union, the cost of this trade diversion may exceed the benefits of increased trade with the other members of the customs union. The net result is that the customs union could make the country worse off.
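The bicycle arithmetic above can be checked with a short script. This is a minimal sketch using only the numbers given in the text, and it assumes the tariff is fully passed through to consumer prices:

```python
# Trade diversion example: Japan sells bicycles for $50, Mexico for $60,
# and both initially face a $20 U.S. tariff.
japan_price = 50
mexico_price = 60
tariff = 20

# Before the agreement, consumers buy the cheaper tariff-inclusive bicycle.
price_before = min(japan_price + tariff, mexico_price + tariff)  # $70 (Japanese)

# After the tariff is removed on Mexican bicycles only, Mexico wins the sale.
price_after = min(japan_price + tariff, mexico_price)  # $60 (Mexican)

consumer_saving = price_before - price_after  # $10 per bicycle
lost_tariff_revenue = tariff                  # $20 per bicycle, no longer collected
net_effect = consumer_saving - lost_tariff_revenue

print(f"Consumers save ${consumer_saving} per bicycle, but the government "
      f"loses ${lost_tariff_revenue} in tariff revenue: a net loss of "
      f"${lost_tariff_revenue - consumer_saving} per bicycle.")
```

The net effect is a loss of ten dollars per bicycle, which is exactly the trade-diversion cost the paragraph describes: the consumer gain is smaller than the tariff revenue forgone.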
Critics of bilateral and regional approaches to trade liberalization have many additional arguments. They suggest that these approaches may undermine and supplant, instead of support and complement, the multilateral WTO approach, which is to be preferred for operating globally on a nondiscriminatory basis. Hence, the long-term result of bilateralism could be a deterioration of the world trading system into competing, discriminatory regional trading blocs, resulting in added complexity that complicates the smooth flow of goods between countries. Furthermore, the reform of such issues as agricultural export subsidies cannot be dealt with effectively at the bilateral or regional level.

Despite possible tensions between the two approaches, it appears that both multilateral and bilateral/regional trade agreements will remain features of the world economy.

Both the WTO and agreements such as NAFTA, however, have become controversial among groups such as antiglobalization protesters, who argue that such agreements serve the interests of multinational corporations and not workers, even though freer trade has been a time-proven method of improving economic performance and raising overall incomes. To accommodate this opposition, there has been pressure to include labor and environmental standards in these trade agreements. Labor standards include provisions for minimum wages and working conditions, while environmental standards would prevent trade if environmental damage were feared. One motivation for such standards is the fear that unrestricted trade will lead to a “race to the bottom” in labor and environmental standards as multinationals search the globe for low wages and lax environmental regulations in order to cut costs. Yet there is no empirical evidence of any such race. Indeed, trade usually involves the transfer of technology to developing countries, which allows wage rates to rise, as Korea’s economy—among many others—has demonstrated since the 1960s.
In addition, rising incomes allow cleaner production technologies to become affordable. The replacement of pollution-belching domestically produced scooters in India with imported scooters from Japan, for example, would improve air quality in India.

Labor unions and environmentalists in rich countries have most actively sought labor and environmental standards. The danger is that enforcing such standards may simply become an excuse for rich-country protectionism, which would harm workers in poor countries. Indeed, people in poor countries, whether capitalists or laborers, have been extremely hostile to the imposition of such standards. For example, the 1999 WTO meeting in Seattle collapsed in part because developing countries objected to the Clinton administration’s attempt to include labor standards in multilateral agreements.

A safe prediction is that international trade agreements will continue to generate controversy.

About the Author

Douglas A. Irwin is a professor of economics at Dartmouth College. He formerly served on the staff of the President’s Council of Economic Advisers and on the Federal Reserve Board.

Further Reading

Bhagwati, Jagdish, ed. Going Alone: The Case for Relaxed Reciprocity in Freeing Trade. Cambridge: MIT Press, 2002.
Bhagwati, Jagdish, and Arvind Panagariya, eds. The Economics of Preferential Trade Agreements. Washington, D.C.: AEI Press, 1996.
Irwin, Douglas A. Against the Tide: An Intellectual History of Free Trade. Princeton: Princeton University Press, 1996.
Irwin, Douglas A. Free Trade Under Fire. 2d ed. Princeton: Princeton University Press, 2005.
U.S. International Trade Commission. The Economic Effects of Significant U.S. Import Restraints. Fourth update. USITC Publication no. 3701. June 2004.
Wacziarg, Romain, and Karen H. Welch. “Trade Liberalization and Growth: New Evidence.” NBER Working Paper no. 10152. National Bureau of Economic Research, Cambridge, Mass., 2003.


International Trade

On the topic of international trade, the views of economists tend to differ from those of the general public. There are three principal differences. First, many noneconomists believe that it is more advantageous to trade with other members of one’s nation or ethnic group than with outsiders. Economists see all forms of trade as equally advantageous. Second, many noneconomists believe that exports are better than imports for the economy. Economists believe that all trade is good for the economy. Third, many noneconomists believe that a country’s balance of trade is governed by the “competitiveness” of its wage rates, tariffs, and other factors. Economists believe that the balance of trade is governed by many factors, including the above, but also including differences in national saving and investment.

The noneconomic views of trade all seem to stem from a common root: the tendency for human beings to emphasize tribal rivalries. For most people, viewing trade as a rivalry is as instinctive as rooting for their national team in Olympic basketball. To economists, Olympic basketball is not an appropriate analogy for international trade. Instead, we see international trade as analogous to a production technique. Opening up to trade is equivalent to adopting a more efficient technology. International trade enhances efficiency by allocating resources to increase the amount produced for a given level of effort. Classical liberals, such as Richard Cobden, believed that free trade could bring about world peace by substituting commercial relationships among individuals for competitive relationships between states.1

History of Trade Theory

David Ricardo developed and published one of the first theories of international trade in 1817. “England,” he wrote,


Insider Trading

“Insider trading” refers to transactions in a company’s securities, such as stocks or options, by corporate insiders or their associates based on information originating within the firm that would, once publicly disclosed, affect the prices of such securities. Corporate insiders are individuals whose employment with the firm (as executives, directors, or sometimes rank-and-file employees) or whose privileged access to the firm’s internal affairs (as large shareholders, consultants, accountants, lawyers, etc.) gives them valuable information. Famous examples of insider trading include transacting on the advance knowledge of a company’s discovery of a rich mineral ore (Securities and Exchange Commission v. Texas Gulf Sulphur Co.), on a forthcoming cut in dividends by the board of directors (Cady, Roberts & Co.), and on an unanticipated increase in corporate expenses (Diamond v. Oreamuno). Although insider trading typically yields significant profits, these transactions are still risky. Much trading by insiders, though, is due to their need for cash or to balance their portfolios. The above definition of insider trading excludes transactions in a company’s securities made on nonpublic “outside” information, such as the knowledge of forthcoming market-wide or industry developments or of competitors’ strategies and products. Such trading on information originating outside the company is generally not covered by insider trading regulation.


Intellectual Property

Intellectual property is normally defined as the set of products protected under laws associated with copyright, patent, trademark, industrial design, and trade secrets. The U.S. Constitution expressly allows for intellectual property protection, albeit for a limited time, in the form of protection of “writings and discoveries” in order to promote “science and useful arts.” This article focuses on the two most important categories: copyright and patent law.

Copyright, which covers the expression of ideas (e.g., through words or music), currently lasts for the rest of the author’s life plus seventy years (or ninety-five years after publication if the product is a “work-for-hire”). But the protection is very narrow. If someone else should, by a remarkable coincidence, write exactly the same song or story as you without ever coming into contact with your work, your prior copyright does not prevent him from selling his work. Copyright currently exists on a work without any effort on the part of the author to attain copyright and without any requirement of quality or originality.

Patents, in contrast, last for twenty years and apply to inventions. The protection, although shorter, is broader than that of copyright. If someone else independently creates a duplicate of your invention after you have patented yours, your patent can make his invention worthless since he will not have the legal right to sell his version. This may be true even if his invention is slightly different from yours. For this reason, being the first to patent a valuable idea is very important, and “patent races,” as competitors vie to be first, can be a wasteful use of resources. Unlike copyright, getting the legal patent from the patent office requires spending resources, and before a patent is granted, the ideas that are to be patented must pass several legal hurdles regarding their originality and quality.
Although expression and invention must be transformed into physical embodiments before they can have market value, they can also exist, and indeed must originally exist, in the creator’s mind. As such, traditional laws of property, which require physicality, do not apply. Traditional laws of economics, such as the assumption of scarcity, also seem not to apply because individual expressions and ideas cannot be used up. Economists have a term for goods that cannot be used up—“nonrivalrous consumption” (sometimes known as


Interest Rates

The rate of interest measures the percentage reward a lender receives for deferring the consumption of resources until a future date. Correspondingly, it measures the price a borrower pays to have resources now.

Suppose I have $100 today that I am willing to lend for one year at an annual interest rate of 5 percent. At the end of the year, I get back my $100 plus $5 interest (0.05 × 100), for a total of $105. The general relationship is:

Money Today × (1 + interest rate) = Money Next Year

We can also ask a different question: What is the most I would pay today to get $105 next year? If the rate of interest is 5 percent, the most I would pay is $100. I would not pay $101, because if I had $101 and invested it at 5 percent, I would have $106 next year. Thus, we say that the value of money in the future should be discounted, and $100 is the “discounted present value” of $105 next year. The general relationship is:

Money Today = Money Next Year ÷ (1 + interest rate)

The higher the interest rate, the more valuable is money today and the lower is the present value of money in the future.

Now, suppose I am willing to lend my money out for a second year. I lend out $105, the amount I have next year, at 5 percent and have $110.25 at the end of year two. Note that I have earned an extra $5.25 in the second year because the interest that I earned in year one also earns interest in year two. This is what we mean by the term “compound interest”—the interest that money earns also earns interest. Albert Einstein is reported to have said that compound interest is the greatest force in the world. Money left in interest-bearing investments can compound to extremely large sums.

A simple rule, the rule of 72, tells how long it takes your money to double if it is invested at compound interest. The number 72 divided by the interest rate gives the approximate number of years it will take to double your money.
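The relationships above can be illustrated numerically. This is a minimal sketch of the article’s formulas, using the same $100, 5 percent example:

```python
# Future value, discounted present value, compound interest, and the
# rule-of-72 approximation, using the text's $100 at 5 percent example.
rate = 0.05  # 5 percent annual interest

# Money Today x (1 + interest rate) = Money Next Year
money_next_year = round(100 * (1 + rate), 2)
print(money_next_year)  # 105.0

# Money Today = Money Next Year / (1 + interest rate)
present_value = round(105 / (1 + rate), 2)
print(present_value)  # 100.0

# Compound interest: the $5 of year-one interest itself earns interest,
# so year two adds $5.25 rather than another $5.00.
two_year_value = round(100 * (1 + rate) ** 2, 2)
print(two_year_value)  # 110.25

# Rule of 72: years to double is approximately 72 / (rate in percent).
rule_of_72_years = 72 / 5
print(rule_of_72_years)  # 14.4, close to the exact figure of about 14.2 years
```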
For example, at a 5 percent interest rate, it takes about fourteen years to double your money (72 ÷ 5 = 14.4), while at an interest rate of 10 percent, it takes about seven years.

There is a wonderful actual example of the power of compound interest. Upon his death in 1791, Benjamin Franklin left $5,000 to each of his favorite cities, Boston and Philadelphia. He stipulated that the money should be invested and not paid out for one hundred to two hundred years. At one hundred years, each city could withdraw $500,000; after two hundred years, they could withdraw the remainder. They did withdraw $500,000 in 1891; they invested the remainder and, in 1991, each city received approximately $20,000,000.

What determines the magnitude of the interest rate in an economy? Let us consider five of the most important factors.

1. The strength of the economy and the willingness to save. Interest rates are determined in a free market where supply and demand interact. The supply of funds is influenced by the willingness of consumers, businesses, and governments to save. The demand for funds reflects the desires of businesses, households, and governments to spend more than they take in as revenues. Usually, in very strong economic expansions, businesses’ desire to invest in plants and equipment and individuals’ desire to invest in housing tend to drive interest rates up. During periods of weak economic conditions, business and housing investment falls and interest rates tend to decline. Such declines are often reinforced by the policies of the country’s central bank (the Federal Reserve in the United States), which attempts to reduce interest rates in order to stimulate housing and other interest-sensitive investments.

2. The rate of inflation. People’s willingness to lend money depends partly on the inflation rate. If prices are expected to be stable, I may be happy to lend money for a year at 4 percent because I expect to have 4 percent more purchasing power at the end of the year.
But suppose the inflation rate is expected to be 10 percent. Then, all other things being equal, I will insist on a 14 percent rate of interest, ten percentage points of which compensate me for the inflation.1 Economist Irving Fisher pointed out this fact almost a century ago, distinguishing clearly between the real rate of interest (4 percent in the above example) and the nominal rate of interest (14 percent in the above example), which equals the real rate plus the expected inflation rate.

3. The riskiness of the borrower. I am willing to lend money to my government or to my local bank (whose deposits are generally guaranteed by the government) at a lower rate than I would lend to my wastrel nephew or to my cousin’s risky new venture. The greater the risk that my loan will not be paid back in full, the larger is the interest rate I will demand to compensate me for that risk. Thus, there is a risk structure to interest rates. The greater the risk that the borrower will not repay in full, the greater is the rate of interest.

4. The tax treatment of the interest. In most cases, the interest I receive from lending money is fully taxable. In certain cases, however, the interest is tax free. If I lend to my local or state government, the interest on my loan is free of both federal and state taxes. Hence, I am willing to accept a lower rate of interest on loans that have favorable tax treatment.

5. The time period of the loan. In general, lenders demand a higher rate of interest for loans of longer maturity. The interest rate on a ten-year loan is usually higher than that on a one-year loan, and the rate I can get on a three-year bank certificate of deposit is generally higher than the rate on a six-month certificate of deposit. But this relationship does not always hold; to understand the reasons, it is necessary to understand the basics of bond investing. Most long-term loans are made via bond instruments.
A bond is simply a long-term IOU issued by a government, a corporation, or some other entity. When you invest in a bond, you are lending money to the issuer. The interest payments on the bond are often referred to as “coupon” payments because up through the 1950s, most bond investors actually clipped interest coupons from the bonds and presented them to their banks for payment. (By 1980 bonds with actual coupons had virtually disappeared.) The coupon payment is fixed for the life of the bond. Thus, if a one-thousand-dollar twenty-year bond has a fifty-dollar-per-year interest (coupon) payment, that payment never changes. But, as indicated above, interest rates do change from year to year in response to changes in economic conditions, inflation, monetary policy, and so on.

The price of the bond is simply the discounted present value of the fixed interest payments and of the face value of the loan payable at maturity. Now, if interest rates rise (the discount factor is higher), then the present value, or price, of the bond will fall. This leads to three basic facts facing the bond investor:

1. If interest rates rise, bond prices fall.
2. If interest rates fall, bond prices rise.
3. The longer the period to maturity of the bond, the greater is the potential fluctuation in price when interest rates change.

If you hold a bond to maturity, you need not worry if the price bounces around in the interim. But if you have to sell prior to maturity, you may receive less than you paid for the bond. The longer the maturity of the bond, the greater is the risk of loss because long-term bond prices are more volatile than shorter-term issues. To compensate for that risk of price fluctuation, longer-term bonds usually have higher interest rates than shorter-term issues. This tendency of long rates to exceed short rates is called the risk-premium theory of the yield structure.
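The price-as-discounted-present-value idea, and the three facts it implies, can be sketched in a few lines. The bond terms are the ones from the text (a $1,000 face value with a $50 annual coupon); the alternative market rates of 3 and 7 percent are illustrative assumptions, and the sketch assumes annual coupon payments:

```python
def bond_price(face, coupon, rate, years):
    """Discounted present value of the fixed coupon payments plus the
    face value repaid at maturity."""
    coupons = sum(coupon / (1 + rate) ** t for t in range(1, years + 1))
    principal = face / (1 + rate) ** years
    return coupons + principal

# The $1,000 twenty-year bond from the text, with its $50 (5%) annual coupon.
# When the market rate equals the coupon rate, the bond sells at face value.
print(round(bond_price(1000, 50, 0.05, 20), 2))  # 1000.0

# Fact 1: if interest rates rise, bond prices fall.
price_at_7 = bond_price(1000, 50, 0.07, 20)
# Fact 2: if interest rates fall, bond prices rise.
price_at_3 = bond_price(1000, 50, 0.03, 20)
assert price_at_7 < 1000 < price_at_3

# Fact 3: longer maturities fluctuate more when rates change.
short_bond_drop = 1000 - bond_price(1000, 50, 0.07, 5)
long_bond_drop = 1000 - bond_price(1000, 50, 0.07, 20)
assert long_bond_drop > short_bond_drop
```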
This relationship between interest rates for loans or bonds and various terms to maturity is often depicted in a graph showing interest rates on the vertical axis and term to maturity on the horizontal. The general shape of that graph is called the shape of the yield curve, and typically the curve is rising. In other words, the longer term the bond, the greater is the interest rate. This typical shape reflects the risk premium for holding longer-term debt.

Long-term rates are not always higher than short-term rates, however. Expectations also influence the shape of the yield curve. Suppose, for example, that the economy has been booming and the central bank, in response, chooses a restrictive monetary policy that drives up interest rates. To implement such a policy, central banks sell short-term bonds, pushing their prices down and interest rates up. Interest rates, short term and long term, tend to rise together. But if bond investors believe such a restrictive policy is likely to be temporary, they may expect interest rates to fall in the future. In such an event, bond prices can be expected to rise, giving bondholders a capital gain. Thus long-term bonds may be particularly attractive during periods of unusually high short-term interest rates, and in bidding for these long-term bonds, investors drive their prices up and their yields down. The result is a flattening, and sometimes even an inversion, in the yield curve. Indeed, there were periods during the 1980s when short-term U.S. Treasury securities yielded 10 percent or more and long-term interest rates (yields) were well below shorter-term rates.

Expectations can also influence the yield curve in the opposite direction, making it steeper than is typical. This can happen when interest rates are unusually low, as they were in the United States in the early 2000s. In such a case, investors will expect interest rates to rise in the future, causing large capital losses to holders of long-term bonds.
This would cause investors to sell long-term bonds until the prices came down enough to give them higher yields, thus compensating them for the expected capital loss. The result is long-term rates that exceed short-term rates by more than the “normal” amount.

In sum, the term structure of interest rates—or, equivalently, the shape of the yield curve—is likely to be influenced both by investors’ risk preferences and by their expectations of future interest rates.

About the Author

Burton G. Malkiel, the Chemical Bank Chairman’s Professor of Economics at Princeton University, is the author of the widely read investment book A Random Walk down Wall Street. He was previously dean of the Yale School of Management and William S. Beinecke Professor of Management Studies there. He is also a past member of the Council of Economic Advisers and a past president of the American Finance Association.

Further Reading

Fabozzi, Frank J. Bond Markets, Analysis and Strategies. 4th ed. New York: Prentice Hall, 2000.
Fisher, Irving. The Theory of Interest. 1930. Reprint. Brookfield, Vt.: Pickering and Chatto, 1997. Available online at: http://www.econlib.org/library/YPDBooks/Fisher/fshToI.html
Patinkin, Don. “Interest.” In International Encyclopedia of the Social Sciences. Vol. 7. New York: Macmillan, 1968.

Footnotes

1. Actually, I will insist on 14.4 percent: 10 percent to compensate me for the inflation-caused loss of principal and 0.4 percent to compensate me for the inflation-caused loss of real interest. The general relationship is given by the mathematical formula: 1 + i = (1 + r) × (1 + p), where i is the nominal interest rate (the one we observe), r is the real interest rate (the one that would exist if inflation were expected to be zero), and p is the expected inflation rate.


Insurance

Insurance plays a central role in the functioning of modern economies. Life insurance offers protection against the economic impact of an untimely death; health insurance covers the sometimes extraordinary costs of medical care; and bank deposits are insured by the federal government (see financial regulation). In each case, the insured pays a small premium in order to receive benefits should an unlikely but high-cost event occur. Insurance issues, traditionally a stodgy domain, have become subjects for intense debate and concern in recent years. How to provide health insurance for the significant portion of Americans not now covered is a central political issue. Some states, attempting to hold back the tide of higher costs, have placed severe limits on auto insurance rates and have even sought refunds from insurers. And ways to cover losses from terrorism have become a major issue. Temporarily, in response to the massive losses of 9/11, the federal government adopted a heavily subsidized three-year program for reinsuring terror-related building losses. (The program was extended.) In theory, the government can recoup some losses after the fact by levying a surcharge on the premiums of surviving firms. The Basics An understanding of insurance must begin with the concept of risk—that is, the variation in possible outcomes of a situation. A’s shipment of goods to Europe might arrive safely or be lost in transit. B may incur zero medical expenses in a good year, but if she is struck by a car they could be upward of $100,000. We cannot eliminate risk from life, even at extraordinary expense. Paying extra for double-hulled tankers still leaves oil spills possible. The only way to eliminate auto-related injuries is to eliminate automobiles. Thus, the effective response to risk combines two elements: efforts or expenditures to lessen the risk, and the purchase of insurance against whatever risk remains. Consider A’s shipment of, say, $1 million in goods. 
If the chance of loss on each trip is 3 percent, the loss will be $30,000 (3 percent of $1 million), on average. Let us assume that A can ship by a more costly method and cut the risk by one percentage point, thus saving $10,000, on average. If the additional cost of this shipping method is less than $10,000, it is a worthwhile expenditure. But if cutting risk by a further percentage point will cost $15,000, it sacrifices resources. To deal with the remaining 2 percent risk of losing $1 million, A should think about insurance. To cover administrative costs, the insurer might charge $25,000 for a risk that will incur average losses of no more than $20,000. From A’s standpoint, however, the insurance may be worthwhile because it is a comparatively inexpensive way to deal with the potential loss of $1 million. Note the important economic role of such insurance: without it, A might not be willing to risk shipping goods in the first place. In exchange for a premium, the insurer will pay a claim should a specified contingency—such as death, medical bills, or, in this instance, shipment loss—arise. The insurer—whether a corporation with diversified ownership or a mutual company made up of the insureds themselves—is able to offer such protection against financial loss by pooling the risks from a large group of similarly situated individuals or firms. The laws of probability ensure that only a tiny fraction of these insured shipments will be lost, or only a small fraction of the insured population will face expensive hospitalization in a year. If, for example, each of 100,000 individuals independently faces a 1 percent risk in a year, on average, 1,000 will have losses. If each of the 100,000 people paid a premium of $1,000, the insurance company would have collected a total of $100 million. Leaving aside administrative costs, this is enough to pay $100,000 to anyone who had a loss. But what would happen if 1,100 people had losses? 
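The arithmetic in this example can be laid out explicitly. A minimal Python sketch, using the figures from the text (variable names are illustrative):

```python
# A's $1 million shipment with a 3 percent chance of total loss.
value = 1_000_000
expected_loss = 0.03 * value   # $30,000 average loss per trip
saving = 0.01 * value          # cutting risk one point saves $10,000 on average

# Pooling: 100,000 insureds, each facing a 1 percent risk of a $100,000 loss.
n, p, loss = 100_000, 0.01, 100_000
premiums_collected = n * 1_000   # a $1,000 premium each: $100 million in total
expected_claims = n * p * loss   # about 1,000 claims of $100,000 each

# Leaving aside administrative costs, premiums just cover expected claims.
print(expected_loss, saving, premiums_collected, expected_claims)
```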
The answer, fortunately, is that such an outcome is exceptionally unlikely. Insurance works through the magic of the law of large numbers. This law assures that when a large number of people face a low-probability event, the proportion experiencing the event will be close to the expected proportion. For instance, with a pool of 100,000 people who each face a 1 percent risk, the law of large numbers says that 1,100 people or more will have losses only one time in one thousand. In many cases, however, the risks to different individuals are not independent. In a hurricane, airplane crash, or epidemic, many may suffer at the same time. Insurance companies spread such risks not only across individuals, but also across good years and bad, building up reserves in the good years to deal with heavier claims in bad ones. For further protection, they also diversify across lines, selling both health and homeowners’ insurance, for example. The risks normally insured are unintentional, either due to the actions of nature or the inadvertent consequences of human activity. Terrorism creates a new model for insurance for three reasons: (1) The losses are man-made and intentional. (2) Massive numbers of people and structures could be harmed. (Theft losses fall in the first category, but not in the second.) (3) Historical experience does not provide a yardstick for assessing likely risk levels. Nuclear war presented equivalent challenges in the twentieth century. Had there been a significant nuclear war, insurance companies simply would not have paid. The losses would have been too massive to pay out of assets, and many of the assets underlying the insurance would have been destroyed. In time, appropriate insurance arrangements for this new category of massive risk will be developed. The Identity and Behavior of the Insured An economist views insurance as being like most other commodities. It obeys the laws of supply and demand, for example. 
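The "one time in one thousand" figure can be checked with a normal approximation to the binomial distribution. A sketch (the exact probability depends on the model assumed):

```python
import math

# 100,000 independent insureds, each with a 1 percent chance of a loss.
n, p = 100_000, 0.01
mean = n * p                        # 1,000 expected losses
sd = math.sqrt(n * p * (1 - p))     # about 31.5

# Normal approximation: probability of 1,100 or more losses.
z = (1_100 - mean) / sd             # about 3.2 standard deviations
tail = 0.5 * math.erfc(z / math.sqrt(2))
print(f"P(losses >= 1,100) is roughly {tail:.4f}")  # on the order of 1 in 1,000
```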
However, it is unlike many other commodities in one important respect: the cost of providing insurance depends on the identity of the purchaser. A year of health insurance for an eighty-year-old costs more to provide than one for a fifty-year-old. It costs more to provide auto insurance to teenagers than to middle-aged people. If a company mistakenly sells health policies to old folks at a price appropriate for young folks, it will assuredly lose money, just as a restaurant will lose if it sells twenty-dollar steak dinners for ten dollars. The restaurant would lure lots of steak eaters. So, too, would the insurance company attract large numbers of older clients. Because of the differential cost of providing coverage, and because customers search for their lowest price, insurance companies go to great pains to set different premiums for different groups, depending on the risks each will impose. Recognizing that the identity of the purchaser affects the cost of insurance, insurers must be careful to whom they offer insurance at a particular price. Those high-risk individuals whose knowledge of their risk is better than that of the insurers will step forth to purchase, knowing that they are getting a good deal. This is a process called adverse selection, which means that the mix of purchasers will be adverse to the insurer. What leads to this adverse selection is asymmetric information: potential purchasers have more information than the sellers. The potential purchasers have “hidden” information that relates to their particular risk, and those whose information is unfavorable are thus most likely to purchase. For example, if an insurer determined that 1 percent of fifty-year-olds would die in a year, it might establish a premium of $12 per $1,000 of coverage—$10 to cover claims and $2 to cover administrative costs. The insurer might naively expect to break even. 
However, insureds who ate poorly or who engaged in high-risk professions or whose parents had died young might have an annual risk of mortality of 3 percent. They would be most likely to purchase insurance. Health fanatics, by contrast, might forgo life insurance because for them it is a bad deal. Through adverse selection, the insurer could end up with a group whose expected costs were, say, $20 per $1,000 rather than the $10 per $1,000 for the population as a whole; at a $12 price, the insurer would lose money. The traditional approach to the adverse selection problem is to inspect each potential insured. Individuals taking out substantial life insurance must submit to a medical exam. Fire insurance might be granted only after a check of the alarm and sprinkler systems. But no matter how careful the inspection, some information will remain hidden, and a disproportionately high number of those choosing to insure will be high risk. Therefore, insurers routinely set high rates to cope with adverse selection. Alas, such high rates discourage ordinary-risk buyers from buying insurance. Though this problem of adverse selection is best known in insurance problems, it applies broadly across economics. Thus, a company that “insures” its salesmen by offering a relatively high salary compared with commission will end up with many salesmen who are not confident of their abilities. Colleges that insure their students by offering many pass-fail courses can expect weaker students to enroll. Moral Hazard or Hidden Action Once insured, an individual has less incentive to avoid the risk of a bad outcome. A person with automobile collision insurance, for example, is more likely to venture forth on an icy night. Federal pension insurance induces companies to underfund (see pensions) and weakens the incentives for their employees to complain. Federally subsidized flood insurance encourages citizens to build homes on floodplains. 
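The adverse-selection pricing example above (the $12 premium per $1,000 of life insurance coverage) reduces to a few lines. A sketch with the text's illustrative numbers; the 2 percent adverse-pool mortality stands in for the worse-than-average mix of buyers:

```python
# Naive pricing for fifty-year-olds: 1 percent mortality means $10 of
# expected claims per $1,000 of coverage, plus $2 of administrative cost.
admin = 2
premium_per_1000 = 0.01 * 1_000 + admin   # the $12 premium

# Adverse selection: mainly higher-risk people buy, so the pool's
# mortality is closer to 2 percent, i.e. $20 of claims per $1,000.
adverse_claims = 0.02 * 1_000

profit_per_1000 = premium_per_1000 - (adverse_claims + admin)
print(profit_per_1000)  # -10.0: the insurer loses money at the $12 price
```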
Insurers use the term “moral hazard” to describe this phenomenon. It means, simply, that insured people undertake actions they would otherwise avoid. Stated in less judgmental language, people respond to incentives. In the above salesman example, not only are low-quality salesmen enticed to join, but all salesmen, even those of high quality, are given an incentive to be less productive. Ideally, the insurer would like to be able to monitor the insured’s behavior and take appropriate action. Flood insurance might not be sold to new residents of a floodplain. Collision insurance might not pay off if it can be proven that the policyholder had been drinking or had otherwise engaged in reckless behavior. But given the difficulty of monitoring many actions, insurers accept that once policies are issued, behavior will change adversely, and more claims will be made. The moral hazard problem is often encountered in areas that, at first glance, do not seem associated with traditional insurance. Products covered under optional warranties tend to get abused, as do autos that are leased with service contracts. Equity Issues The same insurance policy will have different costs for serving individuals whose behavior or underlying characteristics may differ. Because these cost differences influence pricing, some people see an equity dimension to insurance. Some think, for example, that urban drivers should not pay much more than rural drivers to protect themselves from auto liability, even though urban driving is riskier. But if prices are not allowed to vary in relation to risk, insurers will seek to avoid various classes of customers altogether, and availability will be restricted. When sellers of health insurance are not allowed to find out if potential clients are HIV-positive, for example, insurance companies often respond by refusing to insure, say, never-married men over age forty. Equity issues in insurance are addressed in a variety of ways in the real world. 
Most employers cross-subsidize health insurance, providing the same coverage at the same price to older, higher-risk workers and younger, lower-risk ones. Sometimes the government provides the “insurance” itself, although the federal government’s Medicare and Social Security programs are really a combined tax and subsidy scheme—one that gives a bigger benefit to those who live longer. The government’s decision not to tax employer-provided health insurance as income acts like a subsidy. In pursuit of equity, governments may set insurance rates, as many states do with auto insurance. The traditional public-interest argument for government rate regulation is that it serves to control a monopoly. But this argument fails with auto insurance: in most regulated insurance markets, there are dozens of competing insurers. Insurance rates are regulated to help some groups—usually those imposing high risks—at the expense of others. The Massachusetts auto insurance market provides an example. High-cost drivers are subsidized at the expense of all other drivers. Thus, inexperienced, occasional drivers in Massachusetts paid, on average, $1,967 for insurance in 2004 compared with $1,114 for experienced drivers. In contrast, in neighboring Connecticut, where such cross-subsidies were not imposed, the respective rates were $3,518 and $845. Such practices raise a new class of equity issues. Should the government force people who live quiet, low-risk lives to subsidize the high-risk fringe? Most people’s response to this question depends on whether they think people can control risks. Because most of us think we should not encourage people to engage in behavior that is costly to the system, we conclude, for example, that nonsmokers should not have to pay for smokers. The question becomes more complex when it comes to health care premiums for, say, gay men or recovering alcoholics, whose health care costs are likely to be greater than average. 
Moral judgments inevitably creep into such discussions. And sometimes the facts lead to disquieting considerations. Smokers, for example, tend to die early, reducing expected costs for Social Security. Should they, therefore, pay lower Social Security taxes? Black men have shorter lives than white men. Should black men pay lower Social Security taxes? Government’s Role in Insurance Government plays four major roles with insurance: (1) Government writes it directly, as with Social Security, terrorism reinsurance, and pension guarantees—via the Pension Benefit Guaranty Corporation (PBGC)—should a corporation fail. (2) Government subsidizes insurance: quite explicitly in some programs, such as federal flood insurance, but only de facto in other cases (e.g., the PBGC has a large projected deficit). (3) Government mandates a residual market for high risks (e.g., Florida’s program for hurricanes or many states’ programs for high-risk drivers). Governments hold down prices in such markets either by creating a state fund to cover losses or by requiring insurers who participate in the voluntary market to pick up a certain portion of this high-risk market. (4) Government regulates matters such as premiums, insurance company solvency (to make sure that insureds get paid), and permissible criteria for pricing insurance (e.g., for auto insurance, race and ethnicity are banned everywhere; Michigan bans geographic designations smaller than a city). Property liability insurance is regulated at the state level, providing many opportunities to compare the efficacy of alternative approaches. The three main regulatory approaches to pricing have been: (1) prior approval (regulators must approve rates before they go in effect); (2) use and file (companies set rates, but regulators can disallow them subsequently if they are found excessive); and (3) open competition (a market-based system in which rates are deemed not excessive as long as there is competition). 
Empirical studies conflict as to whether regulation leads to lower prices. Government participates far more in insurance markets than in typical markets. The two great dangers with government participation in insurance arise when, as is common, the goals for participation remain vague (e.g., promoting the insured activity, redistributing income, or spreading risk effectively), or when its expected cost is not recognized in budgets. With insurance, as with all government endeavors, the citizenry deserves to know both the rationale and the cost. Conclusion The traditional role of insurance remains the essential one recognized centuries ago: that of spreading risk among similarly situated individuals. Insurance works most effectively when losses are not under the control of individuals (thus avoiding moral hazard) and when the losses are readily determined (lest significant transactions costs associated with lawsuits become a burden). Individuals and firms insure against their largest risks—high health costs, the inability to pay depositors—which often are politically salient issues as well. Not surprisingly, government participation—as a setter of rates, as a subsidizer, and as a direct provider of insurance services—has become a major feature in insurance markets. Its highly subsidized terrorism reinsurance provides a dramatic example. Political forces may sometimes triumph over sound insurance principles, but such victories are Pyrrhic. In a sound market, we must recognize that with insurance, as with bread and steel, the cost of providing it must be paid. About the Author Richard Zeckhauser is the Frank P. Ramsey Professor of Political Economy at Harvard University’s John F. Kennedy School of Government. He writes frequently on risk-related issues. Practicing what he preaches, in 2003 and 2004 he came in second and third in two different U.S. national bridge championships.


International Capital Flows

International capital flows are the financial side of international trade.1 When someone imports a good or service, the buyer (the importer) gives the seller (the exporter) a monetary payment, just as in domestic transactions. If total exports were equal to total imports, these monetary transactions would balance at net zero: people in the country would receive as much in financial flows as they paid out in financial flows. But generally the trade balance is not zero. The most general description of a country’s balance of trade, covering its trade in goods and services, income receipts, and transfers, is called its current account balance. If the country has a surplus or deficit on its current account, there is an offsetting net financial flow consisting of currency, securities, or other real property ownership claims. This net financial flow is called its capital account balance. When a country’s imports exceed its exports, it has a current account deficit. Its foreign trading partners who hold net monetary claims can continue to hold their claims as monetary deposits or currency, or they can use the money to buy other financial assets, real property, or equities (stocks) in the trade-deficit country. Net capital flows comprise the sum of these monetary, financial, real property, and equity claims. Capital flows move in the opposite direction to the goods and services trade claims that give rise to them. Thus, a country with a current account deficit


Inflation

Economists use the term “inflation” to denote an ongoing rise in the general level of prices quoted in units of money. The magnitude of inflation—the inflation rate—is usually reported as the annualized percentage growth of some broad index of money prices. With U.S. dollar prices rising, a one-dollar bill buys less each year. Inflation thus means an ongoing fall in the overall purchasing power of the monetary unit. Inflation rates vary from year to year and from currency to currency. Since 1950, the U.S. dollar inflation rate, as measured by the December-to-December change in the U.S. Consumer Price Index (CPI), has ranged from a low of −0.7 percent (1954) to a high of 13.3 percent (1979). Since 1991, the rate has stayed between 1.6 percent and 3.3 percent per year. Since 1950 at least eighteen countries have experienced episodes of hyperinflation, in which the CPI inflation rate has soared above 50 percent per month. In recent years, Japan has experienced negative inflation, or “deflation,” of around 1 percent per year, as measured by the Japanese CPI. Central banks in most countries today profess concern with keeping inflation low but positive. Some specify a target range for the inflation rate, typically 1–3 percent. Although economies on silver and gold standards sometimes experienced inflation, inflation rates in such economies seldom exceeded 2 percent per year, and the overall experience over the centuries was inflation of close to zero. Economies on paper-money standards, which all economies have today, have displayed much more inflation. As Peter Bernholz (2003, p. 1) points out, “the worst excesses of inflation occurred only in the 20th century” in countries where metallic standards were no longer in force. In 1971 the U.S. government cut the U.S. dollar’s last link to gold, ending its commitment to redeem dollars for gold at a fixed rate for foreign central banks. 
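The inflation rate as defined here, the percentage growth of a broad price index over a period, is a one-line calculation. A sketch with hypothetical index values:

```python
# December-to-December inflation rate from two CPI readings.
def inflation_rate(cpi_start: float, cpi_end: float) -> float:
    """Fractional growth of the price index over the period."""
    return (cpi_end - cpi_start) / cpi_start

# Hypothetical values: the index rises from 100.0 to 103.0 over the year.
rate = inflation_rate(100.0, 103.0)
print(f"{rate:.1%}")  # 3.0%
```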
Even among countries that have avoided hyperinflation, inflation rates have generally been higher in the period after 1971. But inflation rates in most countries have been lower since 1985 than they were in 1971–1985. Measuring Inflation In the United States, the inflation rate is most commonly measured by the percentage rise in the Consumer Price Index, which is reported monthly by the Bureau of Labor Statistics (BLS). A CPI of 120 in the current period means that it now takes $120 to purchase a representative basket


Industrial Revolution and the Standard of Living

Between 1760 and 1860, technological progress, education, and an increasing capital stock transformed England into the workshop of the world. The industrial revolution, as the transformation came to be known, caused a sustained rise in real income per person in England and, as its effects spread, in the rest of the Western world. Historians agree that the industrial revolution was one of the most important events in history, marking the rapid transition to the modern age, but they disagree vehemently about many aspects of the event. Of all the disagreements, the oldest one is over how the industrial revolution affected ordinary people, often called the working classes. One group, the pessimists, argues that the living standards of ordinary people fell, while another group, the optimists, believes that living standards rose. At one time, behind the debate was an ideological argument between the critics (especially Marxists) and the defenders of free markets. The critics, or pessimists, saw nineteenth-century England as Charles Dickens’s Coketown or poet William Blake’s “dark, satanic mills,” with capitalists squeezing more surplus value out of the working class with each passing year. The defenders, or optimists, saw nineteenth-century England as the birthplace of a consumer revolution that made more and more consumer goods available to ordinary people with each passing year. The ideological underpinnings of the debate eventually faded, probably because, as T. S. Ashton pointed out in 1948, the industrial revolution meant the difference between the grinding poverty that had characterized most of human history and the affluence of the modern industrialized nations. No economist today seriously disputes the fact that the industrial revolution began the transformation that has led to extraordinarily high (compared with the rest of human history) living standards for ordinary people throughout the market industrial economies. 
The standard-of-living debate today is not about whether the industrial revolution made people better off, but about when. The pessimists claim no marked improvement in standards of living until the 1840s or 1850s. Most optimists, by contrast, believe that living standards were rising by the 1810s or 1820s, or even earlier. The most influential recent contribution to the optimist position (and the center of much of the subsequent standard-of-living debate) is a 1983 paper by Peter Lindert and Jeffrey Williamson that produced new estimates of real wages in England for the years 1755 to 1851. These estimates are based on money wages for workers in several broad categories, including both blue-collar and white-collar occupations. The authors’ cost-of-living index attempted to represent actual working-class budgets. Lindert and Williamson’s analyses produced two striking results. First, they showed that real wages grew slowly between 1781 and 1819. Second, after 1819, real wages grew rapidly for all groups of workers. For all blue-collar workers—a good stand-in for the working classes—the Lindert-Williamson index number for real wages rose from 50 in 1819 to 100 in 1851. That is, real wages doubled in just thirty-two years. Other economists challenged Lindert and Williamson’s optimistic findings. Charles Feinstein produced an alternative series of real wages based on a different price index. In the Feinstein series, real wages rose much more slowly than in the Lindert-Williamson series. Other researchers have speculated that the largely unmeasured effects of environmental decay more than offset any gains in well-being attributable to rising wages. Wages were higher in English cities than in the countryside, but rents


Industrial Concentration

“Industrial concentration” refers to a structural characteristic of the business sector. It is the degree to which production in an industry—or in the economy as a whole—is dominated by a few large firms. Once assumed to be a symptom of “market failure,” concentration is, for the most part, seen nowadays as an indicator of superior economic performance. In the early 1970s, Yale Brozen, a key contributor to the new thinking, called the profession’s about-face on this issue “a revolution in economics.” Industrial concentration remains a matter of public policy concern even so. The Measurement of Industrial Concentration Industrial concentration was traditionally summarized by the concentration ratio, which simply adds the market shares of an industry’s four, eight, twenty, or fifty largest companies. In 1982, when new federal merger guidelines were issued, the Herfindahl-Hirschman Index (HHI) became the standard measure of industrial concentration. Suppose that an industry contains ten firms that individually account for 25, 15, 12, 10, 10, 8, 7, 5, 5, and 3 percent of total sales. The four-firm concentration ratio for this industry—the most widely used number—is 25 + 15 + 12 + 10 = 62, meaning that the top four firms account for 62 percent of the industry’s sales. The HHI, by contrast, is calculated by summing the squared market shares of all of the firms in the industry: 25² + 15² + 12² + 10² + 10² + 8² + 7² + 5² + 5² + 3² = 1,366. The HHI has two distinct advantages over the concentration ratio. It uses information about the relative sizes of all of an industry’s members, not just some arbitrary subset of the leading companies, and it weights the market shares of the largest enterprises more heavily. In general, the fewer the firms and the more unequal the distribution of market shares among them, the larger the HHI. 
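Both measures can be computed directly from a list of market shares. A sketch using the ten-firm example from the text:

```python
# Market shares (in percent) of the ten firms in the example industry.
shares = [25, 15, 12, 10, 10, 8, 7, 5, 5, 3]

# Four-firm concentration ratio: the sum of the four largest shares.
cr4 = sum(sorted(shares, reverse=True)[:4])

# Herfindahl-Hirschman Index: the sum of squared shares of every firm.
hhi = sum(s ** 2 for s in shares)

print(cr4, hhi)  # 62 1366
```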
Two four-firm industries, one containing equal-sized firms each accounting for 25 percent of total sales, the other with market shares of 97, 1, 1, and 1, have the same four-firm concentration ratio (100) but very different HHIs (2,500 versus 9,412). An industry controlled by a single firm has an HHI of 100² = 10,000, while the HHI for an industry populated by a very large number of very small firms would approach the index’s theoretical minimum value of zero. Concentration in the U.S. Economy According to the U.S. Department of Justice’s merger guidelines, an industry is considered “concentrated” if the HHI exceeds 1,800; it is “unconcentrated” if the HHI is below 1,000. Since 1982, HHIs based on the value of shipments of the fifty largest companies have been calculated and reported in the manufacturing series of the Economic Census.1 Concentration levels exceeding 1,800 are rare. The exceptions include glass containers (HHI = 2,959.9 in 1997), motor vehicles (2,505.8), and breakfast cereals (2,445.9). Cigarette manufacturing also is highly concentrated, but its HHI is not reported owing to the small number of firms in that industry, the largest four of which accounted for 89 percent of shipments in 1997. At the other extreme, the HHI for machine shops was 1.9 the same year. Whether an industry is concentrated hinges on how narrowly or broadly it is defined, both in terms of the product it produces and the extent of the geographic area it serves. The U.S. footwear manufacturing industry as a whole is very unconcentrated (HHI = 317 in 1997); the level of concentration among house slipper manufacturers is considerably higher, though (HHI = 2,053.4). Similarly, although
