
Interest Rates

The rate of interest measures the percentage reward a lender receives for deferring the consumption of resources until a future date. Correspondingly, it measures the price a borrower pays to have resources now.

Suppose I have $100 today that I am willing to lend for one year at an annual interest rate of 5 percent. At the end of the year, I get back my $100 plus $5 interest (0.05 × 100), for a total of $105. The general relationship is:

Money Today × (1 + interest rate) = Money Next Year

We can also ask a different question: What is the most I would pay today to get $105 next year? If the rate of interest is 5 percent, the most I would pay is $100. I would not pay $101, because if I had $101 and invested it at 5 percent, I would have $106.05 next year. Thus, we say that the value of money in the future should be discounted, and $100 is the “discounted present value” of $105 next year. The general relationship is:

Money Today = Money Next Year ÷ (1 + interest rate)

The higher the interest rate, the more valuable is money today and the lower is the present value of money in the future.

Now, suppose I am willing to lend my money out for a second year. I lend out $105, the amount I have next year, at 5 percent and have $110.25 at the end of year two. Note that I have earned an extra $5.25 in the second year because the interest that I earned in year one also earns interest in year two. This is what we mean by the term “compound interest”—the interest that money earns also earns interest. Albert Einstein is reported to have said that compound interest is the greatest force in the world. Money left in interest-bearing investments can compound to extremely large sums.

A simple rule, the rule of 72, tells how long it takes your money to double if it is invested at compound interest. The number 72 divided by the interest rate gives the approximate number of years it will take to double your money.
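The compounding, discounting, and rule-of-72 arithmetic above can be sketched in a few lines (a minimal illustration using the numbers from the text; the function names are mine):

```python
def future_value(principal, rate, years):
    """Compound a principal forward: it grows by (1 + rate) each year."""
    return principal * (1 + rate) ** years

def present_value(amount, rate, years=1):
    """Discount a future amount back to its value today."""
    return amount / (1 + rate) ** years

def rule_of_72(rate_percent):
    """Approximate number of years for money to double at compound interest."""
    return 72 / rate_percent

print(round(future_value(100, 0.05, 1), 2))  # 105.0  -> $105 after one year
print(round(future_value(100, 0.05, 2), 2))  # 110.25 -> compound interest at work
print(round(present_value(105, 0.05), 2))    # 100.0  -> $100 is the PV of $105
print(round(rule_of_72(5), 1))               # 14.4 years to double at 5 percent
```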
For example, at a 5 percent interest rate, it takes about fourteen years to double your money (72 ÷ 5 = 14.4), while at an interest rate of 10 percent, it takes about seven years.

There is a wonderful actual example of the power of compound interest. Upon his death in 1791, Benjamin Franklin left $5,000 to each of his favorite cities, Boston and Philadelphia. He stipulated that the money should be invested and not paid out for one hundred to two hundred years. At one hundred years, each city could withdraw $500,000; after two hundred years, they could withdraw the remainder. They did withdraw $500,000 in 1891; they invested the remainder and, in 1991, each city received approximately $20,000,000.

What determines the magnitude of the interest rate in an economy? Let us consider five of the most important factors.

1. The strength of the economy and the willingness to save. Interest rates are determined in a free market where supply and demand interact. The supply of funds is influenced by the willingness of consumers, businesses, and governments to save. The demand for funds reflects the desires of businesses, households, and governments to spend more than they take in as revenues. Usually, in very strong economic expansions, businesses’ desire to invest in plants and equipment and individuals’ desire to invest in housing tend to drive interest rates up. During periods of weak economic conditions, business and housing investment falls and interest rates tend to decline. Such declines are often reinforced by the policies of the country’s central bank (the Federal Reserve in the United States), which attempts to reduce interest rates in order to stimulate housing and other interest-sensitive investments.

2. The rate of inflation. People’s willingness to lend money depends partly on the inflation rate. If prices are expected to be stable, I may be happy to lend money for a year at 4 percent because I expect to have 4 percent more purchasing power at the end of the year.
But suppose the inflation rate is expected to be 10 percent. Then, all other things being equal, I will insist on a 14 percent rate of interest, ten percentage points of which compensate me for the inflation.1 Economist Irving Fisher pointed out this fact almost a century ago, distinguishing clearly between the real rate of interest (4 percent in the above example) and the nominal rate of interest (14 percent in the above example), which equals the real rate plus the expected inflation rate.

3. The riskiness of the borrower. I am willing to lend money to my government or to my local bank (whose deposits are generally guaranteed by the government) at a lower rate than I would lend to my wastrel nephew or to my cousin’s risky new venture. The greater the risk that my loan will not be paid back in full, the larger is the interest rate I will demand to compensate me for that risk. Thus, there is a risk structure to interest rates.

4. The tax treatment of the interest. In most cases, the interest I receive from lending money is fully taxable. In certain cases, however, the interest is tax free. If I lend to my local or state government, the interest on my loan is free of both federal and state taxes. Hence, I am willing to accept a lower rate of interest on loans that have favorable tax treatment.

5. The time period of the loan. In general, lenders demand a higher rate of interest for loans of longer maturity. The interest rate on a ten-year loan is usually higher than that on a one-year loan, and the rate I can get on a three-year bank certificate of deposit is generally higher than the rate on a six-month certificate of deposit. But this relationship does not always hold; to understand the reasons, it is necessary to understand the basics of bond investing. Most long-term loans are made via bond instruments.
A bond is simply a long-term IOU issued by a government, a corporation, or some other entity. When you invest in a bond, you are lending money to the issuer. The interest payments on the bond are often referred to as “coupon” payments because up through the 1950s, most bond investors actually clipped interest coupons from the bonds and presented them to their banks for payment. (By 1980 bonds with actual coupons had virtually disappeared.) The coupon payment is fixed for the life of the bond. Thus, if a one-thousand-dollar twenty-year bond has a fifty-dollar-per-year interest (coupon) payment, that payment never changes. But, as indicated above, interest rates do change from year to year in response to changes in economic conditions, inflation, monetary policy, and so on. The price of the bond is simply the discounted present value of the fixed interest payments and of the face value of the loan payable at maturity. Now, if interest rates rise (the discount factor is higher), then the present value, or price, of the bond will fall. This leads to three basic facts facing the bond investor:

1. If interest rates rise, bond prices fall.
2. If interest rates fall, bond prices rise.
3. The longer the period to maturity of the bond, the greater is the potential fluctuation in price when interest rates change.

If you hold a bond to maturity, you need not worry if the price bounces around in the interim. But if you have to sell prior to maturity, you may receive less than you paid for the bond. The longer the maturity of the bond, the greater is the risk of loss because long-term bond prices are more volatile than shorter-term issues. To compensate for that risk of price fluctuation, longer-term bonds usually have higher interest rates than shorter-term issues. This tendency of long rates to exceed short rates is called the risk-premium theory of the yield structure.
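These three facts can be illustrated with a small pricing sketch. The $1,000 bond with a $50 annual coupon is taken from the text; the function name and the particular alternative rates are my own:

```python
def bond_price(face, coupon, rate, years):
    """Price = discounted present value of the coupons plus the face value."""
    coupons = sum(coupon / (1 + rate) ** t for t in range(1, years + 1))
    return coupons + face / (1 + rate) ** years

par = bond_price(1000, 50, 0.05, 20)   # rate equals the 5 percent coupon rate
up = bond_price(1000, 50, 0.07, 20)    # rates rise -> price falls (fact 1)
down = bond_price(1000, 50, 0.03, 20)  # rates fall -> price rises (fact 2)
print(round(par, 2), round(up, 2), round(down, 2))

# Fact 3: the same rate change moves long bonds more than short ones.
for years in (1, 5, 20):
    drop = bond_price(1000, 50, 0.05, years) - bond_price(1000, 50, 0.07, years)
    print(years, round(drop, 2))
```

Note that when the market rate equals the coupon rate, the bond prices at its face value; moving the rate in either direction moves the price the opposite way, and the swing grows with maturity.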
This relationship between interest rates for loans or bonds and various terms to maturity is often depicted in a graph showing interest rates on the vertical axis and term to maturity on the horizontal. The general shape of that graph is called the shape of the yield curve, and typically the curve is rising. In other words, the longer the term of the bond, the greater is the interest rate. This typical shape reflects the risk premium for holding longer-term debt.

Long-term rates are not always higher than short-term rates, however. Expectations also influence the shape of the yield curve. Suppose, for example, that the economy has been booming and the central bank, in response, chooses a restrictive monetary policy that drives up interest rates. To implement such a policy, central banks sell short-term bonds, pushing their prices down and interest rates up. Interest rates, short term and long term, tend to rise together. But if bond investors believe such a restrictive policy is likely to be temporary, they may expect interest rates to fall in the future. In such an event, bond prices can be expected to rise, giving bondholders a capital gain. Thus long-term bonds may be particularly attractive during periods of unusually high short-term interest rates, and in bidding for these long-term bonds, investors drive their prices up and their yields down. The result is a flattening, and sometimes even an inversion, in the yield curve. Indeed, there were periods during the 1980s when short-term U.S. Treasury securities yielded 10 percent or more and long-term interest rates (yields) were well below shorter-term rates.

Expectations can also influence the yield curve in the opposite direction, making it steeper than is typical. This can happen when interest rates are unusually low, as they were in the United States in the early 2000s. In such a case, investors will expect interest rates to rise in the future, causing large capital losses to holders of long-term bonds.
This would cause investors to sell long-term bonds until the prices came down enough to give them higher yields, thus compensating them for the expected capital loss. The result is long-term rates that exceed short-term rates by more than the “normal” amount. In sum, the term structure of interest rates—or, equivalently, the shape of the yield curve—is likely to be influenced both by investors’ risk preferences and by their expectations of future interest rates.

About the Author

Burton G. Malkiel, the Chemical Bank Chairman’s Professor of Economics at Princeton University, is the author of the widely read investment book A Random Walk down Wall Street. He was previously dean of the Yale School of Management and William S. Beinecke Professor of Management Studies there. He is also a past member of the Council of Economic Advisers and a past president of the American Finance Association.

Further Reading

Fabozzi, Frank J. Bond Markets, Analysis and Strategies. 4th ed. New York: Prentice Hall, 2000.
Fisher, Irving. The Theory of Interest. 1930. Reprint. Brookfield, Vt.: Pickering and Chatto, 1997. Available online at: http://www.econlib.org/library/YPDBooks/Fisher/fshToI.html
Patinkin, Don. “Interest.” In International Encyclopedia of the Social Sciences. Vol. 7. New York: Macmillan, 1968.

Footnotes

1. Actually, I will insist on 14.4 percent: 10 percent to compensate me for the inflation-caused loss of principal and 0.4 percent to compensate me for the inflation-caused loss of real interest. The general relationship is given by the mathematical formula: 1 + i = (1 + r) × (1 + p), where i is the nominal interest rate (the one we observe), r is the real interest rate (the one that would exist if inflation were expected to be zero), and p is the expected inflation rate.
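The footnote’s formula can be checked numerically with the numbers used earlier in the article (a 4 percent real rate and 10 percent expected inflation):

```python
r, p = 0.04, 0.10                # real rate and expected inflation from the text

i_approx = r + p                 # the simple sum quoted in the article: 14 percent
i_exact = (1 + r) * (1 + p) - 1  # the footnote's formula: 1 + i = (1 + r)(1 + p)

print(f"approximate: {i_approx:.1%}, exact: {i_exact:.1%}")
# approximate: 14.0%, exact: 14.4%
```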


Insurance

Insurance plays a central role in the functioning of modern economies. Life insurance offers protection against the economic impact of an untimely death; health insurance covers the sometimes extraordinary costs of medical care; and bank deposits are insured by the federal government (see financial regulation). In each case, the insured pays a small premium in order to receive benefits should an unlikely but high-cost event occur.

Insurance issues, traditionally a stodgy domain, have become subjects for intense debate and concern in recent years. How to provide health insurance for the significant portion of Americans not now covered is a central political issue. Some states, attempting to hold back the tide of higher costs, have placed severe limits on auto insurance rates and have even sought refunds from insurers. And ways to cover losses from terrorism have become a major issue. Temporarily, in response to the massive losses of 9/11, the federal government adopted a heavily subsidized three-year program for reinsuring terror-related building losses. (The program was extended.) In theory, the government can recoup some losses after the fact by levying a surcharge on the premiums of surviving firms.

The Basics

An understanding of insurance must begin with the concept of risk—that is, the variation in possible outcomes of a situation. A’s shipment of goods to Europe might arrive safely or be lost in transit. B may incur zero medical expenses in a good year, but if she is struck by a car they could be upward of $100,000. We cannot eliminate risk from life, even at extraordinary expense. Paying extra for double-hulled tankers still leaves oil spills possible. The only way to eliminate auto-related injuries is to eliminate automobiles. Thus, the effective response to risk combines two elements: efforts or expenditures to lessen the risk, and the purchase of insurance against whatever risk remains. Consider A’s shipment of, say, $1 million in goods.
If the chance of loss on each trip is 3 percent, the loss will be $30,000 (3 percent of $1 million), on average. Let us assume that A can ship by a more costly method and cut the risk by one percentage point, thus saving $10,000, on average. If the additional cost of this shipping method is less than $10,000, it is a worthwhile expenditure. But if cutting risk by a further percentage point will cost $15,000, it sacrifices resources. To deal with the remaining 2 percent risk of losing $1 million, A should think about insurance. To cover administrative costs, the insurer might charge $25,000 for a risk that will incur average losses of no more than $20,000. From A’s standpoint, however, the insurance may be worthwhile because it is a comparatively inexpensive way to deal with the potential loss of $1 million. Note the important economic role of such insurance: without it, A might not be willing to risk shipping goods in the first place. In exchange for a premium, the insurer will pay a claim should a specified contingency—such as death, medical bills, or, in this instance, shipment loss—arise. The insurer—whether a corporation with diversified ownership or a mutual company made up of the insureds themselves—is able to offer such protection against financial loss by pooling the risks from a large group of similarly situated individuals or firms. The laws of probability ensure that only a tiny fraction of these insured shipments will be lost, or only a small fraction of the insured population will face expensive hospitalization in a year. If, for example, each of 100,000 individuals independently faces a 1 percent risk in a year, on average, 1,000 will have losses. If each of the 100,000 people paid a premium of $1,000, the insurance company would have collected a total of $100 million. Leaving aside administrative costs, this is enough to pay $100,000 to anyone who had a loss. But what would happen if 1,100 people had losses? 
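The arithmetic of the shipping example and the risk pool, along with a rough answer to that closing question, can be sketched as follows. The normal approximation to the binomial used at the end is my own back-of-the-envelope method, not the article’s:

```python
import math

# A's shipment: $1 million of goods with a 3 percent chance of loss.
value = 1_000_000
print(round(0.03 * value))  # 30000 -> the average loss per trip
print(round(0.01 * value))  # 10000 -> average saving from cutting risk one point
print(round(0.02 * value))  # 20000 -> average loss left to insure against

# The pool: 100,000 insureds, each with an independent 1 percent risk,
# each paying a $1,000 premium.
n, p, premium = 100_000, 0.01, 1_000
collected = n * premium            # $100 million collected in premiums
print(round(collected / (n * p)))  # 100000 -> payable per expected loss

# How likely are 1,100 or more losses? A normal approximation to the
# binomial distribution, with a continuity correction:
sd = math.sqrt(n * p * (1 - p))    # standard deviation: about 31.5 losses
z = (1_099.5 - n * p) / sd
prob = 0.5 * math.erfc(z / math.sqrt(2))
print(prob)                        # under one in a thousand
```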
The answer, fortunately, is that such an outcome is exceptionally unlikely. Insurance works through the magic of the law of large numbers. This law assures that when a large number of people face a low-probability event, the proportion experiencing the event will be close to the expected proportion. For instance, with a pool of 100,000 people who each face a 1 percent risk, the law of large numbers says that 1,100 people or more will have losses only one time in one thousand.

In many cases, however, the risks to different individuals are not independent. In a hurricane, airplane crash, or epidemic, many may suffer at the same time. Insurance companies spread such risks not only across individuals, but also across good years and bad, building up reserves in the good years to deal with heavier claims in bad ones. For further protection, they also diversify across lines, selling both health and homeowners’ insurance, for example.

The risks normally insured are unintentional, either due to the actions of nature or the inadvertent consequences of human activity. Terrorism creates a new model for insurance for three reasons: (1) The losses are man-made and intentional. (2) Massive numbers of people and structures could be harmed. (Theft losses fall in the first category, but not in the second.) (3) Historical experience does not provide a yardstick for assessing likely risk levels. Nuclear war presented equivalent challenges in the twentieth century. Had there been a significant nuclear war, insurance companies simply would not have paid. The losses would have been too massive to pay out of assets, and many of the assets underlying the insurance would have been destroyed. In time, appropriate insurance arrangements for this new category of massive risk will be developed.

The Identity and Behavior of the Insured

An economist views insurance as being like most other commodities. It obeys the laws of supply and demand, for example.
However, it is unlike many other commodities in one important respect: the cost of providing insurance depends on the identity of the purchaser. A year of health insurance for an eighty-year-old costs more to provide than one for a fifty-year-old. It costs more to provide auto insurance to teenagers than to middle-aged people. If a company mistakenly sells health policies to old folks at a price appropriate for young folks, it will assuredly lose money, just as a restaurant will lose if it sells twenty-dollar steak dinners for ten dollars. The restaurant would lure lots of steak eaters. So, too, would the insurance company attract large numbers of older clients. Because of the differential cost of providing coverage, and because customers search for their lowest price, insurance companies go to great pains to set different premiums for different groups, depending on the risks each will impose. Recognizing that the identity of the purchaser affects the cost of insurance, insurers must be careful to whom they offer insurance at a particular price. Those high-risk individuals whose knowledge of their risk is better than that of the insurers will step forth to purchase, knowing that they are getting a good deal. This is a process called adverse selection, which means that the mix of purchasers will be adverse to the insurer. What leads to this adverse selection is asymmetric information: potential purchasers have more information than the sellers. The potential purchasers have “hidden” information that relates to their particular risk, and those whose information is unfavorable are thus most likely to purchase. For example, if an insurer determined that 1 percent of fifty-year-olds would die in a year, it might establish a premium of $12 per $1,000 of coverage—$10 to cover claims and $2 to cover administrative costs. The insurer might naively expect to break even. 
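The break-even arithmetic, and how a riskier-than-average pool upsets it, can be put in numbers. The $12 premium is from the text; the higher-risk pool’s expected claims figure is a hypothetical of my own for illustration:

```python
# Premium per $1,000 of life insurance coverage for fifty-year-olds:
expected_claims = 10   # 1 percent mortality -> $10 of expected claims per $1,000
admin = 2              # administrative costs per $1,000
premium = expected_claims + admin
print(premium)  # 12 -> breaks even if the insured pool matches the population

# Adverse selection: suppose self-selection leaves the insurer with a
# pool whose expected claims are $20 per $1,000 (a hypothetical figure).
selected_claims = 20
print(premium - (selected_claims + admin))  # -10 -> a loss per $1,000 insured
```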
However, insureds who ate poorly or who engaged in high-risk professions or whose parents had died young might have an annual risk of mortality of 3 percent. They would be most likely to purchase insurance. Health fanatics, by contrast, might forgo life insurance because for them it is a bad deal. Through adverse selection, the insurer could end up with a group whose expected costs were, say, $20 per $1,000 rather than the $10 per $1,000 for the population as a whole; at a $12 price, the insurer would lose money.

The traditional approach to the adverse selection problem is to inspect each potential insured. Individuals taking out substantial life insurance must submit to a medical exam. Fire insurance might be granted only after a check of the alarm and sprinkler systems. But no matter how careful the inspection, some information will remain hidden, and a disproportionately high number of those choosing to insure will be high risk. Therefore, insurers routinely set high rates to cope with adverse selection. Alas, such high rates discourage ordinary-risk buyers from buying insurance.

Though this problem of adverse selection is best known in insurance problems, it applies broadly across economics. Thus, a company that “insures” its salesmen by offering a relatively high salary compared with commission will end up with many salesmen who are not confident of their abilities. Colleges that insure their students by offering many pass-fail courses can expect weaker students to enroll.

Moral Hazard or Hidden Action

Once insured, an individual has less incentive to avoid the risk of a bad outcome. A person with automobile collision insurance, for example, is more likely to venture forth on an icy night. Federal pension insurance induces companies to underfund (see pensions) and weakens the incentives for their employees to complain. Federally subsidized flood insurance encourages citizens to build homes on floodplains.
Insurers use the term “moral hazard” to describe this phenomenon. It means, simply, that insured people undertake actions they would otherwise avoid. Stated in less judgmental language, people respond to incentives. In the above salesman example, not only are low-quality salesmen enticed to join, but all salesmen, even those of high quality, are given an incentive to be less productive. Ideally, the insurer would like to be able to monitor the insured’s behavior and take appropriate action. Flood insurance might not be sold to new residents of a floodplain. Collision insurance might not pay off if it can be proven that the policyholder had been drinking or had otherwise engaged in reckless behavior. But given the difficulty of monitoring many actions, insurers accept that once policies are issued, behavior will change adversely, and more claims will be made. The moral hazard problem is often encountered in areas that, at first glance, do not seem associated with traditional insurance. Products covered under optional warranties tend to get abused, as do autos that are leased with service contracts.

Equity Issues

The same insurance policy will have different costs for serving individuals whose behavior or underlying characteristics may differ. Because these cost differences influence pricing, some people see an equity dimension to insurance. Some think, for example, that urban drivers should not pay much more than rural drivers to protect themselves from auto liability, even though urban driving is riskier. But if prices are not allowed to vary in relation to risk, insurers will seek to avoid various classes of customers altogether, and availability will be restricted. When sellers of health insurance are not allowed to find out if potential clients are HIV-positive, for example, insurance companies often respond by refusing to insure, say, never-married men over age forty. Equity issues in insurance are addressed in a variety of ways in the real world.
Most employers cross-subsidize health insurance, providing the same coverage at the same price to older, higher-risk workers and younger, lower-risk ones. Sometimes the government provides the “insurance” itself, although the federal government’s Medicare and Social Security programs are really a combined tax and subsidy scheme—one that gives a bigger benefit to those who live longer. The government’s decision not to tax employer-provided health insurance as income acts like a subsidy.

In pursuit of equity, governments may set insurance rates, as many states do with auto insurance. The traditional public-interest argument for government rate regulation is that it serves to control a monopoly. But this argument fails with auto insurance: in most regulated insurance markets, there are dozens of competing insurers. Insurance rates are regulated to help some groups—usually those imposing high risks—at the expense of others. The Massachusetts auto insurance market provides an example. High-cost drivers are subsidized at the expense of all other drivers. Thus, inexperienced, occasional drivers in Massachusetts paid, on average, $1,967 for insurance in 2004, compared with $1,114 for experienced drivers. In contrast, in neighboring Connecticut, where such cross-subsidies were not imposed, the respective rates were $3,518 and $845.

Such practices raise a new class of equity issues. Should the government force people who live quiet, low-risk lives to subsidize the high-risk fringe? Most people’s response to this question depends on whether they think people can control risks. Because most of us think we should not encourage people to engage in behavior that is costly to the system, we conclude, for example, that nonsmokers should not have to pay for smokers. The question becomes more complex when it comes to health care premiums for, say, gay men or recovering alcoholics, whose health care costs are likely to be greater than average.
Moral judgments inevitably creep into such discussions. And sometimes the facts lead to disquieting considerations. Smokers, for example, tend to die early, reducing expected costs for Social Security. Should they, therefore, pay lower Social Security taxes? Black men have shorter lives than white men. Should black men pay lower Social Security taxes?

Government’s Role in Insurance

Government plays four major roles with insurance: (1) Government writes it directly, as with Social Security, terrorism reinsurance, and pension guarantees—via the Pension Benefit Guaranty Corporation (PBGC)—should a corporation fail. (2) Government subsidizes insurance: quite explicitly in some programs, such as federal flood insurance, but only de facto in other cases (e.g., the PBGC has a large projected deficit). (3) Government mandates a residual market for high risks (e.g., Florida’s program for hurricanes or many states’ programs for high-risk drivers). Governments hold down prices in such markets either by creating a state fund to cover losses or by requiring insurers who participate in the voluntary market to pick up a certain portion of this high-risk market. (4) Government regulates matters such as premiums, insurance company solvency (to make sure that insureds get paid), and permissible criteria for pricing insurance (e.g., for auto insurance, race and ethnicity are banned everywhere; Michigan bans geographic designations smaller than a city).

Property liability insurance is regulated at the state level, providing many opportunities to compare the efficacy of alternative approaches. The three main regulatory approaches to pricing have been: (1) prior approval (regulators must approve rates before they go into effect); (2) use and file (companies set rates, but regulators can disallow them subsequently if they are found excessive); and (3) open competition (a market-based system in which rates are deemed not excessive as long as there is competition).
Empirical studies conflict as to whether regulation leads to lower prices. Government participates far more in insurance markets than in typical markets. The two great dangers with government participation in insurance arise when, as is common, the goals for participation remain vague (e.g., promoting the insured activity, redistributing income, or spreading risk effectively), or when its expected cost is not recognized in budgets. With insurance, as with all government endeavors, the citizenry deserves to know both the rationale and the cost.

Conclusion

The traditional role of insurance remains the essential one recognized centuries ago: that of spreading risk among similarly situated individuals. Insurance works most effectively when losses are not under the control of individuals (thus avoiding moral hazard) and when the losses are readily determined (lest significant transactions costs associated with lawsuits become a burden). Individuals and firms insure against their most major risks—high health costs, the inability to pay depositors—which often are politically salient issues as well. Not surprisingly, government participation—as a setter of rates, as a subsidizer, and as a direct provider of insurance services—has become a major feature in insurance markets. Its highly subsidized terrorism reinsurance provides a dramatic example. Political forces may sometimes triumph over sound insurance principles, but such victories are Pyrrhic. In a sound market, we must recognize that with insurance, as with bread and steel, the cost of providing it must be paid.

About the Author

Richard Zeckhauser is the Frank P. Ramsey Professor of Political Economy at Harvard University’s John F. Kennedy School of Government. He writes frequently on risk-related issues. Practicing what he preaches, in 2003 and 2004 he came in second and third in two different U.S. national bridge championships.

Further Reading

Arrow, Kenneth J. “The Economics of Agency.” In John W.
Pratt and Richard J. Zeckhauser, eds., Principals and Agents: The Structure of Business. Boston: Harvard Business School Press, 1985.
Arrow, Kenneth J. Essays in the Theory of Risk-Bearing. Amsterdam: North-Holland, 1971.
Cutler, David, and Richard Zeckhauser. “The Anatomy of Health Insurance.” In Joseph P. Newhouse and Anthony Culyer, eds., The Handbook of Health Economics. New York: Elsevier, 2000.
Cutler, David, and Richard Zeckhauser. “Extending the Theory to Meet the Practice of Insurance.” In Robert E. Litan and Richard Herring, eds., Brookings-Wharton Papers on Financial Services. Washington, D.C.: Brookings Institution Press, 2004. Pp. 1–53.
Gollier, Christian. The Economics of Risk and Time. Cambridge: MIT Press, 2001.
Huber, Peter W. Liability: The Legal Revolution and Its Consequences. New York: Basic Books, 1988.


International Capital Flows

International capital flows are the financial side of international trade.1 When someone imports a good or service, the buyer (the importer) gives the seller (the exporter) a monetary payment, just as in domestic transactions. If total exports were equal to total imports, these monetary transactions would balance at net zero: people in the country would receive as much in financial flows as they paid out in financial flows. But generally the trade balance is not zero. The most general description of a country’s balance of trade, covering its trade in goods and services, income receipts, and transfers, is called its current account balance. If the country has a surplus or deficit on its current account, there is an offsetting net financial flow consisting of currency, securities, or other real property ownership claims. This net financial flow is called its capital account balance. When a country’s imports exceed its exports, it has a current account deficit. Its foreign trading partners who hold net monetary claims can continue to hold their claims as monetary deposits or currency, or they can use the money to buy other financial assets, real property, or equities (stocks) in the trade-deficit country. Net capital flows comprise the sum of these monetary, financial, real property, and equity claims. Capital flows move in the opposite direction to the goods and services trade claims that give rise to them. Thus, a country with a current account deficit


Inflation

Economists use the term “inflation” to denote an ongoing rise in the general level of prices quoted in units of money. The magnitude of inflation—the inflation rate—is usually reported as the annualized percentage growth of some broad index of money prices. With U.S. dollar prices rising, a one-dollar bill buys less each year. Inflation thus means an ongoing fall in the overall purchasing power of the monetary unit.

Inflation rates vary from year to year and from currency to currency. Since 1950, the U.S. dollar inflation rate, as measured by the December-to-December change in the U.S. Consumer Price Index (CPI), has ranged from a low of −0.7 percent (1954) to a high of 13.3 percent (1979). Since 1991, the rate has stayed between 1.6 percent and 3.3 percent per year. Since 1950 at least eighteen countries have experienced episodes of hyperinflation, in which the CPI inflation rate has soared above 50 percent per month. In recent years, Japan has experienced negative inflation, or “deflation,” of around 1 percent per year, as measured by the Japanese CPI. Central banks in most countries today profess concern with keeping inflation low but positive. Some specify a target range for the inflation rate, typically 1–3 percent.

Although economies on silver and gold standards sometimes experienced inflation, inflation rates in such economies seldom exceeded 2 percent per year, and the overall experience over the centuries was inflation of close to zero. Economies on paper-money standards, which all economies have today, have displayed much more inflation. As Peter Bernholz (2003, p. 1) points out, “the worst excesses of inflation occurred only in the 20th century” in countries where metallic standards were no longer in force. In 1971 the U.S. government cut the U.S. dollar’s last link to gold, ending its commitment to redeem dollars for gold at a fixed rate for foreign central banks.
Even among countries that have avoided hyperinflation, inflation rates have generally been higher in the period after 1971. But inflation rates in most countries have been lower since 1985 than they were in 1971–1985.

Measuring Inflation

In the United States, the inflation rate is most commonly measured by the percentage rise in the Consumer Price Index, which is reported monthly by the Bureau of Labor Statistics (BLS). A CPI of 120 in the current period means that it now takes $120 to purchase a representative basket


Industrial Revolution and the Standard of Living

Between 1760 and 1860, technological progress, education, and an increasing capital stock transformed England into the workshop of the world. The industrial revolution, as the transformation came to be known, caused a sustained rise in real income per person in England and, as its effects spread, in the rest of the Western world. Historians agree that the industrial revolution was one of the most important events in history, marking the rapid transition to the modern age, but they disagree vehemently about many aspects of the event. Of all the disagreements, the oldest one is over how the industrial revolution affected ordinary people, often called the working classes. One group, the pessimists, argues that the living standards of ordinary people fell, while another group, the optimists, believes that living standards rose. At one time, behind the debate was an ideological argument between the critics (especially Marxists) and the defenders of free markets. The critics, or pessimists, saw nineteenth-century England as Charles Dickens’s Coketown or poet William Blake’s “dark, satanic mills,” with capitalists squeezing more surplus value out of the working class with each passing year. The defenders, or optimists, saw nineteenth-century England as the birthplace of a consumer revolution that made more and more consumer goods available to ordinary people with each passing year. The ideological underpinnings of the debate eventually faded, probably because, as T. S. Ashton pointed out in 1948, the industrial revolution meant the difference between the grinding poverty that had characterized most of human history and the affluence of the modern industrialized nations. No economist today seriously disputes the fact that the industrial revolution began the transformation that has led to extraordinarily high (compared with the rest of human history) living standards for ordinary people throughout the market industrial economies. 
The standard-of-living debate today is not about whether the industrial revolution made people better off, but about when. The pessimists claim no marked improvement in standards of living until the 1840s or 1850s. Most optimists, by contrast, believe that living standards were rising by the 1810s or 1820s, or even earlier. The most influential recent contribution to the optimist position (and the center of much of the subsequent standard-of-living debate) is a 1983 paper by Peter Lindert and Jeffrey Williamson that produced new estimates of real wages in England for the years 1755 to 1851. These estimates are based on money wages for workers in several broad categories, including both blue-collar and white-collar occupations. The authors’ cost-of-living index attempted to represent actual working-class budgets. Lindert and Williamson’s analyses produced two striking results. First, they showed that real wages grew slowly between 1781 and 1819. Second, after 1819, real wages grew rapidly for all groups of workers. For all blue-collar workers—a good stand-in for the working classes—the Lindert-Williamson index number for real wages rose from 50 in 1819 to 100 in 1851. That is, real wages doubled in just thirty-two years. Other economists challenged Lindert and Williamson’s optimistic findings. Charles Feinstein produced an alternative series of real wages based on a different price index. In the Feinstein series, real wages rose much more slowly than in the Lindert-Williamson series. Other researchers have speculated that the largely unmeasured effects of environmental decay more than offset any gains in well-being attributable to rising wages. Wages were higher in English cities than in the countryside, but rents


Industrial Concentration

“Industrial concentration” refers to a structural characteristic of the business sector. It is the degree to which production in an industry—or in the economy as a whole—is dominated by a few large firms. Once assumed to be a symptom of “market failure,” concentration is, for the most part, seen nowadays as an indicator of superior economic performance. In the early 1970s, Yale Brozen, a key contributor to the new thinking, called the profession’s about-face on this issue “a revolution in economics.” Industrial concentration remains a matter of public policy concern even so.

The Measurement of Industrial Concentration

Industrial concentration was traditionally summarized by the concentration ratio, which simply adds the market shares of an industry’s four, eight, twenty, or fifty largest companies. In 1982, when new federal merger guidelines were issued, the Herfindahl-Hirschman Index (HHI) became the standard measure of industrial concentration. Suppose that an industry contains ten firms that individually account for 25, 15, 12, 10, 10, 8, 7, 5, 5, and 3 percent of total sales. The four-firm concentration ratio for this industry—the most widely used number—is 25 + 15 + 12 + 10 = 62, meaning that the top four firms account for 62 percent of the industry’s sales. The HHI, by contrast, is calculated by summing the squared market shares of all of the firms in the industry: 25² + 15² + 12² + 10² + 10² + 8² + 7² + 5² + 5² + 3² = 1,366. The HHI has two distinct advantages over the concentration ratio. It uses information about the relative sizes of all of an industry’s members, not just some arbitrary subset of the leading companies, and it weights the market shares of the largest enterprises more heavily. In general, the fewer the firms and the more unequal the distribution of market shares among them, the larger the HHI.
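Both measures are easy to compute directly. The sketch below applies them to the ten-firm example from the text (market shares in percent); the function names are illustrative, not from any official source.

```python
# Two standard measures of industrial concentration, applied to the
# ten-firm example in the text.

def concentration_ratio(shares, n=4):
    """Sum of the n largest market shares (the n-firm concentration ratio)."""
    return sum(sorted(shares, reverse=True)[:n])

def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares of all firms."""
    return sum(s ** 2 for s in shares)

shares = [25, 15, 12, 10, 10, 8, 7, 5, 5, 3]
print(concentration_ratio(shares))  # 62
print(hhi(shares))                  # 1366
```

Note how the squaring weights the largest firms most heavily: the top firm alone contributes 625 of the 1,366 index points.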
Two four-firm industries, one containing equal-sized firms each accounting for 25 percent of total sales, the other with market shares of 97, 1, 1, and 1, have the same four-firm concentration ratio (100) but very different HHIs (2,500 versus 9,412). An industry controlled by a single firm has an HHI of 100² = 10,000, while the HHI for an industry populated by a very large number of very small firms would approach the index’s theoretical minimum value of zero.

Concentration in the U.S. Economy

According to the U.S. Department of Justice’s merger guidelines, an industry is considered “concentrated” if the HHI exceeds 1,800; it is “unconcentrated” if the HHI is below 1,000. Since 1982, HHIs based on the value of shipments of the fifty largest companies have been calculated and reported in the manufacturing series of the Economic Census.1 Concentration levels exceeding 1,800 are rare. The exceptions include glass containers (HHI = 2,959.9 in 1997), motor vehicles (2,505.8), and breakfast cereals (2,445.9). Cigarette manufacturing also is highly concentrated, but its HHI is not reported owing to the small number of firms in that industry, the largest four of which accounted for 89 percent of shipments in 1997. At the other extreme, the HHI for machine shops was 1.9 the same year. Whether an industry is concentrated hinges on how narrowly or broadly it is defined, both in terms of the product it produces and the extent of the geographic area it serves. The U.S. footwear manufacturing industry as a whole is very unconcentrated (HHI = 317 in 1997); the level of concentration among house slipper manufacturers is considerably higher, though (HHI = 2,053.4). Similarly, although


Information

Since about 1970, an important strand of economic research, sometimes referred to as information economics, has explored the extent to which markets and other institutions process and convey information. Many of the problems of markets and other institutions result from costly information, and many of their features are responses to costly information. Many of the central theories and principles in economics are based on assumptions about perfect information. Among these, three stand out: efficiency, full employment of resources, and uniform prices.

Efficiency

At least since Adam Smith, most economists have believed that competitive markets are efficient, and that firms, in pursuing their own interests, enhance the public good “as if by an invisible hand.” A major achievement of economic science during the first half of the twentieth century was finding the precise sense in which that result is true. This result, known as the Fundamental Theorem of Welfare Economics, provides a rigorous analytic basis for the presumption that competitive markets allocate resources efficiently. In the 1980s economists made clear the hidden information assumptions underlying that theorem. They showed that in a wide variety of situations where information is costly (indeed, almost always), government interventions could make everyone better off if government officials had the right incentives. At the very least these results have undermined the long-standing presumption that markets are necessarily efficient.

Full Employment of Resources

A central result (or assumption) of standard economic theory is that resources are fully employed. The economy has a variety of mechanisms (savings and inventories provide buffers; price adjustments act as shock absorbers) that are supposed to dampen the effects of any shocks the economy experiences. In fact, for the past two hundred years economies have experienced large fluctuations, and there has been massive unemployment in the slumps.
Though the Great Depression of the 1930s was the most recent prolonged and massive episode, the American economy suffered major recessions from 1979 to 1982, and many European economies experienced prolonged high unemployment rates during the 1980s. Information economics has explained why unemployment may persist and why fluctuations are so large. The failure of wages to fall so that unemployed workers can find jobs has been explained by efficiency wage theories, which argue that the productivity of workers increases with higher wages (both because employees work harder and because employers can recruit a higher-quality labor force). If information about their workers’ output were costless, employers would not pay such high wages because they could costlessly monitor output and pay accordingly. But because monitoring is costly, employers pay higher wages to give workers an incentive not to shirk. While efficiency wage theory helps explain why unemployment may persist, other theories that focus on the implications of imperfect information in the capital markets can help explain economic volatility. One strand of this theory focuses on the fact that many of the market’s mechanisms for distributing risk, which are critical to an economy’s ability to adjust to economic shocks, are imperfect because of costly information. Most notable in this respect is the failure of equity markets. In recent years less than 10 percent of new capital has been raised via equity markets. Information economics explains why. First, issuers of equity generally know more about the value of the shares than buyers do, and are more inclined to sell when they think buyers are overvaluing their shares. But most potential buyers know that this incentive exists and, therefore, are wary of buying. Second, shareholders have only limited control over managers. Information about what management is doing, or should be doing, to maximize shareholder value is costly.
Thus, shareholders often limit the amount of “free cash” managers have to play with by imposing sufficient debt burdens to put managers’ “backs to the wall.” Managers must then exert strong efforts to meet those debt obligations and lenders will carefully scrutinize firms’ behavior. The fact that firms cannot (or choose not to) raise capital via equity markets means that if firms wish to invest more than their cash flow allows—or if they wish to produce more than they can finance out of their current working capital—they must turn to credit markets, and to banks in particular. From the firm’s perspective, borrowing has one major disadvantage: it imposes a fixed obligation on the firm. If it fails to meet that obligation, the firm can go bankrupt. (By contrast, an all-equity firm cannot go bankrupt.) Firms normally take actions to reduce the likelihood of bankruptcy by acting in a risk-averse manner. Risk-averse behavior, in turn, has two important consequences. First, it means that a firm’s behavior is affected by its net-worth position. When its financial position is adversely affected, it cuts back on all its activities (since there is some risk associated with virtually all activities);


Information and Prices

Modern economists excel at identifying theoretical reasons why markets might fail. While these theories may temper uncritical views of the market, it is important to note that markets do, in fact, work incredibly well. Indeed, markets work so thoroughly and quietly that their success too often goes unnoticed. Consider that the number of different ways to arrange, even in a single dimension, a mere twenty items is far greater than the number of seconds in ten billion years. Now consider that the world contains trillions of different resources: my labor, iron ore, Hong Kong harbor, the stage at the Met, countless stands of pine trees, fertile Russian plains, orbiting satellites, automobile factories—the list is endless. The number of different ways to use, combine, and recombine these resources is unimaginably colossal. And almost all of these ways are useless. It would be a mistake, for example, to combine Arnold Schwarzenegger with medical equipment and have him perform brain surgery. Likewise, it would be a genuine shame to use the fruit of Chateau Petrus’s vines to make grape juice. Only a tiny fraction of all the possible ways to allocate resources is useful. How can we discover these ways? Random chance clearly will not work. Nor will central planning—which is really just a camouflaged method of relying on random chance. It is impossible for a central planning body even to survey the full set of possible resource arrangements, much less to rank these according to how well each will serve human purposes. That citizens of modern market societies eat and bathe regularly; wear clean clothes; drive automobiles; fly to Rome, Italy, or Branson, Missouri, for holidays; and chat routinely on cell phones is powerful evidence that our economy is amazingly well arranged. 
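The combinatorial claim above can be checked directly: the number of orderings of twenty items is 20 factorial, which indeed exceeds the number of seconds in ten billion years.

```python
# Verifying the comparison in the text: orderings of 20 items
# versus seconds in ten billion years.
import math

arrangements = math.factorial(20)
seconds_in_ten_billion_years = 10_000_000_000 * 365.25 * 24 * 60 * 60

print(arrangements)  # 2432902008176640000 (about 2.4 quintillion)
print(arrangements > seconds_in_ten_billion_years)  # True
```

Ten billion years is roughly 3.2 × 10¹⁷ seconds, so the orderings outnumber the seconds by nearly a factor of ten, and that is for a mere twenty items in a single dimension.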
An effective means must be at work to ensure that some of the relatively very few patterns of resource use that are beneficial are actually used (rather than any of the 99.9999999+ percent of resource-use patterns that would be either useless or calamitous). The decentralized price system is that means. Critical to its functioning is the institution of private property with its associated duties and rights, including the duty to avoid physically harming and taking other people’s property, and the right to exchange property and its fruits at terms agreed on voluntarily. Each person seeks to use every parcel of his property in ways that yield him maximum benefit, either by consuming it most effectively according to his own subjective judgment or by employing it most effectively (“profitably”) in production. Market prices are vital to making such decisions.

Vital Role of Prices

Market prices are vital because they condense, in as objective a form as possible, information on the value of alternative uses of each parcel of property. Nearly every parcel of property has alternative uses. For example, a plot of land can be used to site a pumpkin patch, a restaurant, a suite of physicians’ offices, or any of many other things. If this plot of land is to be used beneficially rather than wastefully, those responsible for deciding how it will be used must be able to determine the likely worth of each possible alternative. Making such determinations requires reliable information. And market prices are a marvelously compact and reliable source of such information. Offers on the land from potential buyers or renters combine with the current owner’s assessment of the value of the land to him to create a price for the land. Each potential user values the land by at least as much as he is willing to bid. The more intense the bidding, the more likely that each bid will reflect the maximum value each bidder places on the land.
Of course, the market prices of goods or services that can be produced with the land are an especially important source of information exploited by potential users of the land to determine how much each will bid. If the land’s current owner cannot use it in a way that promises him as much value as he can get by selling it, he will sell to the buyer offering the highest price. If a commercial developer purchases the land as a site for doctors’ offices, it is because this buyer observed that the rents for office space currently paid by physicians are sufficiently high to justify his purchase of the land, construction of the buildings, and purchase and assembly of all other inputs necessary to create a suite of medical offices.


Innovation

“Innovation”: creativity; novelty; the process of devising a new idea or thing, or improving an existing idea or thing. Although the word carries a positive connotation in American culture, innovation, like all human activities, has costs as well as benefits. These costs and benefits have preoccupied economists, political philosophers, and artists for centuries.

Nature and Effects

Innovation can turn new concepts into realities, creating wealth and power. For example, someone who discovers a cure for a disease has the power to withhold it, give it away, or sell it to others.1 Innovations can also disrupt the status quo, as when the invention of the automobile eliminated the need for horse-powered transportation. Joseph Schumpeter coined the term “creative destruction” to describe the process by which innovation causes a free market economy to evolve.2 Creative destruction occurs when innovations make long-standing arrangements obsolete, freeing resources to be employed elsewhere, leading to greater economic efficiency. For example, when a business manager installs a new machine that replaces manual laborers, the laborers who lose their jobs are now free to put their labor into another enterprise, resulting in more productivity. In fact, in many cases, the number of jobs available will actually increase because the machinery is introduced. Henry Hazlitt provides the example of cotton-spinning machinery introduced in England in the 1760s.3 At the time, the English textile industry employed some 7,900 people, and many workers protested the introduction of machinery out of fear for their livelihoods. But in 1787 there were 320,000 workers in the English textile industry. Although the introduction of machinery caused temporary discomfort to some workers, the machinery increased the aggregate wealth of society by decreasing the cost of production. Amazingly, concerns over technology and job loss in the textile industry continue today.
One report notes that the introduction of new machinery in American textile mills between 1972 and 1992 coincided with a greater than 30 percent decrease in the number of textile jobs. However, that decrease was offset by the creation of new jobs. The authors conclude that “there is substantial entry into the industries, job creation rates are high, and productivity dynamics suggest surviving plants have emerged all the stronger while it has been the less productive plants that have exited.”4 According to Schumpeter, the process of technological change in a free market consists of three parts: invention (conceiving a new idea or process), innovation (arranging the economic requirements for implementing an invention), and diffusion (whereby people observing the new discovery adopt or imitate it). These stages can be observed in the history of several famous innovations. The Xerox photocopier was invented by Chester Carlson,5 a patent attorney frustrated by the difficulty of copying legal documents.6 After several years of tedious work, Carlson and a physicist friend successfully photocopied a phrase on October 22, 1938. But industry and government were not interested in further development of the invention. In 1944, the nonprofit Battelle Corporation,7 dedicated to helping inventors, finally showed interest. It and the Haloid Company (later called Xerox) invested in further development. Haloid announced the successful development of a photocopier on October 22, 1948, but the first commercially available copier was not sold until 1950. After another $16 million was invested in developing the photocopier concept, the Xerox 914 became the first simple push-button plain-paper copier. An immense success, it earned Carlson more than $150 million.8 In the following years, competing firms began selling copiers, and other inventions, such as the fax machine, adapted the technology.


Health Care

Is Health Care Different?

Health care is different from other goods and services: the health care product is ill-defined, the outcome of care is uncertain, large segments of the industry are dominated by nonprofit providers, and payments are made by third parties such as the government and private insurers. Many of these factors are present in other industries as well, but in no other industry are they all present. It is the interaction of these factors that tends to make health care unique. Even so, it is easy to make too much of the distinctiveness of the health care industry. Various players in the industry—consumers and providers, to name two—respond to incentives just as in other industries. Federal and state governments are major health care spenders. Together they account for 46 percent of national health care expenditures; nearly three-quarters of this is attributable to Medicare and Medicaid. Private health insurance pays for more than 35 percent of spending, and out-of-pocket consumer expenditures account for another 14 percent.1 Traditional national income accounts substantially understate the role of government spending in the health care sector. Most Americans under age sixty-five receive their health insurance through their employers. This form of employee compensation is not subject to income or payroll taxes, and as a result, the tax code subsidizes employer purchase of employee health insurance. The Joint Economic Committee of the U.S. Congress estimated that in 2002, the federal tax revenue forgone as a result of this tax “subsidy” equaled $137 billion.2

Risk and Insurance

Risk of illness and the attendant cost of care lead to the demand for health insurance. Conventional economics argues that the probability of purchasing health insurance will be greater when the consumer is particularly risk averse, when the potential loss is large, when the probability of loss is neither too large nor too small, and when incomes are lower.
The previously mentioned tax incentive for the purchase of health insurance increases the chances that health insurance will be purchased. Indeed, the presence of a progressive income tax system implies that higher income consumers will buy even more insurance. The 2002 Current Population Survey reports that nearly 83 percent of the under-age-sixty-five population in the United States had health insurance. More than three-quarters of these people had coverage through an employer, fewer than 10 percent purchased coverage on their own, and the remainder had coverage through a government program. Virtually all of those aged sixty-five and older had coverage through Medicare. Nonetheless, approximately 43.3 million Americans did not have health insurance in 2002.3 The key effect of health insurance is to lower the out-of-pocket price of health services. Consumers purchase goods and services up to the point where the marginal benefit of the item is just equal to the value of the resources given up. In the absence of insurance a consumer may pay sixty dollars for a physician visit. With insurance the consumer is responsible for paying only a small portion of the bill, perhaps only a ten-dollar copay. Thus, health insurance gives consumers an incentive to use health services that have only a very small benefit even if the full cost of the service (the sum of what the consumer and the insurer must pay) is much greater. This overuse of medical care in response to an artificially low price is an example of “moral hazard” (see insurance). Strong evidence of the moral hazard from health insurance comes from the RAND Health Insurance Experiment, which randomly assigned families to health insurance plans with various coinsurance and deductible amounts. Over the course of the study, those required to pay none of the bill used 37 percent more physician services than those who paid 25 percent of the bill. 
Those with “free care” used 67 percent more than those who paid virtually all of the bill. Prescription drugs were about as price sensitive as physician services. Hospital services were less price sensitive, but ambulatory mental health services were substantially more responsive to lower prices than were physician visits.4

Is the Spending Worth It?

National health care spending in 2002 was $1.55 trillion, 14.9 percent of GDP. By comparison, the manufacturing sector constituted only 12.9 percent of GDP. Adjusted for inflation, health care spending in the United States increased by nearly 102 percent over the 1993–2002 period. Hospital services reflect 31 percent of spending; professional services, 22 percent; and drugs, medical supplies, and equipment reflect nearly 14 percent. David Cutler and Mark McClellan note that between 1950 and 1990 the present value of per person medical spending in the United States increased by $35,000 and life expectancy increased by seven years. An additional year of life is conventionally valued at $100,000, and so, using a 3 percent real interest rate, the present value of the extra years is $135,000. Thus the extra spending on medical care is worth the cost if medical spending accounts for more than one-quarter ($35,000/$135,000) of the increase in longevity. Researchers have found that the substantial improvements in the treatment of heart attacks and low-birth-weight births over this period account, just by themselves, for one-quarter of the overall mortality reduction. Thus, the increased health spending seems to have been worth the cost.5 This does not mean that there is no moral hazard. Much spending is on things that have no effect on mortality and little effect on quality of life, and these are encouraged when the patient pays only a fraction of the bill.

Taxes and Employer-Sponsored Health Insurance

There are three reasons why most people under age sixty-five get their health insurance through an employer.
First, employed people, on average, are healthier than those who are unemployed; therefore, they have fewer insurance claims. Second, the sales and administrative costs of group policies are lower. Third, health insurance premiums paid by an employer are not taxed. Thus, employers and their employees have a strong incentive to substitute broader and deeper health insurance coverage for money wages. Someone in the 27 percent federal income tax bracket, paying 5 percent state income tax and 7.65 percent in Social Security and Medicare taxes, would find that an extra dollar of employer-sponsored health insurance effectively costs him less than sixty-one cents. Workers, not employers, ultimately pay for the net-of-taxes cost of employer-sponsored health insurance. Employees are essentially paid the value of what they produce. Compensation can take many forms: money wages, vacation days, pensions, and health insurance coverage. If health insurance is added to the compensation bundle or if the health insurance becomes more expensive, something else must be removed from the bundle. Perhaps the pension plan is reduced; perhaps a wage increase is smaller than it otherwise would have been. A recent study demonstrates the effects of rising insurance premiums on wages and other benefits in a large firm. This firm provided employees with wages and “benefits credits” that they could spend on health insurance, pensions, vacation days, and so on. Workers could trade wages for additional benefits credits, and vice versa. Health insurance premiums on all plans increased each year. When all health insurance premiums increased, the workers switched to relatively less expensive health plans, took fewer other benefits, and reduced their take-home pay. A 10 percent increase in health insurance premiums led to increased insurance expenditures of only 5.2 percent because many workers shifted to relatively cheaper health plans offered by the employer. 
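The tax-wedge arithmetic in the example above is simple to make explicit. The sketch below assumes, as the text does, that each dollar of employer-paid insurance substitutes one-for-one for taxable wages; the function name is illustrative.

```python
# Effective cost, in forgone take-home pay, of $1 of employer-sponsored
# health insurance, given that insurance premiums escape income and
# payroll taxes. Rates are the worker described in the text.

def after_tax_cost(federal: float, state: float, payroll: float) -> float:
    """Cost of $1 of untaxed insurance = $1 times (1 - combined marginal rate)."""
    return 1.0 - (federal + state + payroll)

# 27% federal income tax, 5% state income tax, 7.65% Social Security
# and Medicare taxes.
cost = after_tax_cost(0.27, 0.05, 0.0765)
print(round(cost, 4))  # 0.6035 -- "less than sixty-one cents"
```

Because the combined marginal rate here is 39.65 percent, a dollar of wages would have yielded only about 60 cents of take-home pay, which is exactly what the worker gives up to obtain a full dollar of insurance.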
The bulk of these higher expenditures (71 percent) was paid for with lower take-home pay; 29 percent by giving up some other benefits.6 Thus, if insurance premiums increased, on average, by $200, the typical worker spent $104 more on coverage and paid for this by reducing take-home pay by $74 and giving up $30 in other benefits. These so-called compensating wage differentials, reductions in wages due to higher nonwage benefits, have important policy implications. They imply, for example, that a governmental requirement that all employers provide health insurance will result in lower wages for the affected workers.

Growth and Effects of Managed Care

The health care industry has undergone fundamental changes since 1990 as a result, in large part, of the growth of managed care. As recently as 1993, 49 percent of insured workers had coverage through a conventional insurance plan; in 2002 only 5 percent did so. The rest were in health maintenance organizations (HMOs), preferred provider organizations (PPOs), or other forms of managed care. Unlike conventional insurance plans, managed care plans provide coverage only for care received from a selected set of providers in a community. The basic idea with managed care is to limit the moral hazard that comes from overuse of health care, thus keeping insurance premiums lower than otherwise and potentially making the insured person, his employer, and the insurance company better off. An HMO typically provides coverage only if the care is delivered by a member of its hospital, physician, or pharmacy panel. PPOs allow subscribers to use nonpanel providers, but only if the subscriber pays a higher out-of-pocket price. Conventional plans allow subscribers to use any licensed provider in the community, usually for the same out-of-pocket price. Managed care changed the nature of competition among providers.
Prior to the growth of managed care, hospitals competed for patients (and their physicians) by providing higher-quality care, more amenities, and more services. This so-called medical arms race resulted in the unusual economic circumstance that more hospitals in a market resulted in higher, not lower, prices. Conventional insurers (as well as government programs) essentially paid providers on a cost basis. The more that was spent, the more that was received. So providers rationally competed along dimensions that mattered. Managed care changed this by the use of “selective contracting.” Not every provider in the community got a contract from the managed care plan. Contracts were awarded based on quality, amenities, services, and price. Research has demonstrated that in the presence of selective contracting, the usual laws of economics apply: the presence of more providers in a market results in lower prices, more idle capacity results in lower prices, and a larger market share on the part of an insurer results in lower prices paid to providers. As a consequence, health care costs increased less rapidly than they otherwise would have and health care markets have become much more competitive.7 Managed care savings have been called illusory. The plans have been accused of enrolling healthier individuals and providing less intense care. It is true that managed care plans disproportionately attract healthier subscribers. If this were all there was to managed care, the differences in costs between managed care and conventional coverage would be illusory. However, a 2001 study demonstrates that the innovation offered by managed care is its ability to negotiate lower prices. The authors examined the mix of enrollees, the service intensity, and the prices paid for care among Massachusetts public employees in conventional and HMO plans. The focus was on enrollees with one of eight medical conditions.
Across these eight conditions, the HMOs had per capita plan costs that were $107 lower, on average. Fifty-one percent of the difference was attributable to the younger, healthier individuals the HMOs enrolled; 5 percent was attributable to less-intense treatments; and 45 percent was attributable to lower negotiated prices. The conventional plan paid more than $72,600, on average, for coronary artery bypass graft surgery, while the HMO plans in the study, on average, paid less than $52,000.8 Selective contracting arguably led to the slower rate of increase in health insurance premiums through the mid-1990s. Since that time insurance premiums have increased more rapidly. Health economists believe that this change results from consumers’ unwillingness to accept the limited provider choice that comes with selective contracting, as well as from the reduction in competition that has resulted from consolidation in the health care industry.

Government-Provided Health Insurance

Medicare is a federal tax-subsidy program that provides health insurance for some forty million persons aged sixty-five and older in the United States. Medicare Part A, which provides hospital and limited nursing home care, is funded by payroll taxes imposed on both employees and employers. Part B covers physician services. Beneficiaries pay 25 percent of these costs through a monthly premium; the other 75 percent of Part B costs is paid from general tax revenues. Part C, now called “Medicare Advantage,” allows beneficiaries to join Medicare-managed care plans. These plans are paid from Part A and Part B revenues. Part D is the new Medicare prescription drug program enacted in 2003 but not fully implemented until 2006. In 1983 Medicare began paying hospitals on a diagnosis-related group (DRG) basis; that is, payments were made for more than five hundred specific inpatient diagnoses. Prior to DRGs, hospitals were paid on an allowable cost basis.
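The incentive difference between fixed DRG payment and the older allowable-cost basis can be sketched with hypothetical numbers. The payment and cost figures below are illustrative assumptions, not data from the text:

```python
def margin_under_drg(drg_payment: float, cost_per_day: float, days: float) -> float:
    # Under a DRG, the hospital receives a fixed sum for the diagnosis,
    # so every additional day of stay reduces its margin.
    return drg_payment - cost_per_day * days

def margin_under_cost_basis(cost_per_day: float, days: float, markup: float = 0.0) -> float:
    # Under an allowable-cost basis, reimbursement tracked what was spent,
    # so a longer stay did not hurt (and with any markup, helped) the hospital.
    return markup * cost_per_day * days
```

With a hypothetical $8,000 DRG payment and $1,000 daily cost, discharging two days earlier raises the hospital's margin by $2,000, whereas under cost-based payment the extra days would simply have been reimbursed.
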
The DRG system changed the economic incentives facing hospitals, reduced the average length of stay, and reduced Medicare expenditures relative to the old system. In 1992 Medicare began paying physicians based on a fee schedule derived from a resource-based relative value scale (RBRVS) that ranks procedures based on their complexity, effort, and practice costs. As such, the RBRVS harks back to the discredited labor theory of value (see marxism). Medicare payments, therefore, do not necessarily reflect market prices and are likely to over- or underpay providers relative to a market or competitive bidding approach. Thus, it is not surprising that physicians have argued that the system pays less than costs, and some have begun to refuse to accept new Medicare patients. Moreover, the Medicare program effectively prohibits physicians from accepting payments higher than the fee schedule from Medicare beneficiaries. The result is a system of price controls that will produce shortages whenever the fee schedule is below the market-clearing price. Medicaid, a federal-state health care program for the poor, covers more than forty million people. The federal government pays 50-85 percent of the cost of the program, depending on the relative per capita income of the state. States have considerable flexibility in determining eligibility and the extent of coverage within broad federal guidelines. Medicaid is essentially three distinct programs—one for low-income pregnant women and children, one for the disabled, and one for nursing home care for the elderly. Approximately 47 percent of recipients are children, but the aged and disabled receive more than 70 percent of the payments. Much of this is due to nursing home expenditures; Medicaid provides approximately 40 percent of nursing home revenue. State governments have gamed the system to obtain federal matching Medicaid funds.
The state would tax a hospital or nursing home based on Medicaid days of care or the number of licensed beds. It would then match the taxes with federal matching dollars at a ratio of two to one or three to one and essentially return the taxed dollars to the provider. When the federal government said this was not permissible, the states dropped the taxes and asked for “provider contributions” from the hospitals, nursing homes, and so on. Most states used the new federal money for health care services. Others simply reduced general fund expenditures by the amount of the new federal dollars—essentially using federal Medicaid dollars to fund road construction and other state functions. Neither “taxes” nor “contributions” may now be used. The states do, however, funnel state mental health and other state health program dollars through Medicaid to take advantage of the matching grants. The expansion of the Medicaid program, particularly for children, also has had the effect of crowding out private coverage. One estimate suggests that for every two children newly enrolled in Medicaid, one child lost private coverage.9

Regulation and the Health Care Market

The health care industry is one of the most heavily regulated industries in the United States. These regulations stem from efforts to ensure quality, to facilitate the government’s role as a purchaser of care, and to respond to provider efforts to increase the demand for their services. Hospitals and nursing homes are licensed by the state and must comply with quality and staffing requirements to maintain eligibility for participation in federal programs. Physicians and other health professionals are licensed by the states. Prescription drugs and medical devices are regulated by the Food and Drug Administration (see pharmaceuticals: economics and regulation). Some state governments require government permission before a hospital or nursing home may be built or extensively changed.
All of the above regulations restrict supply and raise the price of health care; interestingly, those who lobby for such regulations are medical providers, not consumers, presumably because they want to limit competition. Some state governments limit the extent to which managed care plans may selectively contract with providers. All state governments have imposed laws governing the content of insurance packages and the factors that may be used to determine insurance rates. While these may enhance quality, they impose costs that raise the price of health insurance and increase the number of uninsured. In testimony before the Joint Economic Committee of the Congress, one analyst put the annual net cost of regulation in the health care industry at $128 billion.10

Industry Structure

In 2002, there were 4,949 nonfederal short-term hospitals in the United States. Over the last decade the hospital sector has been consolidating: the number of hospitals declined by 6.4 percent, and hospital beds per capita declined by more than 18 percent.11 In addition, the sector has been reorganizing itself into systems of hospitals that are commonly owned or managed. Nearly 46 percent of hospitals were part of a system in 2002, up from only 32 percent in 1994. The hospital sector has long been dominated by not-for-profit organizations. Only 14.4 percent of the industry is legally for-profit; this ratio has been constant for the last decade. There is some evidence that the consolidation and reorganization have been a reaction to the competition generated by the selective contracting actions of managed care. In 2001, the average cost of a stay at a government hospital was $7,400—24 percent more than at a private for-profit hospital. A study released in 2000 found that for-profit hospitals offer better-quality care.12 There were 272 private sector physicians per 100,000 population in the United States in 2002, an 8 percent increase since 1993, but a decline since 2000.
There has been a steady decline in the proportion of physicians in solo practice; by 2001 more than three-quarters of physicians were in group practice or were employees.13 Physicians have been accused of inducing demand for their services because of the information asymmetry they hold relative to their patients. However, this argument has lost much of its force in the last decade. Physicians’ inflation-adjusted average income has declined: primary care physician incomes fell by 6.4 percent between 1995 and 1999, and specialist incomes fell by 4 percent.14

Industry Outlook

The industry faces rising health care costs and an increasing number of uninsured. In the private sector the cost increases have led to an interest in consumer-directed health care. The idea is to provide health insurance payments only for expenditures in excess of a high deductible. The expectation is that consumers who must pay the full price for most health services will buy such services only when the expected benefits are at least equal to the full costs. Others see the reemergence of more aggressive selective contracting by managed care firms as a way to keep costs under control. The government is expected to be more aggressive in promoting competition among providers as well. The retirement of the baby boom generation will put more pressure on Medicare. Indeed, the Medicare trustees reported in 2004 that the costs of the Medicare program will exceed those of Social Security by 2024. Medicare Part A—hospital coverage—is estimated to be unable to cover its expenses starting in 2019.15 Interestingly, the 5 percent of Medicare fee-for-service beneficiaries who die each year account for one-fourth of all Medicare inpatient expenditures.16 Tax increases, benefit reductions, and/or wholesale reform of the program will have to occur. The number of uninsured will increase if health insurance continues to become more expensive.
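The consumer-directed design described above reduces to a simple payment rule: the insurer pays nothing until spending passes a high deductible. A minimal sketch, with hypothetical dollar amounts (real plans add coinsurance and out-of-pocket maximums, which are omitted here):

```python
def insurer_payment(annual_spending: float, deductible: float) -> float:
    # The insurer covers only expenditures in excess of the deductible;
    # below it, the consumer faces the full price of care, which is what
    # is expected to deter purchases whose benefits fall short of costs.
    return max(0.0, annual_spending - deductible)
```

With a hypothetical $2,000 deductible, a $1,500 year is paid entirely out of pocket, while in a $5,000 year the consumer bears only the first $2,000.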
Some have proposed expansions of existing public programs; others have proposed “refundable” tax credits as a means of subsidizing targeted groups.17 Still others argue for reductions in regulations and a greater reliance on consumer-directed health plans as a means of lowering costs and expanding insurance coverage (see health insurance).

The Inefficiency of Socialized Medicine
Patricia M. Danzon

Although other countries with more centralized government control over health budgets appear to have controlled costs more successfully, that does not mean that they have produced a more efficient result. In any case, reported statistics may be misleading. Efficient resource allocation requires that resources be spent on medical care as long as the marginal benefit exceeds the marginal cost. Marginal benefits are very hard to measure, but certainly they include more subjective values than the crude measures of morbidity and mortality that are widely used in international comparisons. In addition to forgone benefits, government health care systems have hidden costs. Any insurance system, public or private, must raise revenues, pay providers, control moral hazard, and bear some nondiversifiable risk. In a private insurance market such as that in the United States, the costs of performing these functions can be measured by insurance overhead costs of premium collection, claims administration, and return on capital. Public monopoly insurers must also perform these functions, but their costs tend to be hidden and do not appear in health expenditure accounts. Tax financing entails deadweight costs that have been estimated at more than seventeen cents per dollar raised—far higher than the 1 percent of premiums that private insurers require to collect them. The use of tight physician fee schedules gives doctors incentives to reduce their own time and other resources per patient visit; patients must therefore make multiple visits to receive the same total care.
But these hidden patient time costs do not appear in standard measures of health care spending. Both economic theory and a careful review of the evidence that goes beyond simple accounting measures suggest that a government monopoly of financing and provision achieves a less efficient allocation of resources to medical care than would a well-designed private market system. The performance of the current U.S. health care system does not provide a guide to the potential functioning of a well-designed private market system. Cost and waste in the current U.S. system are unnecessarily high because of tax and regulatory policies that impede efficient cost control by private insurers, while at the same time the system fails to provide for universal coverage.

Excerpt from Patricia M. Danzon, “Health Care Industry,” in David R. Henderson, ed., The Fortune Encyclopedia of Economics (New York: Warner Books, 1993), 679-680.

About the Author

Michael A. Morrisey is a professor of health economics in the School of Public Health and director of the Lister Hill Center for Health Policy at the University of Alabama at Birmingham.

Further Reading

Dranove, David. The Economic Evolution of American Health Care: From Marcus Welby to Managed Care. Princeton: Princeton University Press, 2000.
Morrisey, Michael A. “Competition in Hospital and Health Insurance Markets: A Review and Research Agenda.” Health Services Research 36, no. 1, pt. 2 (2001): 191-221.
Morrisey, Michael A. Cost Shifting in Health Care: Separating Evidence from Rhetoric. Washington, D.C.: AEI Press, 1994.
Pauly, Mark V. Health Benefits at Work: An Economic and Political Analysis of Employment-Based Health Insurance. Ann Arbor: University of Michigan Press, 2000.
Pauly, Mark V., and John S. Hoff. Responsible Tax Credits for Health Insurance. Washington, D.C.: AEI Press, 2002.

Footnotes

1. Katharine Levit et al., “Health Spending Rebound Continues in 2002,” Health Affairs 23, no. 1 (2004): 147-159.
2. U.S.
Congress, Joint Economic Committee, “How the Tax Exclusion Shaped Today’s Private Health Insurance Market,” December 17, 2003.
3. Paul Fronstin, “Sources of Health Insurance and Characteristics of the Uninsured: Analysis of the March 2003 Current Population Survey,” EBRI Issue Brief, no. 264 (Washington, D.C.: Employee Benefit Research Institute, 2003).
4. Joseph P. Newhouse et al., Free for All? Lessons from the RAND Health Insurance Experiment (Cambridge: Harvard University Press, 1993).
5. David A. Cutler and Mark McClellan, “Is Technology Change in Medicine Worth It?” Health Affairs 20, no. 5 (2001): 11-29.
6. Dana P. Goldman, N. Sood, and Arlene A. Leibowitz, “The Reallocation of Compensation in Response to Health Insurance Premium Increases,” NBER Working Paper no. 9540, National Bureau of Economic Research, Cambridge, Mass., 2003.
7. Michael A. Morrisey, “Competition in Hospital and Health Insurance Markets: A Review and Research Agenda,” Health Services Research 36, no. 1, pt. 2 (2001): 191-221.
8. Daniel Altman et al., “Enrollee Mix, Treatment Intensity, and Cost in Competing Indemnity and HMO Plans,” Journal of Health Economics 22, no. 1 (2003): 23-45.
9. David Cutler and Jonathan Gruber, “Medicaid and Private Health Insurance: Evidence and Implications,” Health Affairs 16, no. 1 (1997): 194-200.
10. Christopher J. Conover, Testimony before the Joint Economic Committee, U.S. Congress, May 13, 2004.
11. American Hospital Association, Hospital Statistics 2004 (Chicago: AHA, 2004).
12. Mark McClellan and Douglas Staiger, “Comparing Hospital Quality at For-Profit and Not-for-Profit Hospitals,” NBER Working Paper no. 7324, National Bureau of Economic Research, Cambridge, Mass., 2000.
13. Kaiser Family Foundation, Trends and Indicators in the Changing Health Care Marketplace, 2004 Update, May 19, 2004, online at: http://www.kff.org/insurance/7031/index.cfm.
14. Marie C. Reed and Paul B.
Ginsburg, Behind the Times: Physician Income, 1995-1999, Center for Studying Health System Change, Data Bulletin 24, March 2003.
15. Centers for Medicare and Medicaid Services, 2004 Annual Report of the Board of Trustees of the Federal Hospital Insurance and Federal Supplementary Medical Insurance Trust Funds, March 23, 2004, online at: http://www.cms.hhs.gov/publications/trusteesreport/2004/secib.asp.
16. Amber E. Barnato, Mark B. McClellan, Christopher R. Kagay, and Alan M. Garber, “Trends in Inpatient Treatment Intensity Among Medicare Beneficiaries at the End of Life,” Health Services Research 39, no. 2 (2004): 363-376.
17. Mark V. Pauly and John S. Hoff, Responsible Tax Credits for Health Insurance (Washington, D.C.: AEI Press, 2002).
