
Poverty in America

The United States produces more per capita than any other industrialized country, and in recent years governments at various levels have spent about $350 billion per year, or about 3.5 percent of gross domestic product, on programs serving low-income families.[1] Despite this, measured poverty is more prevalent in the United States than in most of the rest of the industrialized world. In the mid-1990s, the U.S. poverty rate was twice as high as in Scandinavian countries, and one-third higher than in other European countries and Japan.[2] Poverty is also as prevalent now as it was in 1973, when the incidence of poverty in America reached a postwar low of 11.1 percent. According to the Census Bureau, 37 million Americans were poor in 2005, just over 12.5 percent of the population.[3]

These official figures represent the number of people whose annual family income is less than an absolute “poverty line” developed by the federal government in the mid-1960s. The poverty line is roughly three times the annual cost of a nutritionally adequate diet. It varies by family size and is updated every year to reflect changes in the consumer price index. In 2005, the poverty line for a family of four was $19,971.[4]
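Because the threshold is an absolute standard, the annual update is purely mechanical: last year's threshold is scaled by the change in the consumer price index. A minimal sketch of that arithmetic (the $19,971 figure is from the text; the food-cost figure and the CPI index values are illustrative assumptions, not official data):

```python
# Sketch of the official poverty-threshold arithmetic (illustrative values).

def original_threshold(annual_food_cost: float) -> float:
    """Mid-1960s rule of thumb: about three times the cost of a
    nutritionally adequate diet."""
    return 3.0 * annual_food_cost

def updated_threshold(base: float, cpi_base: float, cpi_now: float) -> float:
    """Annual update: scale last year's threshold by the change in the CPI."""
    return base * (cpi_now / cpi_base)

# 2005 threshold for a family of four, from the text.
THRESHOLD_2005 = 19_971.0

# Hypothetical numbers, for illustration only.
print(f"1960s-style line from a $2,200 annual diet: ${original_threshold(2_200):,.0f}")
CPI_2005, CPI_2006 = 195.3, 201.6   # illustrative index values
print(f"implied 2006 threshold: "
      f"${updated_threshold(THRESHOLD_2005, CPI_2005, CPI_2006):,.0f}")
```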
Many researchers believe that the official method of measuring poverty is flawed. Some argue that poverty is a state of relative economic deprivation: that it depends not on whether income is lower than some arbitrary level, but on whether it falls far below the incomes of others in the same society. But if we define poverty to mean relative economic deprivation, then no matter how wealthy everyone is, there will always be poverty.

Others point out that the official measure errs by omission. For example, official poverty figures take no account of refundable tax credits or the value of noncash transfers such as food stamps and housing vouchers, which serve as income for certain purchases. Incorporating these factors into family income would have reduced the measured poverty rate by an estimated 1.9 percentage points (or by approximately 16 percent) in 2002.[5] Official poverty figures also ignore work-related expenses that affect families’ disposable incomes. Child care is a case in point. Isabel Sawhill and Adam Thomas estimated that deducting this expense from family incomes would have increased the measured poverty rate by up to one percentage point (or 8 percent) in 1998.[6]

Also, smaller, more fragmented households are more common today than a few decades ago, suggesting that some poor households were formed for the privacy and autonomy of their members. To the extent that some people have willingly sacrificed their access to the economic resources of parents, spouses, or adult children, some of the increase in poverty may actually represent an improvement in well-being.

Another problem with the official measure arises from the dynamic nature of poverty. Most Americans who experience poverty do so only temporarily. In the four years from 1996 through 1999, only 2 percent of the population was poor for two years or more.[7] During the same period, 34 percent of the population was poor for at least two months.[8] In short, persistent poverty is relatively uncommon. In recent years, income mobility has fallen slightly. According to one estimate, 40 percent of families occupied the same position in the income distribution at the beginning and end of the 1990s, compared with 36 percent in the 1970s.[9]

Another criticism of the poverty measures is that they are based on income rather than on consumption. Consumption spending may be a better measure of well-being than reported income is, although data from the consumer expenditure survey have their own limitations. Daniel Slesnick found, using consumption spending, that the poverty rate fell from 31 percent in 1949 to 13 percent in 1965 and to 2 percent at the end of the 1980s. One rough indicator of the decline in poverty is the range of items that most poor homes now contain—from color TVs to VCRs to washing machines to microwaves—compared with the relative lack of these items in poor homes in the early 1970s.[10]

Despite their flaws, the official figures are widely used to measure poverty. According to the Census Bureau, the poverty rate declined from 22.2 percent in 1960 to 12.6 percent in 2005. Most of this decline occurred in the 1960s. By 1970, the poverty rate had fallen to the current level of 12.6 percent. It then hovered between 11 and 13 percent in the 1970s, fluctuating primarily with the state of the economy.[11] A longer-term perspective leaves a more positive impression. For example, according to one estimate by Christine Ross, Sheldon Danziger, and Eugene Smolensky, more than two-thirds of the population in 1939 was poor by today’s standards.[12]

The trend in poverty masks divergent experiences among various demographic groups. The poverty rate among the elderly, for example, declined dramatically from 35.2 percent in 1959 to 10.1 percent in 2005 and is now lower than for any other age group. The poverty rate among children declined between 1959 and 1970, increased to 22.7 percent in 1993, and then fell steadily to 17.6 percent in 2005; it remains higher than poverty rates among other age groups. The poverty rate among black households has also declined over the last forty years, but at 24.9 percent in 2005 remains more than twice as high as the rate among white households. The poverty rate for households headed by women declined from 49.4 percent in 1959 to 28.7 percent in 2005, but is still much higher than for other types of households. This higher incidence of reported poverty, together with the rising share of households headed by women, has led to what researchers call the “feminization of poverty.” Between 1959 and 2005, the proportion of the poor in female-headed households rose from 17.8 percent to 31.1 percent.[13] Some of these women (about 13 percent) live with unrelated men or have unreported income from casual jobs that enables them to cope, but there is little doubt that the growth of single-parent families has contributed importantly to the rise in poverty.

Researchers have suggested a number of plausible explanations for both positive and negative trends in poverty. These explanations include changes in the composition of households, economic growth, immigration, efforts to increase the education and skills of the poor, and the structure and generosity of the welfare system.

The rapid growth of households headed by women and unrelated individuals, who typically cannot earn as much as married-couple families, has left a larger share of the population in poverty. This demographic trend has increased poverty rates among children. The proportion of children living in female-headed households doubled between 1970 and 2003, rising from 11.6 percent in 1970 to 23.6 percent in 2003.[14] Had that proportion remained constant since 1970, the child poverty rate would have been about 4.4 percentage points lower in 1998.[15]

The ebb and flow of the economy also influences the incidence of poverty.
Researchers have found that recessions have a disproportionate impact on the poor because they cause rising unemployment, a reduction in work hours, and stagnant family incomes. The relationship between changes in the unemployment rate and the poverty rate was stronger during the 1960s and 1990s than during the 1970s and 1980s.[16] But economic downturns have been accompanied by rising poverty rates during each of the six recessions in the past thirty years.[17]

Increased immigration and the characteristics of immigrants also affect poverty. Immigration increases the poverty rate because newly arrived immigrants are, on average, poorer than native-born citizens. Of the foreign-born population in 1999, 16.8 percent were poor, compared with 11.2 percent of native-born citizens.[18] After declining during the 1930s and 1940s, the foreign-born share of the American population surged from 4.7 percent in 1970 to 10.4 percent in 2000.[19] Immigration may also indirectly influence the incidence of poverty, because a surge in immigrants with minimal training tends to depress incomes among native workers at the bottom. For example, George Borjas attributed half of the drop in the relative wage of high school dropouts between 1980 and 1995 to immigration.[20]

Training and compensatory education programs such as the Job Corps and Head Start, designed as part of the War on Poverty to increase the skills of the poor, may also have reduced poverty. Many of these programs have not been carefully evaluated, but some of those that have are modestly successful. For example, some early education programs have had a positive effect on poor children, helping them to complete school, avoid crime, and achieve higher test scores.[21] Some employment and training programs have raised earnings for adult women, although these programs have been less helpful to adult men and young people.[22]

Finally, safety-net programs have contributed to the decline. These are typically divided into two categories: public assistance programs, such as Temporary Assistance for Needy Families, food stamps, and Medicaid, which were designed to help people who are already poor; and social insurance programs, such as Social Security, unemployment insurance, and Medicare, which were designed to prevent poverty when events such as layoff or retirement threaten a household’s well-being. Expenditures on these programs totaled roughly $1,279 billion in 2002, up 160 percent in real terms since 1975.[23] However, much of this spending was for noncash assistance (especially health care) that improves the well-being of the poor but has no effect on measured poverty.

The antipoverty effectiveness of these programs is typically measured by counting the number of people with pretransfer incomes below the poverty line whose incomes are raised above the poverty line by income transfers. According to government estimates, social insurance and public assistance programs moved nearly half of the pretransfer poor above the poverty line in 2002. This implies that these programs reduce the poverty rate by ten percentage points.[24]
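The pretransfer-versus-posttransfer accounting described above is simple to state precisely. A minimal sketch with made-up incomes and transfers (the poverty line is the figure cited earlier; the household data are illustrative, not survey data):

```python
# Antipoverty effectiveness: compare poverty counts before and after transfers.

POVERTY_LINE = 19_971.0  # 2005 line for a family of four, from the text

# Hypothetical (pretransfer_income, transfer_income) pairs for ten families.
families = [
    (5_000, 9_000), (12_000, 9_500), (15_000, 2_000), (18_000, 4_000),
    (21_000, 0), (25_000, 0), (9_000, 6_000), (30_000, 0),
    (14_000, 8_000), (40_000, 0),
]

pretransfer_poor = [f for f in families if f[0] < POVERTY_LINE]
posttransfer_poor = [f for f in families if f[0] + f[1] < POVERTY_LINE]

moved_above = len(pretransfer_poor) - len(posttransfer_poor)
print(f"pretransfer poor:  {len(pretransfer_poor)} of {len(families)}")
print(f"posttransfer poor: {len(posttransfer_poor)} of {len(families)}")
print(f"share of pretransfer poor moved above the line: "
      f"{moved_above / len(pretransfer_poor):.0%}")   # 50% in this toy data
```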
By ignoring the incentive effects these programs have on recipients, however, the above analysis overstates the success of safety-net programs. Specifically, means-tested cash transfers such as Aid to Families with Dependent Children (AFDC), which decline as the welfare recipient earns more reported income, have long been understood to be antiwork and antifamily. This criticism of the program led to its reform in 1996. Under the revised law, called Temporary Assistance for Needy Families (TANF), welfare mothers are required to work and federal benefits are limited to five years. Aided by a strong economy and more generous assistance for the working poor in the form of an expanded Earned Income Tax Credit and other measures, welfare reform led to a sharp fall in caseloads in the late 1990s. Employment rates among single mothers rose and child poverty fell. In addition, after increasing for decades, the share of births to unmarried mothers has leveled off and teen birth rates have declined. (The reasons for these changes in fertility are not well understood and may or may not be related to welfare reform.) Although some families are worse off as a result of welfare reform, the majority of former welfare mothers have been able to earn enough to improve their economic situation. The longer-term effects of welfare reform, especially those that might be expected in a less robust economy, are more uncertain and are likely to depend, to some extent, on the provision of additional supports such as child care for low-income working families.

U.S. poverty, measured by income, ebbs and flows with the state of the economy and with demographic shifts, especially immigration and the growth of single-parent families. Policy measures—whether in the form of direct income support or education and skills training of the poor—have swum against these strong tides and have had a mixed record of success. Since the mid-1990s, policies that have both required and supported work as the best strategy for reducing poverty have had considerable success.

About the Author

Isabel V. Sawhill is a senior fellow and the Cabot Family Chair at the Brookings Institution and was previously associate director of the Office of Management and Budget during President Bill Clinton’s administration.

Further Reading

Blank, Rebecca. It Takes a Nation: A New Agenda for Fighting Poverty. New York: Russell Sage Foundation; Princeton: Princeton University Press, 1997.
Citro, Constance, and Robert T. Michael, eds. Measuring Poverty: A New Approach. Washington, D.C.: National Academy Press, 1995.
Danziger, Sheldon H., and Robert Haveman, eds. Understanding Poverty. New York: Russell Sage Foundation; Cambridge: Harvard University Press, 2001.
Sawhill, Isabel, R. Kent Weaver, Ron Haskins, and Andrea Kane, eds. Welfare Reform and Beyond: The Future of the Safety Net. Washington, D.C.: Brookings Institution, 2002.
Slesnick, Daniel T. “Gaining Ground: Poverty in the Postwar United States.” Journal of Political Economy 101, no. 1 (1993): 1–38.

Footnotes

* The author is grateful to Melissa Cox for extensive research assistance on this article.

1. Committee on Ways and Means, U.S. House of Representatives, 2004 Green Book: Background Material and Data on Programs Within the Jurisdiction of the Committee on Ways and Means, tables I-5 and K-9, online at: http://waysandmeans.house.gov/Documents.asp?section=813.
2. Michael Forster and Mark Pearson, “Income Distribution and Poverty in the OECD Area: Trends and Driving Forces,” OECD Economic Studies, no. 1 (2002): 13.
3. Carmen De Navas-Walt, Bernadette D. Proctor, and Cheryl Hill Lee, “Income, Poverty and Health Insurance in the United States: 2005,” U.S. Census Bureau, August 2006, p. 6, online at: http://www.census.gov/prod/2006pubs/p60-231.pdf.
4. U.S. Census Bureau, “Poverty Thresholds for 2005 by Size of Family and Number of Related Children Under 18 Years,” online at: http://www.census.gov/hhes/www/poverty/threshld/thresh05.html.
5. Unpublished data supplied by Wendell Primus, Committee on Ways and Means, U.S. House of Representatives.
6. Isabel Sawhill and Adam Thomas, “A Hand up for the Bottom Third: Toward a New Agenda for Low-Income Working Families,” Brookings Institution, 2001, online at: www.brook.edu/views/papers/sawhill/20010522.pdf.
7. John Iceland, “Dynamics of Economic Well-Being: Poverty 1996–1999,” U.S. Census Bureau, July 2003, p. 4, online at: http://www.census.gov/prod/2003pubs/p70-91.pdf.
8. Ibid.
9. Katherine Bradbury and Jane Katz, “Are Lifetime Incomes Growing More Unequal?” Federal Reserve Bank of Boston Regional Review (4th Quarter 2002): 4.
10. W. Michael Cox and Richard Alm, Myths of Rich and Poor (New York: Basic Books, 1999), p. 15.
11. De Navas-Walt et al., “Income, Poverty and Health Insurance in the United States: 2005,” p. 46.
12. Christine Ross, Sheldon Danziger, and Eugene Smolensky, “The Level and Trend of Poverty in the United States, 1939–1979,” Demography 24 (1987): 587–600.
13. De Navas-Walt et al., “Income, Poverty and Health Insurance in the United States: 2005,” p. 14.
14. U.S. Census Bureau, Historical Poverty Table 10: Related Children in Female Householder Families as a Proportion of All Related Children, by Poverty Status: 1959 to 2003, online at: http://www.census.gov/hhes/poverty/histpov/hstpov10.html.
15. Adam Thomas and Isabel Sawhill, “For Richer or for Poorer: Marriage as an Antipoverty Strategy,” Journal of Policy Analysis and Management 21, no. 4 (2002): 587–599.
16. Robert Haveman, “Poverty and the Distribution of Economic Well-Being Since the 1960s,” in George L. Perry and James Tobin, eds., Economic Events, Ideas, and Policies (Washington, D.C.: Brookings Institution, 2000), p. 281.
17. Proctor and Dalaker, “Poverty in the United States: 2002,” p. 3.
18. U.S. Census Bureau, “Profile of the Foreign-Born Population in the United States: 2000,” December 2001, P23-206, p. 6.
19. Ibid., p. 9.
20. George J. Borjas, ed., Issues in the Economics of Immigration, National Bureau of Economic Research Conference Report (Chicago: University of Chicago Press, 2000), p. 6.
21. James J. Heckman, “Policies to Foster Human Capital,” Research in Economics 54, no. 1 (2000): 3–56.
22. Judith M. Gueron and Gayle Hamilton, “The Role of Education and Training in Welfare Reform,” Welfare Reform and Beyond Policy Brief no. 20, April 2002.
23. Committee on Ways and Means, U.S. House of Representatives, 2004 Green Book, tables I-5 and K-9.
24. Unpublished data supplied by Wendell Primus, Committee on Ways and Means, U.S. House of Representatives.


Price Controls

Governments have been trying to set maximum or minimum prices since ancient times. The Old Testament prohibited interest on loans to fellow Israelites; medieval governments fixed the maximum price of bread; and in recent years, governments in the United States have fixed the price of gasoline, the rent on apartments in New York City, and the wage of unskilled labor, to name a few. At times, governments go beyond fixing specific prices and try to control the general level of prices, as was done in the United States during both world wars and the Korean War, and by the Nixon administration from 1971 to 1973.

The appeal of price controls is understandable. Even though they fail to protect many consumers and hurt others, controls hold out the promise of protecting groups that are particularly hard-pressed to meet price increases. Thus, the prohibition against usury—charging high interest on loans—was intended to protect someone forced to borrow out of desperation; the maximum price for bread was supposed to protect the poor, who depended on bread to survive; and rent controls were supposed to protect those who were renting when the demand for apartments exceeded the supply, and landlords were preparing to “gouge” their tenants.

Despite the frequent use of price controls, however, and despite their appeal, economists are generally opposed to them, except perhaps for very brief periods during emergencies. In a survey published in 1992, 76.3 percent of the economists surveyed agreed with the statement: “A ceiling on rents reduces the quality and quantity of housing available.” A further 16.6 percent agreed with qualifications, and only 6.5 percent disagreed. The results were similar when the economists were asked about general controls: only 8.4 percent agreed with the statement: “Wage-price controls are a useful policy option in the control of inflation.” An additional 17.7 percent agreed with qualifications, but a sizable majority, 73.9 percent, disagreed (Alston et al. 1992, p. 204).

The reason most economists are skeptical about price controls is that they distort the allocation of resources. To paraphrase a remark by Milton Friedman, economists may not know much, but they do know how to produce a shortage or surplus. Price ceilings, which prevent prices from exceeding a certain maximum, cause shortages. Price floors, which prohibit prices below a certain minimum, cause surpluses, at least for a time.

Suppose that the supply and demand for wheat flour are balanced at the current price, and that the government then fixes a lower maximum price. The supply of flour will decrease, but the demand for it will increase. The result will be excess demand and empty shelves. Although some consumers will be lucky enough to purchase flour at the lower price, others will be forced to do without.

Because controls prevent the price system from rationing the available supply, some other mechanism must take its place. A queue, once a familiar sight in the controlled economies of Eastern Europe, is one possibility. When the United States set maximum prices for gasoline in 1973 and
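A minimal sketch of the wheat-flour example above, with made-up linear supply and demand curves (all numbers are illustrative assumptions, chosen only to show the mechanics):

```python
# Price-ceiling arithmetic with illustrative linear supply and demand.

def demand(p: float) -> float:
    return 100.0 - 2.0 * p   # quantity demanded falls as price rises

def supply(p: float) -> float:
    return -20.0 + 4.0 * p   # quantity supplied rises as price rises

# Market-clearing price: 100 - 2p = -20 + 4p  =>  p* = 20, q* = 60.
p_star = 120.0 / 6.0
print(f"equilibrium: price {p_star:.0f}, quantity {demand(p_star):.0f}")

# Government fixes a maximum price below the market-clearing level.
ceiling = 15.0
shortage = demand(ceiling) - supply(ceiling)
print(f"at ceiling {ceiling:.0f}: demanded {demand(ceiling):.0f}, "
      f"supplied {supply(ceiling):.0f}, shortage {shortage:.0f}")
# The shortage (30 units here) must be rationed some other way: queues,
# favoritism, or doing without.
```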


Prisoners’ Dilemma

The prisoners’ dilemma is the best-known game of strategy in social science. It helps us understand what governs the balance between cooperation and competition in business, in politics, and in social settings.

In the traditional version of the game, the police have arrested two suspects and are interrogating them in separate rooms. Each can either confess, thereby implicating the other, or keep silent. No matter what the other suspect does, each can improve his own position by confessing. If the other confesses, then one had better do the same to avoid the especially harsh sentence that awaits a recalcitrant holdout. If the other keeps silent, then one can obtain the favorable treatment accorded a state’s witness by confessing. Thus, confession is the dominant strategy (see game theory) for each. But when both confess, the outcome is worse for both than when both keep silent.

The concept of the prisoners’ dilemma was developed by RAND Corporation scientists Merrill Flood and Melvin Dresher and was formalized by Albert W. Tucker, a Princeton mathematician.

The prisoners’ dilemma has applications to economics and business. Consider two firms, say Coca-Cola and Pepsi, selling similar products. Each must decide on a pricing strategy. They best exploit their joint market power when both charge a high price; each makes a profit of ten million dollars per month. If one sets a competitive low price, it wins a lot of customers away from the rival. Suppose its profit rises to twelve million dollars, and that of the rival falls to seven million. If both set low prices, the profit of each is nine million dollars. Here, the low-price
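The pricing game just described can be checked mechanically. A minimal sketch (payoffs in millions of dollars, taken from the example above) verifying that the low price is a dominant strategy even though both firms would be better off charging high prices:

```python
# Coca-Cola vs. Pepsi pricing game; payoffs (mine, rival's) in $ millions.
payoffs = {
    ("high", "high"): (10, 10),
    ("high", "low"):  (7, 12),
    ("low", "high"):  (12, 7),
    ("low", "low"):   (9, 9),
}

# A strategy is dominant if it is the best reply to every rival strategy.
for rival in ("high", "low"):
    best = max(("high", "low"), key=lambda mine: payoffs[(mine, rival)][0])
    print(f"if the rival prices {rival}, my best price is {best}")   # "low" both times

# Both firms play the dominant strategy "low" and earn 9 each --
# worse for both than the (10, 10) outcome when both price high.
print("equilibrium payoffs:", payoffs[("low", "low")])
```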


Pensions

A private pension plan is an organized program to provide retirement income for a firm’s workers. Some 56.7 percent of full-time, full-year wage and salary workers in the United States participate in employment-based pension plans (EBRI Issue Brief, October 2003). Private trusteed pension plans receive special tax treatment and are subject to eligibility, coverage, and benefit standards.

Private pensions have become an important financial intermediary in the United States, with assets totaling $3.0 trillion at year-end 2002, while state and local government retirement funds totaled $1.967 trillion. By comparison, all New York Stock Exchange (NYSE) listed stocks totaled $9.557 trillion at year-end 2002. In other words, private and state and local government pension plan assets combined are large enough to purchase about half of all stocks listed on the NYSE.

For individuals, future pension benefits provided by employers substitute for current wages and personal saving. A person would be indifferent between pension benefits and personal saving for retirement if each provided the same retirement income at the same cost of forgone current consumption. Tax advantages, however, create a bias in favor of saving through organized pension plans administered by the employee’s firm and away from direct saving. For a firm, pension plans serve two primary functions: first, pension benefits substitute for wages; second, pensions can provide firms with a source of financing, because pension benefits need not require current cash payments. The current U.S. tax code provides additional advantages for using pension plans to finance operations.

Basic Features of U.S. Pension Plans

Virtually all private plans satisfy federal requirements for favorable tax treatment. The tax advantages are three: (1) pension costs of a firm are, within limits, tax deductible;


Opportunity Cost

When economists refer to the “opportunity cost” of a resource, they mean the value of the next-highest-valued alternative use of that resource. If, for example, you spend time and money going to a movie, you cannot spend that time at home reading a book, and you cannot spend the money on something else. If your next-best alternative to seeing the movie is reading the book, then the opportunity cost of seeing the movie is the money spent plus the pleasure you forgo by not reading the book.

The word “opportunity” in “opportunity cost” is actually redundant. The cost of using something is already the value of the highest-valued alternative use. But as contract lawyers and airplane pilots know, redundancy can be a virtue. In this case, its virtue is to remind us that the cost of using a resource arises from the value of what it could be used for instead.

This simple concept has powerful implications. It implies, for example, that even when governments subsidize college education, most students still pay more than half of the cost. Take a student who annually pays $4,000 in tuition at a state college. Assume that the government subsidy to the college amounts to $8,000 per student. It looks as if the cost is $12,000 and the student pays less than half. But looks can be deceiving. The true cost is $12,000 plus the income the student forgoes by attending school rather than working. If the student could have earned $20,000 per year, then the true cost of the year’s schooling is $12,000 plus $20,000, for a total of $32,000. Of this $32,000 total, the student pays $24,000 ($4,000 in tuition plus $20,000 in forgone earnings). In other words, even with a hefty state subsidy, the student pays 75 percent of the whole cost. This explains why college students at state universities, even though they may grouse when the state government raises tuitions by, say, 10 percent, do not desert college in droves. A 10 percent increase in a $4,000 tuition is only $400, which is less than a 2 percent increase in the student’s overall cost (see human capital). (The arithmetic is worked in the sketch at the end of this entry.)

What about the cost of room and board while attending school? This is not a true cost of attending school at all, because whether or not the student attends school, the student still has expenses for room and board.

About the Author

David R. Henderson is the editor of this encyclopedia. He is a research fellow with Stanford University’s Hoover Institution and an associate professor of economics at the Naval Postgraduate School in Monterey, California. He was formerly a senior economist with President Ronald Reagan’s Council of Economic Advisers.

Further Reading

Alchian, Armen. “Cost.” In International Encyclopedia of the Social Sciences. New York: Macmillan, 1968. Vol. 3, pp. 404–415.
Buchanan, J. M. Cost and Choice. Chicago: Markham, 1969. Republished as Midway Reprint. Chicago: University of Chicago Press, 1977. Available online at: http://www.econlib.org/library/Buchanan/buchCv6.html
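A minimal sketch of the college example above (all figures are from the entry):

```python
# Opportunity cost of a year of state college, from the entry's example.
tuition = 4_000            # paid by the student
subsidy = 8_000            # paid by the government per student
forgone_earnings = 20_000  # income given up by attending school

total_cost = tuition + subsidy + forgone_earnings
student_cost = tuition + forgone_earnings

print(f"total cost:   ${total_cost:,}")                   # $32,000
print(f"student pays: ${student_cost:,} "
      f"({student_cost / total_cost:.0%} of the total)")  # $24,000, 75%

# A 10 percent tuition increase barely moves the student's overall cost.
hike = 0.10 * tuition
print(f"10% tuition hike = ${hike:,.0f}, "
      f"or {hike / student_cost:.1%} of the student's cost")  # under 2%
```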


OPEC

Few observers, and even fewer experts, remember that the Organization of Petroleum Exporting Countries (OPEC) was created in response to the 1959 imposition of import quotas on crude oil and refined products by the United States. In 1959, the U.S. government established the Mandatory Oil Import Quota program (MOIP), which restricted the amount of imported crude oil and refined products allowed into the United States and gave preferential treatment to oil imports from Canada, Mexico, and, somewhat later, Venezuela. This partial exclusion of Persian Gulf oil from the U.S. market depressed prices for Middle Eastern oil; as a result, “posted” oil prices (those paid to the selling nations) were reduced in February 1959 and August 1960.

In September 1960, four Persian Gulf nations (Iran, Iraq, Kuwait, and Saudi Arabia) and Venezuela formed OPEC in order to obtain higher prices for crude oil. By 1973, eight other nations (Algeria, Ecuador, Gabon, Indonesia, Libya, Nigeria, Qatar, and the United Arab Emirates) had joined OPEC; Ecuador withdrew at the end of 1992, and Gabon withdrew in 1994.

The collective effort to raise oil prices was unsuccessful during the 1960s; real (i.e., inflation-adjusted) world market prices for crude oil fell from $9.78 per barrel (in 2004 dollars) in 1960 to $7.08 in 1970. However, real prices began to rise slowly in 1971 and then increased sharply in late 1973 and 1974, from roughly $10.00 per barrel to more than $36.00 per barrel, in the wake of the 1973 Arab-Israeli (“Yom Kippur”) War.

Despite what many noneconomists believe, the 1973–1974 price increase was not caused by the oil “embargo” (refusal to sell) that the Arab members of OPEC directed at the United States and the Netherlands. Instead, OPEC reduced its production of crude oil, raising world market prices sharply. The embargo against the United States and the Netherlands had no effect whatsoever: people in both nations were able to obtain oil at the same prices as people in all other nations. This failure of the embargo was predictable, in that oil is a “fungible” commodity that can be resold among buyers. An embargo by sellers is an attempt to raise prices for some buyers but not others. Only one price can prevail in the world market, however, because differences in prices will lead to arbitrage: a higher price in a given market will induce other buyers to resell oil into the high-price market, thus equalizing prices worldwide.

Nor, as is commonly believed, did OPEC cause the oil shortages and gasoline lines in the United States. Instead, the shortages were caused by price and allocation controls on crude oil and refined products, imposed originally by President Richard Nixon in 1971 as part of the Economic Stabilization Program. Although the price controls allowed the price of crude oil to rise, they did not allow it to rise to free-market levels. Thus, the price controls caused the amount people wanted to consume to exceed the amount available at the legal maximum prices. Shortages were the inevitable result. Moreover, the allocation controls distorted the distribution of supplies; the government based allocations on consumption patterns observed before the sharp increase in prices. The higher prices, for example, reduced long-distance driving and agricultural fuel consumption, but the use of historical consumption patterns resulted in a relative oversupply of gasoline in rural areas and a relative undersupply in urban ones, thus exacerbating the effects of the price controls themselves.
Countries whose governments did not impose price controls, such as (then West) Germany and Switzerland, did not experience shortages and queues.

OPEC is in many ways a cartel—a group of producers that attempts to restrict output in order to raise prices above the competitive level. The decision-making center of OPEC is the Conference, comprising national delegations


New Keynesian Economics

New Keynesian economics is the school of thought in modern macroeconomics that evolved from the ideas of John Maynard Keynes. Keynes wrote The General Theory of Employment, Interest, and Money in the 1930s, and his influence among academics and policymakers increased through the 1960s. In the 1970s, however, new classical economists such as Robert Lucas, Thomas J. Sargent, and Robert Barro called into question many of the precepts of the Keynesian revolution. The label “new Keynesian” describes those economists who, in the 1980s, responded to this new classical critique with adjustments to the original Keynesian tenets.

The primary disagreement between new classical and new Keynesian economists is over how quickly wages and prices adjust. New classical economists build their macroeconomic theories on the assumption that wages and prices are flexible. They believe that prices “clear” markets—balance supply and demand—by adjusting quickly. New Keynesian economists, however, believe that market-clearing models cannot explain short-run economic fluctuations, and so they advocate models with “sticky” wages and prices. New Keynesian theories rely on this stickiness of wages and prices to explain why involuntary unemployment exists and why monetary policy has such a strong influence on economic activity.

A long tradition in macroeconomics (including both Keynesian and monetarist perspectives) emphasizes that monetary policy affects employment and production in the short run because prices respond sluggishly to changes in the money supply. According to this view, if the money supply falls, people spend less money and the demand for goods falls. Because prices and wages are inflexible and do not fall immediately, the decreased spending causes a drop in production and layoffs of workers. New classical economists criticized this tradition because it lacks a coherent theoretical explanation for the sluggish behavior of prices. Much new Keynesian research attempts to remedy this omission.

Menu Costs and Aggregate-Demand Externalities

One reason prices do not adjust immediately to clear markets is that adjusting prices is costly. To change its prices, a firm may need to send out a new catalog to customers, distribute new price lists to its sales staff, or, in the case of a restaurant, print new menus. These costs of price adjustment, called “menu costs,” cause firms to adjust prices intermittently rather than continuously.

Economists disagree about whether menu costs can help explain short-run economic fluctuations. Skeptics point out that menu costs usually are very small. They argue that these small costs are unlikely to help explain recessions, which are very costly for society. Proponents reply that “small” does not mean “inconsequential.” Even though menu costs are small for the individual firm, they could have large effects on the economy as a whole.

Proponents of the menu-cost hypothesis describe the situation as follows. To understand why prices adjust slowly, one must acknowledge that changes in prices have externalities—that is, effects that go beyond the firm and its customers. For instance, a price reduction by one firm benefits other firms in the economy. When a firm lowers the price it charges, it lowers the average price level slightly and thereby raises real income. (Nominal income is determined by the money supply.) The stimulus from higher income, in turn, raises the demand for the products of all firms. This macroeconomic impact of one firm’s price adjustment on the demand for all other firms’ products is called an “aggregate-demand externality.”

In the presence of this aggregate-demand externality, small menu costs can make prices sticky, and this stickiness can have a large cost to society. Suppose General Motors announces its prices and then, after a fall in the money supply, must decide whether to cut prices. If it did so, car buyers would have a higher real income and would therefore buy more products from other companies as well. But the benefits to other companies are not what General Motors cares about. Therefore, General Motors would sometimes fail to pay the menu cost and cut its price, even though the price cut is socially desirable. This is an example in which sticky prices are undesirable for the economy as a whole, even though they may be optimal for those setting prices.
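A minimal numeric sketch of the externality logic in the General Motors example (all dollar figures are hypothetical, chosen only to illustrate the argument):

```python
# Menu-cost externality: the firm weighs its private gain against the menu
# cost, but the social gain includes benefits to other firms it ignores.

menu_cost = 50_000       # hypothetical cost of reprinting catalogs, etc.
private_gain = 30_000    # hypothetical extra profit to the firm from cutting price
external_gain = 400_000  # hypothetical aggregate-demand benefit to other firms

firm_cuts_price = private_gain > menu_cost
society_wants_cut = private_gain + external_gain > menu_cost

print(f"firm cuts price?        {firm_cuts_price}")    # False: 30,000 < 50,000
print(f"socially desirable cut? {society_wants_cut}")  # True: 430,000 > 50,000
# A "small" menu cost blocks a price cut whose social value far exceeds it.
```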
The Staggering of Prices

New Keynesian explanations of sticky prices often emphasize that not everyone in the economy sets prices at the same time. Instead, the adjustment of prices throughout the economy is staggered. Staggering complicates the setting of prices because firms care about their prices relative to those charged by other firms. Staggering can make the overall level of prices adjust slowly, even when individual prices change frequently.

Consider the following example. Suppose, first, that price setting is synchronized: every firm adjusts its price on the first of every month. If the money supply and aggregate demand rise on May 10, output will be higher from May 10 to June 1 because prices are fixed during this interval. But on June 1 all firms will raise their prices in response to the higher demand, ending the three-week boom.

Now suppose that price setting is staggered: half the firms set prices on the first of each month and half on the fifteenth. If the money supply rises on May 10, then half of the firms can raise their prices on May 15. Yet because half of the firms will not be changing their prices on the fifteenth, a price increase by any firm will raise that firm’s relative price, which will cause it to lose customers. Therefore, these firms will probably not raise their prices very much. (In contrast, if all firms are synchronized, all firms can raise prices together, leaving relative prices unaffected.) If the May 15 price setters make little adjustment in their prices, then the other firms will make little adjustment when their turn comes on June 1, because they also want to avoid relative price changes. And so on. The price level rises slowly as the result of small price increases on the first and the fifteenth of each month. Hence, staggering makes the price level sluggish, because no firm wishes to be the first to post a substantial price increase.
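A minimal simulation of this staggering story, under one stark assumption (labeled below): each cohort, when its turn comes, moves only partway toward the new money-supply target because it does not want to stray from the other cohort's current price.

```python
# Staggered price setting: two cohorts alternate turns; each sets a price
# that compromises between the new target and the other cohort's price.
# The weight w is a made-up assumption, not a calibrated parameter.

target = 1.10   # money supply (and hence desired prices) up 10%
p_a = p_b = 1.00
w = 0.5         # weight on the target vs. the rival cohort's price

for turn in range(1, 9):
    if turn % 2:  # cohort A adjusts on "the first of the month"
        p_a = w * target + (1 - w) * p_b
    else:         # cohort B adjusts on "the fifteenth"
        p_b = w * target + (1 - w) * p_a
    print(f"turn {turn}: price level = {(p_a + p_b) / 2:.4f}")

# The price level creeps toward 1.10 in many small steps; under synchronized
# setting (both cohorts jumping to the target at once), it would adjust in
# a single turn.
```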
Coordination Failure

Some new Keynesian economists suggest that recessions result from a failure of coordination. Coordination problems can arise in the setting of wages and prices because those who set them must anticipate the actions of other wage and price setters. Union leaders negotiating wages are concerned about the concessions other unions will win. Firms setting prices are mindful of the prices other firms will charge.

To see how a recession could arise as a failure of coordination, consider the following parable. The economy is made up of two firms. After a fall in the money supply, each firm must decide whether to cut its price. Each firm wants to maximize its profit, but its profit depends not only on its pricing decision but also on the decision made by the other firm. If neither firm cuts its price, the amount of real money (the amount of money divided by the price level) is low, a recession ensues, and each firm makes a profit of only fifteen dollars. If both firms cut their price, real money balances are high, a recession is avoided, and each firm makes a profit of thirty dollars. Although both firms prefer to avoid a recession, neither can do so by its own actions. If one firm cuts its price while the other does not, a recession follows. The firm making the price cut makes only five dollars, while the other firm makes fifteen dollars.

The essence of this parable is that each firm’s decision influences the set of outcomes available to the other firm. When one firm cuts its price, it improves the opportunities available to the other firm, because the other firm can then avoid the recession by cutting its price. This positive impact of one firm’s price cut on the other firm’s profit opportunities might arise because of an aggregate-demand externality.

What outcome should one expect in this economy? On the one hand, if each firm expects the other to cut its price, both will cut prices, resulting in the preferred outcome in which each makes thirty dollars. On the other hand, if each firm expects the other to maintain its price, both will maintain their prices, resulting in the inferior solution, in which each makes fifteen dollars. Hence, either of these outcomes is possible: there are multiple equilibria.

The inferior outcome, in which each firm makes fifteen dollars, is an example of a coordination failure. If the two firms could coordinate, they would both cut their price and reach the preferred outcome. In the real world, unlike in this parable, coordination is often difficult because the number of firms setting prices is large. The moral of the story is that even though sticky prices are in no one’s interest, prices can be sticky simply because price setters expect them to be.
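A minimal sketch that encodes the two-firm parable as a payoff table and confirms it has two pure-strategy equilibria (payoffs in dollars, from the parable):

```python
# Coordination-failure parable: both "cut" and both "keep" are equilibria.
ACTIONS = ("cut", "keep")
payoffs = {  # (firm 1's action, firm 2's action) -> (profit 1, profit 2)
    ("cut", "cut"):   (30, 30),
    ("cut", "keep"):  (5, 15),
    ("keep", "cut"):  (15, 5),
    ("keep", "keep"): (15, 15),
}

def is_equilibrium(a1: str, a2: str) -> bool:
    """Neither firm can raise its own profit by deviating unilaterally."""
    best1 = all(payoffs[(a1, a2)][0] >= payoffs[(d, a2)][0] for d in ACTIONS)
    best2 = all(payoffs[(a1, a2)][1] >= payoffs[(a1, d)][1] for d in ACTIONS)
    return best1 and best2

for profile in payoffs:
    if is_equilibrium(*profile):
        print(profile, "is an equilibrium with payoffs", payoffs[profile])
# Prints both ('cut', 'cut') -> (30, 30) and ('keep', 'keep') -> (15, 15):
# the economy can get stuck in the worse equilibrium.
```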
Efficiency Wages

Another important part of new Keynesian economics has been the development of new theories of unemployment. Persistent unemployment is a puzzle for economic theory. Normally, economists presume that an excess supply of labor would exert a downward pressure on wages. A reduction in wages would in turn reduce unemployment by raising the quantity of labor demanded. Hence, according to standard economic theory, unemployment is a self-correcting problem.

New Keynesian economists often turn to theories of what they call efficiency wages to explain why this market-clearing mechanism may fail. These theories hold that high wages make workers more productive. The influence of wages on worker efficiency may explain the failure of firms to cut wages despite an excess supply of labor. Even though a wage reduction would lower a firm’s wage bill, it would also—if the theories are correct—cause worker productivity and the firm’s profits to decline.

There are various theories about how wages affect worker productivity. One efficiency-wage theory holds that high wages reduce labor turnover. Workers quit jobs for many reasons—to accept better positions at other firms, to change careers, or to move to other parts of the country. The more a firm pays its workers, the greater their incentive to stay with the firm. By paying a high wage, a firm reduces the frequency of quits, thereby decreasing the time spent hiring and training new workers.

A second efficiency-wage theory holds that the average quality of a firm’s workforce depends on the wage it pays its employees. If a firm reduces wages, the best employees may take jobs elsewhere, leaving the firm with less-productive employees who have fewer alternative opportunities. By paying a wage above the equilibrium level, the firm may avoid this adverse selection, improve the average quality of its workforce, and thereby increase productivity.

A third efficiency-wage theory holds that a high wage improves worker effort. This theory posits that firms cannot perfectly monitor the work effort of their employees and that employees must themselves decide how hard to work. Workers can choose to work hard, or they can choose to shirk and risk getting caught and fired. The firm can raise worker effort by paying a high wage. The higher the wage, the greater is the cost to the worker of getting fired. By paying a higher wage, a firm induces more of its employees not to shirk, and thus increases their productivity.

A New Synthesis

During the 1990s, the debate between new classical and new Keynesian economists led to the emergence of a new synthesis among macroeconomists about the best way to explain short-run economic fluctuations and the role of monetary and fiscal policies. The new synthesis attempts to merge the strengths of the competing approaches that preceded it. From the new classical models it takes a variety of modeling tools that shed light on how households and firms make decisions over time. From the new Keynesian models it takes price rigidities and uses them to explain why monetary policy affects employment and production in the short run. The most common approach is to assume monopolistically competitive firms (firms that have market power but compete with other firms) that change prices only intermittently.

The heart of the new synthesis is the view that the economy is a dynamic general equilibrium system that deviates from an efficient allocation of resources in the short run because of sticky prices and perhaps a variety of other market imperfections. In many ways, this new synthesis forms the intellectual foundation for the analysis of monetary policy at the Federal Reserve and other central banks around the world.

Policy Implications

Because new Keynesian economics is a school of thought regarding macroeconomic theory, its adherents do not necessarily share a single view about economic policy. At the broadest level, new Keynesian economics suggests—in contrast to some new classical theories—that recessions are departures from the normal efficient functioning of markets. The elements of new Keynesian economics—such as menu costs, staggered prices, coordination failures, and efficiency wages—represent substantial deviations from the assumptions of classical economics, which provides the intellectual basis for economists’ usual justification of laissez-faire. In new Keynesian theories, recessions are caused by some economy-wide market failure. Thus, new Keynesian economics provides a rationale for government intervention in the economy, such as countercyclical monetary or fiscal policy. This part of new Keynesian economics has been incorporated into the new synthesis that has emerged among macroeconomists.
Whether policymakers should intervene in practice, however, is a more difficult question that entails various political as well as economic judgments.

About the Author

N. Gregory Mankiw is a professor of economics at Harvard University. From 2003 to 2005, he was the chairman of President George W. Bush’s Council of Economic Advisers.

Further Reading

Clarida, Richard, Jordi Gali, and Mark Gertler. “The Science of Monetary Policy: A New Keynesian Perspective.” Journal of Economic Literature 37 (1999): 1661–1707.
Goodfriend, Marvin, and Robert King. “The New Neoclassical Synthesis and the Role of Monetary Policy.” In Ben S. Bernanke and Julio Rotemberg, eds., NBER Macroeconomics Annual 1997. Cambridge: MIT Press, 1997. Pp. 231–283.
Mankiw, N. Gregory, and David Romer, eds. New Keynesian Economics. 2 vols. Cambridge: MIT Press, 1991.


New Classical Macroeconomics

After Keynesian Macroeconomics

The new classical macroeconomics is a school of economic thought that originated in the early 1970s in the work of economists centered at the Universities of Chicago and Minnesota—particularly, Robert Lucas (recipient of the Nobel Prize in 1995), Thomas Sargent, Neil Wallace, and Edward Prescott (corecipient of the Nobel Prize in 2004). The name draws on John Maynard Keynes’s evocative contrast between his own macroeconomics and that of his intellectual forebears. Keynes had knowingly stretched a point by lumping his contemporaries, A. C. Pigou and Alfred Marshall, in with the older classical political economists, such as David Ricardo, and calling them all “classical.”

According to Keynes, the classics saw the price system in a free economy as efficiently guiding the mutual adjustment of supply and demand in all markets, including the labor market. Unemployment could arise only because of a market imperfection—the intervention of the government or the action of labor unions—and could be eliminated through removing the imperfection. In contrast, Keynes shifted the focus of his analysis away from individual markets to the whole economy. He argued that even without market imperfections, aggregate demand (equal, in a closed economy, to consumption plus investment plus government expenditure) might fall short of the aggregate productive capacity of its labor and capital (plant, equipment, raw material, and infrastructure). In such a situation, unemployment is largely involuntary—that is, workers may be unemployed even though they are willing to work at a wage lower than the wage the firms pay their current workers.

Later Keynesian economists achieved a measure of reconciliation with the classics. Paul Samuelson argued for a “neoclassical synthesis” in which classical economics was viewed as governing resource allocation when the economy was kept, through judicious government policy, at full employment. Other Keynesian economists sought to explain consumption, investment, the demand for money, and other key elements of the aggregate Keynesian model in a manner consistent with the assumption that individuals behave optimally. This was the program of “microfoundations for macroeconomics.”

Origins of the New Classical Macroeconomics

Although its name suggests a rejection of Keynesian economics and a revival of classical economics, the new classical macroeconomics began with Lucas’s and Leonard Rapping’s attempt to provide microfoundations for the Keynesian labor market. Lucas and Rapping applied the rule that equilibrium in a market occurs when quantity supplied equals quantity demanded. This turned out to be a radical step. Because involuntary unemployment is exactly the situation in which the amount of labor supplied exceeds the amount demanded, their analysis leaves no room at all for involuntary unemployment.

Keynes’s view was that recessions occur when aggregate demand falls—largely as the result of a fall in private investment—causing firms to produce below their capacity. Producing less, firms need fewer workers, and thus employment falls. Firms, for reasons that Keynesian economists continue to debate, fail to cut wages to as low a level as job seekers will accept, and so involuntary unemployment rises. The new classicals reject this step as irrational. Involuntary unemployment would present firms with an opportunity to raise profits by paying workers a lower wage. If firms failed to take the opportunity, then they would not be optimizing.
Employed workers should not be able to resist such wage cuts effectively, since the unemployed stand ready to take their places at the lower wage. Keynesian economics would appear, then, to rest either on market imperfections or on irrationality, both of which Keynes denied.

These criticisms of Keynesian economics illustrate the two fundamental tenets of the new classical macroeconomics. First, individuals are viewed as optimizers: given the prices, including wage rates, they face and the assets they hold, including their education and training (or “human capital”), they choose the best options available. Firms maximize profits; people maximize utility. Second, to a first approximation, prices adjust, changing the incentives to individuals, and thereby their choices, to align quantities supplied and demanded.

Business Cycles

Business cycles pose a special challenge for new classical economists: How are large fluctuations in output compatible with the two fundamental tenets of their doctrine?


Phillips Curve

The Phillips curve represents the relationship between the rate of inflation and the unemployment rate. Although he had precursors, A. W. H. Phillips’s study of wage inflation and unemployment in the United Kingdom from 1861 to 1957 is a milestone in the development of macroeconomics. Phillips found a consistent inverse relationship: when unemployment was high, wages increased slowly; when unemployment was low, wages rose rapidly.

Phillips conjectured that the lower the unemployment rate, the tighter the labor market and, therefore, the faster firms must raise wages to attract scarce labor. At higher rates of unemployment, the pressure abated. Phillips’s “curve” represented the average relationship between unemployment and wage behavior over the business cycle. It showed the rate of wage inflation that would result if a particular level of unemployment persisted for some time.

Economists soon estimated Phillips curves for most developed economies. Most related general price inflation, rather than wage inflation, to unemployment. Of course, the prices a company charges are closely connected to the wages it pays. Figure 1 shows a typical Phillips curve fitted to data for the United States from 1961 to 1969. The close fit between the estimated curve and the data encouraged many economists, following the lead of Paul Samuelson and Robert Solow, to treat the Phillips curve as a sort of menu of policy options. For example, with an unemployment rate of 6 percent, the government might stimulate the economy to lower unemployment to 5 percent. Figure 1 indicates that the cost, in terms of higher inflation, would be a little more than half a percentage point. But if the government initially faced lower rates of unemployment, the costs would be considerably higher: a reduction in unemployment from 5 to 4 percent would imply more than twice as big an increase in the rate of inflation—about one and a quarter percentage points.

At the height of the Phillips curve’s popularity as a guide to policy, Edmund Phelps and Milton Friedman independently challenged its theoretical underpinnings. They argued that well-informed, rational employers and workers would pay attention only to real wages—the inflation-adjusted purchasing power of money wages. In their view, real wages would adjust to make the supply of labor equal
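The "menu" reading of the fitted curve described above is easy to mechanize. A minimal sketch with a hypothetical convex curve of the form pi = a + b/u^2 (the parameters are illustrative, chosen only to mimic the tradeoffs quoted above; they are not estimated from data):

```python
# Hypothetical Phillips "menu": inflation pi as a convex function of
# unemployment u. Parameters A, B are illustrative, not estimated.

A, B = 0.5, 50.0

def inflation(u: float) -> float:
    return A + B / u**2   # convex: the tradeoff worsens as u falls

for u_from, u_to in [(6.0, 5.0), (5.0, 4.0)]:
    cost = inflation(u_to) - inflation(u_from)
    print(f"lowering unemployment {u_from:.0f}% -> {u_to:.0f}% "
          f"costs about {cost:.2f} points of extra inflation")
# The second step costs roughly twice the first, echoing the menu in the
# entry: reducing unemployment is cheap when unemployment is high and
# dear when it is already low.
```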


Pharmaceuticals: Economics and Regulation

Pharmaceuticals are unique in their combination of extensive government control and extreme economics: high fixed costs of development and relatively low incremental costs of production.

Regulation

The Food and Drug Administration (FDA) is the U.S. government agency charged with ensuring the safety and efficacy of the medicines available to Americans. The government’s control over medicines has grown in the last hundred years from literally nothing to far-reaching, and pharmaceuticals are now among the most-regulated products in this country.

The two legislative acts that are the main source of the FDA’s powers both followed significant tragedies. In 1937, to make a palatable liquid version of its new antibiotic drug sulfanilamide, the Massengill Company carelessly used the solvent diethylene glycol, which is also used as an antifreeze.[1] Elixir Sulfanilamide killed 107 people, mostly children, before it was quickly recalled; Massengill was successfully sued, and the chemist responsible committed suicide. This tragedy led to the Food, Drug, and Cosmetic Act of 1938, which required that drugs be proven safe prior to marketing.[2] In the next infamous tragedy, more than ten thousand European babies were born deformed after their mothers took thalidomide as a tranquilizer to alleviate morning sickness.[3] This led to the Kefauver-Harris Amendments of 1962, which required that efficacy be proven prior to marketing. Note that even though thalidomide’s problem was clearly one of safety, an issue for which the FDA already had regulations, the laws were changed to add proof of efficacy.

Many people are unaware that most of the drugs, foods, herbs, and dietary supplements that Americans consume have been neither assessed nor approved by the FDA. Some are beyond the scope of the FDA’s regulatory authority—if no specific health claims are made—and some are simply approved drugs being used in ways the FDA has not approved. Such “off-label” uses by physicians are widespread and can reach up to 90 percent in some therapeutic areas.[4] Although the FDA tolerates off-label usage, it forbids pharmaceutical companies from promoting such applications of their products.

Problems, sometimes serious, can arise even after FDA approval. Baycol (cerivastatin), Seldane (terfenadine), Vioxx (rofecoxib), and “Fen-Phen” (fenfluramine and phentermine) are well-known examples of FDA-approved drugs that their manufacturers voluntarily withdrew after the drugs were found to be dangerous to some patients. Xalatan (latanoprost) for glaucoma caused 3–10 percent of users’ blue eyes to turn permanently brown. This amazing side effect was uncovered only after the drug was approved as “safe and effective.” One group of researchers estimated that 106,000 people died in 1994 alone from adverse reactions to drugs the FDA deemed “safe.”[5]

One problem with the 1962 Kefauver-Harris Amendments was the additional decade of regulatory delay they created for new drugs. For example, one researcher estimated that ten thousand people died unnecessarily each year while beta blockers languished at the FDA, even though they had already been approved in Europe. The FDA has taken a “guilty until proven innocent” approach rather than weighing the costs and benefits of such delays. Just how cautious should the FDA be? Thalidomide and sulfanilamide demonstrate the potential benefit of delays, while a disease such as lung cancer, which kills an American every three minutes, highlights the costs.
In 1973, economist Sam Peltzman examined the pre- and post-1962 market to estimate the effect of the FDA’s new powers and found that the number of new drugs had been reduced by 60 percent. He also found little evidence to suggest a decline in the proportion of inefficacious drugs reaching the market.[6] From 1963 through 2003, the number of new drugs approved each year approximately doubled, but pharmaceutical R&D expenditures grew by a factor of twenty.[7]

One result of the FDA approach is the very high, perhaps excessive, level of evidence required before drugs can be marketed legally. In December 2003, an FDA advisory committee declined to endorse the use of aspirin for preventing initial myocardial infarctions (MIs), or heart attacks.[8] Does this mean that aspirin, which is approved for prevention of second heart attacks, does not work to prevent first heart attacks? No. One of the panelists, Dr. Joseph Knapka, stated: “As a scientist, I vote no. As a heart patient, I would probably say yes.” In other words, he had two standards. One standard is scientific proof that aspirin works beyond any reasonable doubt. By this standard, the data on fifty-five thousand patients fall short.[9] The other standard is measured by our choices in the real world. By this standard, aspirin passes easily. “The question today isn’t, does aspirin work? We know it works, and we certainly know it works in a net benefit to risk positive sense in the secondary prevention setting,” said panelist Thomas Fleming, chairman and professor of the Department of Biostatistics at the University of Washington, who also voted no.[10]

When our medical options are left to the scientific experts at a government agency, that agency has a bias toward conservatism. The FDA is acutely aware that of the two ways it can fail, approving a bad drug is significantly worse for its employees than failing to approve a good drug. Approving a bad drug may kill or otherwise harm patients, and an investigation of the approval process will lead to finger-pointing. As former FDA employee Henry Miller put it, “This kind of mistake is highly visible and has immediate consequences—the media pounces, the public denounces, and Congress pronounces.”[11] Such an outcome is highly emotional and concrete, while not approving a good drug is intellectual and abstract. Who would have benefited, and by how much? Who will know enough to complain that she was victimized by being denied such a medicine?

The FDA’s approach also curtails people’s freedom. The available medicines are what the FDA experts think we should have, not what we think we should have. It is common to picture uneducated patients blindly stumbling about the complexities of medical technology. While this certainly happens, it is mitigated by the expertise of caregivers (such as physicians), advisers (such as medical thought leaders), and watchdogs (such as the media), who together make up a surprisingly large support group. Of course, not all patients make competent decisions at all times, but FDA regulation treats all patients as incompetent. A medicine that may work for one person at a certain dose at a certain time for a given disease may not work if any of the variables changes.
Thalidomide, though unsafe for fetuses, is currently being studied for a wide range of important diseases and was even approved by the FDA in 1998, after four decades of being banned, for a painful skin complication of leprosy.12 Similarly, finasteride is used in men to shrink enlarged prostate glands and to prevent baldness, but women are forbidden even to work in the finasteride factory because of the risk to fetuses. Also, the FDA pulled Propulsid (cisapride), a heartburn drug, from the market in March 2000 after eighty people who took it died from an irregular heartbeat. Yet for patients with cerebral palsy, Propulsid is a miracle drug that allows them to digest food without extreme pain.13 What is a poison for one person may be a lifesaver for another.

Economists have long recognized that good decisions cannot be made without considering the affected person's unique characteristics. But the FDA has little knowledge of a given individual's tolerance for pain, fear of death, or health status. So the decisions the FDA makes on behalf of individuals are imperfect because the agency lacks fundamental information (see information and prices). Economist Ludwig von Mises made this same argument in its universal form when he identified the Achilles' heel of socialism: centralized governments are usually incapable of making good decisions for their citizens because they lack most of the relevant information.

Some economists have proposed that the FDA continue to evaluate and approve new drugs, but that the drugs be made available, if the manufacturer wishes, during the approval process.14 The FDA could rate or grade drugs and put stern warnings on unapproved drugs and on drugs that appear riskier. Economists expect that cautious drug companies and patients would simply wait for FDA approval, while some patients would take their chances. Such a solution is a Pareto improvement: everyone is at least as well off as under the current system. Cautious patients get the safety of FDA approval, while patients who do not want to wait do not have to.

Economics

A study by Joseph DiMasi, an economist at the Tufts Center for the Study of Drug Development in Boston, found that the cost of getting one new drug approved was $802 million in 2000 U.S. dollars.15 Most new drugs cost much less; his figure adds in each successful drug's prorated share of failures, since only one out of fifty drugs eventually reaches the market. Why are drugs so expensive to develop? The main reason is the aforementioned high level of proof required by the FDA. Before it will approve a new drug, the FDA requires pharmaceutical companies to test it carefully in animals and then in humans, in the standard phases 0, I, II, and III process. The path through the FDA's review process is slow and expensive, and the ten to fifteen years required to get a drug through testing and approval leave little remaining time on a twenty-year patent.

Although new medicines are hugely expensive to bring to market, they are cheap to manufacture. In this sense, they are like DVD movies and computer software. This means that a drug company, to be profitable or simply to break even, must price its drugs well above its production costs. The company that wishes to maximize profits will set high prices for those who are willing to pay a lot and low prices, at least covering production costs, for those willing to pay a little.
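The prorating logic behind figures like DiMasi's can be sketched in a few lines. The inputs below are invented for illustration (they are not DiMasi's data), but they show why each approved drug must recoup far more than its own direct cost, and hence why prices must sit well above manufacturing cost:

```python
# How a modest per-candidate cost becomes an eight-hundred-million-dollar
# per-approval figure once failures are prorated. All inputs are invented
# for illustration; they are not DiMasi's actual data.

cost_per_candidate = 9_000_000   # assumed average R&D spend per candidate
success_rate = 1 / 50            # one in fifty candidates reaches the market

# Each approved drug must "pay for" the forty-nine failures behind it:
out_of_pocket_per_approval = cost_per_candidate / success_rate  # $450 million

# Estimates like DiMasi's also capitalize spending over the 10- to 15-year
# development cycle; a crude approximation with an assumed cost of capital:
years, cost_of_capital = 11, 0.11
capitalized = out_of_pocket_per_approval * (1 + cost_of_capital) ** (years / 2)

print(f"Out of pocket per approval: ${out_of_pocket_per_approval:,.0f}")
print(f"Roughly capitalized:        ${capitalized:,.0f}")  # about $0.8 billion
```

With these assumed inputs the capitalized figure lands near $800 million even though each individual candidate costs only a few million dollars, which is the sense in which "most new drugs cost much less."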
Merck, for example, priced its anti-AIDS drug, Crixivan, at $600 for a year's supply in poor countries in Africa and Latin America while charging relatively affluent Americans $6,099. This type of customer segmentation, similar to that practiced by airlines, is part of the profit-maximizing strategy for medicines. In general, good customer segmentation is difficult to accomplish. Therefore, the most common type of pharmaceutical segmentation is charging a lower price in poorer countries and giving the product away free to poor people in the United States through patient assistance programs.

What complicates the picture is socialized medicine, which exists in almost every country outside the United States, and even, with Medicare and Medicaid, within it. Because governments in countries with socialized medicine tend to be the sole bargaining agent in dealing with drug companies, these governments often set prices that are low by U.S. standards. To some extent, this comes about because these governments have monopsony power (that is, monopoly power on the buyer's side), which they use to get good deals. These governments are, in effect, saying that if they cannot buy it cheaply, their citizens cannot get it. Low prices also come about because governments sometimes threaten drug companies with compulsory licensing (breaking a patent) to get a low price; this has happened most recently in South Africa and Brazil with AIDS drugs. Such violations of intellectual property rights can bring a seemingly powerful drug company into quick compliance: faced with a choice between earning nothing and earning something, most drug companies choose the latter.

The situation is a prisoners' dilemma. Everyone's interest lies in giving drug companies an adequate incentive to invest in new drugs, which requires that companies be able to price their drugs well above production costs to a large segment of the population. But each individual government's narrow self-interest is to set a low price on drugs and let people in other countries pay the high prices that generate the return on R&D investments. Each government, in other words, has an incentive to be a free rider, and that is what many governments are doing. The temptation is to stop having Americans bear more than their share of drug-development costs by having the U.S. government set low prices as well. But if Americans also try to free ride, there may not be a ride.

Governments are not the only bulk purchasers. The majority of pharmaceuticals in the United States are purchased by managed-care organizations (MCOs), hospitals, and governments, which use their market power to negotiate better prices. These organizations often do not take physical possession of the drugs; most pills go straight from manufacturer to wholesaler to pharmacy to patient. Manufacturers therefore rebate money, billions of dollars in total, to compensate such buyers for purchases made at list prices. Managed-care rebates are given in exchange for consideration: they result from contracts that require performance. For example, a manufacturer will pay an HMO a rebate if the HMO keeps a drug's prescription market share above the national level. These rebates average 10–40 percent of sales. The net result is that the neediest Americans, frequently those without insurance, pay the highest prices, while the most powerful health plans and government agencies pay the lowest.
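A minimal sketch of why segmentation is the profit-maximizing strategy when manufacturing is cheap. The buyer counts and the $50 production cost are invented; only the two price points echo the Crixivan example above:

```python
# Why segmentation beats any single worldwide price when production is
# cheap. Buyer counts and the $50 production cost are invented for
# illustration; the price points echo the Crixivan example above.

marginal_cost = 50                          # assumed cost of a year's supply
rich_price, rich_buyers = 6_099, 100_000    # affluent-market demand (assumed)
poor_price, poor_buyers = 600, 1_000_000    # poor-country demand (assumed)

# Segmented: charge each market what it will bear.
segmented = ((rich_price - marginal_cost) * rich_buyers
             + (poor_price - marginal_cost) * poor_buyers)

# One high price worldwide: the poor market drops out entirely.
high_only = (rich_price - marginal_cost) * rich_buyers

# One low price worldwide: everyone buys, but margins collapse.
low_only = (poor_price - marginal_cost) * (rich_buyers + poor_buyers)

print(f"Segmented profit: ${segmented:,}")   # $1,154,900,000
print(f"High price only:  ${high_only:,}")   # $604,900,000
print(f"Low price only:   ${low_only:,}")    # $605,000,000
```

With these assumptions, segmentation roughly doubles profit relative to either single price, and it is also the only arrangement under which the poor market is served at a price it can afford.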
Pharmaceutical companies would like to help poor people in the United States, but the federal government and, to a much lesser extent, health plans have tied their hands. Drug companies can and do give drugs away free through patient assistance programs, but they cannot sell them at very low prices because the federal government requires drug companies to give the huge Medicaid program their "best prices." If a drug company sells to even one customer at a very low price, it must sell at that same price to the 5–40 percent of its customers covered by Medicaid.

Drug prices are regularly attacked as "too high." Yet cheaper over-the-counter drugs, natural medicines, and generic versions of off-patent drugs are ubiquitous, and many health plans steer patients toward them. Economic studies have shown that even the newer, more expensive drugs are usually worth their price and are frequently cheaper than the alternatives. One study showed that each dollar spent on vaccines reduced other health-care costs by ten dollars. Another showed that each dollar spent on newer drugs saved $6.17 in other spending.16 Health plans that aggressively limited their drug spending have consequently ended up spending more overall.

Most patients do not pay retail prices because they have some form of insurance. In 2003, before the law that now subsidizes drugs for seniors was passed, 75–80 percent of seniors had prescription drug insurance. Insured people pay either a flat copayment, often based on tiers (copayment levels set by managed-care providers, with a low payment for generic drugs and a higher payment for brand-name drugs), or a percentage of the prescription cost. On average, seniors spend more on entertainment than on drugs and medical supplies combined. But for the uninsured who are also poor and sick, drug prices can be a devastating burden. The overlap of the 20–25 percent who lack drug insurance with the 10 percent who pay more than five thousand dollars per year (approximately 2 percent of people are in both groups) is where we find the stories of people skimping on food to afford their medications. The number of people in both groups is actually lower than 2 percent because of the numerous patient assistance programs offered by pharmaceutical companies.

For all the talk of lower drug prices, what people really want is lower risk through good insurance. Insurance lowers an individual's risk and, consequently, increases the demand for pharmaceuticals. By spending someone else's money for a good chunk of every pharmaceutical purchase, individuals become less price sensitive. A two-hundred-dollar prescription for a new medicine is forty times as expensive as a five-dollar generic, but its copay may be only three times the generic's copay. The marginal cost to patients of choosing the expensive product is reduced, in both absolute and relative terms, so patients are more likely to buy the expensive drug and to make purchases they otherwise would have skipped (see the short sketch at the end of this section). The data show that those with insurance consume 40–100 percent more pharmaceuticals than those without.

Drugs account for a small percentage of overall health-care spending: branded pharmaceuticals are about 7 percent, and generics 3 percent, of total U.S. health-care costs.17 The tremendous costs associated with illnesses, even when not directly measured, are the economic and human costs of the diseases themselves, not of the drugs.
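The copay arithmetic above, made explicit. The tier copays are assumed values (the text gives only the three-to-one ratio), as is the independence assumption in the overlap calculation:

```python
# Tiered copays mute price signals: with assumed copays of $5 (generic)
# and $15 (brand), a 40x price difference reaches the patient as a 3x
# copay difference.
generic_price, brand_price = 5, 200   # retail prices from the example above
generic_copay, brand_copay = 5, 15    # assumed managed-care tier copays

print(f"True price ratio: {brand_price / generic_price:.0f}x")  # 40x
print(f"Copay ratio:      {brand_copay / generic_copay:.0f}x")  # 3x

# The patient's marginal cost of upgrading is $10; the insurance pool
# quietly absorbs the other $185 of the $195 difference.
patient_extra = brand_copay - generic_copay
pool_extra = (brand_price - brand_copay) - (generic_price - generic_copay)
print(f"Extra cost to patient: ${patient_extra}")
print(f"Extra cost to pool:    ${pool_extra}")

# The uninsured-and-high-spending overlap from the same passage, if the
# two traits were independent (an assumption made only for illustration):
share_uninsured, share_high_spend = 0.22, 0.10
print(f"Overlap if independent: {share_uninsured * share_high_spend:.1%}")  # 2.2%
```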
About the Author

Charles L. Hooper is president of Objective Insights, a company that consults for pharmaceutical and biotech companies. He is a visiting fellow with the Hoover Institution.

Further Reading

Bast, Joseph L., Richard C. Rue, and Stuart A. Wesbury Jr. Why We Spend Too Much on Health Care and What We Can Do About It. Chicago: Heartland Institute, 1993.
DiMasi, Joseph A., Ronald W. Hansen, and Henry G. Grabowski. "The Price of Innovation: New Estimates of Drug Development Costs." Journal of Health Economics 22, no. 2 (2003): 151–185.
Higgs, Robert, ed. Hazardous to Our Health? FDA Regulation of Health Care Products. Oakland, Calif.: Independent Institute, 1995.
Hilts, Philip J. Protecting America's Health: The FDA, Business, and One Hundred Years of Regulation. New York: Alfred A. Knopf, 2003.
Klein, Daniel B., and Alexander Tabarrok. FDAReview.org. Oakland, Calif.: Independent Institute. Online at: http://www.fdareview.org/.
Miller, Henry I. To America's Health: A Proposal to Reform the Food and Drug Administration. Stanford, Calif.: Hoover Institution Press, 2000.

Footnotes

1. Philip J. Hilts, Protecting America's Health: The FDA, Business, and One Hundred Years of Regulation (New York: Alfred A. Knopf, 2003), pp. 89–90.
2. Daniel B. Klein and Alexander Tabarrok, FDAReview.org, Independent Institute, online under "History" at: http://www.FDAReview.org/history.shtml#fifth.
3. "THALOMID (Thalidomide): Balancing the Benefits and the Risks," Celgene Corporation, p. 2, online at: www.sanmateo.org/rimm/Tali_benefits_risks_celgene.pdf.
4. Alexander Tabarrok, "The Anomaly of Off-Label Drug Prescriptions," Independent Institute Working Paper no. 10, December 1999.
5. Jason Lazarou et al., "Incidence of Adverse Drug Reactions in Hospitalized Patients," Journal of the American Medical Association 279, no. 15 (1998): 1200–1205.
6. Sam Peltzman, "An Evaluation of Consumer Protection Legislation: The 1962 Drug Amendments," Journal of Political Economy 81, no. 5 (1973): 1049–1091.
7. Parexel's Pharmaceutical R&D Statistical Sourcebook 2004–2005 (Waltham, Mass.: Parexel International Corporation, 2004), p. 9.
8. "Broader Use for Aspirin Fails to Win Backing," Wall Street Journal, December 9, 2003, p. D9.
9. Fifty-five thousand is the total number of patients tested in five published clinical trials of the use of aspirin to prevent initial nonfatal myocardial infarctions.
10. Food and Drug Administration, Center for Drug Evaluation and Research, Cardiovascular and Renal Drugs Advisory Committee meeting, December 8, 2003, Gaithersburg, Md.
11. Henry I. Miller, M.D., To America's Health: A Proposal to Reform the Food and Drug Administration (Stanford, Calif.: Hoover Institution Press, 2000), p. 42.
12. "FDA Gives Restricted Approval to Thalidomide," CNN News, July 16, 1998.
13. "Drug Ban Brings Misery to Patient," Associated Press, November 11, 2000.
14. Klein and Tabarrok, FDAReview.org, online under "Reform Options" at: http://www.fdareview.org/reform.shtml#5; David R. Henderson, The Joy of Freedom: An Economist's Odyssey (New York: Prentice Hall, 2002), pp. 206–207, 278–279.
15. Joseph A. DiMasi, Ronald W. Hansen, and Henry G. Grabowski, "The Price of Innovation: New Estimates of Drug Development Costs," Journal of Health Economics 22, no. 2 (2003): 151–185.
16. Frank R. Lichtenberg, "Benefits and Costs of Newer Drugs: An Update," NBER Working Paper no. 8996, National Bureau of Economic Research, Cambridge, Mass., 2002.
17. Centers for Medicare and Medicaid Services (CMS), January 8, 2004.
