
Pollution Controls

There is general agreement that we must control pollution of our air, water, and land, but there is considerable dispute over how controls should be designed and how much control is enough. The pollution control mechanisms adopted in the United States have tended toward detailed regulation of technology, leaving polluters little choice in how to achieve the environmental goals. This “command-and-control” strategy needlessly increases the cost of pollution controls and may even slow our progress toward a cleaner environment. In 1970, popular concern about environmental degradation coalesced into a major political force, resulting in President Richard Nixon’s creation of the federal Environmental Protection Agency (EPA) and the first of the major federal attempts to regulate pollution directly—the Clean

Political Behavior

The fact of scarcity, which exists everywhere, guarantees that people will compete for resources. Markets are one way to organize and channel this competition. Politics is another. People use both markets and politics to get resources allocated to the ends they favor. Even in a democracy, however, political activity is startlingly different from voluntary exchange in markets. People can accomplish many things in politics that they could not accomplish in the private sector. Some of these are vital to the broader community’s welfare, such as control of health-threatening air pollution from myriad sources affecting millions of individuals or the provision of national defense. Other public-sector actions, such as subsidies to farmers and restrictions on the number of taxicabs in a city, provide narrow benefits that fall far short of their costs. In democratic politics, rules typically give a majority coalition power over the entire society. These rules replace the rule of willing consent and voluntary exchange that exists in the marketplace. In politics, people’s goals are similar to the goals they have as consumers, producers, and resource suppliers in the private sector, but people participate instead as voters, politicians, bureaucrats, and lobbyists. In the political system, as in the marketplace, people are sometimes (but not always) selfish. In all cases, they are narrow: how much they know and how much they care about other people’s goals is necessarily limited. An advocate of the homeless working in the political arena typically lobbies for a shift of funding (reflecting a move of real resources) from other missions to help poor people who lack housing. The views of such a person, while admirable, are narrow. He or she prefers that the government (and other givers) allocate more resources to meet his or her goals, even though it means fewer resources for the goals of others. 
Similarly, a dedicated professional, such as the director of the National Park Service, however unselfish, pushes strongly for shifting government funds away from other uses and toward expanding and improving the national park system. His or her priority is to get more resources allocated to parks, even if goals espoused by others, such as helping the poor, necessarily suffer. Passionate demands for funding and for legislative favors (inevitably at the expense of other people’s goals) come from every direction. Political rules determine how these competing demands, which far exceed the government’s ability to provide them, will be arbitrated. The rules of the political

Present Value

Present value is the value today of an amount of money in the future. If the appropriate interest rate is 10 percent, then the present value of $100 spent or earned one year from now is $100 divided by 1.10, which is about $91. This simple example illustrates the general truth that the present value of a future amount is less than that actual future amount. If the appropriate interest rate is only 4 percent, then the present value of $100 spent or earned one year from now is $100 divided by 1.04, or about $96. This illustrates the fact that the lower the interest rate, the higher the present value. The present value of $100 spent or earned twenty years from now is, using an interest rate of 10 percent, $100/(1.10)^20, or about $15. In other words, the present value of an amount far in the future is a small fraction of the amount. The fact that a dollar one year from now is less than a
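The discounting rule described above can be sketched in a few lines of Python; the three calculations below are the article's own examples.

```python
def present_value(future_amount, rate, years):
    """Discount a future amount back to today at a constant annual interest rate."""
    return future_amount / (1 + rate) ** years

# The article's three examples:
print(round(present_value(100, 0.10, 1), 2))   # about $91
print(round(present_value(100, 0.04, 1), 2))   # about $96
print(round(present_value(100, 0.10, 20), 2))  # about $15
```

Note how steeply the twenty-year value falls: compounding in the denominator shrinks distant amounts much faster than near ones.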

Poverty in America

The United States produces more per capita than any other industrialized country, and in recent years governments at various levels have spent about $350 billion per year, or about 3.5 percent of gross domestic product, on programs serving low-income families.1 Despite this, measured poverty is more prevalent in the United States than in most of the rest of the industrialized world. In the mid-1990s, the U.S. poverty rate was twice as high as in Scandinavian countries, and one-third higher than in other European countries and Japan.2 Poverty is also as prevalent now as it was in 1973, when the incidence of poverty in America reached a postwar low of 11.1 percent. According to the Census Bureau, 37 million Americans were poor in 2005, just over 12.5 percent of the population.3 These official figures represent the number of people whose annual family income is less than an absolute “poverty line” developed by the federal government in the mid-1960s. The poverty line is roughly three times the annual cost of a nutritionally adequate diet. It varies by family size and is updated every year to reflect changes in the consumer price index. In 2005, the poverty line for a family of four was $19,971.4 Many researchers believe that the official method of measuring poverty is flawed. Some argue that poverty is a state of relative economic deprivation, that it depends not on whether income is lower than some arbitrary level but on whether it falls far below the incomes of others in the same society. But if we define poverty to mean relative economic deprivation, then no matter how wealthy everyone is, there will always be poverty. Others point out that the official measure errs by omission. For example, official poverty figures take no account of refundable tax credits or the value of noncash transfers such as food stamps and housing vouchers, which serve as income for certain purchases. 
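The headcount method behind the official figures, and the critics' point about noncash transfers, can be illustrated with a short sketch. The incomes and transfer amounts below are invented for illustration; only the 2005 poverty line for a family of four comes from the article.

```python
# The official measure counts families whose cash income falls below the line.
# Critics note it ignores noncash transfers (food stamps, housing vouchers);
# counting them lowers the measured headcount.
poverty_line = 19971  # official 2005 line for a family of four

cash_income = [12000, 18500, 21000, 40000, 9500]  # hypothetical families
noncash_transfers = [4000, 2500, 0, 0, 6000]      # hypothetical amounts

official_poor = sum(1 for y in cash_income if y < poverty_line)
adjusted_poor = sum(
    1 for y, t in zip(cash_income, noncash_transfers) if y + t < poverty_line
)

print(official_poor, adjusted_poor)  # the adjusted count is lower
```

In this made-up sample, counting transfers lifts one of the three officially poor families above the line, which is the direction of the roughly 1.9-percentage-point adjustment the article cites for 2002.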
Incorporating these factors into family income would have reduced the measured poverty rate by an estimated 1.9 percentage points (or by approximately 16 percent) in 2002.5 Official poverty figures also ignore work-related expenses that affect families’ disposable incomes. Child care is a case in point. Isabel Sawhill and Adam Thomas estimated that deducting this expense from family incomes would have increased the measured poverty rate by up to one percentage point (or 8 percent) in 1998.6 Also, smaller, more fragmented households are more common today than a few decades ago, suggesting that some poor households were formed for the privacy and autonomy of their members. To the extent that some people have willingly sacrificed their access to the economic resources of parents, spouses, or adult children, some of the increase in poverty may actually represent an improvement in well-being. Another problem with the official measure arises from the dynamic nature of poverty. Most Americans who experience poverty do so only temporarily. In the four years from 1996 through 1999, only 2 percent of the population was poor for two years or more.7 During the same period, 34 percent of the population was poor for at least two months.8 In short, persistent poverty is relatively uncommon. In recent years, income mobility has fallen slightly. According to one estimate, 40 percent of families occupied the same position in the income distribution at the beginning and end of the 1990s, compared with 36 percent in the 1970s.9 Another criticism of the poverty measures is that they are based on income rather than on consumption. Consumption spending may be a better measure of well-being than reported income is, although data from the consumer expenditure survey have their own limitations. Daniel Slesnick found, using consumption spending, that the poverty rate fell from 31 percent in 1949 to 13 percent in 1965 and to 2 percent at the end of the 1980s. 
One rough indicator of the decline in poverty is the range of items that most poor homes now contain—from color TVs to VCRs to washing machines to microwaves—compared with the relative lack of these items in poor homes in the early 1970s.10 Despite their flaws, the official figures are widely used to measure poverty. According to the Census Bureau, the poverty rate declined from 22.2 percent in 1960 to 12.6 percent in 2005. Most of this decline occurred in the 1960s. By 1970, the poverty rate had fallen to the current level of 12.6 percent. It then hovered between 11 and 13 percent in the 1970s, fluctuating primarily with the state of the economy.11 A longer-term perspective leaves a more positive impression. For example, according to one estimate by Christine Ross, Sheldon Danziger, and Eugene Smolensky, more than two-thirds of the population in 1939 was poor by today’s standards.12 The trend in poverty masks the divergent experiences of poverty among various demographic groups. The poverty rate among the elderly, for example, declined dramatically from 35.2 percent in 1959 to 10.1 percent in 2005 and is now lower than for any other age group. The poverty rate among children declined between 1959 and 1970, increased to 22.7 percent in 1993, and then fell steadily to 17.6 percent in 2005; it remains higher than poverty rates among other age groups. The poverty rate among black households has also declined over the last forty years, but at 24.9 percent in 2005 remains more than twice as high as the rate among white households. The poverty rate for households headed by women declined from 49.4 percent in 1959 to 28.7 percent in 2005, but is still much higher than for other types of households. 
This higher incidence of reported poverty, together with the rising share of households headed by women, has led to what researchers call the “feminization of poverty.” Between 1959 and 2005, the proportion of the poor in female-headed households rose from 17.8 percent to 31.1 percent.13 Some of these women (about 13 percent) live with unrelated men or have unreported income from casual jobs that enable them to cope, but there is little doubt that the growth of single-parent families has contributed importantly to the rise in poverty. Researchers have suggested a number of plausible explanations for both positive and negative trends in poverty. These explanations include changes in the composition of households, economic growth, immigration, efforts to increase the education and skills of the poor, and the structure and generosity of the welfare system. The rapid growth of households headed by women and unrelated individuals, who typically cannot earn as much as married-couple families, has left a larger share of the population in poverty. This demographic trend has increased poverty rates among children. The proportion of children living in female-headed households doubled between 1970 and 2003, rising from 11.6 percent in 1970 to 23.6 in 2003.14 Had that proportion remained constant since 1970, the child poverty rate would have been about 4.4 percentage points lower in 1998.15 The ebb and flow of the economy also influences the incidence of poverty. Researchers have found that recessions have a disproportionate impact on the poor because they cause rising unemployment, a reduction in work hours, and stagnant family incomes. 
The relationship between the changes in the unemployment rate and the poverty rate was stronger during the 1960s and 1990s than during the 1970s and 1980s.16 But economic downturns have been accompanied by rising poverty rates during each of the six recessions in the past thirty years.17 Increased immigration and the characteristics of immigrants also affect poverty. Immigration increases the poverty rate because newly arrived immigrants are, on average, poorer than native-born citizens. Of the foreign-born population in 1999, 16.8 percent were poor, compared with 11.2 percent of native-born citizens.18 After declining during the 1930s and 1940s, the foreign-born population surged from 4.7 percent of the American population in 1970 to 10.4 percent in 2000.19 Immigration may also indirectly influence the incidence of poverty, because a surge in immigrants with minimal training tends to depress incomes among native workers at the bottom. For example, George Borjas attributed half of the drop in the relative wage of high school dropouts between 1980 and 1995 to immigration.20 Training and compensatory education programs such as the Job Corps and Head Start, designed as part of the War on Poverty to increase the skills of the poor, may also have reduced poverty. Many of these programs have not been carefully evaluated, but some of those that have are modestly successful. For example, some early education programs have had a positive effect on poor children, helping them to complete school, avoid crime, and achieve higher test scores.21 Some employment and training programs have raised earnings for adult women, although these programs have been less helpful to adult men and young people.22 Finally, safety-net programs have contributed to the decline. 
These are typically divided into two categories: public assistance programs, such as Temporary Assistance for Needy Families, food stamps, and Medicaid, which were designed to help people who are already poor; and social insurance programs such as Social Security, unemployment insurance, and Medicare, which were designed to prevent poverty when events such as layoff or retirement threaten a household’s well-being. Expenditures on these programs totaled roughly $1,279 billion in 2002, up 160 percent in real terms since 1975.23 However, much of this spending was for noncash assistance (especially health care) that improves the well-being of the poor but has no effect on measured poverty. The antipoverty effectiveness of these programs is typically measured by counting the number of people with pretransfer incomes below the poverty line whose incomes are raised above the poverty line by income transfers. According to government estimates, social insurance and public assistance programs moved nearly half of the pretransfer poor above the poverty line in 2002. This implies that these programs reduce the poverty rate by ten percentage points.24 By ignoring the incentive effects these programs have on recipients, however, the above analysis overstates the success of safety-net programs. Specifically, means-tested cash transfers such as Aid to Families with Dependent Children (AFDC), which decline as the welfare recipient earns more reported income, have long been understood to be antiwork and antifamily. This criticism of the program led to its reform in 1996. Under the revised law, called Temporary Assistance for Needy Families (TANF), welfare mothers are required to work and federal benefits are limited to five years. Aided by a strong economy and more generous assistance for the working poor in the form of an expanded Earned Income Tax Credit and other measures, welfare reform led to a sharp fall in caseloads in the late 1990s. 
Employment rates among single mothers rose and child poverty fell. In addition, after increasing for decades, the share of births to unmarried mothers has leveled off and teen birth rates have declined. (The reasons for these changes in fertility are not well understood and may or may not be related to welfare reform.) Although some families are worse off as a result of welfare reform, the majority of former welfare mothers have been able to earn enough to improve their economic situation. The longer-term effects of welfare reform, especially those that might be expected in a less robust economy, are more uncertain and are likely to depend, to some extent, on the provision of additional supports such as child care for low-income working families. U.S. poverty, measured by income, ebbs and flows with the state of the economy and with demographic shifts, especially immigration and the growth of single-parent families. Policy measures—whether in the form of direct income support or education and skills training of the poor—have swum against these strong tides and have had a mixed record of success. Since the mid-1990s, policies that have both required and supported work as the best strategy for reducing poverty have had considerable success.

About the Author

Isabel V. Sawhill is a senior fellow and the Cabot Family Chair at the Brookings Institution and was previously associate director of the Office of Management and Budget during President Bill Clinton’s administration.

Further Reading

Blank, Rebecca. It Takes a Nation: A New Agenda for Fighting Poverty. New York: Russell Sage Foundation; Princeton: Princeton University Press, 1997.
Citro, Constance, and Robert T. Michael, eds. Measuring Poverty: A New Approach. Washington, D.C.: National Academy Press, 1995.
Danziger, Sheldon H., and Robert Haveman, eds. Understanding Poverty. New York: Russell Sage Foundation; Cambridge: Harvard University Press, 2001.
Sawhill, Isabel, R. Kent Weaver, Ron Haskins, and Andrea Kane, eds. Welfare Reform and Beyond: The Future of the Safety Net. Washington, D.C.: Brookings Institution, 2002.
Slesnick, Daniel T. “Gaining Ground: Poverty in the Postwar United States.” Journal of Political Economy 101, no. 1 (1993): 1–38.

Footnotes

* The author is grateful to Melissa Cox for extensive research assistance on this article.
1. Committee on Ways and Means, U.S. House of Representatives, 2004 Green Book: Background Material and Data on Programs Within the Jurisdiction of the Committee on Ways and Means, tables I-5 and K-9, online at: http://waysandmeans.house.gov/Documents.asp?section=813.
2. Michael Forster and Mark Pearson, “Income Distribution and Poverty in the OECD Area: Trends and Driving Forces,” OECD Economic Studies, no. 1 (2002): 13.
3. Carmen De Navas-Walt, Bernadette D. Proctor, and Cheryl Hill Lee, “Income, Poverty and Health Insurance in the United States: 2005,” U.S. Census Bureau, August 2006, p. 6, online at: http://www.census.gov/prod/2006pubs/p60-231.pdf.
4. U.S. Census Bureau, “Poverty Thresholds for 2002 by Size of Family and Number of Related Children Under 18 Years,” online at: http://www.census.gov/hhes/www/poverty/threshld/thresh05.html.
5. Unpublished data supplied by Wendell Primus, Committee on Ways and Means, U.S. House of Representatives.
6. Isabel Sawhill and Adam Thomas, “A Hand up for the Bottom Third: Toward a New Agenda for Low-Income Working Families,” Brookings Institution, 2001, online at: www.brook.edu/views/papers/sawhill/20010522.pdf.
7. John Iceland, “Dynamics of Economic Well-Being: Poverty 1996–1999,” U.S. Census Bureau, July 2003, p. 4, online at: http://www.census.gov/prod/2003pubs/p70-91.pdf.
8. Ibid.
9. Katherine Bradbury and Jane Katz, “Are Lifetime Incomes Growing More Unequal?” Federal Reserve Bank of Boston Regional Review (4th Quarter 2002): 4.
10. W. Michael Cox and Richard Alm, Myths of Rich and Poor (New York: Basic Books, 1999), p. 15.
11. De Navas-Walt et al., “Income, Poverty and Health Insurance in the United States: 2005,” p. 46.
12. Christine Ross, Sheldon Danziger, and Eugene Smolensky, “The Level and Trend of Poverty in the United States, 1939–1979,” Demography 24 (1987): 587–600.
13. De Navas-Walt et al., “Income, Poverty and Health Insurance in the United States: 2005,” p. 14.
14. U.S. Census Bureau, Historical Poverty Table 10: Related Children in Female Householder Families as a Proportion of All Related Children, by Poverty Status: 1959 to 2003, online at: http://www.census.gov/hhes/poverty/histpov/hstpov10.html.
15. Adam Thomas and Isabel Sawhill, “For Richer or for Poorer: Marriage as an Antipoverty Strategy,” Journal for Policy Analysis and Management 21, no. 4 (2002): 587–599.
16. Robert Haveman, “Poverty and the Distribution of Economic Well-Being Since the 1960s,” in George L. Perry and James Tobin, eds., Economic Events, Ideas, and Policies (Washington, D.C.: Brookings Institution, 2000), p. 281.
17. Proctor and Dalaker, “Poverty in the United States: 2002,” p. 3.
18. U.S. Census Bureau, “Profile of the Foreign-Born Population in the United States: 2000,” December 2001, P23-206, p. 6.
19. Ibid., p. 9.
20. George J. Borjas, ed., Issues in the Economics of Immigration, National Bureau of Economic Research Conference Report (Chicago: University of Chicago Press, 2000), p. 6.
21. James J. Heckman, “Policies to Foster Human Capital,” Research in Economics 54, no. 1 (2000): 3–56.
22. Judith M. Gueron and Gayle Hamilton, “The Role of Education and Training in Welfare Reform,” Welfare Reform and Beyond Policy Brief no. 20, April 2002.
23. Committee on Ways and Means, U.S. House of Representatives, 2004 Green Book, tables I-5 and K-9.
24. Unpublished data supplied by Wendell Primus, Committee on Ways and Means, U.S. House of Representatives.

Price Controls

Governments have been trying to set maximum or minimum prices since ancient times. The Old Testament prohibited interest on loans to fellow Israelites; medieval governments fixed the maximum price of bread; and in recent years, governments in the United States have fixed the price of gasoline, the rent on apartments in New York City, and the wage of unskilled labor, to name a few. At times, governments go beyond fixing specific prices and try to control the general level of prices, as was done in the United States during both world wars and the Korean War, and by the Nixon administration from 1971 to 1973. The appeal of price controls is understandable. Even though they fail to protect many consumers and hurt others, controls hold out the promise of protecting groups that are particularly hard-pressed to meet price increases. Thus, the prohibition against usury—charging high interest on loans—was intended to protect someone forced to borrow out of desperation; the maximum price for bread was supposed to protect the poor, who depended on bread to survive; and rent controls were supposed to protect those who were renting when the demand for apartments exceeded the supply, and landlords were preparing to “gouge” their tenants. Despite the frequent use of price controls, however, and despite their appeal, economists are generally opposed to them, except perhaps for very brief periods during emergencies. In a survey published in 1992, 76.3 percent of the economists surveyed agreed with the statement: “A ceiling on rents reduces the quality and quantity of housing available.” A further 16.6 percent agreed with qualifications, and only 6.5 percent disagreed. The results were similar when the economists were asked about general controls: only 8.4 percent agreed with the statement: “Wage-price controls are a useful policy option in the control of inflation.” An additional 17.7 percent agreed with qualifications, but a sizable majority, 73.9 percent, disagreed (Alston et al. 
1992, p. 204). The reason most economists are skeptical about price controls is that they distort the allocation of resources. To paraphrase a remark by Milton Friedman, economists may not know much, but they do know how to produce a shortage or surplus. Price ceilings, which prevent prices from exceeding a certain maximum, cause shortages. Price floors, which prohibit prices below a certain minimum, cause surpluses, at least for a time. Suppose that the supply and demand for wheat flour are balanced at the current price, and that the government then fixes a lower maximum price. The supply of flour will decrease, but the demand for it will increase. The result will be excess demand and empty shelves. Although some consumers will be lucky enough to purchase flour at the lower price, others will be forced to do without. Because controls prevent the price system from rationing the available supply, some other mechanism must take its place. A queue, once a familiar sight in the controlled economies of Eastern Europe, is one possibility. When the United States set maximum prices for gasoline in 1973 and
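The flour example can be made concrete with a pair of hypothetical linear curves. The numbers below are invented for illustration; nothing here comes from the article.

```python
def demand(price):
    """Quantity buyers want at a given price (hypothetical linear curve)."""
    return 100 - 2 * price

def supply(price):
    """Quantity sellers offer at a given price (hypothetical linear curve)."""
    return 10 + price

# The market clears where the curves cross: 100 - 2P = 10 + P, so P = 30.
market_price = 30
assert demand(market_price) == supply(market_price) == 40

# A ceiling below the market price raises quantity demanded, lowers quantity
# supplied, and leaves the gap as a shortage to be rationed some other way
# (queues, favoritism, black markets).
ceiling = 20
shortage = demand(ceiling) - supply(ceiling)
print(shortage)  # 30 units of excess demand
```

A price floor works symmetrically: set the price above 30 in this toy model and `supply(price) - demand(price)` turns positive, which is the surplus.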

Prisoners’ Dilemma

The prisoners’ dilemma is the best-known game of strategy in social science. It helps us understand what governs the balance between cooperation and competition in business, in politics, and in social settings. In the traditional version of the game, the police have arrested two suspects and are interrogating them in separate rooms. Each can either confess, thereby implicating the other, or keep silent. No matter what the other suspect does, each can improve his own position by confessing. If the other confesses, then one had better do the same to avoid the especially harsh sentence that awaits a recalcitrant holdout. If the other keeps silent, then one can obtain the favorable treatment accorded a state’s witness by confessing. Thus, confession is the dominant strategy (see game theory) for each. But when both confess, the outcome is worse for both than when both keep silent. The concept of the prisoners’ dilemma was developed by RAND Corporation scientists Merrill Flood and Melvin Dresher and was formalized by Albert W. Tucker, a Princeton mathematician. The prisoners’ dilemma has applications to economics and business. Consider two firms, say Coca-Cola and Pepsi, selling similar products. Each must decide on a pricing strategy. They best exploit their joint market power when both charge a high price; each makes a profit of ten million dollars per month. If one sets a competitive low price, it wins a lot of customers away from the rival. Suppose its profit rises to twelve million dollars, and that of the rival falls to seven million. If both set low prices, the profit of each is nine million dollars. Here, the low-price
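The pricing game just described can be written out as a payoff matrix and the dominant-strategy logic checked directly. The profits (in millions of dollars per month) are the ones given in the article's Coca-Cola/Pepsi example.

```python
# Each entry is (row firm's profit, column firm's profit), in millions per month.
payoffs = {
    ("high", "high"): (10, 10),
    ("high", "low"):  (7, 12),
    ("low",  "high"): (12, 7),
    ("low",  "low"):  (9, 9),
}

def best_response(rival_choice):
    """The row firm's profit-maximizing price given the rival's choice."""
    return max(("high", "low"), key=lambda mine: payoffs[(mine, rival_choice)][0])

# Low price is dominant: it is the best response to either rival choice...
assert best_response("high") == "low"  # 12 > 10
assert best_response("low") == "low"   # 9 > 7

# ...yet mutual price-cutting leaves both firms worse off than mutual restraint.
assert payoffs[("low", "low")][0] < payoffs[("high", "high")][0]  # 9 < 10
```

This mirrors the prisoners themselves: confessing (cutting price) dominates, but both players confessing is worse for each than both staying silent.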

Pensions

A private pension plan is an organized program to provide retirement income for a firm’s workers. Some 56.7 percent of full-time, full-year wage and salary workers in the United States participate in employment-based pension plans (EBRI Issue Brief, October 2003). Private trusteed pension plans receive special tax treatment and are subject to eligibility, coverage, and benefit standards. Private pensions have become an important financial intermediary in the United States, with assets totaling $3.0 trillion at year-end 2002, while state and local government retirement funds totaled $1.967 trillion. By comparison, all New York Stock Exchange (NYSE) listed stocks totaled $9.557 trillion at year-end 2002. In other words, private and state and local government pension plan assets are large enough to purchase about half of all stocks listed on the NYSE. For individuals, future pension benefits provided by employers substitute for current wages and personal saving. A person would be indifferent between pension benefits and personal saving for retirement if each provided the same retirement income at the same cost of forgone current consumption. Tax advantages, however, create a bias in favor of saving through organized pension plans administered by the employee’s firm and away from direct saving. For a firm, pension plans serve two primary functions: first, pension benefits substitute for wages; second, pensions can provide firms with a source of financing because pension benefits need not require current cash payments. The current U.S. tax code provides additional advantages for using pension plans to finance operations. Basic Features of U.S. Pension Plans Virtually all private plans satisfy federal requirements for favorable tax treatment. The tax advantages are three: (1) pension costs of a firm are, within limits, tax deductible;
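A quick check of the asset totals cited above (all from the article, in trillions of dollars at year-end 2002) shows that combined pension assets could buy roughly half of NYSE-listed stocks:

```python
# Year-end 2002 totals from the article, in trillions of dollars.
private_pensions = 3.0
state_local_pensions = 1.967
nyse_listed_stocks = 9.557

share = (private_pensions + state_local_pensions) / nyse_listed_stocks
print(round(share, 2))  # roughly half of the NYSE's listed value
```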

Opportunity Cost

When economists refer to the “opportunity cost” of a resource, they mean the value of the next-highest-valued alternative use of that resource. If, for example, you spend time and money going to a movie, you cannot spend that time at home reading a book, and you cannot spend the money on something else. If your next-best alternative to seeing the movie is reading the book, then the opportunity cost of seeing the movie is the money spent plus the pleasure you forgo by not reading the book. The word “opportunity” in “opportunity cost” is actually redundant. The cost of using something is already the value of the highest-valued alternative use. But as contract lawyers and airplane pilots know, redundancy can be a virtue. In this case, its virtue is to remind us that the cost of using a resource arises from the value of what it could be used for instead. This simple concept has powerful implications. It implies, for example, that even when governments subsidize college education, most students still pay more than half of the cost. Take a student who annually pays $4,000 in tuition at a state college. Assume that the government subsidy to the college amounts to $8,000 per student. It looks as if the cost is $12,000 and the student pays less than half. But looks can be deceiving. The true cost is $12,000 plus the income the student forgoes by attending school rather than working. If the student could have earned $20,000 per year, then the true cost of the year’s schooling is $12,000 plus $20,000, for a total of $32,000. Of this $32,000 total, the student pays $24,000 ($4,000 in tuition plus $20,000 in forgone earnings). In other words, even with a hefty state subsidy, the student pays 75 percent of the whole cost. This explains why college students at state universities, even though they may grouse when the state government raises tuitions by, say, 10 percent, do not desert college in droves. 
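The tuition arithmetic above fits in a few lines; the dollar figures are the article's hypothetical ones, not real data.

```python
# Figures from the article's hypothetical student.
tuition = 4_000            # paid by the student
subsidy = 8_000            # paid by the state, per student
forgone_earnings = 20_000  # income given up by attending school

# The true annual cost of schooling includes the forgone earnings.
total_cost = tuition + subsidy + forgone_earnings
student_share = (tuition + forgone_earnings) / total_cost

print(total_cost)     # 32000
print(student_share)  # 0.75
```

The opportunity cost of the student's time dwarfs the subsidy, which is why the student's share of the true cost is so much larger than the tuition bill suggests.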
A 10 percent increase in a $4,000 tuition is only $400, which is less than a 2 percent increase in the student’s overall cost (see human capital). What about the cost of room and board while attending school? This is not a true cost of attending school at all because whether or not the student attends school, the student still has expenses for room and board.

About the Author

David R. Henderson is the editor of this encyclopedia. He is a research fellow with Stanford University’s Hoover Institution and an associate professor of economics at the Naval Postgraduate School in Monterey, California. He was formerly a senior economist with President Ronald Reagan’s Council of Economic Advisers.

Further Reading

Alchian, Armen. “Cost.” In Encyclopedia of the Social Sciences. New York: Macmillan. Vol. 3, pp. 404–415.
Buchanan, J. M. Cost and Choice. Chicago: Markham, 1969. Republished as Midway Reprint. Chicago: University of Chicago Press, 1977. Available online at: http://www.econlib.org/library/Buchanan/buchCv6.html

OPEC

Few observers, and even fewer experts, remember that the Organization of Petroleum Exporting Countries (OPEC) was created in response to the 1959 imposition of import quotas on crude oil and refined products by the United States. In 1959, the U.S. government established the Mandatory Oil Import Quota program (MOIP), which restricted the amount of imported crude oil and refined products allowed into the United States and gave preferential treatment to oil imports from Canada, Mexico, and, somewhat later, Venezuela. This partial exclusion of Persian Gulf oil from the U.S. market depressed prices for Middle Eastern oil; as a result, oil prices “posted” (paid to the selling nations) were reduced in February 1959 and August 1960. In September 1960, four Persian Gulf nations (Iran, Iraq, Kuwait, and Saudi Arabia) and Venezuela formed OPEC in order to obtain higher prices for crude oil. By 1973, eight other nations (Algeria, Ecuador, Gabon, Indonesia, Libya, Nigeria, Qatar, and the United Arab Emirates) had joined OPEC; Ecuador withdrew at the end of 1992, and Gabon withdrew in 1994. The collective effort to raise oil prices was unsuccessful during the 1960s; real (i.e., inflation-adjusted) world market prices for crude oil fell from $9.78 (in 2004 dollars) in 1960 to $7.08 in 1970. However, real prices began to rise slowly in 1971 and then increased sharply in late 1973 and 1974, from roughly $10.00 per barrel to more than $36.00 per barrel in the wake of the 1973 Arab-Israeli (“Yom Kippur”) War. Despite what many noneconomists believe, the 1973–1974 price increase was not caused by the oil “embargo” (refusal to sell) that the Arab members of OPEC directed at the United States and the Netherlands. Instead, OPEC reduced its production of crude oil, raising world market prices sharply. The embargo against the United States and the Netherlands had no effect whatsoever: people in both nations were able to obtain oil at the same prices as people in all other nations. 
This failure of the embargo was predictable, in that oil is a “fungible” commodity that can be resold among buyers. An embargo by sellers is an attempt to raise prices for some buyers but not others. Only one price can prevail in the world market, however, because differences in prices will lead to arbitrage: that is, a higher price in a given market will induce other buyers to resell oil into the high-price market, thus equalizing prices worldwide.

Nor, as is commonly believed, did OPEC cause oil shortages and gasoline lines in the United States. Instead, the shortages were caused by price and allocation controls on crude oil and refined products, imposed originally by President Richard Nixon in 1971 as part of the Economic Stabilization Program. Although the controls allowed the price of crude oil to rise, they did not allow it to reach free-market levels. Thus, the price controls caused the amount people wanted to consume to exceed the amount available at the legal maximum prices. Shortages were the inevitable result.

Moreover, the allocation controls distorted the distribution of supplies; the government based allocations on consumption patterns observed before the sharp increase in prices. The higher prices, for example, reduced long-distance driving and agricultural fuel consumption, but the use of historical consumption patterns resulted in a relative oversupply of gasoline in rural areas and a relative undersupply in urban ones, thus exacerbating the effects of the price controls themselves. Countries whose governments did not impose price controls, such as (then West) Germany and Switzerland, did not experience shortages and queues.

OPEC is in many ways a cartel—a group of producers that attempts to restrict output in order to raise prices above the competitive level. The decision-making center of OPEC is the Conference, comprising national delegations
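The shortage mechanism described above can be illustrated numerically. This is a minimal sketch with hypothetical linear supply and demand curves, not data from the article: a legal maximum price below the market-clearing level makes quantity demanded exceed quantity supplied.

```python
# Hypothetical sketch: a price ceiling below the market-clearing price
# creates a shortage. All numbers are illustrative only.

def q_demanded(p):
    return 100 - 2 * p   # quantity demanded falls as price rises

def q_supplied(p):
    return 10 + 4 * p    # quantity supplied rises with price

# Market-clearing price: 100 - 2p = 10 + 4p  ->  p = 15, q = 70
p_market = 15
assert q_demanded(p_market) == q_supplied(p_market) == 70

# A legal ceiling at p = 10, below the clearing price:
p_ceiling = 10
shortage = q_demanded(p_ceiling) - q_supplied(p_ceiling)
print(shortage)  # 80 demanded - 50 supplied = 30 units of unmet demand
```

At the capped price, buyers want 30 more units than sellers offer; with price unable to ration demand, queues and allocation rules ration it instead.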


New Keynesian Economics

New Keynesian economics is the school of thought in modern macroeconomics that evolved from the ideas of John Maynard Keynes. Keynes wrote The General Theory of Employment, Interest, and Money in the 1930s, and his influence among academics and policymakers increased through the 1960s. In the 1970s, however, new classical economists such as Robert Lucas, Thomas J. Sargent, and Robert Barro called into question many of the precepts of the Keynesian revolution. The label “new Keynesian” describes those economists who, in the 1980s, responded to this new classical critique with adjustments to the original Keynesian tenets.

The primary disagreement between new classical and new Keynesian economists is over how quickly wages and prices adjust. New classical economists build their macroeconomic theories on the assumption that wages and prices are flexible. They believe that prices “clear” markets—balance supply and demand—by adjusting quickly. New Keynesian economists, however, believe that market-clearing models cannot explain short-run economic fluctuations, and so they advocate models with “sticky” wages and prices. New Keynesian theories rely on this stickiness of wages and prices to explain why involuntary unemployment exists and why monetary policy has such a strong influence on economic activity.

A long tradition in macroeconomics (including both Keynesian and monetarist perspectives) emphasizes that monetary policy affects employment and production in the short run because prices respond sluggishly to changes in the money supply. According to this view, if the money supply falls, people spend less money and the demand for goods falls. Because prices and wages are inflexible and do not fall immediately, the decreased spending causes a drop in production and layoffs of workers. New classical economists criticized this tradition because it lacks a coherent theoretical explanation for the sluggish behavior of prices.
Much new Keynesian research attempts to remedy this omission.

Menu Costs and Aggregate-Demand Externalities

One reason prices do not adjust immediately to clear markets is that adjusting prices is costly. To change its prices, a firm may need to send out a new catalog to customers, distribute new price lists to its sales staff, or, in the case of a restaurant, print new menus. These costs of price adjustment, called “menu costs,” cause firms to adjust prices intermittently rather than continuously.

Economists disagree about whether menu costs can help explain short-run economic fluctuations. Skeptics point out that menu costs usually are very small. They argue that these small costs are unlikely to help explain recessions, which are very costly for society. Proponents reply that “small” does not mean “inconsequential.” Even though menu costs are small for the individual firm, they could have large effects on the economy as a whole.

Proponents of the menu-cost hypothesis describe the situation as follows. To understand why prices adjust slowly, one must acknowledge that changes in prices have externalities—that is, effects that go beyond the firm and its customers. For instance, a price reduction by one firm benefits other firms in the economy. When a firm lowers the price it charges, it lowers the average price level slightly and thereby raises real income. (Nominal income is determined by the money supply.) The stimulus from higher income, in turn, raises the demand for the products of all firms. This macroeconomic impact of one firm’s price adjustment on the demand for all other firms’ products is called an “aggregate-demand externality.”

In the presence of this aggregate-demand externality, small menu costs can make prices sticky, and this stickiness can have a large cost to society. Suppose General Motors announces its prices and then, after a fall in the money supply, must decide whether to cut prices.
If it did so, car buyers would have a higher real income and would therefore buy more products from other companies as well. But the benefits to other companies are not what General Motors cares about. Therefore, General Motors would sometimes fail to pay the menu cost and cut its price, even though the price cut is socially desirable. This is an example in which sticky prices are undesirable for the economy as a whole, even though they may be optimal for those setting prices.

The Staggering of Prices

New Keynesian explanations of sticky prices often emphasize that not everyone in the economy sets prices at the same time. Instead, the adjustment of prices throughout the economy is staggered. Staggering complicates the setting of prices because firms care about their prices relative to those charged by other firms. Staggering can make the overall level of prices adjust slowly, even when individual prices change frequently.

Consider the following example. Suppose, first, that price setting is synchronized: every firm adjusts its price on the first of every month. If the money supply and aggregate demand rise on May 10, output will be higher from May 10 to June 1 because prices are fixed during this interval. But on June 1 all firms will raise their prices in response to the higher demand, ending the three-week boom.

Now suppose that price setting is staggered: half the firms set prices on the first of each month and half on the fifteenth. If the money supply rises on May 10, then half of the firms can raise their prices on May 15. Yet because half of the firms will not be changing their prices on the fifteenth, a price increase by any firm will raise that firm’s relative price, which will cause it to lose customers. Therefore, these firms will probably not raise their prices very much. (In contrast, if all firms are synchronized, all firms can raise prices together, leaving relative prices unaffected.)
If the May 15 price setters make little adjustment in their prices, then the other firms will make little adjustment when their turn comes on June 1, because they also want to avoid relative price changes. And so on. The price level rises slowly as the result of small price increases on the first and the fifteenth of each month. Hence, staggering makes the price level sluggish, because no firm wishes to be the first to post a substantial price increase.

Coordination Failure

Some new Keynesian economists suggest that recessions result from a failure of coordination. Coordination problems can arise in the setting of wages and prices because those who set them must anticipate the actions of other wage and price setters. Union leaders negotiating wages are concerned about the concessions other unions will win. Firms setting prices are mindful of the prices other firms will charge.

To see how a recession could arise as a failure of coordination, consider the following parable. The economy is made up of two firms. After a fall in the money supply, each firm must decide whether to cut its price. Each firm wants to maximize its profit, but its profit depends not only on its pricing decision but also on the decision made by the other firm.

If neither firm cuts its price, the amount of real money (the amount of money divided by the price level) is low, a recession ensues, and each firm makes a profit of only fifteen dollars. If both firms cut their price, real money balances are high, a recession is avoided, and each firm makes a profit of thirty dollars. Although both firms prefer to avoid a recession, neither can do so by its own actions. If one firm cuts its price while the other does not, a recession follows. The firm making the price cut makes only five dollars, while the other firm makes fifteen dollars. The essence of this parable is that each firm’s decision influences the set of outcomes available to the other firm.
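The two-firm parable above can be written down as a payoff matrix, using the dollar profits given in the text. The sketch below checks, for each pair of choices, whether either firm could gain by deviating unilaterally; pairs where neither can are pure-strategy Nash equilibria.

```python
# The two-firm pricing parable as a 2x2 game. Payoffs (firm 1, firm 2)
# come from the text: 15 each if neither cuts, 30 each if both cut,
# and 5 for a lone price cutter versus 15 for the firm that holds.

from itertools import product

actions = ["cut", "hold"]

payoff = {
    ("cut", "cut"): (30, 30),
    ("hold", "hold"): (15, 15),
    ("cut", "hold"): (5, 15),
    ("hold", "cut"): (15, 5),
}

def is_nash(a1, a2):
    # Neither firm can raise its own profit by changing only its own action.
    best1 = all(payoff[(a1, a2)][0] >= payoff[(d, a2)][0] for d in actions)
    best2 = all(payoff[(a1, a2)][1] >= payoff[(a1, d)][1] for d in actions)
    return best1 and best2

equilibria = [(a1, a2) for a1, a2 in product(actions, actions) if is_nash(a1, a2)]
print(equilibria)  # [('cut', 'cut'), ('hold', 'hold')]
```

Both ("cut", "cut") and ("hold", "hold") survive: given the other firm's choice, neither firm wants to deviate alone. This is the multiple-equilibria structure behind the coordination-failure story.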
When one firm cuts its price, it improves the opportunities available to the other firm, because the other firm can then avoid the recession by cutting its price. This positive impact of one firm’s price cut on the other firm’s profit opportunities might arise because of an aggregate-demand externality.

What outcome should one expect in this economy? On the one hand, if each firm expects the other to cut its price, both will cut prices, resulting in the preferred outcome in which each makes thirty dollars. On the other hand, if each firm expects the other to maintain its price, both will maintain their prices, resulting in the inferior solution, in which each makes fifteen dollars. Hence, either of these outcomes is possible: there are multiple equilibria.

The inferior outcome, in which each firm makes fifteen dollars, is an example of a coordination failure. If the two firms could coordinate, they would both cut their price and reach the preferred outcome. In the real world, unlike in this parable, coordination is often difficult because the number of firms setting prices is large. The moral of the story is that even though sticky prices are in no one’s interest, prices can be sticky simply because price setters expect them to be.

Efficiency Wages

Another important part of new Keynesian economics has been the development of new theories of unemployment. Persistent unemployment is a puzzle for economic theory. Normally, economists presume that an excess supply of labor would exert a downward pressure on wages. A reduction in wages would in turn reduce unemployment by raising the quantity of labor demanded. Hence, according to standard economic theory, unemployment is a self-correcting problem. New Keynesian economists often turn to theories of what they call efficiency wages to explain why this market-clearing mechanism may fail. These theories hold that high wages make workers more productive.
The influence of wages on worker efficiency may explain the failure of firms to cut wages despite an excess supply of labor. Even though a wage reduction would lower a firm’s wage bill, it would also—if the theories are correct—cause worker productivity and the firm’s profits to decline. There are various theories about how wages affect worker productivity.

One efficiency-wage theory holds that high wages reduce labor turnover. Workers quit jobs for many reasons—to accept better positions at other firms, to change careers, or to move to other parts of the country. The more a firm pays its workers, the greater their incentive to stay with the firm. By paying a high wage, a firm reduces the frequency of quits, thereby decreasing the time spent hiring and training new workers.

A second efficiency-wage theory holds that the average quality of a firm’s workforce depends on the wage it pays its employees. If a firm reduces wages, the best employees may take jobs elsewhere, leaving the firm with less-productive employees who have fewer alternative opportunities. By paying a wage above the equilibrium level, the firm may avoid this adverse selection, improve the average quality of its workforce, and thereby increase productivity.

A third efficiency-wage theory holds that a high wage improves worker effort. This theory posits that firms cannot perfectly monitor the work effort of their employees and that employees must themselves decide how hard to work. Workers can choose to work hard, or they can choose to shirk and risk getting caught and fired. The firm can raise worker effort by paying a high wage. The higher the wage, the greater is the cost to the worker of getting fired. By paying a higher wage, a firm induces more of its employees not to shirk, and thus increases their productivity.
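The third theory can be sketched with a simple "no-shirking" condition. This is a hypothetical illustration, not a model from the article: a worker compares the payoff from working (wage minus effort cost) with the expected payoff from shirking (keeping the wage unless caught, falling back to an outside wage if caught and fired). The firm must then pay at least the outside wage plus the effort cost divided by the probability of catching a shirker.

```python
# Hypothetical sketch of the shirking version of efficiency wages.
# Working pays: w - effort_cost.
# Shirking pays: (1 - catch_prob) * w + catch_prob * outside_wage.
# The worker works iff w - effort_cost >= expected shirking payoff,
# which rearranges to: w >= outside_wage + effort_cost / catch_prob.

def no_shirk_wage(outside_wage, effort_cost, catch_prob):
    # Minimum wage at which working beats shirking.
    return outside_wage + effort_cost / catch_prob

# Illustrative numbers: outside wage 10, effort costs 2,
# shirkers are caught half the time.
w_min = no_shirk_wage(outside_wage=10, effort_cost=2, catch_prob=0.5)
print(w_min)  # 14.0: above the market wage of 10
```

Because every firm must pay above the market-clearing wage to deter shirking, the labor market cannot clear, which is the theory's explanation for persistent unemployment.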
A New Synthesis

During the 1990s, the debate between new classical and new Keynesian economists led to the emergence of a new synthesis among macroeconomists about the best way to explain short-run economic fluctuations and the role of monetary and fiscal policies. The new synthesis attempts to merge the strengths of the competing approaches that preceded it. From the new classical models it takes a variety of modeling tools that shed light on how households and firms make decisions over time. From the new Keynesian models it takes price rigidities and uses them to explain why monetary policy affects employment and production in the short run. The most common approach is to assume monopolistically competitive firms (firms that have market power but compete with other firms) that change prices only intermittently.

The heart of the new synthesis is the view that the economy is a dynamic general equilibrium system that deviates from an efficient allocation of resources in the short run because of sticky prices and perhaps a variety of other market imperfections. In many ways, this new synthesis forms the intellectual foundation for the analysis of monetary policy at the Federal Reserve and other central banks around the world.

Policy Implications

Because new Keynesian economics is a school of thought regarding macroeconomic theory, its adherents do not necessarily share a single view about economic policy. At the broadest level, new Keynesian economics suggests—in contrast to some new classical theories—that recessions are departures from the normal efficient functioning of markets. The elements of new Keynesian economics—such as menu costs, staggered prices, coordination failures, and efficiency wages—represent substantial deviations from the assumptions of classical economics, which provides the intellectual basis for economists’ usual justification of laissez-faire. In new Keynesian theories recessions are caused by some economy-wide market failure.
Thus, new Keynesian economics provides a rationale for government intervention in the economy, such as countercyclical monetary or fiscal policy. This part of new Keynesian economics has been incorporated into the new synthesis that has emerged among macroeconomists. Whether policymakers should intervene in practice, however, is a more difficult question that entails various political as well as economic judgments.

About the Author

N. Gregory Mankiw is a professor of economics at Harvard University. From 2003 to 2005, he was the chairman of President George W. Bush’s Council of Economic Advisers.

Further Reading

Clarida, Richard, Jordi Gali, and Mark Gertler. “The Science of Monetary Policy: A New Keynesian Perspective.” Journal of Economic Literature 37 (1999): 1661–1707.
Goodfriend, Marvin, and Robert King. “The New Neoclassical Synthesis and the Role of Monetary Policy.” In Ben S. Bernanke and Julio Rotemberg, eds., NBER Macroeconomics Annual 1997. Cambridge: MIT Press, 1997, pp. 231–283.
Mankiw, N. Gregory, and David Romer, eds. New Keynesian Economics. 2 vols. Cambridge: MIT Press, 1991.
