This is my archive


Education

K-12

In the 1980s, economists puzzled by a decline in the growth of U.S. productivity realized that American schools had taken a dramatic turn for the worse. After rising every year for fifty years, student scores on a variety of achievement tests dropped sharply in 1967. They continued to decline through 1980. The decline was so severe, John Bishop calculates, that students graduating in 1980 had learned “about 1.25 grade-level equivalents less than those who graduated in 1967.”1 Although achievement levels began to recover in 1980, the recovery has been weak and student achievement has yet to regain 1967 levels. By the turn of the century, conservative estimates of the economic growth lost due to the academic achievement decline were on the order of 3.6 percent of the 2000 gross national product.

The characteristics of the educational productivity decline challenged widely accepted educational theories of school performance. Theorists accustomed to blaming increased poverty, family instability, large class size, and insufficient spending for poor school performance could not explain why the scores of more able students declined at least as much as those of less able ones, or why measures of inferential ability and problem solving declined more than those of simpler tasks such as arithmetic computation. Achievement fell even though the average U.S. class size shrank from twenty-seven in 1955 to fifteen in 1995.2 The decline affected students primarily after the third grade. First-graders continued to arrive at school better prepared than in any preceding generation. The decline was more pronounced at suburban schools than at inner-city ones, afflicted both private and public schools, and was larger for whites than for minorities in more-advanced grades.

The achievement decline cannot be blamed on inadequate spending. Between 1960 and 1995, annual per pupil spending in the United States rose from $2,122 to $6,434 in inflation-adjusted 1995 dollars.3 By 1999, the United States was spending an average of $7,397 per K–12 student. Spending in other industrialized countries averaged $4,850. Only Switzerland, at $8,194 per pupil, spent more than the United States.4 In industrialized countries, student scores on the Third International Mathematics and


Efficient Capital Markets

The efficient markets theory (EMT) of financial economics states that the price of an asset reflects all relevant information that is available about the intrinsic value of the asset. Although the EMT applies to all types of financial securities, discussions of the theory usually focus on one kind of security, namely, shares of common stock in a company. A financial security represents a claim on future cash flows, and thus the intrinsic value is the present value of the cash flows the owner of the security expects to receive.1 Theoretically, the profit opportunities represented by the existence of “undervalued” and “overvalued” stocks motivate investors to trade, and their trading moves the prices of stocks toward the present value of future cash flows. Thus, investment analysts’ search for mispriced stocks and their subsequent trading make the market efficient and cause prices to reflect intrinsic values. Because new information is randomly favorable or unfavorable relative to expectations, changes in stock prices in an efficient market should be random, resulting in the well-known “random walk” in stock prices. Thus, investors cannot earn abnormally high risk-adjusted returns in an efficient market where prices reflect intrinsic value. As Eugene Fama (1991) notes, market efficiency is a continuum. The lower the transaction costs in a market, including the costs of obtaining information and trading, the more efficient the market. In the United States, reliable information about firms is relatively cheap to obtain (partly due to mandated disclosure and partly due to technology of information provision) and trading securities is cheap. For those reasons, U.S. security markets are thought to be relatively efficient. The informational efficiency of stock prices matters in two main ways. First, investors care about whether various trading strategies can earn excess returns (i.e., “beat the market”). Second, if stock prices accurately reflect all information, new investment capital goes to its highest-valued use. French mathematician Louis Bachelier performed the first rigorous analysis of stock market returns in his 1900 dissertation. This remarkable work documents statistical independence in stock returns—meaning that today’s return signals nothing about the sign or magnitude of tomorrow’s return—and this led him to model stock returns as a random walk, in anticipation of the EMT. Unfortunately, Bachelier’s work was largely ignored outside mathematics until the 1950s. One of the first to recognize the potential information content of stock prices was John Burr Williams (1938) in his work on intrinsic value, which argues that stock prices are based on economic fundamentals. The alternative view, which dominated prior to Williams, is probably best exemplified by John Maynard Keynes’s beauty contest analogy, in which each stock analyst recommends not the stock he thinks best, but rather the stock he thinks most other analysts think is best. In Keynes’s view, therefore, stock prices are based more on speculation than on economic fundamentals. In the long run, prices driven by speculation may converge to those that would exist based on economic fundamentals, but, as Keynes noted in another context, “in the long run we are all dead.” Stock returns and their economic meaning received scant attention before the 1950s because there was little appreciation of the role of stock markets in allocating capital. 
This oversight had several contributing factors: (1) Keynes’s emphasis on the speculative nature of stock


Electricity and Its Regulation

Americans consumed 3,463 billion kilowatt-hours (kWh) of electricity in 2002, with a delivered value of $249.6 billion. Thirty-seven percent of it was consumed by households, 32 percent by commercial users, and 28 percent by industrial users.1 Adjusted for inflation, its price fell by 36 percent between 1983 and 2004.2 Most electricity is generated when high-pressure steam rotates a turbine to induce an alternating current into a wire. In 2002, 50.1 percent of U.S. electricity was produced from coal, 17.9 percent from natural gas, 20.2 percent in nuclear units, 6.6 percent as hydroelectricity, and 2.3 percent from “renewable resources” such as wind and solar.3 Newly generated power passes through substations that lower its voltage prior to consumption by final (retail) consumers.

Important characteristics of electricity limit the possibilities for markets. First, reserve power plants must always be operating to instantly replace generators or transmission lines that fail. Centralized control (usually by computers) is required to meet both predictable and unforeseen changes in regional conditions. Power cannot be economically stored, and area-wide blackouts occur if production either exceeds or falls short of demand for as little as a second. Second, duplication of facilities is inefficient because a single high-capacity line minimizes both capital cost per megawatt (MW) transferred and line losses due to resistance. A typical large utility (or group of them) is responsible for reliability and economical operation in its defined “control area.” Each control area is interconnected with neighboring ones to facilitate emergency support, coordinated operations, and power purchases and sales. Third, an injection of power flows through the entire network according to Ohm’s and Kirchhoff’s laws. Unlike water or gas, it cannot be directed down a single path. If a Utah generator sells power to a Wyoming user, only a small fraction of it flows directly between them. Because the power from Utah flows everywhere, it can overload lines in California and force Californians to curtail their own beneficial transactions.

Scale economies and reliability concerns left electricity dominated by large, vertically integrated utilities; that is, utilities that generated, transmitted, and distributed power. Direct competition had vanished by the 1920s as municipal franchise grants left nearly every city with a single utility. Between 1907 and 1940, all states formed regulatory commissions whose authority replaced that of cities. The reasons for the change are unclear: utilities may have sought protection from opportunistic city governments, or from competition in general.

“Cost of service” regulation sets retail rates to recover expenses and give a “fair” return on capital. Problems in allocating common costs, as well as politics, allow latitude in setting rates for different customers. State regulators generally also require utilities to serve all customers and to plan facility additions in anticipation of growth. The Federal Energy Regulatory Commission (FERC) oversees “wholesale” or “bulk” transactions that occur prior to state-jurisdictional retail sales. The Federal Power Act requires that wholesale prices (including transmission charges) be cost based, but in practice, FERC simply accepts prices set by markets that meet its standards for competition. FERC’s general policy has been to expand the role of markets and to decrease direct regulation subject to the law’s limits, regardless of which party controls the government.
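The “cost of service” approach described above can be made concrete with a small, purely illustrative calculation. Every number and name below is hypothetical (not drawn from any actual rate case); the sketch only shows how a revenue requirement and an average retail rate follow mechanically from expenses, the rate base, and the allowed return.

```python
# Illustrative cost-of-service calculation with hypothetical numbers.
# Revenue requirement = operating expenses + allowed return on the rate base;
# the average retail rate spreads that requirement over forecast sales.

operating_expenses = 600_000_000      # fuel, wages, maintenance ($/year) -- assumed
rate_base = 2_000_000_000             # depreciated value of plant in service ($) -- assumed
allowed_return = 0.09                 # commission-approved "fair" rate of return -- assumed
forecast_sales_kwh = 15_000_000_000   # expected retail sales (kWh/year) -- assumed

revenue_requirement = operating_expenses + allowed_return * rate_base
average_rate_cents = 100 * revenue_requirement / forecast_sales_kwh

print(f"Revenue requirement: ${revenue_requirement:,.0f} per year")
print(f"Average retail rate: {average_rate_cents:.2f} cents/kWh")
```

In practice the politically contentious step is not this arithmetic but the allocation of common costs among residential, commercial, and industrial customers, which is where the latitude mentioned above comes in.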
Electricity’s ownership structure is complex. In 1998, America’s 239 corporate utilities made 74.9 percent of retail


Empirics of Economic Growth

Why are some countries rich and others poor? Why do some countries experience sustained levels of high growth that propel them into the ranks of the rich while others stagnate, seemingly in perpetuity? These are perhaps the most fascinating and important questions in all of economics. Since the late 1980s, economists have done extensive work on the determinants of economic growth. As yet, however, there are few widely agreed-on results. The lack of consensus is unfortunate because increasing the growth rates of the world’s many poor countries is a primary global policy goal.

We do have at least two natural experiments in which a single nation was bisected by very different forms of government: the two Germanys from the end of World War II to reunification in 1990, and the two Koreas. In both cases, the government that allowed private property and free (at least compared with its counterpart government) enterprise oversaw an economic “miracle,” while the more totalitarian governments in the pairings each produced decades of stagnation and poverty. Because, in each case, the people and their situations were so similar before the change that split them up, we get as close as we can ever get in the real world to a laboratory experiment without a laboratory—a fact that makes these findings significant.

Economists know that there is some level of government intervention so great that it stifles economic growth, causing economies under it to do poorly. But when economists use statistical analysis on large samples, other differences between countries that are hard to measure (e.g., culture) can be relevant, and the results are not as straightforward. We can show that factors such as private property rights and lack of corruption (or, as it is sometimes called, the “rule of law”) are strongly correlated with high income, but it is difficult to show that they are correlated with current growth. More on that later.

What do economists know about the causes of growth? Begin by looking at the world income distribution in the year 2000. Here, countries are the unit of analysis, which means that Uganda counts as much as China, despite China’s much greater population. Most economists take this approach, although some look at either population-weighted distributions or worldwide distributions of individuals’ incomes.1

Figure 1. The World Income Distribution in 2000

Figure 1 displays the distribution of per capita national income in U.S. dollars adjusted for deviations from purchasing power parity for 185 countries in the year 2000. The data are from the World Bank’s “World Development Indicators” online database. Ignoring the extreme outlier of Luxembourg (the $51,000 observation), the income ratio between the second-highest-income country (the United States) and the lowest-income country is about 77. There are 31 countries in the sample whose incomes are


Energy

Most of the energy consumed in America today is produced from the combustion of fossil fuels, primarily oil, coal, and natural gas. Energy can be generated, however, in any number of ways. Figure 1 indicates the sources of energy employed by the American economy as of February 2004.

Figure 1. U.S. Energy Sources, 2004

The economy has become more efficient at using energy over time. In 1949, the U.S. economy required 20,620 British thermal units (Btu, a common energy measurement) to produce an inflation-adjusted dollar of domestic goods and services. By 2002, only 10,310 Btu were required to do the same.1

In a free market, cost dictates energy choices. Fossil fuels, for example, are economically attractive for many applications because the energy available from fossil fuels is highly concentrated, easily transportable, and cheaply extracted. Renewable energies such as wind and solar power, on the other hand, are relatively dispersed, difficult to transport, and costly to harness given the capital costs of facility construction.

Many people recommend accelerated federal subsidies and preferences for renewable energy in order to reduce America’s dependence on imported oil. But such recommendations fail to appreciate the fact that energy sources are often difficult to substitute for one another. Until we see major technological advances in electric-powered vehicles and related battery systems, for example, technological breakthroughs in solar or wind power will have little, if any, impact on oil imports. That is because renewable energy is used primarily to generate electricity and cannot be used directly in transportation to replace oil: in 2002, only 2.5 percent of America’s total electricity was generated from oil combustion.2 The main impediment to the commercial viability of electric vehicles is the cost and operation of the vehicle’s power train, not the cost of the electricity necessary as an input to that power train.

Oil Depletion

One of the recurring policy fights concerning energy is what the government should do about the depletion of economically attractive crude oil reserves. Underlying this fight, however, is a dispute about whether oil is in danger of becoming scarcer in the foreseeable future. One camp (primarily geologists) argues that few, if any, major new oil fields remain to be found and that mathematical calculation demonstrates that production will peak at some point in the not-too-distant future and then begin a slow but steady decline.3 Another camp (primarily economists) contends that reserves are as much an economic as a geologic phenomenon. That is, reserves are discovered and counted when it makes economic sense to find them. Thus, we do not know how much economically profitable oil has yet to be “discovered.” Technological advances are adding reserves at a far greater rate than they are being depleted.4 For example, in 1970, non-OPEC countries had about 200 billion barrels in reserves. Through 2003, they had produced 460 billion barrels and still had 209 billion barrels remaining.5 Although the debate is inconclusive, the weight of the evidence suggests that economists have the better argument.6

Another dispute concerns whether price signals alone are sufficient to efficiently move from one set of energy resources to another if the need arises. Some have maintained that consumers do not change their behavior much in response to price increases. The claim is true in the short run, but not true over the course of several years.
Economists estimate that a 10 percent increase in oil prices reduces the amount demanded in the short run by about 1 percent. Over the long run, however, a 10 percent increase in oil prices reduces the amount demanded by about 10 percent.7
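A minimal sketch of the elasticity arithmetic above, using a constant-elasticity demand approximation: the short-run and long-run elasticities of roughly −0.1 and −1.0 are implied by the 1 percent and 10 percent responses cited in the text, while the baseline demand index is an arbitrary placeholder.

```python
# Constant-elasticity approximation: Q_new = Q_old * (P_new / P_old) ** elasticity.

def demand_after_price_change(q_old: float, price_ratio: float, elasticity: float) -> float:
    return q_old * price_ratio ** elasticity

baseline = 100.0      # arbitrary index of oil demand
price_ratio = 1.10    # a 10 percent price increase

short_run = demand_after_price_change(baseline, price_ratio, -0.1)   # about a 1% drop
long_run = demand_after_price_change(baseline, price_ratio, -1.0)    # roughly a 10% drop

print(f"Short run: demand index falls from {baseline:.0f} to {short_run:.1f}")
print(f"Long run:  demand index falls from {baseline:.0f} to {long_run:.1f}")
```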


Economic Freedom

For well over a hundred years, the economic world has been engaged in a great intellectual debate. On one side of this debate have been those philosophers and economists who advocate an economic system based on private property and free markets—or what one might call economic freedom. The key ingredients of economic freedom are personal choice, voluntary exchange, freedom to compete in markets, and protection of person and property. Institutions and policies are consistent with economic freedom when they allow voluntary exchange and protect individuals and their property. Governments can promote economic freedom by providing a legal structure and a law-enforcement system that protect the property rights of owners and enforce contracts in an evenhanded manner. However, economic freedom also requires governments to refrain from taking people’s property and from interfering with personal choice, voluntary exchange, and the freedom to enter and compete in labor and product markets. When governments substitute taxes, government expenditures, and regulations for personal choice, voluntary exchange, and market coordination, they reduce economic freedom. Restrictions that limit entry into occupations and business activities also reduce economic freedom. Adam Smith was one of the first economists to argue for a version of economic freedom, and he was followed by a distinguished line of thinkers that includes John Stuart Mill, Ludwig von Mises, Friedrich A. Hayek, and Milton Friedman, as well as economists such as Murray Rothbard. On the other side of this debate are people hostile to economic freedom who instead argue for an economic system characterized by centralized economic planning and state control of the means of production. Advocates of an expanded role for the state include Jean-Jacques Rousseau and Karl Marx and such twentieth-century advocates as Abba Lerner, John Kenneth Galbraith, Michael Harrington, and Robert Heilbroner. These scholars argue that free markets lead to monopolies, chronic economic crises, income inequality, and increasing degradation of the poor, and that centralized political control of people’s economic lives avoids these problems of the marketplace. They deem economic life simply too important to be left up to the decentralized decisions of individuals. In the early twentieth century, state control grew as communism and fascism spread. In the United States, the New Deal significantly expanded the role of the state in people’s economic lives. In the late 1970s and early 1980s, economic freedom staged a comeback, with deregulation, privatization, and tax cuts. Of course, the major increase in economic freedom came with the fall of the Soviet Union. Today, the advocates of freedom dominate the debate. In fact, one major socialist, the late Robert Heilbroner, believed that the advocates of freedom have won (see socialism). Substantial evidence has informed the debate. Indeed, the stark differences in the standards of living of people in economically freer systems compared with those in less-free systems have become more and more obvious: North versus South Korea, East versus West Germany, Estonia versus Finland, and Cubans living in Miami versus Cubans living in Cuba are examples. In each case, people in the freer economy have better lives, in virtually every way, than their counterparts in the less-free economies. Measuring Economic Freedom The above comparisons are suggestive. 
But is it possible to find a relationship between economic freedom and prosperity over a wider range of nations? In the 1980s, scholars began to measure and rate economies based on their degree of economic freedom. Organizations such as Freedom House, the Heritage Foundation, and the Fraser Institute, as well as individual scholars, published “economic freedom indexes” attempting to quantify economic freedom. They came up with an ambitious, and necessarily blunt, measure. In 1996, the Fraser Institute, along with a network of other think tanks, began publishing the Economic Freedom of the World (EFW) annual reports, which present an economic freedom index for more than 120 nations. Using data from the World Bank, International Monetary Fund, Global Competitiveness Report, International Country Risk Guide, PricewaterhouseCoopers, and others, the report rates countries on a zero-to-ten scale. Higher scores indicate greater economic freedom. The overall index is based on ratings in five broad areas. Counting the various subcomponents, the EFW index uses thirty-eight distinct pieces of data. Each subcomponent is placed on a scale from zero to ten that reflects the range of the underlying data. The component ratings within each area are averaged to derive ratings for each of the five areas. In turn, the summary rating is the average of the five area ratings. The five major areas are: • Size of government. To get high ratings in this area, governments must tax and spend modestly, and marginal tax rates must be relatively low. While governments are important in protecting property rights, enforcing contracts,


Distribution of Income

The distribution of income lies at the heart of an enduring issue in political economy—the extent to which government should redistribute income from those with more income to those with less. Whether government should redistribute income is a normative question, and each person’s answer will depend on his or her values. But for many people, answering the normative question requires understanding the facts about the current income distribution. The term “income distribution” is a statistical concept. No one person is distributing income. Rather, the income distribution arises from people’s decisions about work, saving, and investment as they interact through markets and are affected by the tax system. The 1990s and early 2000s witnessed the establishment of a growing body of work, increasingly precise, describing how the income distribution has changed. This work can be summarized in three points: • The distribution of pretax income in the United States today is highly unequal. The most careful studies suggest that the top 10 percent of households, with average income of about $200,000, received 42 percent of all pretax money income in the late 1990s. The top 1 percent of households, averaging $800,000 of income, received 15 percent of all pretax money income. • In the longer view, the path of income inequality over the twentieth century is marked by two main events: a sharp fall in inequality around the outbreak of World War II and an extended rise in inequality that began in the mid-1970s and accelerated in the 1980s. Income inequality today is about as large as it was in the 1920s. • Over multiple years, family income fluctuates, and so the distribution of multiyear income is moderately more equal than the distribution of single-year income. Trends in Inequality The most frequently cited statistics come from the U.S. Census Bureau’s Current Population Survey (CPS), the monthly household survey best known as the source of the official unemployment rate. Since 1948, the March edition of the CPS has collected household income information for the previous year, as well as the personal characteristics of household members—their age, education, occupation, and industry (if they work), and other data that help give insight into changing income patterns. Although this makes the CPS an indispensable statistical source, it has disadvantages as well. The CPS uses a restricted income definition: pretax money receipts excluding capital gains. This definition is further restricted by a “cap,” currently $999,999, imposed on reported annual earnings for reasons of confidentiality.1 Together, these problems mean that CPS estimates of inequality omit the effects of taxes, nonmoney income such as government and private health insurance, and the portion of individual earnings that exceeds the cap. A second source of inequality statistics is the U.S. Treasury’s Statistics of Income (SOI), which summarizes income reported on federal income tax returns. SOI data contain no personal data on taxpayers such as age or education, and they cannot describe the precise shape of the lower part of the income distribution.2 The strengths of SOI data are their ability to accurately describe the upper part of the distribution—SOI income data are not “capped”—and to extend this description back to 1917, thirty years before CPS statistics begin. 
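The effect of the CPS earnings “cap” mentioned above is easy to see in a toy example. The incomes below are invented purely for illustration; the point is only that top-coding at $999,999 mechanically shrinks measured top-end income shares, which is one reason CPS-based figures understate inequality relative to the uncapped SOI data.

```python
# Toy illustration of top-coding: earnings above the cap are recorded as the cap,
# lowering the measured share of income received by the highest earners.

CAP = 999_999

incomes = [28_000, 41_000, 55_000, 72_000, 95_000,
           130_000, 180_000, 260_000, 900_000, 4_500_000]   # hypothetical households

capped = [min(x, CAP) for x in incomes]

def top_share(values, top_n=1):
    ordered = sorted(values)
    return sum(ordered[-top_n:]) / sum(ordered)

print(f"Top household's share, uncapped: {top_share(incomes):.1%}")
print(f"Top household's share, capped:   {top_share(capped):.1%}")
```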
Table 1 contains selected information on CPS measures of family and household income inequality since World War II.3 The upper panel describes income patterns across families: living units occupied by two or more related persons. The lower panel describes income patterns across households: all occupied living units including families, persons who live alone, unrelated roommates, and so on. To form each distribution, the sample of families (or households) is listed in order of increasing income. The Census then calculates the fraction of all family income going to the quintile (one-fifth) of families with the lowest incomes, the quintile of families with the second-lowest incomes, and so on, as well as the share going to the highest 5 percent of families (who also are included in the top quintile).

The CPS data in Table 1 trace a J-shaped evolution of post–World War II inequality. In 1947, the top quintile of families received $8.60 for every dollar of income received by the bottom quintile. This ratio fell gradually through the 1950s and 1960s until 1969, when it reached $7.25 to $1.00—the low point of inequality. Beginning in the late 1970s, the ratio began to rise again until, by 2002, it had increased to $11.36 to $1.00, significantly greater than in 1947. Household data tell a similar story.

To make these trends more concrete, Table 1 includes the 1947, 1979, and 2001 income levels that divide each quintile from the next. Similar data are presented for households, and all income levels are expressed in 2003 dollars. Between 1947 and the mid-1970s, income grew rapidly at all points in the distribution, resulting in both rising living standards and moderating inequality. After the mid-1970s, average income grew much more slowly, and the growth that did occur was concentrated in the distributions’ upper half. Between 1979 and 2001, the income dividing the first and second family quintiles grew slightly, from $22,280 to $24,000 (7.7 percent), while the income dividing the fourth and fifth quintiles grew from $74,470 to $94,150 (26.4 percent). Now, as in the 1970s, a majority of families would describe themselves as middle class, but the “middle class” is now a larger, more diverse concept than it once was.4

Inequality estimates based on the U.S. Treasury’s SOI data expand on this picture. At the outset, SOI data do not “cap” high incomes, so household income inequality as reported in the SOI is significantly larger.5 Using “capped” statistics, the CPS reports that the top one-fifth of households receives 49 percent of all pretax money income. The SOI data estimate, more accurately, that the top one-tenth of households, with average annual income of about $200,000, receives 42 percent of total pretax money income. The top 1 percent of households, with average annual incomes of about $800,000, receives 15 percent of all pretax income. With their longer historical perspective, SOI statistics also show that inequality in the 1920s and 1930s was as high as it is today. Beginning in 1938, the income share of the top one-tenth of households fell from 43 percent to about 32 percent, where it remained until the deep blue-collar recession of the early 1980s. At that point, inequality began its return to the levels of the 1920s and early 1930s.6
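The top-to-bottom ratios quoted above ($8.60 to $1.00 in 1947, $11.36 to $1.00 by the early 2000s) follow directly from the quintile shares reported in Table 1 below: because the quintiles contain equal numbers of families, the ratio of their income shares equals the ratio of their average incomes. A minimal check using the table’s share figures (the 2001 shares reproduce the ratio the text cites for 2002):

```python
# Ratio of the top quintile's average income to the bottom quintile's income:
# with equal-sized quintiles, this is simply the ratio of their income shares.

family_shares = {
    1947: {"bottom": 5.0, "top": 43.0},
    1969: {"bottom": 5.6, "top": 40.6},
    2001: {"bottom": 4.2, "top": 47.7},
}

for year, s in family_shares.items():
    ratio = s["top"] / s["bottom"]
    print(f"{year}: top quintile received ${ratio:.2f} for every $1.00 in the bottom quintile")
```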
Table 1. Family and Household Income Distributions (Census Definitions)

*. Original table rearranged and bracketed headings added here for clarity.
†. Data available only since 1967.

A. Shape of the Family Income Distribution (share of all family income going to each one-fifth [quintile] of families)

Income shares (percent; the top 5 percent is contained in the fifth quintile):

         1st (lowest)   2nd     3rd     4th     5th (highest)   Top 5%
1947         5.0        11.9    17.0    23.1        43.0         17.5
1959         4.9        12.3    17.9    23.8        41.1         15.9
1969         5.6        12.4    17.7    23.7        40.6         15.6
1979         5.4        11.6    17.5    24.1        41.4         15.3
1989         4.6        10.6    16.5    23.7        44.6         17.9
2001         4.2         9.7    15.4    22.9        47.7         21.0

Upper bound of each quintile and lower bound of the top 5 percent (2003 dollars):

         1st         2nd         3rd         4th         5th     Top 5% (lower bound)
1947     $11,088     $17,893     $24,263     $34,427     na.     $56,506
1979     $23,171     $38,102     $53,979     $74,329     na.     $119,243
2001     $24,960     $42,772     $65,000     $97,916     na.     $170,668

B. Shape of the Household Income Distribution (share of all households’ income going to each one-fifth [quintile] of households)†

Income shares (percent; the top 5 percent is contained in the fifth quintile):

         1st (lowest)   2nd     3rd     4th     5th (highest)   Top 5%
1967         4.0        10.8    17.3    24.2        43.8         17.5
1979         4.2        10.3    16.9    24.7        44.0         16.4
1989         3.8         9.5    15.8    24.0        46.8         18.9
2001         3.5         8.7    14.6    23.0        50.1         22.4

Upper bound of each quintile and lower bound of the top 5 percent (2003 dollars):

         1st         2nd         3rd         4th         5th     Top 5% (lower bound)
1967     $14,002     $27,303     $38,766     $55,265     na.     $88,678
1979     $16,457     $30,605     $47,018     $68,318     na.     $111,445
2001     $17,970     $33,314     $53,000     $83,500     na.     $150,499

The Causes of Inequality

In one sense, the growth of inequality in the last part of the twentieth century comes as a surprise. In the 1950s, the bottom part of the income distribution contained large concentrations of two kinds of families: farm families whose in-kind income was not counted in Census data, and elderly families, many of whom were ineligible for the new Social Security program. Over subsequent decades, farm families declined as a proportion of the population while increased Social Security benefits and an expanding private pension system lifted elderly incomes. Both trends favored greater income equality but were outweighed by four main factors.

• Family structure. Over time, the two-parent, one-earner family was increasingly replaced by low-income single-parent families and higher-income two-parent, two-earner families. A part of the top quintile’s increased share of income reflects the fact that the average family or household in the top quintile contains almost three times as many workers as the average family or household in the bottom quintile.

• Trade and technology. Trade and technology increasingly shifted demand away from less-educated and less-skilled workers toward workers with higher education or particular skills. The result was a growing earnings gap between more- and less-educated/skilled workers.

• Expanded markets. With improved communications and transportation, people increasingly functioned in national, rather than local, markets. In these broader markets, persons with unique talents could command particularly high salaries.

• Immigration. In 2002, immigrants who had entered the country since 1980 constituted nearly 11 percent of the labor force (see immigration). A relatively high proportion of these immigrants had low levels of education and increased the number of workers competing for low-paid work.7

These factors, however, can explain only part of the increase in inequality.
One other factor that explains the particularly high incomes of the highest-paid people is that between 1982 and 2004, the ratio of pay of chief executive officers to pay of the average worker rose from 42:1 to 301:1, and pay of other high-level managers, lawyers, and people in other fields rose substantially also.

Does Measurement Matter?

As noted above, both CPS and SOI statistics measure pretax money income. These measurements are deficient for three reasons. First, increases in governmental aid to the poor have been concentrated in nonmoney benefits such as Medicaid and food stamps and through tax credits under the Earned Income Tax Credit (EITC). Nonmoney benefits are excluded from standard statistics, and EITC tax credits are typically underreported. Second, an increasing proportion of wage-earners’ total compensation goes to health insurance and pension benefits—which are not counted in standard statistics. Third, taxes themselves modify the income distribution.

The U.S. Census has attempted to correct these definition problems for recent years by estimating the household income distribution under alternative income definitions. Table 2 shows the effect in 2001 of moving from the standard Census definition (pretax money excluding capital gains) to an adjusted definition that includes the estimated effects of capital gains, taxes, the EITC, and the monetary value of private and governmental nonmoney benefits. The result is a substantial reduction in inequality, with the ratio between incomes in the top and bottom quintiles falling from $14.31:$1.00 to $10.40:$1.00. Similar adjustments for selected earlier years indicate that better income measurement reduces inequality in any single year. Even under the adjusted definition, though, the trend toward increasing inequality in the 1980s and 1990s remains, but at a slower pace.

Table 2. Shape of the Household Income Distribution Under Alternative Income Definitions for 2001 (share of income going to each quintile of households)

*. Original table rearranged and bracketed headings added here for clarity.
†. Standard Census Income is defined as pretax money income excluding capital gains.

Income shares (percent):

                            1st    2nd     3rd     4th     5th
Standard Census income†     3.5    8.7    14.6    23.0    50.1
Adjusted Census income      4.5   10.3    15.6    22.6    47.0

Upper bound of quintile (2003 dollars):

                            1st         2nd         3rd         4th
Standard Census income†     $18,618     $34,780     $55,105     $86,914
Adjusted Census income      $21,334     $35,485     $51,747     $75,195

Note: Adjusted Census Income is based on pretax money income including estimated capital gains, less all taxes paid plus the estimated receipt of the Earned Income Tax Credit plus the imputed value of in-kind income from employer-provided health insurance and government nonmoney benefits like food stamps, Medicare and Medicaid, and free school lunches.

Table 3. Mobility Within the Family Income Distribution

                            Quintile in 1998
                            1st (lowest)   2nd      3rd      4th      5th (highest)
First Quintile in 1988         53.30%     23.60%   12.40%    6.40%       4.30%
Fifth Quintile in 1988          3.00%      5.70%   14.90%   23.20%      53.20%

Source: Katherine Bradbury and Jane Katz, “Are Lifetime Incomes Growing More Unequal? New Evidence on Family Income Mobility,” Federal Reserve Bank of Boston Regional Review 12, no. 4 (2002).

Inequality and Mobility

A second offset to estimated inequality is economic mobility.
Because most family incomes increase as people’s careers develop, long-run incomes are more equal than standard single-year statistics suggest. Table 3 summarizes the results of one study of recent family income mobility.8 Among families in the bottom quintile in 1988, half were in the bottom quintile ten years later, a quarter had moved up to the second quintile, and a quarter had moved to the third or higher quintiles. Families in the fifth quintile (highest incomes) show a similar mobility over time.

The Economic Case for Inequality of Wages and Incomes

David R. Henderson

Is inequality of wages and incomes bad? The question seems ludicrous. Of course inequality is bad, isn’t it? Actually, no. What matters crucially is how the inequality came about. Inequality of wages and incomes is clearly bad if it results from government privileges. Many people would find such an outcome unjust, but even more important to many economists is that such inequality sets up perverse incentives. Instead of producing valuable products and services for their fellow citizens, as people tend to do in free economies, people in societies based on government-granted privileges devote much of their effort to pleasing, or outright bribing, government officials. In many African countries, for example, such as Côte d’Ivoire, Ghana, and Zaire, there are stark inequalities because the government has the power to take a high percentage of the wealth of the already poor and give a large amount of it to government officials or their cronies. And in many Latin American countries, for many decades a few families have had most of the wealth and have used government power to cement their privileges.

But inequality in wages and incomes in relatively free economies serves two important social functions. First, it gives people strong incentives to produce so as to make higher incomes and wages. Second, it gives people, and not just young people, strong incentives to get training or education that will allow them to perform well in higher-wage jobs. In his January 1999 Richard T. Ely lecture, economist Finis Welch put the point as follows:

Wages play many roles in our economy; along with time worked, they determine labor income, but they also signal relative scarcity and abundance, and with malleable skills, wages provide incentives to render the services that are most highly valued. (Welch 1999, p. 1)

Further Reading

Mbaku, John Mukum. “Bureaucratic Corruption in Africa: The Futility of Cleanups.” Cato Journal 16, no. 1 (1996): 99–118.
Rothbard, Murray. “Egalitarianism as a Revolt against Nature.” Available online at: http://www.lewrockwell.com/rothbard/rothbard31.html.
Welch, Finis. “In Defense of Inequality.” American Economic Review 89, no. 2 (1999): 1–17.

About the Author

Frank Levy is the Daniel Rose Professor of Urban Economics in MIT’s Department of Urban Studies and Planning.

Further Reading

Charles, Kerwin Kofi, and Erik Hurst. “Correlation of Wealth Across Generations.” Journal of Political Economy 111, no. 6 (2003): 1155–1182.
Fortin, Nicole M., and Thomas Lemieux. “Institutional Changes and Rising Wage Inequality: Is There a Linkage?” Journal of Economic Perspectives 11, no. 2 (1997): 75–96.
Gottschalk, Peter. “Inequality, Income Growth and Mobility: The Basic Facts.” Journal of Economic Perspectives 11, no. 2 (1997): 21–40.
Johnson, George E. “Changes in Earnings Inequality: The Role of Demand Shifts.” Journal of Economic Perspectives 11, no. 2 (1997): 41–54.
Saez, Emmanuel. “Income and Wealth Concentration in a Historical and International Perspective.” In Alan J. Auerbach, David E. Card, and John M. Quigley, eds., Public Policy and the Income Distribution. New York: Russell Sage Foundation, 2006. Also available at http://emlab.berkeley.edu/users/saez/berkeleysympo2.pdf.
Topel, Robert. “Factor Proportions and Relative Wages: The Supply Side Determinants of Wage Inequality.” Journal of Economic Perspectives 11, no. 2 (1997): 55–74.

Footnotes

1. That is, earnings greater than $999,999 are reported as $999,999.
2. SOI data are combined with national income accounts estimates of total personal income received in the economy to calculate the share of all personal income received by the top 1 percent of households, the top 10 percent of households, and so on. Because many lower-income households do not pay federal income taxes, the SOI cannot provide similar detail on, say, the share of income received by the 10 percent of households with the lowest incomes.
3. Household and family income data are available online at http://www.census.gov/hhes/income/histinc/histinctb.html.
4. The connection between “class” and the income distribution is complicated by the fact that the distribution includes families of all ages, ranging from married students to retirees, while our stereotype of a middle-class income is based on families in their prime earning years.
5. The SOI data are based on tax filing units, a concept that is reasonably close to the Census’s definition of household.
6. See Thomas Piketty and Emmanuel Saez, “Income Inequality in the United States, 1913–1998,” Quarterly Journal of Economics 118, no. 1 (2003): 1–39.
7. See Robert Lerman, “U.S. Income Inequality Trends and Recent Immigration,” American Economic Review 89, no. 2 (1999): 23–38.
8. Katherine Bradbury and Jane Katz, “Are Lifetime Incomes Growing More Unequal? New Evidence on Family Income Mobility,” Federal Reserve Bank of Boston Regional Review 12, no. 4 (2002): 3–5.


Disaster and Recovery

Defeated in battle and ravaged by bombing in the course of World War II, Germany and Japan nevertheless made postwar recoveries that startled the world. Within ten years these nations were once again considerable economic powers. A decade later, each had not only regained prosperity but had also economically overtaken, in important respects, some of the war’s victors.

The surprising swiftness of recovery from disaster was also noted in previous eras. John Stuart Mill commented on

what has so often excited wonder, the great rapidity with which countries recover from a state of devastation; the disappearance, in a short time, of all traces of the mischiefs done by earthquakes, floods, hurricanes, and the ravages of war. An enemy lays waste a country by fire and sword, and destroys or carries away nearly all the moveable wealth existing in it: all the inhabitants are ruined, and yet in a few years after, everything is much as it was before. (Mill 1896, book 1, chap. 5, para. I.5.19)

Still, successful recovery is by no means universal. The ancient Cretan civilization may or may not have been destroyed by earthquake, and the Mayan civilization by disease, but neither recovered. Most famously, of course, the centuries-long Dark Ages followed the fall of Rome.

Sociologists, psychologists, historians, and policy planners have extensively studied the nature, sources, and consequences of disaster and recovery, but the professional economic literature is distressingly sparse. As a telling example, the four thick volumes of The New Palgrave: A Dictionary of Economics (1987) omit these topics entirely. The words “disaster” and “recovery” do not even appear in the index of that encyclopedic work. Yet disasters are natural economic experiments; they parallel the tests to destruction from which engineers and physicists learn about the strength of materials and machines. Much light would be thrown on the normal everyday economy if we understood behavior under conditions of great stress.

The Historical Record

Although everyday small-scale tragedies like auto accidents and disabling illnesses are disastrous enough for those personally involved, our concern here is with events of larger magnitude. It is useful to distinguish between community-wide (middle-scale) calamities such as tornadoes, floods, or bombing raids, and society-wide (large-scale) catastrophes associated with widespread famine, destructive social revolution, or defeat and subjugation after total war. In community-wide disasters the fabric of the larger social order provides a safety net, whereas society-wide catastrophes threaten the very fabric itself. The former may involve hundreds or thousands of deaths; the latter, hundreds of thousands or millions. (As a special case, hyperinflations and great business depressions are society-wide events that do not directly generate massive casualties and yet still have calamitous consequences.)

Middle-scale community-wide disasters are relatively frequent events, making empirical generalizations possible. In such disasters, it has been observed, individuals and communities adapt. Survivors are not helpless victims. Very soon after the shock they begin to help themselves and one another. In the immediate postimpact period community identification is strong, promoting cooperative and unselfish efforts aimed at rescue, relief, and repair. After the San Francisco earthquake of 1989, for example, inhabitants of a poor neighborhood spontaneously helped rescue motorists trapped by a freeway collapse.
And after the


Discrimination

Many people believe that only government intervention prevents rampant discrimination in the private sector. Economic theory predicts the opposite: market mechanisms impose inescapable penalties on profits whenever for-profit enterprises discriminate against individuals on any basis other than productivity. Though bigoted managers may hold sway for a time, in the long run the profit penalty makes profit-seeking enterprises tenacious champions of fair treatment. To see how this works, suppose that male and female hot-dog salesmen are equally productive and that bigoted stadium concessionaires prefer to hire men. The bigger demand for male employees will raise men’s wages, meaning that the concessionaires will have to pay more to hire men than they would to hire equally productive women. The higher wages for men cause employers who insist on all-male workforces to be higher-cost producers. Unless customers are willing to pay more for a hot dog delivered by a man than by a woman, higher costs mean smaller profits. Concessionaires interested in maximizing their profits will forgo prejudice, hire women, reduce their costs, and increase their profits. Even if all concessionaires collude in refusing to hire women, new woman-owned firms can exploit their cost advantage by selling hot dogs for less, an effective way to take away customers. Unless government steps in to protect the bigots from competition, market conditions will end up forcing firms to choose between lower profits and hiring women. Though it may take decades, lower costs for female labor will result in the expansion of equal-opportunity employers. This will increase the demand for female labor and increase women’s wages. Some antiwomen owners may contrive to remain in business, but competition will make their taste for unfair discrimination expensive and will ensure that less of it will occur. An example of the effect of market penalties on prejudicial hiring occurred in South Africa in the early 1900s. In spite of penalties threatened by government and violence threatened by white workers, South African mine owners sought to increase profits by laying off high-priced white workers in order to hire lower-priced black workers. Higher-paying jobs were reserved for whites only after white workers successfully persuaded the government to place extreme restrictions on blacks’ ability to work (see apartheid). Market penalties for discrimination also mitigated the effects of prejudice in the McCarthy era when profit-maximizing producers defied the Motion Picture Academy’s blacklist and secretly hired blacklisted screenwriters. Although government intervention often blunts the market mechanisms that penalize bigotry, people who unequivocally support such intervention often do so because they believe that unfair discrimination exists whenever outcomes for a particular group differ from those of the population as a whole. Economist Thomas Sowell calls the idea that “various groups would be equally represented in institutions and occupations were it not for discrimination . . . the grand fallacy of our times.”1 People differ in their tastes, aptitudes, and childhood experiences, in the skills they acquire from their extended families, and in the geography they must adapt to. People who have lived in cities for generations are less likely to become farmers. Those whose families have spent generations in rural areas may


Economic Growth

Compound Rates of Growth In the modern version of an old legend, an investment banker asks to be paid by placing one penny on the first square of a chessboard, two pennies on the second square, four on the third, etc. If the banker had asked that only the white squares be used, the initial penny would have doubled in value thirty-one times, leaving $21.5 million on the last square. Using both the black and the white squares would have made the penny grow to $92 million billion. People are reasonably good at forming estimates based on addition, but for operations such as compounding that depend on repeated multiplication, we systematically underestimate how quickly things grow. As a result, we often lose sight of how important the average rate of growth is for an economy. For an investment banker, the choice between a payment that doubles with every square on the chessboard and one that doubles with every other square is more important than any other part of the contract. Who cares whether the payment is in pennies, pounds, or pesos? For a nation, the choices that determine whether income doubles with every generation, or instead with every other generation, dwarf all other economic policy concerns. Growth in Income per Capita You can figure out how long it takes for something to double by dividing the growth rate into the number 72. In the twenty-five years between 1950 and 1975, income per capita in India grew at the rate of 1.8 percent per year. At this rate, income doubles every forty years because 72 divided by 1.8 equals 40. In the twenty-five years between 1975 and 2000, income per capita in China grew at almost 6 percent per year. At this rate, income doubles every twelve years. These differences in doubling times have huge effects for a nation, just as they do for our banker. In the same forty-year time span that it would take the Indian economy to double at its slower growth rate, income would double three times—to eight times its initial level—at China’s faster growth rate. From 1950 to 2000, growth in income per capita in the United States lay between these two extremes, averaging 2.3 percent per year. From 1950 to 1975, India, which started at a level of income per capita that was less than 7 percent of that in the United States, was falling even farther behind. Between 1975 and 2000, China, which started at an even lower level, was catching up. China grew so quickly partly because it started so far behind. Rapid growth could be achieved in large part by letting firms bring in ideas about how to create value that were already in use in the rest of the world. The interesting question is why India could not manage the same trick, at least between 1950 and 1975. Growth and Recipes Economic growth occurs whenever people take resources and rearrange them in ways that make them more valuable. A useful metaphor for production in an economy comes from the kitchen. To create valuable final products, we mix inexpensive ingredients together according to a recipe. The cooking one can do is limited by the supply of ingredients, and most cooking in the economy produces undesirable side effects. If economic growth could be achieved only by doing more and more of the same kind of cooking, we would eventually run out of raw materials and suffer from unacceptable levels of pollution and nuisance. Human history teaches us, however, that economic growth springs from better recipes, not just from more cooking. 
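Before turning to recipes, the doubling-time arithmetic above is worth a quick sketch. The rule of 72 approximates the exact doubling time ln(2)/ln(1+g); the growth rates for India (1.8 percent), the United States (2.3 percent), and China (6 percent) are the ones cited in the text.

```python
import math

# Doubling times: rule-of-72 approximation versus the exact value ln(2) / ln(1 + g),
# plus the number of doublings implied over a 40-year span.

def rule_of_72(growth_pct: float) -> float:
    return 72 / growth_pct

def exact_doubling_time(growth_pct: float) -> float:
    return math.log(2) / math.log(1 + growth_pct / 100)

cases = [("India, 1950-1975", 1.8), ("United States, 1950-2000", 2.3), ("China, 1975-2000", 6.0)]

for name, growth in cases:
    approx = rule_of_72(growth)
    exact = exact_doubling_time(growth)
    doublings_in_40_years = 40 / exact
    print(f"{name}: doubles every ~{approx:.0f} years (exact: {exact:.1f}); "
          f"about {doublings_in_40_years:.1f} doublings in 40 years")
```

At India’s 1950–1975 rate, forty years buys roughly one doubling; at China’s later rate, the same forty years buys more than three.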
New recipes generally produce fewer unpleasant side effects and generate more economic value per unit of raw material (see natural resources). Take one small example. In most coffee shops, you can now use the same size lid for small, medium, and large cups of coffee. That was not true as recently as 1995. That small change in the geometry of the cups means that a coffee shop can serve customers at lower cost. Store owners need to manage the inventory for only one type of lid. Employees can replenish supplies more quickly throughout the day. Customers can get their coffee just a bit faster. Although big discoveries such as the transistor, antibiotics, and the electric motor attract most of the attention, it takes millions of little discoveries like the new design for the cup and lid to double a nation’s average income. Every generation has perceived the limits to growth that finite resources and undesirable side effects would pose if no new recipes or ideas were discovered. And every generation has underestimated the potential for finding new recipes and ideas. We consistently fail to grasp how many ideas remain to be discovered. The difficulty is the same one we have with compounding: possibilities do not merely add up; they multiply. In a branch of physical chemistry known as exploratory synthesis, chemists try mixing selected elements together at different temperatures and pressures to see what comes out. About a decade ago, one of the hundreds of compounds discovered this way—a mixture of copper, yttrium, barium, and oxygen—was found to be a superconductor at temperatures far higher than anyone had previously thought possible. This discovery may ultimately have far-reaching implications for the storage and transmission of electrical energy. To get some sense of how much scope there is for more such discoveries, we can calculate as follows. The periodic table contains about a hundred different types of atoms, which means that the number of combinations made up of four different elements is about 100 × 99 × 98 × 97 = 94,000,000. A list of numbers like 6, 2, 1, 7 can represent the proportions for using the four elements in a recipe. To keep things simple, assume that the numbers in the list must lie between 1 and 10, that no fractions are allowed, and that the smallest number must always be 1. Then there are about 3,500 different sets of proportions for each choice of four elements, and 3,500 × 94,000,000 (or 330,000,000,000) different recipes in total. If laboratories around the world evaluated one thousand recipes each day, it would take nearly a million years to go through them all. (If you like these combinatorial calculations, try to figure out how many different coffee drinks it is possible to order at your local shop. Instead of moving around stacks of cup lids, baristas now spend their time tailoring drinks to individual palates.) In fact, the previous calculation vastly underestimates the amount of exploration that remains to be done because mixtures can be made of more than four elements, fractional proportions can be selected, and a wide variety of pressures and temperatures can be used during mixing. Even after correcting for these additional factors, this kind of calculation only begins to suggest the range of possibilities. Instead of just mixing elements together in a disorganized fashion, we can use chemical reactions to combine elements such as hydrogen and carbon into ordered structures like polymers or proteins. 
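The combinatorial arithmetic behind the “nearly a million years” figure can be spelled out in a few lines. The counts below simply follow the text’s assumptions: four distinct elements drawn from about one hundred, roughly 3,500 admissible proportion lists per choice of elements, and one thousand recipes evaluated per day worldwide.

```python
# Order-of-magnitude count of four-element "recipes" under the text's assumptions.

elements = 100
choices_of_four = elements * (elements - 1) * (elements - 2) * (elements - 3)  # about 94 million
proportions_per_choice = 3_500                                                 # as assumed in the text

total_recipes = choices_of_four * proportions_per_choice
recipes_per_day = 1_000
years_to_try_them_all = total_recipes / recipes_per_day / 365

print(f"Choices of four elements: {choices_of_four:,}")
print(f"Total recipes:            {total_recipes:,}")
print(f"Years at 1,000 per day:   {years_to_try_them_all:,.0f}")
```

As the text goes on to note, even this vastly underestimates the space of possibilities, since it ignores mixtures of more than four elements, fractional proportions, and varying temperatures and pressures.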
To see how far this kind of process can take us, imagine the ideal chemical refinery. It would convert abundant, renewable resources into a product that humans value. It would be smaller than a car, mobile so that it could search out its own inputs, capable of maintaining the temperature necessary for its reactions within narrow bounds, and able to automatically heal most system failures. It would build replicas of itself for use after it wears out, and it would do all of this with little human supervision. All we would have to do is get it to stay still periodically so that we could hook up some pipes and drain off the final product.

This refinery already exists. It is the milk cow. And if nature can produce this structured collection of hydrogen, carbon, and miscellaneous other atoms by meandering along one particular evolutionary path of trial and error (albeit one that took hundreds of millions of years), there must be an unimaginably large number of valuable structures and recipes for combining atoms that we have yet to discover.

Objects and Ideas

Thinking about ideas and recipes changes how one thinks about economic policy (and cows). A traditional explanation for the persistent poverty of many less-developed countries is that they lack objects such as natural resources or capital goods. But Taiwan started with little of either and still grew rapidly. Something else must be involved. Increasingly, emphasis is shifting to the notion that it is ideas, not objects, that poor countries lack.

The knowledge needed to provide citizens of the poorest countries with a vastly improved standard of living already exists in the advanced countries (see standards of living and modern economic growth). If a poor nation invests in education and does not destroy the incentives for its citizens to acquire ideas from the rest of the world, it can rapidly take advantage of the publicly available part of the worldwide stock of knowledge. If, in addition, it offers incentives for privately held ideas to be put to use within its borders—for example, by protecting foreign patents, copyrights, and licenses; by permitting direct investment by foreign firms; by protecting property rights; and by avoiding heavy regulation and high marginal tax rates—its citizens can soon work in state-of-the-art productive activities.

Some ideas such as insights about public health are rapidly adopted by less-developed countries. As a result, life expectancy in poor countries is catching up with that in the leaders faster than income per capita. Yet governments in poor countries continue to impede the flow of many other ideas, especially those with commercial value.

Automobile producers in North America clearly recognize that they can learn from ideas developed in the rest of the world. But for decades, car firms in India operated in a government-created protective time warp. The Hillman and Austin cars produced in England in the 1950s continued to roll off production lines in India through the 1980s. After independence, India’s commitment to closing itself off and striving for self-sufficiency was as strong as Taiwan’s commitment to acquiring foreign ideas and participating fully in world markets. The outcomes—grinding poverty in India and opulence in Taiwan—could hardly be more disparate. A poor country like India can achieve enormous increases in standards of living merely by letting in the ideas held by companies from industrialized nations.
With a series of economic reforms that started in the 1980s and deepened in the early 1990s, India has begun to open itself up to these opportunities. For some of its citizens, such as the software developers who now work for firms located in the rest of the world, these improvements in standards of living have become a reality. This same type of opening up is causing a spectacular transformation of life in China. Its growth in the last twenty-five years of the twentieth century was driven to a very large extent by foreign investment by multinational firms.

Leading countries like the United States, Canada, and the members of the European Union cannot stay ahead merely by adopting ideas developed elsewhere. They must offer strong incentives for discovering new ideas at home, and this is not easy to do. The same characteristic that makes an idea so valuable—everybody can use it at the same time—also means that it is hard to earn an appropriate rate of return on investments in ideas. The many people who benefit from a new idea can too easily free ride on the efforts of others.

After the transistor was invented at Bell Laboratories, many applied ideas had to be developed before this basic science discovery yielded any commercial value. By now, private firms have developed improved recipes that have brought the cost of a transistor down to less than a millionth of its former level. Yet most of the benefits from those discoveries have been reaped not by the innovating firms, but by the users of the transistors. In 1985, I paid a thousand dollars per million transistors for memory in my computer. In 2005, I paid less than ten dollars per million, and yet I did nothing to deserve or help pay for this windfall. (A short sketch of the rate of decline this implies appears below.)

If the government confiscated most of the oil from major discoveries and gave it to consumers, oil companies would do much less exploration. Some oil would still be found serendipitously, but many promising opportunities for exploration would be bypassed. Both oil companies and consumers would be worse off. The leakage of benefits such as those from improvements in the transistor acts just like this kind of confiscatory tax and has the same effect on incentives for exploration.

For this reason, most economists support government funding for basic scientific research. They also recognize, however, that basic research grants by themselves will not provide the incentives to discover the many small applied ideas needed to transform basic ideas such as the transistor or Web search into valuable products and services. It takes more than scientists in universities to generate progress and growth. Such seemingly mundane forms of discovery as product and process engineering or the development of new business models can have huge benefits for society as a whole. There are, to be sure, some benefits for the firms that make these discoveries, but not enough to generate innovation at the ideal rate.

Giving firms tighter patents and copyrights over new ideas would increase the incentives to make new discoveries, but might also make it much more expensive to build on previous discoveries. Tighter intellectual property rights could therefore be counterproductive and might slow growth. The one safe measure governments have used to great advantage has been subsidies for education to increase the supply of talented young scientists and engineers. They are the basic input into the discovery process, the fuel that fires the innovation engine.
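As a small illustration, mine rather than the article's, the memory-price figures quoted above already imply roughly a hundredfold drop over twenty years, which compounds out to a price decline of about twenty percent per year; the sketch below simply works that out from the two quoted figures, treating the 2005 figure as an upper bound.

    # Implied compound annual decline in memory price, using the figures quoted above.
    price_1985 = 1000.0   # dollars per million transistors, per the text
    price_2005 = 10.0     # dollars per million transistors, per the text (upper bound)
    years = 20

    annual_decline = 1 - (price_2005 / price_1985) ** (1 / years)
    print(f"overall drop: {price_1985 / price_2005:.0f}x")          # 100x
    print(f"implied annual price decline: {annual_decline:.1%}")    # roughly 20% per year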
No one can know where newly trained young people will end up working, but nations that are willing to educate more of them and let them follow their instincts can be confident that they will accomplish amazing things.

Meta-ideas

Perhaps the most important ideas of all are meta-ideas—ideas about how to support the production and transmission of other ideas. In the seventeenth century, the British invented the modern concept of a patent that protects an invention. North Americans invented the modern research university and the agricultural extension service in the nineteenth century, and peer-reviewed competitive grants for basic research in the twentieth. The challenge now facing all of the industrialized countries is to invent new institutions that encourage a higher level of applied, commercially relevant research and development in the private sector.

As national markets for talent and education merge into unified global markets, opportunities for important policy innovation will surely emerge. In basic research, the United States is still the undisputed leader, but in key areas of education, other countries are surging ahead. Many of them have already discovered how to train a larger fraction of their young people as scientists and engineers.

We do not know what the next major idea about how to support ideas will be. Nor do we know where it will emerge. There are, however, two safe predictions. First, the country that takes the lead in the twenty-first century will be the one that implements an innovation that more effectively supports the production of new ideas in the private sector. Second, new meta-ideas of this kind will be found.

Only a failure of imagination—the same failure that leads the man on the street to suppose that everything has already been invented—leads us to believe that all of the relevant institutions have been designed and all of the policy levers have been found. For social scientists, every bit as much as for physical scientists, there are vast regions to explore and wonderful surprises to discover.

About the Author

Paul M. Romer is the STANCO 25 Professor of Economics in the Graduate School of Business at Stanford University and a senior fellow at the Hoover Institution. He also founded Aplia, a publisher of Web-based teaching tools that is changing how college students learn economics.

Further Reading

Easterly, William. The Elusive Quest for Growth. Cambridge: MIT Press, 2002.
Helpman, Elhanan. The Mystery of Economic Growth. Cambridge: Harvard University Press, 2004.
North, Douglass C. Institutions, Institutional Change, and Economic Performance. Cambridge: Cambridge University Press, 1990.
Olson, Mancur. "Big Bills Left on the Sidewalk: Why Some Nations Are Rich, and Others Poor." Journal of Economic Perspectives 10, no. 2 (1996): 3–23.
Romer, Paul. "Endogenous Technological Change." Journal of Political Economy 98, no. 5 (1990): S71–S102.
Rosenberg, Nathan. Inside the Black Box: Technology and Economics. Cambridge: Cambridge University Press, 1982.
