Opportunity Cost

When economists refer to the “opportunity cost” of a resource, they mean the value of the next-highest-valued alternative use of that resource. If, for example, you spend time and money going to a movie, you cannot spend that time at home reading a book, and you cannot spend the money on something else. If your next-best alternative to seeing the movie is reading the book, then the opportunity cost of seeing the movie is the money spent plus the pleasure you forgo by not reading the book.

The word “opportunity” in “opportunity cost” is actually redundant. The cost of using something is already the value of the highest-valued alternative use. But as contract lawyers and airplane pilots know, redundancy can be a virtue. In this case, its virtue is to remind us that the cost of using a resource arises from the value of what it could be used for instead.

This simple concept has powerful implications. It implies, for example, that even when governments subsidize college education, most students still pay more than half of the cost. Take a student who annually pays $4,000 in tuition at a state college. Assume that the government subsidy to the college amounts to $8,000 per student. It looks as if the cost is $12,000 and the student pays less than half. But looks can be deceiving. The true cost is $12,000 plus the income the student forgoes by attending school rather than working. If the student could have earned $20,000 per year, then the true cost of the year’s schooling is $12,000 plus $20,000, for a total of $32,000. Of this $32,000 total, the student pays $24,000 ($4,000 in tuition plus $20,000 in forgone earnings). In other words, even with a hefty state subsidy, the student pays 75 percent of the whole cost. This explains why college students at state universities, even though they may grouse when the state government raises tuitions by, say, 10 percent, do not desert college in droves. A 10 percent increase in a $4,000 tuition is only $400, which is less than a 2 percent increase in the student’s overall cost (see human capital).

What about the cost of room and board while attending school? This is not a true cost of attending school at all because whether or not the student attends school, the student still has expenses for room and board.

About the Author

David R. Henderson is the editor of this encyclopedia. He is a research fellow with Stanford University’s Hoover Institution and an associate professor of economics at the Naval Postgraduate School in Monterey, California. He was formerly a senior economist with President Ronald Reagan’s Council of Economic Advisers.

Further Reading

Alchian, Armen. “Cost.” In Encyclopedia of the Social Sciences. Vol. 3. New York: Macmillan. Pp. 404–415.

Buchanan, J. M. Cost and Choice. Chicago: Markham, 1969. Republished as Midway Reprint. Chicago: University of Chicago Press, 1977. Available online at: http://www.econlib.org/library/Buchanan/buchCv6.html
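Returning to the hypothetical student above, a minimal Python sketch makes the arithmetic explicit (the dollar figures are the article's illustrative numbers, not data):

```python
# Opportunity cost of a year of college, following the article's example.
# All figures are the article's hypothetical numbers.

tuition = 4_000            # paid by the student
subsidy = 8_000            # paid by the government per student
forgone_earnings = 20_000  # income given up by studying instead of working

# The full cost includes forgone earnings; room and board is excluded
# because the student pays it whether or not he or she attends school.
total_cost = tuition + subsidy + forgone_earnings
student_cost = tuition + forgone_earnings

print(f"Total cost:   ${total_cost:,}")                       # $32,000
print(f"Student pays: ${student_cost:,} "
      f"({student_cost / total_cost:.0%})")                   # $24,000 (75%)

# A 10% tuition increase is small relative to the student's overall cost:
increase = 0.10 * tuition
print(f"Tuition +10% = ${increase:,.0f} "
      f"({increase / student_cost:.1%} of the student's cost)")  # ~1.7%
```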


OPEC

Few observers and even fewer experts remember that the Organization of Petroleum Exporting Countries (OPEC) was created in response to the 1959 imposition of import quotas on crude oil and refined products by the United States. In 1959, the U.S. government established the Mandatory Oil Import Quota program (MOIP), which restricted the amount of imported crude oil and refined products allowed into the United States and gave preferential treatment to oil imports from Canada, Mexico, and, somewhat later, Venezuela. This partial exclusion of Persian Gulf oil from the U.S. market depressed prices for Middle Eastern oil; as a result, oil prices “posted” (paid to the selling nations) were reduced in February 1959 and August 1960.

In September 1960, four Persian Gulf nations (Iran, Iraq, Kuwait, and Saudi Arabia) and Venezuela formed OPEC in order to obtain higher prices for crude oil. By 1973, eight other nations (Algeria, Ecuador, Gabon, Indonesia, Libya, Nigeria, Qatar, and the United Arab Emirates) had joined OPEC; Ecuador withdrew at the end of 1992, and Gabon withdrew in 1994.

The collective effort to raise oil prices was unsuccessful during the 1960s; real (i.e., inflation-adjusted) world market prices for crude oil fell from $9.78 (in 2004 dollars) in 1960 to $7.08 in 1970. However, real prices began to rise slowly in 1971 and then increased sharply in late 1973 and 1974, from roughly $10.00 per barrel to more than $36.00 per barrel in the wake of the 1973 Arab-Israeli (“Yom Kippur”) War.

Despite what many noneconomists believe, the 1973–1974 price increase was not caused by the oil “embargo” (refusal to sell) that the Arab members of OPEC directed at the United States and the Netherlands. Instead, OPEC reduced its production of crude oil, raising world market prices sharply. The embargo against the United States and the Netherlands had no effect whatsoever: people in both nations were able to obtain oil at the same prices as people in all other nations. This failure of the embargo was predictable, in that oil is a “fungible” commodity that can be resold among buyers. An embargo by sellers is an attempt to raise prices for some buyers but not others. Only one price can prevail in the world market, however, because differences in prices will lead to arbitrage: that is, a higher price in a given market will induce other buyers to resell oil into the high-price market, thus equalizing prices worldwide.

Nor, as is commonly believed, did OPEC cause oil shortages and gasoline lines in the United States. Instead, the shortages were caused by price and allocation controls on crude oil and refined products, imposed originally by President Richard Nixon in 1971 as part of the Economic Stabilization Program. Although the price controls allowed the price of crude oil to rise, it was not allowed to rise to free-market levels. Thus, the price controls caused the amount people wanted to consume to exceed the amount available at the legal maximum prices. Shortages were the inevitable result. Moreover, the allocation controls distorted the distribution of supplies; the government based allocations on consumption patterns observed before the sharp increase in prices. The higher prices, for example, reduced long-distance driving and agricultural fuel consumption, but the use of historical consumption patterns resulted in a relative oversupply of gasoline in rural areas and a relative undersupply in urban ones, thus exacerbating the effects of the price controls themselves. Countries whose governments did not impose price controls, such as (then West) Germany and Switzerland, did not experience shortages and queues.
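The mechanics of a binding price ceiling can be sketched with hypothetical linear supply and demand schedules (the numbers below are illustrative only, not 1970s data):

```python
# How a binding price ceiling creates a shortage: a stylized sketch with
# made-up linear supply and demand schedules for crude oil.

def demand(p):   # barrels demanded per day falls as the price rises
    return 100 - 2 * p

def supply(p):   # barrels supplied per day rises with the price
    return 10 + 4 * p

free_market_price = 15   # here demand(15) == supply(15) == 70: no shortage
ceiling = 10             # legal maximum set below the market-clearing price

shortage = demand(ceiling) - supply(ceiling)
print(demand(ceiling), supply(ceiling), shortage)  # 80 50 30
# At the ceiling, buyers want 80 but only 50 is offered. Queues and
# allocation rules, not OPEC, then determine who gets the missing 30.
```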
OPEC is in many ways a cartel—a group of producers that attempts to restrict output in order to raise prices above the competitive level. The decision-making center of OPEC is the Conference, comprising national delegations


New Keynesian Economics

New Keynesian economics is the school of thought in modern macroeconomics that evolved from the ideas of John Maynard Keynes. Keynes wrote The General Theory of Employment, Interest, and Money in the 1930s, and his influence among academics and policymakers increased through the 1960s. In the 1970s, however, new classical economists such as Robert Lucas, Thomas J. Sargent, and Robert Barro called into question many of the precepts of the Keynesian revolution. The label “new Keynesian” describes those economists who, in the 1980s, responded to this new classical critique with adjustments to the original Keynesian tenets.

The primary disagreement between new classical and new Keynesian economists is over how quickly wages and prices adjust. New classical economists build their macroeconomic theories on the assumption that wages and prices are flexible. They believe that prices “clear” markets—balance supply and demand—by adjusting quickly. New Keynesian economists, however, believe that market-clearing models cannot explain short-run economic fluctuations, and so they advocate models with “sticky” wages and prices. New Keynesian theories rely on this stickiness of wages and prices to explain why involuntary unemployment exists and why monetary policy has such a strong influence on economic activity.

A long tradition in macroeconomics (including both Keynesian and monetarist perspectives) emphasizes that monetary policy affects employment and production in the short run because prices respond sluggishly to changes in the money supply. According to this view, if the money supply falls, people spend less money and the demand for goods falls. Because prices and wages are inflexible and do not fall immediately, the decreased spending causes a drop in production and layoffs of workers. New classical economists criticized this tradition because it lacks a coherent theoretical explanation for the sluggish behavior of prices. Much new Keynesian research attempts to remedy this omission.

Menu Costs and Aggregate-Demand Externalities

One reason prices do not adjust immediately to clear markets is that adjusting prices is costly. To change its prices, a firm may need to send out a new catalog to customers, distribute new price lists to its sales staff, or, in the case of a restaurant, print new menus. These costs of price adjustment, called “menu costs,” cause firms to adjust prices intermittently rather than continuously.

Economists disagree about whether menu costs can help explain short-run economic fluctuations. Skeptics point out that menu costs usually are very small. They argue that these small costs are unlikely to help explain recessions, which are very costly for society. Proponents reply that “small” does not mean “inconsequential.” Even though menu costs are small for the individual firm, they could have large effects on the economy as a whole.

Proponents of the menu-cost hypothesis describe the situation as follows. To understand why prices adjust slowly, one must acknowledge that changes in prices have externalities—that is, effects that go beyond the firm and its customers. For instance, a price reduction by one firm benefits other firms in the economy. When a firm lowers the price it charges, it lowers the average price level slightly and thereby raises real income. (Nominal income is determined by the money supply.) The stimulus from higher income, in turn, raises the demand for the products of all firms. This macroeconomic impact of one firm’s price adjustment on the demand for all other firms’ products is called an “aggregate-demand externality.”

In the presence of this aggregate-demand externality, small menu costs can make prices sticky, and this stickiness can have a large cost to society. Suppose General Motors announces its prices and then, after a fall in the money supply, must decide whether to cut prices. If it did so, car buyers would have a higher real income and would therefore buy more products from other companies as well. But the benefits to other companies are not what General Motors cares about. Therefore, General Motors would sometimes fail to pay the menu cost and cut its price, even though the price cut is socially desirable. This is an example in which sticky prices are undesirable for the economy as a whole, even though they may be optimal for those setting prices.
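A minimal sketch of that pricing decision, with made-up payoff numbers, shows how a small menu cost can block a socially valuable price cut:

```python
# Menu costs vs. aggregate-demand externalities: a stylized sketch.
# All payoff numbers are hypothetical, chosen only to illustrate the logic.

menu_cost = 10       # cost to the firm of reposting its prices
private_gain = 8     # profit the firm itself gains from cutting its price
external_gain = 25   # extra demand enjoyed by *other* firms (the externality)

firm_cuts_price = private_gain > menu_cost                     # False
socially_worth_it = private_gain + external_gain > menu_cost   # True

print(firm_cuts_price, socially_worth_it)  # False True
# The firm weighs only its own gain against the menu cost, so it leaves the
# price unchanged even though society as a whole would gain from the cut.
```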
The Staggering of Prices

New Keynesian explanations of sticky prices often emphasize that not everyone in the economy sets prices at the same time. Instead, the adjustment of prices throughout the economy is staggered. Staggering complicates the setting of prices because firms care about their prices relative to those charged by other firms. Staggering can make the overall level of prices adjust slowly, even when individual prices change frequently.

Consider the following example. Suppose, first, that price setting is synchronized: every firm adjusts its price on the first of every month. If the money supply and aggregate demand rise on May 10, output will be higher from May 10 to June 1 because prices are fixed during this interval. But on June 1 all firms will raise their prices in response to the higher demand, ending the three-week boom.

Now suppose that price setting is staggered: half the firms set prices on the first of each month and half on the fifteenth. If the money supply rises on May 10, then half of the firms can raise their prices on May 15. Yet because half of the firms will not be changing their prices on the fifteenth, a price increase by any firm will raise that firm’s relative price, which will cause it to lose customers. Therefore, these firms will probably not raise their prices very much. (In contrast, if all firms are synchronized, all firms can raise prices together, leaving relative prices unaffected.) If the May 15 price setters make little adjustment in their prices, then the other firms will make little adjustment when their turn comes on June 1, because they also want to avoid relative price changes. And so on. The price level rises slowly as the result of small price increases on the first and the fifteenth of each month. Hence, staggering makes the price level sluggish, because no firm wishes to be the first to post a substantial price increase.
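A toy simulation of the two-cohort example above shows the price level creeping up in small steps. The 10 percent money-supply shock and the adjustment rule (each cohort moves halfway toward a target anchored by the other cohort's price) are illustrative assumptions, not part of the article's argument:

```python
# A stylized simulation of staggered price setting.

target = 1.10          # price level that would fully reflect the new money supply
prices = [1.00, 1.00]  # cohort 0 adjusts on the 1st, cohort 1 on the 15th

for turn in range(8):
    i = turn % 2
    # A cohort dislikes moving its relative price, so it adjusts only
    # partway, anchored by the other cohort's current price:
    desired = (prices[1 - i] + target) / 2
    prices[i] += 0.5 * (desired - prices[i])
    print(f"turn {turn}: average price level = {sum(prices) / 2:.4f}")

# Each individual step is small, so the overall price level converges to
# 1.10 only gradually, and the real effects of the shock die out slowly.
```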
Coordination Failure

Some new Keynesian economists suggest that recessions result from a failure of coordination. Coordination problems can arise in the setting of wages and prices because those who set them must anticipate the actions of other wage and price setters. Union leaders negotiating wages are concerned about the concessions other unions will win. Firms setting prices are mindful of the prices other firms will charge.

To see how a recession could arise as a failure of coordination, consider the following parable. The economy is made up of two firms. After a fall in the money supply, each firm must decide whether to cut its price. Each firm wants to maximize its profit, but its profit depends not only on its pricing decision but also on the decision made by the other firm. If neither firm cuts its price, the amount of real money (the amount of money divided by the price level) is low, a recession ensues, and each firm makes a profit of only fifteen dollars. If both firms cut their price, real money balances are high, a recession is avoided, and each firm makes a profit of thirty dollars. Although both firms prefer to avoid a recession, neither can do so by its own actions. If one firm cuts its price while the other does not, a recession follows. The firm making the price cut makes only five dollars, while the other firm makes fifteen dollars.

The essence of this parable is that each firm’s decision influences the set of outcomes available to the other firm. When one firm cuts its price, it improves the opportunities available to the other firm, because the other firm can then avoid the recession by cutting its price. This positive impact of one firm’s price cut on the other firm’s profit opportunities might arise because of an aggregate-demand externality.

What outcome should one expect in this economy? On the one hand, if each firm expects the other to cut its price, both will cut prices, resulting in the preferred outcome in which each makes thirty dollars. On the other hand, if each firm expects the other to maintain its price, both will maintain their prices, resulting in the inferior solution, in which each makes fifteen dollars. Hence, either of these outcomes is possible: there are multiple equilibria.

The inferior outcome, in which each firm makes fifteen dollars, is an example of a coordination failure. If the two firms could coordinate, they would both cut their price and reach the preferred outcome. In the real world, unlike in this parable, coordination is often difficult because the number of firms setting prices is large. The moral of the story is that even though sticky prices are in no one’s interest, prices can be sticky simply because price setters expect them to be.
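The parable is a textbook coordination game. The sketch below encodes the article's dollar payoffs and checks, mechanically, which outcomes are equilibria:

```python
# The two-firm coordination parable as a payoff matrix. The dollar payoffs
# come directly from the article; the equilibrium check is standard.

ACTIONS = ("cut", "keep")
# payoff[(a1, a2)] = (firm 1's profit, firm 2's profit)
payoff = {
    ("cut", "cut"): (30, 30),    # both cut: recession avoided
    ("cut", "keep"): (5, 15),    # lone price cutter does worst
    ("keep", "cut"): (15, 5),
    ("keep", "keep"): (15, 15),  # both keep: the coordination failure
}

def is_equilibrium(a1, a2):
    """Neither firm can raise its own profit by deviating unilaterally."""
    p1, p2 = payoff[(a1, a2)]
    best1 = all(payoff[(d, a2)][0] <= p1 for d in ACTIONS)
    best2 = all(payoff[(a1, d)][1] <= p2 for d in ACTIONS)
    return best1 and best2

for profile in payoff:
    if is_equilibrium(*profile):
        print(profile, payoff[profile])
# Prints both ('cut', 'cut') (30, 30) and ('keep', 'keep') (15, 15):
# multiple equilibria, one of which leaves everyone worse off.
```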
Efficiency Wages

Another important part of new Keynesian economics has been the development of new theories of unemployment. Persistent unemployment is a puzzle for economic theory. Normally, economists presume that an excess supply of labor would exert a downward pressure on wages. A reduction in wages would in turn reduce unemployment by raising the quantity of labor demanded. Hence, according to standard economic theory, unemployment is a self-correcting problem.

New Keynesian economists often turn to theories of what they call efficiency wages to explain why this market-clearing mechanism may fail. These theories hold that high wages make workers more productive. The influence of wages on worker efficiency may explain the failure of firms to cut wages despite an excess supply of labor. Even though a wage reduction would lower a firm’s wage bill, it would also—if the theories are correct—cause worker productivity and the firm’s profits to decline.

There are various theories about how wages affect worker productivity. One efficiency-wage theory holds that high wages reduce labor turnover. Workers quit jobs for many reasons—to accept better positions at other firms, to change careers, or to move to other parts of the country. The more a firm pays its workers, the greater their incentive to stay with the firm. By paying a high wage, a firm reduces the frequency of quits, thereby decreasing the time spent hiring and training new workers.

A second efficiency-wage theory holds that the average quality of a firm’s workforce depends on the wage it pays its employees. If a firm reduces wages, the best employees may take jobs elsewhere, leaving the firm with less-productive employees who have fewer alternative opportunities. By paying a wage above the equilibrium level, the firm may avoid this adverse selection, improve the average quality of its workforce, and thereby increase productivity.

A third efficiency-wage theory holds that a high wage improves worker effort. This theory posits that firms cannot perfectly monitor the work effort of their employees and that employees must themselves decide how hard to work. Workers can choose to work hard, or they can choose to shirk and risk getting caught and fired. The firm can raise worker effort by paying a high wage. The higher the wage, the greater is the cost to the worker of getting fired. By paying a higher wage, a firm induces more of its employees not to shirk, and thus increases their productivity.

A New Synthesis

During the 1990s, the debate between new classical and new Keynesian economists led to the emergence of a new synthesis among macroeconomists about the best way to explain short-run economic fluctuations and the role of monetary and fiscal policies. The new synthesis attempts to merge the strengths of the competing approaches that preceded it. From the new classical models it takes a variety of modeling tools that shed light on how households and firms make decisions over time. From the new Keynesian models it takes price rigidities and uses them to explain why monetary policy affects employment and production in the short run. The most common approach is to assume monopolistically competitive firms (firms that have market power but compete with other firms) that change prices only intermittently.

The heart of the new synthesis is the view that the economy is a dynamic general equilibrium system that deviates from an efficient allocation of resources in the short run because of sticky prices and perhaps a variety of other market imperfections. In many ways, this new synthesis forms the intellectual foundation for the analysis of monetary policy at the Federal Reserve and other central banks around the world.

Policy Implications

Because new Keynesian economics is a school of thought regarding macroeconomic theory, its adherents do not necessarily share a single view about economic policy. At the broadest level, new Keynesian economics suggests—in contrast to some new classical theories—that recessions are departures from the normal efficient functioning of markets. The elements of new Keynesian economics—such as menu costs, staggered prices, coordination failures, and efficiency wages—represent substantial deviations from the assumptions of classical economics, which provides the intellectual basis for economists’ usual justification of laissez-faire. In new Keynesian theories recessions are caused by some economy-wide market failure. Thus, new Keynesian economics provides a rationale for government intervention in the economy, such as countercyclical monetary or fiscal policy. This part of new Keynesian economics has been incorporated into the new synthesis that has emerged among macroeconomists.
Whether policymakers should intervene in practice, however, is a more difficult question that entails various political as well as economic judgments.

About the Author

N. Gregory Mankiw is a professor of economics at Harvard University. From 2003 to 2005, he was the chairman of President George W. Bush’s Council of Economic Advisers.

Further Reading

Clarida, Richard, Jordi Gali, and Mark Gertler. “The Science of Monetary Policy: A New Keynesian Perspective.” Journal of Economic Literature 37 (1999): 1661–1707.

Goodfriend, Marvin, and Robert King. “The New Neoclassical Synthesis and the Role of Monetary Policy.” In Ben S. Bernanke and Julio Rotemberg, eds., NBER Macroeconomics Annual 1997. Cambridge: MIT Press, 1997. Pp. 231–283.

Mankiw, N. Gregory, and David Romer, eds. New Keynesian Economics. 2 vols. Cambridge: MIT Press, 1991.


New Classical Macroeconomics

After Keynesian Macroeconomics

The new classical macroeconomics is a school of economic thought that originated in the early 1970s in the work of economists centered at the Universities of Chicago and Minnesota—particularly, Robert Lucas (recipient of the Nobel Prize in 1995), Thomas Sargent, Neil Wallace, and Edward Prescott (corecipient of the Nobel Prize in 2004). The name draws on John Maynard Keynes’s evocative contrast between his own macroeconomics and that of his intellectual forebears. Keynes had knowingly stretched a point by lumping his contemporaries, A. C. Pigou and Alfred Marshall, in with the older classical political economists, such as David Ricardo, and calling them all “classical.”

According to Keynes, the classics saw the price system in a free economy as efficiently guiding the mutual adjustment of supply and demand in all markets, including the labor market. Unemployment could arise only because of a market imperfection—the intervention of the government or the action of labor unions—and could be eliminated through removing the imperfection. In contrast, Keynes shifted the focus of his analysis away from individual markets to the whole economy. He argued that even without market imperfections, aggregate demand (equal, in a closed economy, to consumption plus investment plus government expenditure) might fall short of the aggregate productive capacity of its labor and capital (plant, equipment, raw material, and infrastructure). In such a situation, unemployment is largely involuntary—that is, workers may be unemployed even though they are willing to work at a wage lower than the wage the firms pay their current workers.

Later Keynesian economists achieved a measure of reconciliation with the classics. Paul Samuelson argued for a “neoclassical synthesis” in which classical economics was viewed as governing resource allocation when the economy was kept, through judicious government policy, at full employment. Other Keynesian economists sought to explain consumption, investment, the demand for money, and other key elements of the aggregate Keynesian model in a manner consistent with the assumption that individuals behave optimally. This was the program of “microfoundations for macroeconomics.”

Origins of the New Classical Macroeconomics

Although its name suggests a rejection of Keynesian economics and a revival of classical economics, the new classical macroeconomics began with Lucas’s and Leonard Rapping’s attempt to provide microfoundations for the Keynesian labor market. Lucas and Rapping applied the rule that equilibrium in a market occurs when quantity supplied equals quantity demanded. This turned out to be a radical step. Because involuntary unemployment is exactly the situation in which the amount of labor supplied exceeds the amount demanded, their analysis leaves no room at all for involuntary unemployment.

Keynes’s view was that recessions occur when aggregate demand falls—largely as the result of a fall in private investment—causing firms to produce below their capacity. Producing less, firms need fewer workers, and thus employment falls. Firms, for reasons that Keynesian economists continue to debate, fail to cut wages to as low a level as job seekers will accept, and so involuntary unemployment rises. The new classicals reject this step as irrational. Involuntary unemployment would present firms with an opportunity to raise profits by paying workers a lower wage. If firms failed to take the opportunity, then they would not be optimizing. Employed workers should not be able to resist such wage cuts effectively, since the unemployed stand ready to take their places at the lower wage. Keynesian economics would appear, then, to rest either on market imperfections or on irrationality, both of which Keynes denied.
These criticisms of Keynesian economics illustrate the two fundamental tenets of the new classical macroeconomics. First, individuals are viewed as optimizers: given the prices, including wage rates, they face and the assets they hold, including their education and training (or “human capital”), they choose the best options available. Firms maximize profits; people maximize utility. Second, to a first approximation, prices adjust, changing the incentives to individuals, and thereby their choices, to align quantities supplied and demanded.

Business Cycles

Business cycles pose a special challenge for new classical economists: How are large fluctuations in output compatible with the two fundamental tenets of their doctrine?
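The market-clearing benchmark behind the second tenet can be made concrete with a small sketch; the linear labor supply and demand schedules below are hypothetical, chosen only to make the algebra trivial:

```python
# The new classical labor-market postulate: the wage adjusts until the
# quantity of labor supplied equals the quantity demanded, so no one is
# involuntarily unemployed. Hypothetical linear schedules.

def labor_demand(w):  # firms hire less as the wage rises
    return 100 - 3 * w

def labor_supply(w):  # more people seek work as the wage rises
    return 20 + 2 * w

# Solve 100 - 3w = 20 + 2w  =>  5w = 80  =>  w = 16
w_clear = 80 / 5
print(w_clear, labor_demand(w_clear), labor_supply(w_clear))  # 16.0 52.0 52.0
# At w = 16 everyone willing to work at the going wage has a job; in this
# framework, measured unemployment must be read as a voluntary supply choice,
# which is exactly why large slumps are hard to square with the two tenets.
```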


Phillips Curve

The Phillips curve represents the relationship between the rate of inflation and the unemployment rate. Although he had precursors, A. W. H. Phillips’s study of wage inflation and unemployment in the United Kingdom from 1861 to 1957 is a milestone in the development of macroeconomics. Phillips found a consistent inverse relationship: when unemployment was high, wages increased slowly; when unemployment was low, wages rose rapidly.

Phillips conjectured that the lower the unemployment rate, the tighter the labor market and, therefore, the faster firms must raise wages to attract scarce labor. At higher rates of unemployment, the pressure abated. Phillips’s “curve” represented the average relationship between unemployment and wage behavior over the business cycle. It showed the rate of wage inflation that would result if a particular level of unemployment persisted for some time.

Economists soon estimated Phillips curves for most developed economies. Most related general price inflation, rather than wage inflation, to unemployment. Of course, the prices a company charges are closely connected to the wages it pays. Figure 1 shows a typical Phillips curve fitted to data for the United States from 1961 to 1969. The close fit between the estimated curve and the data encouraged many economists, following the lead of Paul Samuelson and Robert Solow, to treat the Phillips curve as a sort of menu of policy options. For example, with an unemployment rate of 6 percent, the government might stimulate the economy to lower unemployment to 5 percent. Figure 1 indicates that the cost, in terms of higher inflation, would be a little more than half a percentage point. But if the government initially faced lower rates of unemployment, the costs would be considerably higher: a reduction in unemployment from 5 to 4 percent would imply more than twice as big an increase in the rate of inflation—about one and a quarter percentage points.

At the height of the Phillips curve’s popularity as a guide to policy, Edmund Phelps and Milton Friedman independently challenged its theoretical underpinnings. They argued that well-informed, rational employers and workers would pay attention only to real wages—the inflation-adjusted purchasing power of money wages. In their view, real wages would adjust to make the supply of labor equal


Pharmaceuticals: Economics and Regulation

Pharmaceuticals are unique in their combination of extensive government control and extreme economics, that is, high fixed costs of development and relatively low incremental costs of production.

Regulation

The Food and Drug Administration (FDA) is the U.S. government agency charged with ensuring the safety and efficacy of the medicines available to Americans. The government’s control over medicines has grown in the last hundred years from literally nothing to far-reaching, and now pharmaceuticals are among the most-regulated products in this country.

The two legislative acts that are the main source of the FDA’s powers both followed significant tragedies. In 1937, to make a palatable liquid version of its new antibiotic drug sulfanilamide, the Massengill Company carelessly used the solvent diethylene glycol, which is also used as an antifreeze.1 Elixir Sulfanilamide killed 107 people, mostly children, before it was quickly recalled; Massengill was successfully sued and the chemist responsible committed suicide. This tragedy led to the Food, Drug, and Cosmetic Act of 1938, which required that drugs be proven safe prior to marketing.2 In the next infamous tragedy, more than ten thousand European babies were born deformed after their mothers took thalidomide as a tranquilizer to alleviate morning sickness.3 This led to the Kefauver-Harris Amendments of 1962, which required that efficacy be proven prior to marketing. Note that even though thalidomide’s problem was clearly one of safety, an issue for which the FDA already had regulations, the laws were changed to add proof of efficacy.

Many people are unaware that most of the drugs, foods, herbs, and dietary supplements that Americans consume have been neither assessed nor approved by the FDA. Some are beyond the scope of the FDA’s regulatory authority—if no specific health claims are made—and some are simply approved drugs being used in ways the FDA has not approved. Such “off-label” uses by physicians are widespread and can reach up to 90 percent in some therapeutic areas.4 Although the FDA tolerates off-label usage, it forbids pharmaceutical companies from promoting such applications of their products.

Problems, sometimes serious, can arise even after FDA approval. Baycol (cerivastatin), Seldane (terfenadine), Vioxx (rofecoxib), and “Fen Phen” (fenfluramine and phentermine) are well-known examples of FDA-approved drugs that their manufacturers voluntarily withdrew after the drugs were found to be dangerous to some patients. Xalatan (latanoprost) for glaucoma caused 3–10 percent of users’ blue eyes to turn permanently brown. This amazing side effect was uncovered only after the drug was approved as “safe and effective.” One group of researchers estimated that 106,000 people died in 1994 alone from adverse reactions to drugs the FDA deemed “safe.”5

One problem with the 1962 Kefauver-Harris Amendments was the additional decade of regulatory delay they created for new drugs. For example, one researcher estimated that ten thousand people died unnecessarily each year while beta blockers languished at the FDA, even though they had already been approved in Europe. The FDA has taken a “guilty until proven innocent” approach rather than weighing the costs and benefits of such delays. Just how cautious should the FDA be? Thalidomide and sulfanilamide demonstrate the potential benefit of delays, while a disease such as lung cancer, which kills an American every three minutes, highlights the costs.
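The cost-benefit weighing the article says the FDA avoids can be caricatured in a few lines. All numbers below are hypothetical; they show only the structure of the comparison, not its actual magnitudes:

```python
# A deliberately crude expected-harm comparison for approval delay.
# Every figure here is made up, for illustration only.

deaths_per_year_without_drug = 10_000  # if an effective drug is delayed
years_of_delay = 3
p_drug_harmful = 0.05                  # chance the drug is another Elixir
deaths_if_harmful = 5_000

cost_of_delay = deaths_per_year_without_drug * years_of_delay   # 30,000
expected_cost_of_early_approval = p_drug_harmful * deaths_if_harmful  # 250

print(cost_of_delay, expected_cost_of_early_approval)
# With these invented numbers, delay is far deadlier than early approval.
# The article's point is that the FDA's incentives ignore this side of the
# ledger, not that these particular magnitudes are right.
```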
In 1973, economist Sam Peltzman examined the pre- and post-1962 market to estimate the effect of the FDA’s new powers and found that the number of new drugs had been reduced by 60 percent. He also found little evidence to suggest a decline in the proportion of inefficacious drugs reaching the market.6 From 1963 through 2003, the number of new drugs approved each year approximately doubled, but pharmaceutical R&D expenditures grew by a factor of twenty.7

One result of the FDA approach is the very high, perhaps excessive, level of evidence required before drugs can be marketed legally. In December 2003, an FDA advisory committee declined to endorse the use of aspirin for preventing initial myocardial infarctions (MIs), or heart attacks.8 Does this mean that aspirin, which is approved for prevention of second heart attacks, does not work to prevent first heart attacks? No. One of the panelists, Dr. Joseph Knapka, stated: “As a scientist, I vote no. As a heart patient, I would probably say yes.” In other words, he had two standards. One standard is the scientific proof that aspirin works beyond any reasonable doubt. By this standard, the data on fifty-five thousand patients fall short.9 The other standard is measured by our choices in the real world. By this standard, aspirin passes easily. “The question today isn’t, does aspirin work? We know it works, and we certainly know it works in a net benefit to risk positive sense in the secondary prevention setting,” said panelist Thomas Fleming, chairman and professor of the Department of Biostatistics at the University of Washington, who also voted no.10

When our medical options are left to the scientific experts at a government agency, that agency has a bias toward conservatism. The FDA is acutely aware that of the two ways it can fail, approving a bad drug is significantly worse for its employees than failing to approve a good drug. Approving a bad drug may kill or otherwise harm patients, and an investigation of the approval process will lead to finger pointing. As former FDA employee Henry Miller put it, “This kind of mistake is highly visible and has immediate consequences—the media pounces, the public denounces, and Congress pronounces.”11 Such an outcome is highly emotional and concrete, while not approving a good drug is intellectual and abstract. Who would have benefited and by how much? Who will know enough to complain that she was victimized by being denied such a medicine?

The FDA’s approach also curtails people’s freedom. The available medicines are what the FDA experts think we should have, not what we think we should have. It is common to picture uneducated patients blindly stumbling about the complexities of medical technology. While this certainly happens, it is mitigated by the expertise of caregivers (such as physicians), advisers (such as medical thought leaders), and watchdogs (such as the media), which comprise a surprisingly large support group. Of course, not all patients make competent decisions at all times, but FDA regulation treats all patients as incompetent. A medicine that may work for one person at a certain dose at a certain time for a given disease may not work if any of the variables changes.
Thalidomide, though unsafe for fetuses, is currently being studied for a wide range of important diseases and was even approved by the FDA in 1998, after four decades of being banned, for a painful skin condition of leprosy.12 Similarly, finasteride is used in men to shrink enlarged prostate glands and to prevent baldness, but women are forbidden even to work in the finasteride factory due to the risk to fetuses. Also, the FDA pulled Propulsid (cisapride), a heartburn drug, from the market in March 2000 after eighty people who took it died from an irregular heartbeat. But for patients with cerebral palsy Propulsid is a miracle drug that allows them to digest food without extreme pain.13 What is a poison for one person may be a lifesaver for another.

Economists have long recognized that good decisions cannot be made without considering the affected person’s unique characteristics. But the FDA has little knowledge of a given individual’s tolerance for pain, fear of death, or health status. So the decisions the FDA makes on behalf of individuals are imperfect because the agency lacks fundamental information (see information and prices). Economist Ludwig von Mises made this same argument in its universal form when he identified the Achilles’ heel of socialism: centralized governments are usually incapable of making good decisions for their citizens because they lack most of the relevant information.

Some economists have proposed that the FDA continue to evaluate and approve new drugs, but that the drugs be made available—if the manufacturer wishes—during the approval process.14 The FDA could rate or grade drugs and put stern warnings on unapproved drugs and drugs that appear to be riskier. Economists expect that cautious drug companies and patients would simply wait for FDA approval, while some patients would take their chances. Such a solution is Pareto optimal, in that everyone is at least as satisfied as under the current system. Cautious patients get the safety of FDA approval while patients who do not want to wait don’t have to.

Economics

A study by Joseph DiMasi, an economist at the Tufts Center for the Study of Drug Development in Boston, found that the cost of getting one new drug approved was $802 million in 2000 U.S. dollars.15 Most new drugs cost much less, but his figure adds in each successful drug’s prorated share of failures. Only one out of fifty drugs eventually reaches the market.

Why are drugs so expensive to develop? The main reason for the high cost is the aforementioned high level of proof required by the Food and Drug Administration. Before it will approve a new drug, the FDA requires pharmaceutical companies to carefully test it in animals and then humans in the standard phases 0, I, II, and III process. The path through the FDA’s review process is slow and expensive. The ten to fifteen years required to get a drug through the testing and approval process leaves little remaining time on a twenty-year patent.

Although new medicines are hugely expensive to bring to market, they are cheap to manufacture. In this sense, they are like DVD movies and computer software. This means that a drug company, to be profitable or simply to break even, must price its drugs well above its production costs. The company that wishes to maximize profits will set high prices for those who are willing to pay a lot and low prices that at least cover production costs for those willing to pay a little.
That is why, for example, Merck priced its anti-AIDS drug, Crixivan, to poor countries in Africa and Latin America at $600 while charging relatively affluent Americans $6,099 for a year’s supply. This type of customer segmentation—similar to that of airlines—is part of the profit-maximizing strategy for medicines. In general, good customer segmentation is difficult to accomplish. Therefore, the most common type of pharmaceutical segmentation is charging a lower price in poorer countries and giving the product free to poor people in the United States through patient assistance programs.

What complicates the picture is socialized medicine, which exists in almost every country outside the United States—and even, with Medicare and Medicaid, in the United States. Because governments in countries with socialized medicine tend to be the sole bargaining agent in dealing with drug companies, these governments often set prices that are low by U.S. standards. To some extent, this comes about because these governments have monopsony power—that is, monopoly power on the buyer’s side—and they use this power to get good deals. These governments are, in effect, saying that if they cannot buy it cheaply, their citizens cannot get it. These low prices also come about because governments sometimes threaten drug companies with compulsory licensing (breaking a patent) to get a low price. This has happened most recently in South Africa and Brazil with AIDS drugs. This violation of intellectual property rights can bring a seemingly powerful drug company into quick compliance. When faced with a choice between earning nothing and earning something, most drug companies choose the latter.

The situation is a prisoners’ dilemma. Everyone’s interest is in giving drug companies an adequate incentive to invest in new drugs. To do so, drug companies must be able to price their drugs well above production costs to a large segment of the population. But each individual government’s narrow self-interest is to set a low price on drugs and let people in other countries pay the high prices that generate the return on R&D investments. Each government, in other words, has an incentive to be a free rider. And that is what many governments are doing. The temptation is to cease having Americans bear more than their share of drug development by having the U.S. government set low prices also. But if Americans also try to free ride, there may not be a ride.

Governments are not the only bulk purchasers. The majority of pharmaceuticals in the United States are purchased by managed-care organizations (MCOs), hospitals, and governments, which use their market power to negotiate better prices. These organizations often do not take physical possession of the drugs; most pills never pass through the MCO’s hands, but instead go from manufacturer to wholesaler to pharmacy to patient. Therefore, manufacturers rebate money—billions of dollars—to compensate for purchases made at list prices. Managed-care rebates are given with consideration; they are the result of contracts that require performance. For example, a manufacturer will pay an HMO a rebate if it can keep a drug’s prescription market share above the national level. These rebates average 10–40 percent of sales. The net result is that the neediest Americans, frequently those without insurance, pay the highest prices, while the most powerful health plans and government agencies pay the lowest.
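The segmentation logic can be sketched in a few lines. Everything below is hypothetical (round numbers chosen to echo, not reproduce, the Crixivan example), but it shows why selling cheaply to a second segment can be rational while a single worldwide low price would not cover development costs:

```python
# Price segmentation with high fixed costs and low marginal costs.
# All figures are illustrative assumptions, not actual industry data.

rd_cost = 800_000_000   # fixed cost of developing the drug
unit_cost = 1           # marginal cost per course of therapy

# (price, customers) pairs for two segments, loosely echoing the article's
# affluent-country / poor-country example:
segments = [
    (6_000, 200_000),   # affluent market: high price, willing payers
    (600, 300_000),     # poor market: low price that still exceeds unit cost
]

profit = sum((p - unit_cost) * q for p, q in segments) - rd_cost
print(f"profit: ${profit:,}")  # positive with segmentation

# Selling to the poor market at $600 adds profit because the price exceeds
# marginal cost. But if every buyer could insist on $600, total revenue
# (599 * 500,000 ~= $300M) would not cover the $800M fixed R&D cost, and
# the drug would never be developed.
```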
Pharmaceutical companies would like to help poor people in the United States, but the federal government and, to a much lesser extent, health plans have tied their hands. Drug companies can and do give drugs away free through patient assistance programs, but they cannot sell them at very low prices because the federal government requires drug companies to give the huge Medicaid program their “best prices.” If a drug company sells to even one customer at a very low price, it also has to sell at the same price to the 5–40 percent of its customers covered by Medicaid.

Drug prices are regularly attacked as “too high.” Yet, cheaper over-the-counter drugs, natural medicines, and generic versions of off-patent drugs are ubiquitous, and many health plans steer patients toward them. Economic studies have shown that even the newer, more expensive drugs are usually worth their price and are frequently cheaper than other alternatives. One study showed that each dollar spent on vaccines reduced other health care costs by $10. Another study showed that for each dollar spent on newer drugs, $6.17 was saved.16 Therefore, health plans that aggressively limited their drug spending ended up spending more overall.

Most patients do not pay retail prices because they have some form of insurance. In 2003, before a law was passed that subsidizes drugs for seniors, 75–80 percent of seniors had prescription drug insurance. Insured people pay either a flat copayment, often based on tiers (copayment levels set by managed-care providers that involve a low payment for generic drugs and a higher payment for brand-name drugs), or a percentage of the prescription cost. On average, seniors spend more on entertainment than they do on drugs and medical supplies combined. But for the uninsured who are also poor and sick, drug prices can be a devastating burden. The overlap of the 20–25 percent who lack drug insurance and the 10 percent who pay more than five thousand dollars per year—approximately 2 percent are in both groups—is where we find the stories of people skimping on food to afford their medications. The number of people in both groups is actually lower than 2 percent because of the numerous patient assistance programs offered by pharmaceutical companies.

For all the talk of lower drug prices, what people really want is lower risk through good insurance. Insurance lowers an individual’s risk and, consequently, increases the demand for pharmaceuticals. By spending someone else’s money for a good chunk of every pharmaceutical purchase, individuals become less price sensitive. A two-hundred-dollar prescription for a new medicine is forty times as expensive as a five-dollar generic, but its copay may be only three times the generic’s copay. The marginal cost to patients of choosing the expensive product is reduced, both in absolute and relative terms, and patients are thus more likely to purchase the expensive drug and make purchases they otherwise would have skipped. The data show that those with insurance consume 40–100 percent more than those without insurance.

Drugs account for a small percentage of overall health-care spending. In fact, branded pharmaceuticals are about 7 percent and generics 3 percent of total U.S. health-care costs.17 The tremendous costs involved with illnesses—even if they are not directly measured—are the economic and human costs of the diseases themselves, not the drugs.
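The copay example above reduces to two ratios. The retail prices are the article's; the specific tier copays ($15 brand, $5 generic, giving the article's three-to-one ratio) are assumptions:

```python
# Insurance blunts price sensitivity: the article's copay example in numbers.

retail_new, retail_generic = 200, 5   # the article's retail prices
copay_new, copay_generic = 15, 5      # assumed tier copays (3x ratio)

print(retail_new / retail_generic)    # 40.0x at retail prices
print(copay_new / copay_generic)      # 3.0x at the pharmacy counter

# The insured patient bears only the $10 copay difference, not the full
# $195 price difference, so the expensive drug looks far cheaper to the
# buyer than it is to the system.
```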
About the Author

Charles L. Hooper is president of Objective Insights, a company that consults for pharmaceutical and biotech companies. He is a visiting fellow with the Hoover Institution.

Further Reading

Bast, Joseph L., Richard C. Rue, and Stuart A. Wesbury Jr. Why We Spend Too Much on Health Care and What We Can Do About It. Chicago: Heartland Institute, 1993.

DiMasi, Joseph A., Ronald W. Hansen, and Henry G. Grabowski. “The Price of Innovation: New Estimates of Drug Development Costs.” Journal of Health Economics 22, no. 2 (2003): 151–185.

Higgs, Robert, ed. Hazardous to Our Health? FDA Regulation of Health Care Products. Oakland, Calif.: Independent Institute, 1995.

Hilts, Philip J. Protecting America’s Health: The FDA, Business, and One Hundred Years of Regulation. New York: Alfred A. Knopf, 2003.

Klein, Daniel B., and Alexander Tabarrok. FDAReview.org. Oakland, Calif.: Independent Institute. Online at: http://www.fdareview.org/.

Miller, Henry I. To America’s Health: A Proposal to Reform the Food and Drug Administration. Stanford, Calif.: Hoover Institution Press, 2000.

Footnotes

1. Philip J. Hilts, Protecting America’s Health: The FDA, Business, and One Hundred Years of Regulation (New York: Alfred A. Knopf, 2003), pp. 89–90.

2. Daniel B. Klein and Alexander Tabarrok, FDAReview.org, Independent Institute, online under “History” at: http://www.FDAReview.org/history.shtml#fifth.

3. “THALOMID (Thalidomide): Balancing the Benefits and the Risks,” Celgene Corporation, p. 2, online at: www.sanmateo.org/rimm/Tali_benefits_risks_celgene.pdf.

4. Alexander Tabarrok, “The Anomaly of Off-Label Drug Prescriptions,” Independent Institute Working Paper no. 10, December 1999.

5. Lazarou, Jason, et al. “Incidence of Adverse Drug Reactions in Hospitalized Patients.” Journal of the American Medical Association 279, no. 15 (1998): 1200–1205.

6. Peltzman, Sam. “An Evaluation of Consumer Protection Legislation: The 1962 Drug Amendments.” Journal of Political Economy 81, no. 5 (1973): 1049–1091.

7. Parexel’s Pharmaceutical R&D Statistical Sourcebook 2004–2005 (Waltham, Mass.: Parexel International Corporation, 2004), p. 9.

8. “Broader Use for Aspirin Fails to Win Backing,” Wall Street Journal, December 9, 2003, p. D9.

9. This 55,000 is the total number of patients tested in five published clinical trials of the use of aspirin to prevent initial non-fatal myocardial infarction.

10. Food and Drug Administration, Center for Drug Evaluation and Research, Cardiovascular and Renal Drugs Advisory Committee meeting, Monday, December 8, 2003, Gaithersburg, Md.

11. Henry I. Miller, M.D., To America’s Health: A Proposal to Reform the Food and Drug Administration (Stanford, Calif.: Hoover Institution Press, 2000), p. 42.

12. “FDA Gives Restricted Approval to Thalidomide,” CNN News, July 16, 1998.

13. “Drug Ban Brings Misery to Patient,” Associated Press, November 11, 2000.

14. Klein and Tabarrok, FDAReview.org, online under “Reform Options” at http://www.fdareview.org/reform.shtml#5; David R. Henderson, The Joy of Freedom: An Economist’s Odyssey (New York: Prentice Hall, 2002), pp. 206–207, 278–279.

15. Joseph A. DiMasi, Ronald W. Hansen, and Henry G. Grabowski, “The Price of Innovation: New Estimates of Drug Development Costs,” Journal of Health Economics 22 (2003): 151–185.

16. Frank R. Lichtenberg, “Benefits and Costs of Newer Drugs: An Update,” NBER Working Paper no. 8996, National Bureau of Economic Research, Cambridge, Mass., 2002.

17. The Centers for Medicare and Medicaid Services (CMS), January 8, 2004.


Money Supply

What Is the Money Supply?

The U.S. money supply comprises currency—dollar bills and coins issued by the Federal Reserve System and the U.S. Treasury—and various kinds of deposits held by the public at commercial banks and other depository institutions such as thrifts and credit unions. On June 30, 2004, the money supply, measured as the sum of currency and checking account deposits, totaled $1,333 billion. Including some types of savings deposits, the money supply totaled $6,275 billion. An even broader measure totaled $9,275 billion. These measures correspond to three definitions of money that the Federal Reserve uses: M1, a narrow measure of money’s function as a medium of exchange; M2, a broader measure that also reflects money’s function as a store of value; and M3, a still broader measure that covers items that many regard as close substitutes for money.

The definition of money has varied. For centuries, physical commodities, most commonly silver or gold, served as money. Later, when paper money and checkable deposits were introduced, they were convertible into commodity money. The abandonment of convertibility of money into a commodity since August 15, 1971, when President Richard M. Nixon discontinued converting U.S. dollars into gold at $35 per ounce, has made the monies of the United States and other countries into fiat money—money that national monetary authorities have the power to issue without legal constraints.

Why Is the Money Supply Important?

Because money is used in virtually all economic transactions, it has a powerful effect on economic activity. An increase in the supply of money works both through lowering interest rates, which spurs investment, and through putting more money in the hands of consumers, making them feel wealthier, and thus stimulating spending. Business firms respond to increased sales by ordering more raw materials and increasing production. The spread of business activity increases the demand for labor and raises the demand for capital goods. In a buoyant economy, stock market prices rise and firms issue equity and debt. If the money supply continues to expand, prices begin to rise, especially if output growth reaches capacity limits. As the public begins to expect inflation, lenders insist on higher interest rates to offset an expected decline in purchasing power over the life of their loans. Opposite effects occur when the supply of money falls


Monetarism

Monetarism is a macroeconomic school of thought that emphasizes (1) long-run monetary neutrality, (2) short-run monetary nonneutrality, (3) the distinction between real and nominal interest rates, and (4) the role of monetary aggregates in policy analysis. It is particularly associated with the writings of Milton Friedman, Anna Schwartz, Karl Brunner, and Allan Meltzer, with early contributors outside the United States including David Laidler, Michael Parkin, and Alan Walters. Some journalists—especially in the United Kingdom—have used the term to refer to doctrinal support of free-market positions more generally, but that usage is inappropriate; many free-market advocates would not dream of describing themselves as monetarists.

An economy possesses basic long-run monetary neutrality if an exogenous increase of Z percent in its stock of money would ultimately be followed, after all adjustments have taken place, by a Z percent increase in the general price level, with no effects on real variables (e.g., consumption, output, relative prices of individual commodities). While most economists believe that long-run neutrality is a feature of actual market economies, at least approximately, no other group of macroeconomists emphasizes this proposition as strongly as do monetarists. Also, some would object that, in practice, actual central banks almost never conduct policy so as to involve exogenous changes in the money supply. This objection is correct factually but irrelevant: the crucial matter is whether the supply and demand choices of households and businesses reflect concern only for the underlying quantities of goods and services that are consumed and produced. If they do, then the economy will have the property of long-run neutrality, and thus the above-described reaction to a hypothetical change in the money supply would occur.1 Other neutrality concepts, including the natural-rate hypothesis, are mentioned below.

Short-run monetary nonneutrality obtains, in an economy with long-run monetary neutrality, if the price adjustments to a change in money take place only gradually, so that there are temporary effects on real output (GDP) and employment. Most economists consider this property realistic, but an important school of macroeconomists, the so-called real business cycle proponents, denies it.

Continuing with our list, real interest rates are ordinary (“nominal”) interest rates adjusted to take account of expected inflation, as rational, optimizing people would do when they make trade-offs between present and future. As long ago as the very early 1800s, British banker and economist Henry Thornton recognized the distinction between real and nominal interest rates, and American economist Irving Fisher emphasized it in the early 1900s. However, the distinction was often neglected in macroeconomic analysis until monetarists began insisting on its importance during the 1950s. Many Keynesians did not disagree in principle, but in practice their models often did not recognize the distinction and/or they judged the “tightness” of monetary policy by the prevailing level of nominal interest rates.

All monetarists emphasized the undesirability of combating inflation by nonmonetary means, such as wage and price controls or guidelines, because these would create market distortions. They stressed, in other words, that ongoing inflation is fundamentally monetary in nature, a viewpoint foreign to most Keynesians of the time.
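Two of these tenets compress neatly into notation. The display below is a standard textbook restatement, not the monetarists' own formulation: the first line says that scaling the money stock M by a factor λ eventually scales the price level P by the same factor while leaving real variables unchanged; the second is the Fisher relation separating the real rate r from the nominal rate i and expected inflation πᵉ.

$$ M \to \lambda M \;\Longrightarrow\; P \to \lambda P, \qquad \lambda = 1 + \tfrac{Z}{100}, \quad \text{real variables unchanged} $$

$$ 1 + i = (1 + r)(1 + \pi^{e}), \qquad \text{so } r \approx i - \pi^{e} \text{ when rates are small} $$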
Finally, the original monetarists all emphasized the role of monetary aggregates—such as M1, M2, and the monetary base—in monetary policy analysis, but details differed between Friedman and Schwartz, on the one hand, and Brunner and Meltzer, on the other. Friedman’s striking and famous recommendation was that, irrespective of current macroeconomic conditions, the stock of money


Monetary Union

When economists such as Robert Mundell were theorizing about optimal monetary unions in the middle of the twentieth century, most people regarded the exercise as largely hypothetical. But since many European countries established a monetary union at the end of the century, the theory of monetary unions has become much more relevant to many more people.

Definitions and Background

The ability to issue money usable for transactions is a power usually reserved by a country’s central government, and it is often seen as a part of a nation’s sovereignty. A monetary union, also known as a currency union or common currency area, entails multiple countries ceding control over the supply of money to a common authority. Adjusting the money supply is a common tool for managing overall economic activity in a country (see monetary policy), and changes in the money supply also affect the financing of government budgets. So giving up control of a national money supply introduces new limitations on a country’s economic policies.

A monetary union in many ways resembles a fixed-exchange-rate regime, whereby countries retain distinct national currencies but agree to adjust the relative supply of these to maintain a desired rate of exchange. A monetary union is an extreme form of a fixed-exchange-rate regime, with at least two distinctions. First, because the countries switch to a new currency, the cost of abandoning the new system is much higher than for a typical fixed-exchange-rate regime, giving people more confidence that the system will last. Second, a monetary union eliminates the transactions costs people incur when they need to exchange currencies in carrying out international transactions.

Fixed-exchange-rate regimes have been quite common throughout recent history. The United States participated in such a regime from the 1940s until 1973; numerous Europeans participated in one until the creation of the monetary union; and many small or poor countries (Belize, Bhutan, and Botswana, to name just a few) continue to fix their exchange rates to the currencies of major trading partners.

The precedents for monetary unions prior to the current European Monetary Union are rare. From 1865 until World War I, all four members of the Latin Monetary Union—France, Belgium, Italy, and Switzerland—allowed coins to circulate throughout the union. Luxembourg shared a currency with its larger neighbor Belgium from 1922 until the formation of the broader European Monetary Union. In addition, many former colonies such as the franc zone in western Africa or other small poor countries (Ecuador and Panama) adopted the currency of a large, wealthier trading partner. But the formation of the European Monetary Union by a group of large and wealthy countries is an unprecedented experiment in international monetary arrangements.

Optimal Currency Area Theory

Forming a monetary union carries benefits and costs. One benefit is that merchants no longer need worry about unexpected movements in the exchange rate. Suppose a seller of computers in Germany must decide between buying from a supplier in the United States at a price set in dollars and a supplier in France with a price in euros, payment on delivery. Even if the U.S. supplier’s price is lower once it is converted from dollars to euros at the going exchange rate, there is a risk that the dollar’s value will rise before the time of payment, raising the cost of the computers in euros, and hence lowering the merchant’s profits. Even if the merchant expects that the import price probably will be lower, he may decide it is not worth risking a mistake. A monetary union, like any fixed-exchange-rate regime, eliminates this risk.
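A few lines of arithmetic make the computer-import example concrete. The prices, the exchange rate, and the 5 percent appreciation scenario are all illustrative assumptions:

```python
# Exchange-rate risk in the computer-import example.

price_usd = 1_000   # U.S. supplier's price in dollars, payment on delivery
price_eur = 1_020   # French supplier's price in euros
rate = 1.00         # euros per dollar when the order is placed

cost_now = price_usd * rate                    # 1000 euros: ~2% cheaper today
cost_if_dollar_up = price_usd * rate * 1.05    # 1050 euros if the dollar rises 5%

print(cost_now, cost_if_dollar_up, price_eur)
# A deal that looks cheaper at today's rate becomes ~3% dearer than the
# euro-priced offer if the dollar appreciates before payment. Inside a
# monetary union, the euro-priced offer carries no such risk.
```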
One effect is to promote international trade among members of the monetary union. The same argument can be made for international investment. If a European investor is considering buying a computer manufacturing company in the United States, the value of profits converted from dollars to euros is uncertain. On the other hand, exchange-rate fluctuations


Natural Resources

The earth’s natural resources are finite, which means that if we use them continuously, we will eventually exhaust them. This basic observation is undeniable. But another way of looking at the issue is far more relevant to assessing people’s well-being. Our exhaustible and unreproducible natural resources, if measured in terms of their prospective contribution to human welfare, can actually increase year after year, perhaps never coming anywhere near exhaustion. How can this be? The answer lies in the fact that the effective stocks of natural resources are continually expanded by the same technological developments that have fueled the extraordinary growth in living standards since the Industrial Revolution. Innovation has increased the productivity of natural resources (e.g., increasing the gasoline mileage of cars). Innovation also increases the recycling of resources and reduces waste in their extraction and processing. And innovation affects the prospective output of natural resources (e.g., the coal still underneath the ground). If a scientific breakthrough in a given year increases the prospective output of the unused stocks of a resource by an amount greater than the reduction (via resources actually used up) in that year, then, in terms of human economic welfare, the stock of that resource will be larger at the end of the year than at the beginning. Of course, the remaining physical amount of the resource must continually decline,
