
New Classical Macroeconomics

After Keynesian Macroeconomics

The new classical macroeconomics is a school of economic thought that originated in the early 1970s in the work of economists centered at the Universities of Chicago and Minnesota—particularly, Robert Lucas (recipient of the Nobel Prize in 1995), Thomas Sargent, Neil Wallace, and Edward Prescott (corecipient of the Nobel Prize in 2004). The name draws on John Maynard Keynes’s evocative contrast between his own macroeconomics and that of his intellectual forebears. Keynes had knowingly stretched a point by lumping his contemporaries, A. C. Pigou and Alfred Marshall, in with the older classical political economists, such as David Ricardo, and calling them all “classical.”

According to Keynes, the classics saw the price system in a free economy as efficiently guiding the mutual adjustment of supply and demand in all markets, including the labor market. Unemployment could arise only because of a market imperfection—the intervention of the government or the action of labor unions—and could be eliminated through removing the imperfection. In contrast, Keynes shifted the focus of his analysis away from individual markets to the whole economy. He argued that even without market imperfections, aggregate demand (equal, in a closed economy, to consumption plus investment plus government expenditure) might fall short of the economy’s aggregate productive capacity of labor and capital (plant, equipment, raw material, and infrastructure). In such a situation, unemployment is largely involuntary—that is, workers may be unemployed even though they are willing to work at a wage lower than the wage the firms pay their current workers.

Later Keynesian economists achieved a measure of reconciliation with the classics. Paul Samuelson argued for a “neoclassical synthesis” in which classical economics was viewed as governing resource allocation when the economy was kept, through judicious government policy, at full employment. Other Keynesian economists sought to explain consumption, investment, the demand for money, and other key elements of the aggregate Keynesian model in a manner consistent with the assumption that individuals behave optimally. This was the program of “microfoundations for macroeconomics.”

Origins of the New Classical Macroeconomics

Although its name suggests a rejection of Keynesian economics and a revival of classical economics, the new classical macroeconomics began with Lucas’s and Leonard Rapping’s attempt to provide microfoundations for the Keynesian labor market. Lucas and Rapping applied the rule that equilibrium in a market occurs when quantity supplied equals quantity demanded. This turned out to be a radical step. Because involuntary unemployment is exactly the situation in which the amount of labor supplied exceeds the amount demanded, their analysis leaves no room at all for involuntary unemployment.

Keynes’s view was that recessions occur when aggregate demand falls—largely as the result of a fall in private investment—causing firms to produce below their capacity. Producing less, firms need fewer workers, and thus employment falls. Firms, for reasons that Keynesian economists continue to debate, fail to cut wages to as low a level as job seekers will accept, and so involuntary unemployment rises. The new classicals reject this step as irrational. Involuntary unemployment would present firms with an opportunity to raise profits by paying workers a lower wage. If firms failed to take the opportunity, they would not be optimizing. Employed workers should not be able to resist such wage cuts effectively, since the unemployed stand ready to take their places at the lower wage. Keynesian economics would appear, then, to rest either on market imperfections or on irrationality, both of which Keynes denied.

These criticisms of Keynesian economics illustrate the two fundamental tenets of the new classical macroeconomics. First, individuals are viewed as optimizers: given the prices, including wage rates, they face and the assets they hold, including their education and training (or “human capital”), they choose the best options available. Firms maximize profits; people maximize utility. Second, to a first approximation, prices adjust, changing the incentives to individuals, and thereby their choices, to align quantities supplied and demanded.

Business Cycles

Business cycles pose a special challenge for new classical economists: How are large fluctuations in output compatible with the two fundamental tenets of their doctrine?
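The market-clearing rule that Lucas and Rapping applied can be illustrated with a minimal numerical sketch. The linear schedules and all numbers below are hypothetical; involuntary unemployment appears as the excess of labor supplied over labor demanded at a wage held above the market-clearing level.

```python
# Hypothetical linear labor supply and demand schedules (illustrative only).
def labor_demand(w):
    return 100 - 4 * w   # labor firms want to hire at real wage w

def labor_supply(w):
    return 20 + 4 * w    # labor workers offer at real wage w

# Market-clearing wage: quantity supplied equals quantity demanded.
# 100 - 4w = 20 + 4w  =>  w* = 10, employment = 60.
w_star = (100 - 20) / (4 + 4)

# If the wage is held above w*, supply exceeds demand, and the gap is
# involuntary unemployment in the Keynesian sense. In the Lucas-Rapping
# equilibrium approach this gap cannot persist, because the wage adjusts.
w_rigid = 12
excess_supply = labor_supply(w_rigid) - labor_demand(w_rigid)   # 16

print(f"clearing wage: {w_star}, excess labor supply at w=12: {excess_supply}")
```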


Phillips Curve

The Phillips curve represents the relationship between the rate of inflation and the unemployment rate. Although he had precursors, A. W. H. Phillips’s study of wage inflation and unemployment in the United Kingdom from 1861 to 1957 is a milestone in the development of macroeconomics. Phillips found a consistent inverse relationship: when unemployment was high, wages increased slowly; when unemployment was low, wages rose rapidly. Phillips conjectured that the lower the unemployment rate, the tighter the labor market and, therefore, the faster firms must raise wages to attract scarce labor. At higher rates of unemployment, the pressure abated. Phillips’s “curve” represented the average relationship between unemployment and wage behavior over the business cycle. It showed the rate of wage inflation that would result if a particular level of unemployment persisted for some time.

Economists soon estimated Phillips curves for most developed economies. Most related general price inflation, rather than wage inflation, to unemployment. Of course, the prices a company charges are closely connected to the wages it pays. Figure 1 shows a typical Phillips curve fitted to data for the United States from 1961 to 1969. The close fit between the estimated curve and the data encouraged many economists, following the lead of Paul Samuelson and Robert Solow, to treat the Phillips curve as a sort of menu of policy options. For example, with an unemployment rate of 6 percent, the government might stimulate the economy to lower unemployment to 5 percent. Figure 1 indicates that the cost, in terms of higher inflation, would be a little more than half a percentage point. But if the government initially faced lower rates of unemployment, the costs would be considerably higher: a reduction in unemployment from 5 to 4 percent would imply more than twice as big an increase in the rate of inflation—about one and a quarter percentage points.

At the height of the Phillips curve’s popularity as a guide to policy, Edmund Phelps and Milton Friedman independently challenged its theoretical underpinnings. They argued that well-informed, rational employers and workers would pay attention only to real wages—the inflation-adjusted purchasing power of money wages. In their view, real wages would adjust to make the supply of labor equal
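The “menu” arithmetic quoted above can be made concrete with a stylized curve. The functional form and coefficients in this sketch are hypothetical, chosen only so the numbers roughly match those cited from Figure 1; the actual estimated curve is not reproduced here.

```python
# Hypothetical Phillips curve of the form pi = a + b / u**3. The
# coefficients are chosen only so the curve roughly reproduces the
# trade-offs quoted in the text; it is not the curve fitted in Figure 1.
a, b = 0.8, 163.0

def inflation(u):
    """Inflation rate (percent) implied by a sustained unemployment rate u."""
    return a + b / u**3

cost_6_to_5 = inflation(5) - inflation(6)   # about 0.55 points
cost_5_to_4 = inflation(4) - inflation(5)   # about 1.24 points

print(f"6% -> 5% unemployment: +{cost_6_to_5:.2f} points of inflation")
print(f"5% -> 4% unemployment: +{cost_5_to_4:.2f} points of inflation")
```

Note the convexity: the steeper the curve at low unemployment, the more inflation each further point of unemployment reduction costs, which is exactly the pattern described in the text.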


Pharmaceuticals: Economics and Regulation

Pharmaceuticals are unique in their combination of extensive government control and extreme economics, that is, high fixed costs of development and relatively low incremental costs of production.

Regulation

The Food and Drug Administration (FDA) is the U.S. government agency charged with ensuring the safety and efficacy of the medicines available to Americans. The government’s control over medicines has grown in the last hundred years from essentially nothing to far-reaching, and pharmaceuticals are now among the most heavily regulated products in this country. The two legislative acts that are the main source of the FDA’s powers both followed significant tragedies. In 1937, to make a palatable liquid version of its new antibiotic drug sulfanilamide, the Massengill Company carelessly used the solvent diethylene glycol, which is also used as an antifreeze.1 Elixir Sulfanilamide killed 107 people, mostly children, before it was quickly recalled; Massengill was successfully sued, and the chemist responsible committed suicide. This tragedy led to the Food, Drug, and Cosmetic Act of 1938, which required that drugs be proven safe prior to marketing.2 In the next infamous tragedy, more than ten thousand European babies were born deformed after their mothers took thalidomide as a tranquilizer to alleviate morning sickness.3 This led to the Kefauver-Harris Amendments of 1962, which required that efficacy be proven prior to marketing. Note that even though thalidomide’s problem was clearly one of safety, an issue for which the FDA already had regulations, the laws were changed to add proof of efficacy.

Many people are unaware that most of the drugs, foods, herbs, and dietary supplements that Americans consume have been neither assessed nor approved by the FDA. Some are beyond the scope of the FDA’s regulatory authority—if no specific health claims are made—and some are simply approved drugs being used in ways the FDA has not approved. Such “off-label” uses by physicians are widespread and can reach up to 90 percent in some therapeutic areas.4 Although the FDA tolerates off-label usage, it forbids pharmaceutical companies from promoting such applications of their products.

Problems, sometimes serious, can arise even after FDA approval. Baycol (cerivastatin), Seldane (terfenadine), Vioxx (rofecoxib), and “Fen Phen” (fenfluramine and phentermine) are well-known examples of FDA-approved drugs that their manufacturers voluntarily withdrew after the drugs were found to be dangerous to some patients. Xalatan (latanoprost) for glaucoma caused 3–10 percent of users’ blue eyes to turn permanently brown. This amazing side effect was uncovered only after the drug was approved as “safe and effective.” One group of researchers estimated that 106,000 people died in 1994 alone from adverse reactions to drugs the FDA deemed “safe.”5

One problem with the 1962 Kefauver-Harris Amendments was the additional decade of regulatory delay they created for new drugs. For example, one researcher estimated that ten thousand people died unnecessarily each year while beta blockers languished at the FDA, even though they had already been approved in Europe. The FDA has taken a “guilty until proven innocent” approach rather than weighing the costs and benefits of such delays. Just how cautious should the FDA be? Thalidomide and sulfanilamide demonstrate the potential benefit of delays, while a disease such as lung cancer, which kills an American every three minutes, highlights the costs.
In 1973, economist Sam Peltzman examined the pre- and post-1962 market to estimate the effect of the FDA’s new powers and found that the number of new drugs had been reduced by 60 percent. He also found little evidence to suggest a decline in the proportion of inefficacious drugs reaching the market.6 From 1963 through 2003, the number of new drugs approved each year approximately doubled, but pharmaceutical R&D expenditures grew by a factor of twenty.7

One result of the FDA approach is the very high, perhaps excessive, level of evidence required before drugs can be marketed legally. In December 2003, an FDA advisory committee declined to endorse the use of aspirin for preventing initial myocardial infarctions (MIs), or heart attacks.8 Does this mean that aspirin, which is approved for prevention of second heart attacks, does not work to prevent first heart attacks? No. One of the panelists, Dr. Joseph Knapka, stated: “As a scientist, I vote no. As a heart patient, I would probably say yes.” In other words, he had two standards. One standard is scientific proof that aspirin works beyond any reasonable doubt. By this standard, the data on fifty-five thousand patients fall short.9 The other standard is measured by our choices in the real world. By this standard, aspirin passes easily. “The question today isn’t, does aspirin work? We know it works, and we certainly know it works in a net benefit to risk positive sense in the secondary prevention setting,” said panelist Thomas Fleming, chairman and professor of the Department of Biostatistics at the University of Washington, who also voted no.10

When our medical options are left to the scientific experts at a government agency, that agency has a bias toward conservatism. The FDA is acutely aware that of the two ways it can fail, approving a bad drug is significantly worse for its employees than failing to approve a good drug. Approving a bad drug may kill or otherwise harm patients, and an investigation of the approval process will lead to finger-pointing. As former FDA employee Henry Miller put it, “This kind of mistake is highly visible and has immediate consequences—the media pounces, the public denounces, and Congress pronounces.”11 Such an outcome is highly emotional and concrete, while not approving a good drug is intellectual and abstract. Who would have benefited and by how much? Who will know enough to complain that she was victimized by being denied such a medicine?

The FDA’s approach also curtails people’s freedom. The available medicines are what the FDA experts think we should have, not what we think we should have. It is common to picture uneducated patients blindly stumbling about the complexities of medical technology. While this certainly happens, it is mitigated by the expertise of caregivers (such as physicians), advisers (such as medical thought leaders), and watchdogs (such as the media), who together form a surprisingly large support group. Of course, not all patients make competent decisions at all times, but FDA regulation treats all patients as incompetent. A medicine that may work for one person at a certain dose at a certain time for a given disease may not work if any of the variables changes.
Thalidomide, though unsafe for fetuses, is currently being studied for a wide range of important diseases and was even approved by the FDA in 1998, after four decades of being banned, for a painful skin condition associated with leprosy.12 Similarly, finasteride is used in men to shrink enlarged prostate glands and to prevent baldness, but women are forbidden even to work in the finasteride factory because of the risk to fetuses. Also, the FDA pulled Propulsid (cisapride), a heartburn drug, from the market in March 2000 after eighty people who took it died from an irregular heartbeat. But for patients with cerebral palsy, Propulsid is a miracle drug that allows them to digest food without extreme pain.13 What is a poison for one person may be a lifesaver for another.

Economists have long recognized that good decisions cannot be made without considering the affected person’s unique characteristics. But the FDA has little knowledge of a given individual’s tolerance for pain, fear of death, or health status. So the decisions the FDA makes on behalf of individuals are imperfect because the agency lacks fundamental information (see information and prices). Economist Ludwig von Mises made this same argument in its universal form when he identified the Achilles’ heel of socialism: centralized governments are usually incapable of making good decisions for their citizens because they lack most of the relevant information.

Some economists have proposed that the FDA continue to evaluate and approve new drugs, but that the drugs be made available—if the manufacturer wishes—during the approval process.14 The FDA could rate or grade drugs and put stern warnings on unapproved drugs and drugs that appear to be riskier. Economists expect that cautious drug companies and patients would simply wait for FDA approval, while some patients would take their chances. Such a solution is Pareto optimal, in that everyone is at least as satisfied as under the current system: cautious patients get the safety of FDA approval, while patients who do not want to wait don’t have to.

Economics

A study by Joseph DiMasi, an economist at the Tufts Center for the Study of Drug Development in Boston, found that the cost of getting one new drug approved was $802 million in 2000 U.S. dollars.15 Most new drugs cost much less, but this figure adds in each successful drug’s prorated share of failures. Only one out of fifty drugs eventually reaches the market.

Why are drugs so expensive to develop? The main reason for the high cost is the aforementioned high level of proof required by the Food and Drug Administration. Before it will approve a new drug, the FDA requires pharmaceutical companies to test it carefully in animals and then in humans in the standard phases 0, I, II, and III process. The path through the FDA’s review process is slow and expensive. The ten to fifteen years required to get a drug through the testing and approval process leaves little remaining time on a twenty-year patent.

Although new medicines are hugely expensive to bring to market, they are cheap to manufacture. In this sense, they are like DVD movies and computer software. This means that a drug company, to be profitable or simply to break even, must price its drugs well above its production costs. The company that wishes to maximize profits will set high prices for those who are willing to pay a lot and low prices that at least cover production costs for those willing to pay a little.
That is why, for example, Merck priced its anti-AIDS drug, Crixivan, at $600 for poor countries in Africa and Latin America while charging relatively affluent Americans $6,099 for a year’s supply. This type of customer segmentation—similar to that of airlines—is part of the profit-maximizing strategy for medicines. In general, good customer segmentation is difficult to accomplish. Therefore, the most common type of pharmaceutical segmentation is charging a lower price in poorer countries and giving the product free to poor people in the United States through patient assistance programs.

What complicates the picture is socialized medicine, which exists in almost every country outside the United States—and even, with Medicare and Medicaid, in the United States. Because governments in countries with socialized medicine tend to be the sole bargaining agent in dealing with drug companies, these governments often set prices that are low by U.S. standards. To some extent, this comes about because these governments have monopsony power—that is, monopoly power on the buyer’s side—and they use this power to get good deals. These governments are, in effect, saying that if they cannot buy it cheaply, their citizens cannot get it. These low prices also come about because governments sometimes threaten drug companies with compulsory licensing (breaking a patent) to get a low price. This has happened most recently in South Africa and Brazil with AIDS drugs. This violation of intellectual property rights can bring a seemingly powerful drug company into quick compliance. When faced with a choice between earning nothing and earning something, most drug companies choose the latter.

The situation is a prisoners’ dilemma. Everyone’s interest is in giving drug companies an adequate incentive to invest in new drugs. To do so, drug companies must be able to price their drugs well above production costs for a large segment of the population. But each individual government’s narrow self-interest is to set a low price on drugs and let people in other countries pay the high prices that generate the return on R&D investments. Each government, in other words, has an incentive to be a free rider. And that is what many governments are doing. The temptation is to stop making Americans bear more than their share of drug-development costs by having the U.S. government set low prices as well. But if Americans also try to free ride, there may not be a ride.

Governments are not the only bulk purchasers. The majority of pharmaceuticals in the United States are purchased by managed-care organizations (MCOs), hospitals, and governments, which use their market power to negotiate better prices. These organizations often do not take physical possession of the drugs; most pills never pass through the MCO’s hands, but instead go from manufacturer to wholesaler to pharmacy to patient. Therefore, manufacturers rebate money—billions of dollars—to compensate for purchases made at list prices. Managed-care rebates are given with consideration; they are the result of contracts that require performance. For example, a manufacturer will pay an HMO a rebate if the HMO keeps a drug’s prescription market share above the national level. These rebates average 10–40 percent of sales. The net result is that the neediest Americans, frequently those without insurance, pay the highest prices, while the most powerful health plans and government agencies pay the lowest.
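A stylized sketch of this segmentation logic follows. All figures are hypothetical and only echo the orders of magnitude in the text: a sunk development cost in the hundreds of millions, a low marginal cost of production, and two markets with very different willingness to pay.

```python
# Stylized segmentation arithmetic; all numbers are hypothetical and only
# echo the text's orders of magnitude (a sunk development cost near the
# DiMasi estimate, low marginal cost, two markets with very different
# willingness to pay).
RND_COST = 800_000_000   # sunk cost of developing the drug
MARGINAL_COST = 100      # cost to manufacture one year's supply

segments = {
    "affluent market": {"patients": 200_000, "price": 6_000},
    "poor market":     {"patients": 500_000, "price": 600},
}

# Segmented pricing: each market is charged near its willingness to pay.
segmented = sum(s["patients"] * (s["price"] - MARGINAL_COST)
                for s in segments.values()) - RND_COST   # +630 million

# A single uniform price at the poor market's level covers production
# costs but cannot recoup the sunk R&D outlay.
uniform = 700_000 * (600 - MARGINAL_COST) - RND_COST      # -450 million

print(f"segmented profit: {segmented / 1e6:.0f}M, "
      f"uniform low-price profit: {uniform / 1e6:.0f}M")
```

Under these assumed numbers, segmentation turns a loss into a profit: the low price still covers production costs for the poor market, while the high price recoups the fixed R&D outlay.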
Pharmaceutical companies would like to help poor people in the United States, but the federal government and, to a much lesser extent, health plans have tied their hands. Drug companies can and do give drugs away free through patient assistance programs, but they cannot sell them at very low prices, because the federal government requires drug companies to give the huge Medicaid program their “best prices.” If a drug company sells to even one customer at a very low price, it also has to sell at the same price to the 5–40 percent of its customers covered by Medicaid.

Drug prices are regularly attacked as “too high.” Yet cheaper over-the-counter drugs, natural medicines, and generic versions of off-patent drugs are ubiquitous, and many health plans steer patients toward them. Economic studies have shown that even the newer, more expensive drugs are usually worth their price and are frequently cheaper than other alternatives. One study showed that each dollar spent on vaccines reduced other health care costs by $10. Another study showed that for each dollar spent on newer drugs, $6.17 was saved.16 Health plans that aggressively limited their drug spending therefore ended up spending more overall.

Most patients do not pay retail prices because they have some form of insurance. In 2003, before a law was passed that subsidizes drugs for seniors, 75–80 percent of seniors had prescription drug insurance. Insured people pay either a flat copayment, often based on tiers (copayment levels set by managed-care providers that involve a low payment for generic drugs and a higher payment for brand-name drugs), or a percentage of the prescription cost. On average, seniors spend more on entertainment than they do on drugs and medical supplies combined. But for the uninsured who are also poor and sick, drug prices can be a devastating burden. The overlap of the 20–25 percent who lack drug insurance and the 10 percent who pay more than five thousand dollars per year—approximately 2 percent are in both groups—is where we find the stories of people skimping on food to afford their medications. The number of people in both groups is actually lower than 2 percent because of the numerous patient assistance programs offered by pharmaceutical companies.

For all the talk of lower drug prices, what people really want is lower risk through good insurance. Insurance lowers an individual’s risk and, consequently, increases the demand for pharmaceuticals. By spending someone else’s money for a good chunk of every pharmaceutical purchase, individuals become less price sensitive. A two-hundred-dollar prescription for a new medicine is forty times as expensive as a five-dollar generic, but its copay may be only three times the generic’s copay. The marginal cost to patients of choosing the expensive product is reduced, both in absolute and relative terms, and patients are thus more likely to purchase the expensive drug and to make purchases they otherwise would have skipped. The data show that those with insurance consume 40–100 percent more than those without insurance.

Drugs account for a small percentage of overall health-care spending. In fact, branded pharmaceuticals are about 7 percent and generics 3 percent of total U.S. health-care costs.17 The tremendous costs involved with illnesses—even if they are not directly measured—are the economic and human costs of the diseases themselves, not the drugs.
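The copayment arithmetic in the passage above can be made explicit. The retail prices come from the text; the tier copay levels are assumed purely for illustration.

```python
# The copayment arithmetic from the passage above. The retail prices come
# from the text; the tier copay levels are assumed for illustration.
brand_price, generic_price = 200, 5   # retail prices
brand_copay, generic_copay = 30, 10   # hypothetical tiered copays

retail_ratio = brand_price / generic_price   # 40x at retail
copay_ratio = brand_copay / generic_copay    # only 3x to the patient

# The insured patient bears $20 of the $195 true price difference, so
# insurance blunts the incentive to choose the cheaper drug.
patient_share = (brand_copay - generic_copay) / (brand_price - generic_price)
print(f"{retail_ratio:.0f}x at retail, {copay_ratio:.0f}x in copays; "
      f"patient bears {patient_share:.0%} of the price difference")
```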
About the Author

Charles L. Hooper is president of Objective Insights, a company that consults for pharmaceutical and biotech companies. He is a visiting fellow with the Hoover Institution.

Further Reading

Bast, Joseph L., Richard C. Rue, and Stuart A. Wesbury Jr. Why We Spend Too Much on Health Care and What We Can Do About It. Chicago: Heartland Institute, 1993.
DiMasi, Joseph A., Ronald W. Hansen, and Henry G. Grabowski. “The Price of Innovation: New Estimates of Drug Development Costs.” Journal of Health Economics 22, no. 2 (2003): 151–185.
Higgs, Robert, ed. Hazardous to Our Health? FDA Regulation of Health Care Products. Oakland, Calif.: Independent Institute, 1995.
Hilts, Philip J. Protecting America’s Health: The FDA, Business, and One Hundred Years of Regulation. New York: Alfred A. Knopf, 2003.
Klein, Daniel B., and Alexander Tabarrok. FDAReview.org. Oakland, Calif.: Independent Institute. Online at: http://www.fdareview.org/.
Miller, Henry I. To America’s Health: A Proposal to Reform the Food and Drug Administration. Stanford, Calif.: Hoover Institution Press, 2000.

Footnotes

1. Philip J. Hilts, Protecting America’s Health: The FDA, Business, and One Hundred Years of Regulation (New York: Alfred A. Knopf, 2003), pp. 89–90.
2. Daniel B. Klein and Alexander Tabarrok, FDAReview.org, Independent Institute, online under “History” at: http://www.FDAReview.org/history.shtml#fifth.
3. “THALOMID (Thalidomide): Balancing the Benefits and the Risks,” Celgene Corporation, p. 2, online at: www.sanmateo.org/rimm/Tali_benefits_risks_celgene.pdf.
4. Alexander Tabarrok, “The Anomaly of Off-Label Drug Prescriptions,” Independent Institute Working Paper no. 10, December 1999.
5. Jason Lazarou et al., “Incidence of Adverse Drug Reactions in Hospitalized Patients,” Journal of the American Medical Association 279, no. 15 (1998): 1200–1205.
6. Sam Peltzman, “An Evaluation of Consumer Protection Legislation: The 1962 Drug Amendments,” Journal of Political Economy 81, no. 5 (1973): 1049–1091.
7. Parexel’s Pharmaceutical R&D Statistical Sourcebook 2004–2005 (Waltham, Mass.: Parexel International Corporation, 2004), p. 9.
8. “Broader Use for Aspirin Fails to Win Backing,” Wall Street Journal, December 9, 2003, p. D9.
9. The 55,000 is the total number of patients tested in five published clinical trials of the use of aspirin to prevent initial nonfatal myocardial infarction.
10. Food and Drug Administration, Center for Drug Evaluation and Research, Cardiovascular and Renal Drugs Advisory Committee meeting, Monday, December 8, 2003, Gaithersburg, Md.
11. Henry I. Miller, M.D., To America’s Health: A Proposal to Reform the Food and Drug Administration (Stanford, Calif.: Hoover Institution Press, 2000), p. 42.
12. “FDA Gives Restricted Approval to Thalidomide,” CNN News, July 16, 1998.
13. “Drug Ban Brings Misery to Patient,” Associated Press, November 11, 2000.
14. Klein and Tabarrok, FDAReview.org, online under “Reform Options” at: http://www.fdareview.org/reform.shtml#5; David R. Henderson, The Joy of Freedom: An Economist’s Odyssey (New York: Prentice Hall, 2002), pp. 206–207, 278–279.
15. Joseph A. DiMasi, Ronald W. Hansen, and Henry G. Grabowski, “The Price of Innovation: New Estimates of Drug Development Costs,” Journal of Health Economics 22 (2003): 151–185.
16. Frank R. Lichtenberg, “Benefits and Costs of Newer Drugs: An Update,” NBER Working Paper no. 8996, National Bureau of Economic Research, Cambridge, Mass., 2002.
17. The Centers for Medicare and Medicaid Services (CMS), January 8, 2004.


Money Supply

What Is the Money Supply?

The U.S. money supply comprises currency—dollar bills and coins issued by the Federal Reserve System and the U.S. Treasury—and various kinds of deposits held by the public at commercial banks and other depository institutions such as thrifts and credit unions. On June 30, 2004, the money supply, measured as the sum of currency and checking account deposits, totaled $1,333 billion. Including some types of savings deposits, the money supply totaled $6,275 billion. An even broader measure totaled $9,275 billion. These measures correspond to three definitions of money that the Federal Reserve uses: M1, a narrow measure of money’s function as a medium of exchange; M2, a broader measure that also reflects money’s function as a store of value; and M3, a still broader measure that covers items that many regard as close substitutes for money.

The definition of money has varied. For centuries, physical commodities, most commonly silver or gold, served as money. Later, when paper money and checkable deposits were introduced, they were convertible into commodity money. The abandonment of convertibility of money into a commodity since August 15, 1971, when President Richard M. Nixon discontinued converting U.S. dollars into gold at $35 per ounce, has made the monies of the United States and other countries into fiat money—money that national monetary authorities have the power to issue without legal constraints.

Why Is the Money Supply Important?

Because money is used in virtually all economic transactions, it has a powerful effect on economic activity. An increase in the supply of money works both through lowering interest rates, which spurs investment, and through putting more money in the hands of consumers, making them feel wealthier, and thus stimulating spending. Business firms respond to increased sales by ordering more raw materials and increasing production. The spread of business activity increases the demand for labor and raises the demand for capital goods. In a buoyant economy, stock market prices rise and firms issue equity and debt. If the money supply continues to expand, prices begin to rise, especially if output growth reaches capacity limits. As the public begins to expect inflation, lenders insist on higher interest rates to offset an expected decline in purchasing power over the life of their loans. Opposite effects occur when the supply of money falls
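A compact way to see the nesting of the three aggregates quoted above, using the June 30, 2004, figures from the text; the component labels are simplified.

```python
# The three nested aggregates as quoted above for June 30, 2004, in
# billions of dollars. Component labels are simplified; each broader
# measure contains the narrower one plus additional items.
m1 = 1_333   # currency plus checking account deposits (medium of exchange)
m2 = 6_275   # M1 plus some types of savings deposits (store of value)
m3 = 9_275   # M2 plus items widely regarded as close substitutes for money

savings_component = m2 - m1      # deposits added by M2: 4,942
near_money_component = m3 - m2   # close substitutes added by M3: 3,000
print(f"M2 adds {savings_component:,}; M3 adds {near_money_component:,}")
```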


Monetarism

Monetarism is a macroeconomic school of thought that emphasizes (1) long-run monetary neutrality, (2) short-run monetary nonneutrality, (3) the distinction between real and nominal interest rates, and (4) the role of monetary aggregates in policy analysis. It is particularly associated with the writings of Milton Friedman, Anna Schwartz, Karl Brunner, and Allan Meltzer, with early contributors outside the United States including David Laidler, Michael Parkin, and Alan Walters. Some journalists—especially in the United Kingdom—have used the term to refer to doctrinal support of free-market positions more generally, but that usage is inappropriate; many free-market advocates would not dream of describing themselves as monetarists.

An economy possesses basic long-run monetary neutrality if an exogenous increase of Z percent in its stock of money would ultimately be followed, after all adjustments have taken place, by a Z percent increase in the general price level, with no effects on real variables (e.g., consumption, output, relative prices of individual commodities). While most economists believe that long-run neutrality is a feature of actual market economies, at least approximately, no other group of macroeconomists emphasizes this proposition as strongly as do monetarists. Also, some would object that, in practice, actual central banks almost never conduct policy so as to involve exogenous changes in the money supply. This objection is correct factually but irrelevant: the crucial matter is whether the supply and demand choices of households and businesses reflect concern only for the underlying quantities of goods and services that are consumed and produced. If they do, then the economy will have the property of long-run neutrality, and thus the above-described reaction to a hypothetical change in the money supply would occur.1 Other neutrality concepts, including the natural-rate hypothesis, are mentioned below.

Short-run monetary nonneutrality obtains, in an economy with long-run monetary neutrality, if the price adjustments to a change in money take place only gradually, so that there are temporary effects on real output (GDP) and employment. Most economists consider this property realistic, but an important school of macroeconomists, the so-called real business cycle proponents, denies it.

Continuing with our list, real interest rates are ordinary (“nominal”) interest rates adjusted to take account of expected inflation, as rational, optimizing people would do when they make trade-offs between present and future. As long ago as the very early 1800s, British banker and economist Henry Thornton recognized the distinction between real and nominal interest rates, and American economist Irving Fisher emphasized it in the early 1900s. However, the distinction was often neglected in macroeconomic analysis until monetarists began insisting on its importance during the 1950s. Many Keynesians did not disagree in principle, but in practice their models often did not recognize the distinction and/or they judged the “tightness” of monetary policy by the prevailing level of nominal interest rates.

All monetarists emphasized the undesirability of combating inflation by nonmonetary means, such as wage and price controls or guidelines, because these would create market distortions. They stressed, in other words, that ongoing inflation is fundamentally monetary in nature, a viewpoint foreign to most Keynesians of the time.
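The real-versus-nominal distinction discussed above is the Fisher relation, sketched minimally below; the rates used are hypothetical.

```python
# A minimal sketch of the Fisher relation linking nominal and real rates.
# The exact form is r = (1 + i) / (1 + pi_e) - 1; for low rates it is
# commonly approximated by r = i - pi_e. The numbers are hypothetical.
i = 0.08      # nominal interest rate
pi_e = 0.05   # expected inflation rate

r_exact = (1 + i) / (1 + pi_e) - 1   # ~0.0286
r_approx = i - pi_e                  # 0.03

# Judging policy "tightness" by the 8% nominal rate alone, as many early
# Keynesian models did, overstates the real cost of borrowing (~3%).
print(f"exact real rate: {r_exact:.4f}, approximate: {r_approx:.4f}")
```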
Finally, the original monetarists all emphasized the role of monetary aggregates—such as M1, M2, and the monetary base—in monetary policy analysis, but details differed between Friedman and Schwartz, on the one hand, and Brunner and Meltzer, on the other. Friedman’s striking and famous recommendation was that, irrespective of current macroeconomic conditions, the stock of money


Monetary Union

When economists such as Robert Mundell were theorizing about optimal monetary unions in the middle of the twentieth century, most people regarded the exercise as largely hypothetical. But since many European countries established a monetary union at the end of the century, the theory of monetary unions has become much more relevant to many more people.

Definitions and Background

The ability to issue money usable for transactions is a power usually reserved by a country’s central government, and it is often seen as a part of a nation’s sovereignty. A monetary union, also known as a currency union or common currency area, entails multiple countries ceding control over the supply of money to a common authority. Adjusting the money supply is a common tool for managing overall economic activity in a country (see monetary policy), and changes in the money supply also affect the financing of government budgets. So giving up control of a national money supply introduces new limitations on a country’s economic policies.

A monetary union in many ways resembles a fixed-exchange-rate regime, whereby countries retain distinct national currencies but agree to adjust the relative supply of these currencies to maintain a desired rate of exchange. A monetary union is an extreme form of a fixed-exchange-rate regime, with at least two distinctions. First, because the countries switch to a new currency, the cost of abandoning the new system is much higher than for a typical fixed-exchange-rate regime, giving people more confidence that the system will last. Second, a monetary union eliminates the transactions costs people incur when they need to exchange currencies in carrying out international transactions.

Fixed-exchange-rate regimes have been quite common throughout recent history. The United States participated in such a regime from the 1940s until 1973; numerous European countries participated in one until the creation of the monetary union; and many small or poor countries (Belize, Bhutan, and Botswana, to name just a few) continue to fix their exchange rates to the currencies of major trading partners.

Precedents for monetary unions prior to the current European Monetary Union are rare. From 1865 until World War I, all four members of the Latin Monetary Union—France, Belgium, Italy, and Switzerland—allowed coins to circulate throughout the union. Luxembourg shared a currency with its larger neighbor Belgium from 1922 until the formation of the broader European Monetary Union. In addition, many former colonies, such as those of the franc zone in western Africa, and other small or poor countries (Ecuador and Panama) adopted the currency of a larger, wealthier trading partner. But the formation of the European Monetary Union by a group of large and wealthy countries is an unprecedented experiment in international monetary arrangements.

Optimal Currency Area Theory

Forming a monetary union carries benefits and costs. One benefit is that merchants no longer need worry about unexpected movements in the exchange rate. Suppose a seller of computers in Germany must decide between buying from a supplier in the United States at a price set in dollars and a supplier in France with a price in euros, payment on delivery. Even if the U.S. supplier’s price is lower once it is converted from dollars to euros at the going exchange rate, there is a risk that the dollar’s value will rise before the time of payment, raising the cost of the computers in euros, and hence lowering the merchant’s profits.
Even if the merchant expects that the import price probably will be lower, he may decide it is not worth risking a mistake. A monetary union, like any fixed-exchange-rate regime, eliminates this risk. One effect is to promote international trade among members of the monetary union. The same argument can be made for international investment. If a European investor is considering buying a computer manufacturing company in the United States, the value of profits converted from dollars to euros is uncertain. On the other hand, exchange-rate fluctuations
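The merchant's problem above can be put in numbers. Everything in this sketch is hypothetical: the dollar price, the euro price, and the two exchange-rate scenarios with their assumed probabilities.

```python
# A numerical version of the merchant's choice above. All figures are
# hypothetical: the dollar price, the euro price, and the two
# exchange-rate scenarios with assumed probabilities.
price_usd = 100_000          # U.S. supplier's price, payable on delivery
price_eur_domestic = 92_000  # French supplier's certain euro price

# Possible euro-per-dollar rates at payment time, with probabilities.
scenarios = [(0.88, 0.7), (0.98, 0.3)]

expected_cost_eur = sum(rate * price_usd * p for rate, p in scenarios)
# = 0.7 * 88,000 + 0.3 * 98,000 = 91,000 euros

# The import is probably cheaper (91,000 expected vs. 92,000 certain),
# but there is a 30% chance of paying 98,000. A risk-averse merchant may
# still buy domestically; a monetary union removes this risk entirely.
print(f"expected import cost: {expected_cost_eur:,.0f} EUR "
      f"vs. {price_eur_domestic:,} EUR certain")
```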


Natural Resources

The earth’s natural resources are finite, which means that if we use them continuously, we will eventually exhaust them. This basic observation is undeniable. But another way of looking at the issue is far more relevant to assessing people’s well-being. Our exhaustible and unreproducible natural resources, if measured in terms of their prospective contribution to human welfare, can actually increase year after year, perhaps never coming anywhere near exhaustion. How can this be? The answer lies in the fact that the effective stocks of natural resources are continually expanded by the same technological developments that have fueled the extraordinary growth in living standards since the Industrial Revolution. Innovation has increased the productivity of natural resources (e.g., increasing the gasoline mileage of cars). Innovation also increases the recycling of resources and reduces waste in their extraction and processing. And innovation affects the prospective output of natural resources (e.g., the coal still underneath the ground). If a scientific breakthrough in a given year increases the prospective output of the unused stocks of a resource by an amount greater than the reduction (via resources actually used up) in that year, then, in terms of human economic welfare, the stock of that resource will be larger at the end of the year than at the beginning. Of course, the remaining physical amount of the resource must continually decline,
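The bookkeeping behind this claim can be sketched with hypothetical numbers: if innovation raises the welfare-relevant yield of the remaining stock faster than consumption draws the stock down, the effective stock grows.

```python
# Hypothetical bookkeeping for the claim above: the welfare-relevant
# ("effective") stock rises whenever innovation adds more prospective
# output than the year's consumption removes.
physical_stock = 1_000.0   # physical units of the resource remaining
efficiency = 1.0           # useful output obtainable per physical unit
consumption = 30.0         # physical units used up each year

for year in range(1, 6):
    physical_stock -= consumption   # the physical stock always declines
    efficiency *= 1.05              # assumed 5% annual gain from innovation
    effective_stock = physical_stock * efficiency
    print(f"year {year}: physical {physical_stock:.0f}, "
          f"effective {effective_stock:.0f}")

# With a 5% annual efficiency gain against roughly 3% annual depletion,
# the effective stock grows even as the physical stock shrinks.
```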


Monopoly

A monopoly is an enterprise that is the only seller of a good or service. In the absence of government intervention, a monopoly is free to set any price it chooses and will usually set the price that yields the largest possible profit. Just being a monopoly need not make an enterprise more profitable than other enterprises that face competition: the market may be so small that it barely supports one enterprise. But if the monopoly is in fact more profitable than competitive enterprises, economists expect that other entrepreneurs will enter the business to capture some of the higher returns. If enough rivals enter, their competition will drive prices down and eliminate monopoly power.

Before and during the period of classical economics (roughly 1776–1850), most people believed that this process of monopolies being eroded by new competitors was pervasive. The only monopolies that could persist, they thought, were those that got the government to exclude rivals. This belief was well expressed in an excellent article on monopoly in the Penny Cyclopedia (1839, vol. 15, p. 741):

It seems then that the word monopoly was never used in English law, except when there was a royal grant authorizing some one or more persons only to deal in or sell a certain commodity or article. If a number of individuals were to unite for the purpose of producing any particular article or commodity, and if they should succeed in selling such article very extensively, and almost solely, such individuals in popular language would be said to have a monopoly. Now, as these individuals have no advantage


Natural Gas: Markets and Regulation

Natural gas is the commercial name for methane, a hydrocarbon produced by the same geological processes that produce oil. Natural gas is relatively abundant in North America, and its production and combustion have fewer adverse environmental effects than those of coal or oil. The 23.1 trillion cubic feet (TCF) of gas that Americans consumed in 2002 accounted for 30.3 percent of all their energy use (measured in British thermal units), up from 21.5 percent in 1952.1 Households consumed 23.3 percent of delivered gas, electric utilities used 27.0 percent as generator fuel, and the remainder went to commercial and industrial users. In 2002, 3.8 TCF were imported from Canada and a negligible amount was exported.2 The U.S. output was produced in 383,000 wells owned by hundreds of producers and was transported through 285,000 miles of interstate pipelines.3

Before high-pressure pipelines were developed in the 1920s, gas was either consumed in the vicinity of its production or flared off as hazardous. Today, producers and marketers use interstate pipelines for deliveries to distributors and large consumers. The Federal Energy Regulatory Commission (FERC) determines cost-based pipeline rates, but pipelines are free to discount these (which they often do) in order to attract business. The rates of most local distribution companies (LDCs) that deliver and sell gas to final users are under state regulation, and the remainder are operated by municipal governments. Thus, gas is a vertically unintegrated industry in which dependable product flows require coordination among producers, pipelines, and LDCs. Since the 1970s, the industry has relied more heavily on coordination by market forces and less heavily on regulation, although the latter still plays a large role. Somewhat unusually, regulators themselves took major initiatives to bring competition to the industry, rather than protecting the status quo or imposing heavier regulations. The industry’s evolution is a case study in the replacement of inefficient economic institutions by efficient ones and the replacement of localized markets by national and global ones.

1938–1985: Pervasive Regulation and Shortages

The Natural Gas Act of 1938 instituted pipeline regulation by the Federal Power Commission, which was reconstituted as FERC in 1978. The government justified regulation by asserting that pipelines were “natural monopolies” with scale economies so pervasive that a single line (or a handful to guarantee reliability) was the most economical link between producing and consuming areas. At the same time, state-regulated LDCs were (and continue to be) monopoly franchises with cost-based rates and the ability to pass on gas costs dollar for dollar to end users. Until the mid-1980s, pipelines purchased gas from producers and resold it, with no markup, to LDCs.

In 1954, the Supreme Court ruled that federal regulation extended to the wellhead prices received by producers. Prices were to be determined using recorded costs. Regulators set the allowable costs of replacing exhausted wells at low levels that seriously discouraged exploration for new gas. Because oil prices remained unregulated through the 1960s (most gas is found in association with oil), gas shortages became serious only when new price controls on oil helped bring about the “energy crisis” of 1973–1975. Administrations of both political parties were unable or unwilling to acknowledge that the controls restricted the amount supplied and increased the amount demanded.
Instead, they instituted direct controls on gas use, such as prohibiting construction of new gas-burning power plants, in the mistaken belief that falling reserves indicated the exhaustion of supply. In reality, reserves were falling because allowable prices were too low to make exploration profitable. Prior to 1978, intrastate markets were exempt from federal price controls, and they experienced no shortages.

1985–2000: A National Gas Market Emerges

A complex series of events in the early 1980s led FERC to lift all price controls in 1985, a promarket policy that the Supreme Court subsequently ratified. The decontrol followed on 1984’s Order 436, which effectively ended the earlier role of pipelines as purchasers and resellers of gas to LDCs. Order 436 (followed by Order 636 in 1992) turned pipelines into “open access” transporters for gas owned by producers, LDCs, and others. FERC still set maximum


National Income Accounts

National income accounts (NIAs) are fundamental aggregate statistics in macroeconomic analysis. The ground-breaking development of national income and systems of NIAs was one of the most far-reaching innovations in applied
