Intellectual Property

Intellectual property is normally defined as the set of products protected under laws associated with copyright, patent, trademark, industrial design, and trade secrets. The U.S. Constitution expressly allows for intellectual property protection, albeit for a limited time, in the form of protection of “writings and discoveries” in order to promote “science and useful arts.” This article focuses on the two most important categories: copyright and patent law. Copyright, which covers the expression of ideas (e.g., through words or music), currently lasts for the rest of the author’s life plus seventy years (or ninety-five years after publication if the product is a “work-for-hire”). But the protection is very narrow. If someone else should, by a remarkable coincidence, write exactly the same song or story as you without ever coming into contact with your work, your prior copyright does not prevent him from selling his work. Copyright currently exists on a work without any effort on the part of the author to attain copyright and without any requirement of quality or originality. Patents, in contrast, last for twenty years and apply to inventions. The protection, although shorter, is broader than that of copyright. If someone else independently creates a duplicate of your invention after you have patented yours, your patent can make his invention worthless since he will not have the legal right to sell his version. This may be true even if his invention is slightly different from yours. For this reason, being the first to patent a valuable idea is very important, and “patent races,” as competitors vie to be first, can be a wasteful use of resources. Unlike copyright, getting the legal patent from the patent office requires spending resources, and before a patent is granted, the ideas that are to be patented must pass several legal hurdles regarding their originality and quality. 
Although expression and invention must be transformed into physical embodiments before they can have market value, they can also exist, and indeed must originally exist, in the creator’s mind. As such, traditional laws of property, which require physicality, do not apply. Traditional laws of economics, such as the assumption of scarcity, also seem not to apply because individual expressions and ideas cannot be used up. Economists have a term for goods that cannot be used up—“nonrivalrous consumption” (sometimes known as


Interest Rates

The rate of interest measures the percentage reward a lender receives for deferring the consumption of resources until a future date. Correspondingly, it measures the price a borrower pays to have resources now. Suppose I have $100 today that I am willing to lend for one year at an annual interest rate of 5 percent. At the end of the year, I get back my $100 plus $5 interest (0.05 × 100), for a total of $105. The general relationship is: Money Today × (1 + interest rate) = Money Next Year. We can also ask a different question: What is the most I would pay today to get $105 next year? If the rate of interest is 5 percent, the most I would pay is $100. I would not pay $101, because if I had $101 and invested it at 5 percent, I would have $106.05 next year. Thus, we say that the value of money in the future should be discounted, and $100 is the “discounted present value” of $105 next year. The general relationship is: Money Today = Money Next Year ÷ (1 + interest rate). The higher the interest rate, the more valuable is money today and the lower is the present value of money in the future. Now, suppose I am willing to lend my money out for a second year. I lend out $105, the amount I have next year, at 5 percent and have $110.25 at the end of year two. Note that I have earned an extra $5.25 in the second year because the interest that I earned in year one also earns interest in year two. This is what we mean by the term “compound interest”—the interest that money earns also earns interest. Albert Einstein is reported to have said that compound interest is the greatest force in the world. Money left in interest-bearing investments can compound to extremely large sums. A simple rule, the rule of 72, tells how long it takes your money to double if it is invested at compound interest. The number 72 divided by the interest rate gives the approximate number of years it will take to double your money.
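The relationships above can be expressed as a short sketch (the function names are illustrative, not from any standard library):

```python
def future_value(money_today, rate, years=1):
    """Money Today × (1 + rate)^years, i.e., compound interest."""
    return money_today * (1 + rate) ** years

def present_value(money_future, rate, years=1):
    """Discounted present value: Money Next Year ÷ (1 + rate)^years."""
    return money_future / (1 + rate) ** years

def rule_of_72_years(rate_percent):
    """Approximate number of years for money to double at compound interest."""
    return 72 / rate_percent

print(round(future_value(100, 0.05), 2))     # 105.0 after one year
print(round(future_value(100, 0.05, 2), 2))  # 110.25 after two years: interest earns interest
print(round(present_value(105, 0.05), 2))    # 100.0, the present value of $105 next year
print(round(rule_of_72_years(5), 1))         # 14.4 years to double at 5 percent
```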
For example, at a 5 percent interest rate, it takes about fourteen years to double your money (72 ÷ 5 = 14.4), while at an interest rate of 10 percent, it takes about seven years. There is a wonderful actual example of the power of compound interest. Upon his death in 1791, Benjamin Franklin left $5,000 to each of his favorite cities, Boston and Philadelphia. He stipulated that the money should be invested and not paid out for one hundred to two hundred years. At one hundred years, each city could withdraw $500,000; after two hundred years, they could withdraw the remainder. They did withdraw $500,000 in 1891; they invested the remainder and, in 1991, each city received approximately $20,000,000. What determines the magnitude of the interest rate in an economy? Let us consider five of the most important factors. 1. The strength of the economy and the willingness to save. Interest rates are determined in a free market where supply and demand interact. The supply of funds is influenced by the willingness of consumers, businesses, and governments to save. The demand for funds reflects the desires of businesses, households, and governments to spend more than they take in as revenues. Usually, in very strong economic expansions, businesses’ desire to invest in plants and equipment and individuals’ desire to invest in housing tend to drive interest rates up. During periods of weak economic conditions, business and housing investment falls and interest rates tend to decline. Such declines are often reinforced by the policies of the country’s central bank (the Federal Reserve in the United States), which attempts to reduce interest rates in order to stimulate housing and other interest-sensitive investments. 2. The rate of inflation. People’s willingness to lend money depends partly on the inflation rate. If prices are expected to be stable, I may be happy to lend money for a year at 4 percent because I expect to have 4 percent more purchasing power at the end of the year. 
But suppose the inflation rate is expected to be 10 percent. Then, all other things being equal, I will insist on a 14 percent rate of interest, ten percentage points of which compensate me for the inflation.1 Economist Irving Fisher pointed out this fact almost a century ago, distinguishing clearly between the real rate of interest (4 percent in the above example) and the nominal rate of interest (14 percent in the above example), which equals the real rate plus the expected inflation rate. 3. The riskiness of the borrower. I am willing to lend money to my government or to my local bank (whose deposits are generally guaranteed by the government) at a lower rate than I would lend to my wastrel nephew or to my cousin’s risky new venture. The greater the risk that my loan will not be paid back in full, the larger is the interest rate I will demand to compensate me for that risk. Thus, there is a risk structure to interest rates. The greater the risk that the borrower will not repay in full, the greater is the rate of interest. 4. The tax treatment of the interest. In most cases, the interest I receive from lending money is fully taxable. In certain cases, however, the interest is tax free. If I lend to my local or state government, the interest on my loan is free of both federal and state taxes. Hence, I am willing to accept a lower rate of interest on loans that have favorable tax treatment. 5. The time period of the loan. In general, lenders demand a higher rate of interest for loans of longer maturity. The interest rate on a ten-year loan is usually higher than that on a one-year loan, and the rate I can get on a three-year bank certificate of deposit is generally higher than the rate on a six-month certificate of deposit. But this relationship does not always hold; to understand the reasons, it is necessary to understand the basics of bond investing. Most long-term loans are made via bond instruments.
A bond is simply a long-term IOU issued by a government, a corporation, or some other entity. When you invest in a bond, you are lending money to the issuer. The interest payments on the bond are often referred to as “coupon” payments because up through the 1950s, most bond investors actually clipped interest coupons from the bonds and presented them to their banks for payment. (By 1980 bonds with actual coupons had virtually disappeared.) The coupon payment is fixed for the life of the bond. Thus, if a one-thousand-dollar twenty-year bond has a fifty-dollar-per-year interest (coupon) payment, that payment never changes. But, as indicated above, interest rates do change from year to year in response to changes in economic conditions, inflation, monetary policy, and so on. The price of the bond is simply the discounted present value of the fixed interest payments and of the face value of the loan payable at maturity. Now, if interest rates rise (the discount factor is higher), then the present value, or price, of the bond will fall. This leads to three basic facts facing the bond investor: 1. If interest rates rise, bond prices fall. 2. If interest rates fall, bond prices rise. 3. The longer the period to maturity of the bond, the greater is the potential fluctuation in price when interest rates change. If you hold a bond to maturity, you need not worry if the price bounces around in the interim. But if you have to sell prior to maturity, you may receive less than you paid for the bond. The longer the maturity of the bond, the greater is the risk of loss because long-term bond prices are more volatile than shorter-term issues. To compensate for that risk of price fluctuation, longer-term bonds usually have higher interest rates than shorter-term issues. This tendency of long rates to exceed short rates is called the risk-premium theory of the yield structure. 
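A bond's price as the discounted present value of its fixed payments can be sketched as follows, using the article's one-thousand-dollar, twenty-year, fifty-dollar-coupon example:

```python
def bond_price(face, coupon, rate, years):
    """Discounted present value of the coupon stream plus the face value at maturity."""
    pv_coupons = sum(coupon / (1 + rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + rate) ** years
    return pv_coupons + pv_face

# At a 5 percent market rate the bond sells at its face value:
print(round(bond_price(1000, 50, 0.05, 20), 2))  # 1000.0

# If rates rise to 7 percent, the price falls; the longer bond falls further:
print(round(bond_price(1000, 50, 0.07, 20), 2))  # well below par
print(round(bond_price(1000, 50, 0.07, 5), 2))   # a five-year bond loses much less
```

The comparison in the last two lines illustrates the third basic fact above: the longer the maturity, the larger the price swing when rates change.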
This relationship between interest rates for loans or bonds and various terms to maturity is often depicted in a graph showing interest rates on the vertical axis and term to maturity on the horizontal. The general shape of that graph is called the shape of the yield curve, and typically the curve is rising. In other words, the longer term the bond, the greater is the interest rate. This typical shape reflects the risk premium for holding longer-term debt. Long-term rates are not always higher than short-term rates, however. Expectations also influence the shape of the yield curve. Suppose, for example, that the economy has been booming and the central bank, in response, chooses a restrictive monetary policy that drives up interest rates. To implement such a policy, central banks sell short-term bonds, pushing their prices down and interest rates up. Interest rates, short term and long term, tend to rise together. But if bond investors believe such a restrictive policy is likely to be temporary, they may expect interest rates to fall in the future. In such an event, bond prices can be expected to rise, giving bondholders a capital gain. Thus long-term bonds may be particularly attractive during periods of unusually high short-term interest rates, and in bidding for these long-term bonds, investors drive their prices up and their yields down. The result is a flattening, and sometimes even an inversion, in the yield curve. Indeed, there were periods during the 1980s when short-term U.S. Treasury securities yielded 10 percent or more and long-term interest rates (yields) were well below shorter-term rates. Expectations can also influence the yield curve in the opposite direction, making it steeper than is typical. This can happen when interest rates are unusually low, as they were in the United States in the early 2000s. In such a case, investors will expect interest rates to rise in the future, causing large capital losses to holders of long-term bonds.
This would cause investors to sell long-term bonds until the prices came down enough to give them higher yields, thus compensating them for the expected capital loss. The result is long-term rates that exceed short-term rates by more than the “normal” amount. In sum, the term structure of interest rates—or, equivalently, the shape of the yield curve—is likely to be influenced both by investors’ risk preferences and by their expectations of future interest rates.

About the Author

Burton G. Malkiel, the Chemical Bank Chairman’s Professor of Economics at Princeton University, is the author of the widely read investment book A Random Walk down Wall Street. He was previously dean of the Yale School of Management and William S. Beinecke Professor of Management Studies there. He is also a past member of the Council of Economic Advisers and a past president of the American Finance Association.

Further Reading

Fabozzi, Frank J. Bond Markets, Analysis and Strategies. 4th ed. New York: Prentice Hall, 2000.
Fisher, Irving. The Theory of Interest. 1930. Reprint. Brookfield, Vt.: Pickering and Chatto, 1997. Available online at: http://www.econlib.org/library/YPDBooks/Fisher/fshToI.html
Patinkin, Don. “Interest.” In International Encyclopedia of the Social Sciences. Vol. 7. New York: Macmillan, 1968.

Footnotes

1. Actually, I will insist on 14.4 percent: 10 percent to compensate me for the inflation-caused loss of principal and 0.4 percent to compensate me for the inflation-caused loss of real interest. The general relationship is given by the mathematical formula: 1 + i = (1 + r) × (1 + p), where i is the nominal interest rate (the one we observe), r is the real interest rate (the one that would exist if inflation were expected to be zero), and p is the expected inflation rate.
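The footnote's formula can be checked numerically with the article's 4 percent real rate and 10 percent expected inflation:

```python
def nominal_rate(real_rate, expected_inflation):
    """Fisher relationship: 1 + i = (1 + r) × (1 + p), solved for i."""
    return (1 + real_rate) * (1 + expected_inflation) - 1

print(round(nominal_rate(0.04, 0.10), 4))  # 0.144, i.e., 14.4 percent

# The same 14.4 percent decomposes into the real rate, the expected inflation
# rate, and the small cross term r × p:
print(round(0.04 + 0.10 + 0.04 * 0.10, 4))  # 0.144
```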


Insurance

Insurance plays a central role in the functioning of modern economies. Life insurance offers protection against the economic impact of an untimely death; health insurance covers the sometimes extraordinary costs of medical care; and bank deposits are insured by the federal government (see financial regulation). In each case, the insured pays a small premium in order to receive benefits should an unlikely but high-cost event occur. Insurance issues, traditionally a stodgy domain, have become subjects for intense debate and concern in recent years. How to provide health insurance for the significant portion of Americans not now covered is a central political issue. Some states, attempting to hold back the tide of higher costs, have placed severe limits on auto insurance rates and have even sought refunds from insurers. And ways to cover losses from terrorism have become a major issue. Temporarily, in response to the massive losses of 9/11, the federal government adopted a heavily subsidized three-year program for reinsuring terror-related building losses. (The program was extended.) In theory, the government can recoup some losses after the fact by levying a surcharge on the premiums of surviving firms.

The Basics

An understanding of insurance must begin with the concept of risk—that is, the variation in possible outcomes of a situation. A’s shipment of goods to Europe might arrive safely or be lost in transit. B may incur zero medical expenses in a good year, but if she is struck by a car they could be upward of $100,000. We cannot eliminate risk from life, even at extraordinary expense. Paying extra for double-hulled tankers still leaves oil spills possible. The only way to eliminate auto-related injuries is to eliminate automobiles. Thus, the effective response to risk combines two elements: efforts or expenditures to lessen the risk, and the purchase of insurance against whatever risk remains. Consider A’s shipment of, say, $1 million in goods.
If the chance of loss on each trip is 3 percent, the loss will be $30,000 (3 percent of $1 million), on average. Let us assume that A can ship by a more costly method and cut the risk by one percentage point, thus saving $10,000, on average. If the additional cost of this shipping method is less than $10,000, it is a worthwhile expenditure. But if cutting risk by a further percentage point will cost $15,000, it sacrifices resources. To deal with the remaining 2 percent risk of losing $1 million, A should think about insurance. To cover administrative costs, the insurer might charge $25,000 for a risk that will incur average losses of no more than $20,000. From A’s standpoint, however, the insurance may be worthwhile because it is a comparatively inexpensive way to deal with the potential loss of $1 million. Note the important economic role of such insurance: without it, A might not be willing to risk shipping goods in the first place. In exchange for a premium, the insurer will pay a claim should a specified contingency—such as death, medical bills, or, in this instance, shipment loss—arise. The insurer—whether a corporation with diversified ownership or a mutual company made up of the insureds themselves—is able to offer such protection against financial loss by pooling the risks from a large group of similarly situated individuals or firms. The laws of probability ensure that only a tiny fraction of these insured shipments will be lost, or only a small fraction of the insured population will face expensive hospitalization in a year. If, for example, each of 100,000 individuals independently faces a 1 percent risk in a year, on average, 1,000 will have losses. If each of the 100,000 people paid a premium of $1,000, the insurance company would have collected a total of $100 million. Leaving aside administrative costs, this is enough to pay $100,000 to anyone who had a loss. But what would happen if 1,100 people had losses? 
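A sketch of this pooling arithmetic, including the chance of the 1,100-loss outcome just asked about (computed from the binomial distribution; `binom_tail` is an illustrative helper, not a standard-library function):

```python
import math

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p), accumulated in log space for stability."""
    log_p, log_q = math.log(p), math.log1p(-p)
    total = 0.0
    for x in range(k, n + 1):
        log_pmf = (math.lgamma(n + 1) - math.lgamma(x + 1) - math.lgamma(n - x + 1)
                   + x * log_p + (n - x) * log_q)
        term = math.exp(log_pmf)
        total += term
        if x > n * p and term < 1e-18:  # the remaining tail is negligible
            break
    return total

n, p = 100_000, 0.01            # 100,000 insureds, each with an independent 1% risk
premium, payout = 1_000, 100_000

print(n * premium)              # 100000000: $100 million of premiums collected
print(round(n * p) * payout)    # 100000000: $100 million expected claims (1,000 losses)
print(binom_tail(n, p, 1100))   # chance of 1,100 or more losses: roughly one in a thousand
```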
The answer, fortunately, is that such an outcome is exceptionally unlikely. Insurance works through the magic of the law of large numbers. This law assures that when a large number of people face a low-probability event, the proportion experiencing the event will be close to the expected proportion. For instance, with a pool of 100,000 people who each face a 1 percent risk, the law of large numbers says that 1,100 people or more will have losses only one time in one thousand. In many cases, however, the risks to different individuals are not independent. In a hurricane, airplane crash, or epidemic, many may suffer at the same time. Insurance companies spread such risks not only across individuals, but also across good years and bad, building up reserves in the good years to deal with heavier claims in bad ones. For further protection, they also diversify across lines, selling both health and homeowners’ insurance, for example. The risks normally insured are unintentional, either due to the actions of nature or the inadvertent consequences of human activity. Terrorism creates a new model for insurance for three reasons: (1) The losses are man-made and intentional. (2) Massive numbers of people and structures could be harmed. (Theft losses fall in the first category, but not in the second.) (3) Historical experience does not provide a yardstick for assessing likely risk levels. Nuclear war presented equivalent challenges in the twentieth century. Had there been a significant nuclear war, insurance companies simply would not have paid. The losses would have been too massive to pay out of assets, and many of the assets underlying the insurance would have been destroyed. In time, appropriate insurance arrangements for this new category of massive risk will be developed.

The Identity and Behavior of the Insured

An economist views insurance as being like most other commodities. It obeys the laws of supply and demand, for example.
However, it is unlike many other commodities in one important respect: the cost of providing insurance depends on the identity of the purchaser. A year of health insurance for an eighty-year-old costs more to provide than one for a fifty-year-old. It costs more to provide auto insurance to teenagers than to middle-aged people. If a company mistakenly sells health policies to old folks at a price appropriate for young folks, it will assuredly lose money, just as a restaurant will lose if it sells twenty-dollar steak dinners for ten dollars. The restaurant would lure lots of steak eaters. So, too, would the insurance company attract large numbers of older clients. Because of the differential cost of providing coverage, and because customers search for their lowest price, insurance companies go to great pains to set different premiums for different groups, depending on the risks each will impose. Recognizing that the identity of the purchaser affects the cost of insurance, insurers must be careful to whom they offer insurance at a particular price. Those high-risk individuals whose knowledge of their risk is better than that of the insurers will step forth to purchase, knowing that they are getting a good deal. This is a process called adverse selection, which means that the mix of purchasers will be adverse to the insurer. What leads to this adverse selection is asymmetric information: potential purchasers have more information than the sellers. The potential purchasers have “hidden” information that relates to their particular risk, and those whose information is unfavorable are thus most likely to purchase. For example, if an insurer determined that 1 percent of fifty-year-olds would die in a year, it might establish a premium of $12 per $1,000 of coverage—$10 to cover claims and $2 to cover administrative costs. The insurer might naively expect to break even. 
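The naive pricing just described can be sketched per $1,000 of coverage (the 18-deaths-per-thousand pool is an assumed figure, chosen to match the $20 cost discussed next):

```python
def premium(deaths_per_1000, admin=2.0):
    """Break-even price per $1,000 of coverage: expected claims plus administration."""
    return deaths_per_1000 + admin

print(premium(10))       # 12.0: $10 expected claims + $2 admin at a 1 percent death rate
print(premium(18))       # 20.0: break-even price if adverse selection yields a riskier pool
print(premium(10) - 20)  # -8.0: loss per $1,000 sold at the naive $12 price
```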
However, insureds who ate poorly or who engaged in high-risk professions or whose parents had died young might have an annual risk of mortality of 3 percent. They would be most likely to purchase insurance. Health fanatics, by contrast, might forgo life insurance because for them it is a bad deal. Through adverse selection, the insurer could end up with a group whose expected costs were, say, $20 per $1,000 rather than the $10 per $1,000 for the population as a whole; at a $12 price, the insurer would lose money. The traditional approach to the adverse selection problem is to inspect each potential insured. Individuals taking out substantial life insurance must submit to a medical exam. Fire insurance might be granted only after a check of the alarm and sprinkler systems. But no matter how careful the inspection, some information will remain hidden, and a disproportionately high number of those choosing to insure will be high risk. Therefore, insurers routinely set high rates to cope with adverse selection. Alas, such high rates discourage ordinary-risk buyers from buying insurance. Though this problem of adverse selection is best known in insurance, it applies broadly across economics. Thus, a company that “insures” its salesmen by offering a relatively high salary compared with commission will end up with many salesmen who are not confident of their abilities. Colleges that insure their students by offering many pass-fail courses can expect weaker students to enroll.

Moral Hazard or Hidden Action

Once insured, an individual has less incentive to avoid the risk of a bad outcome. A person with automobile collision insurance, for example, is more likely to venture forth on an icy night. Federal pension insurance induces companies to underfund (see pensions) and weakens the incentives for their employees to complain. Federally subsidized flood insurance encourages citizens to build homes on floodplains.
Insurers use the term “moral hazard” to describe this phenomenon. It means, simply, that insured people undertake actions they would otherwise avoid. Stated in less judgmental language, people respond to incentives. In the above salesman example, not only are low-quality salesmen enticed to join, but all salesmen, even those of high quality, are given an incentive to be less productive. Ideally, the insurer would like to be able to monitor the insured’s behavior and take appropriate action. Flood insurance might not be sold to new residents of a floodplain. Collision insurance might not pay off if it can be proven that the policyholder had been drinking or had otherwise engaged in reckless behavior. But given the difficulty of monitoring many actions, insurers accept that once policies are issued, behavior will change adversely, and more claims will be made. The moral hazard problem is often encountered in areas that, at first glance, do not seem associated with traditional insurance. Products covered under optional warranties tend to get abused, as do autos that are leased with service contracts.

Equity Issues

The same insurance policy will have different costs for serving individuals whose behavior or underlying characteristics may differ. Because these cost differences influence pricing, some people see an equity dimension to insurance. Some think, for example, that urban drivers should not pay much more than rural drivers to protect themselves from auto liability, even though urban driving is riskier. But if prices are not allowed to vary in relation to risk, insurers will seek to avoid various classes of customers altogether, and availability will be restricted. When sellers of health insurance are not allowed to find out if potential clients are HIV-positive, for example, insurance companies often respond by refusing to insure, say, never-married men over age forty. Equity issues in insurance are addressed in a variety of ways in the real world.
Most employers cross-subsidize health insurance, providing the same coverage at the same price to older, higher-risk workers and younger, lower-risk ones. Sometimes the government provides the “insurance” itself, although the federal government’s Medicare and Social Security programs are really a combined tax and subsidy scheme—one that gives a bigger benefit to those who live longer. The government’s decision not to tax employer-provided health insurance as income acts like a subsidy. In pursuit of equity, governments may set insurance rates, as many states do with auto insurance. The traditional public-interest argument for government rate regulation is that it serves to control a monopoly. But this argument fails with auto insurance: in most regulated insurance markets, there are dozens of competing insurers. Insurance rates are regulated to help some groups—usually those imposing high risks—at the expense of others. The Massachusetts auto insurance market provides an example. High-cost drivers are subsidized at the expense of all other drivers. Thus, inexperienced, occasional drivers in Massachusetts paid, on average, $1,967 for insurance in 2004 compared with $1,114 for experienced drivers. In contrast, in neighboring Connecticut, where such cross-subsidies were not imposed, the respective rates were $3,518 and $845. Such practices raise a new class of equity issues. Should the government force people who live quiet, low-risk lives to subsidize the high-risk fringe? Most people’s response to this question depends on whether they think people can control risks. Because most of us think we should not encourage people to engage in behavior that is costly to the system, we conclude, for example, that nonsmokers should not have to pay for smokers. The question becomes more complex when it comes to health care premiums for, say, gay men or recovering alcoholics, whose health care costs are likely to be greater than average.
Moral judgments inevitably creep into such discussions. And sometimes the facts lead to disquieting considerations. Smokers, for example, tend to die early, reducing expected costs for Social Security. Should they, therefore, pay lower Social Security taxes? Black men have shorter lives than white men. Should black men pay lower Social Security taxes?

Government’s Role in Insurance

Government plays four major roles with insurance: (1) Government writes it directly, as with Social Security, terrorism reinsurance, and pension guarantees—via the Pension Benefit Guaranty Corporation (PBGC)—should a corporation fail. (2) Government subsidizes insurance: quite explicitly in some programs, such as federal flood insurance, but only de facto in other cases (e.g., the PBGC has a large projected deficit). (3) Government mandates a residual market for high risks (e.g., Florida’s program for hurricanes or many states’ programs for high-risk drivers). Governments hold down prices in such markets either by creating a state fund to cover losses or by requiring insurers who participate in the voluntary market to pick up a certain portion of this high-risk market. (4) Government regulates matters such as premiums, insurance company solvency (to make sure that insureds get paid), and permissible criteria for pricing insurance (e.g., for auto insurance, race and ethnicity are banned everywhere; Michigan bans geographic designations smaller than a city). Property liability insurance is regulated at the state level, providing many opportunities to compare the efficacy of alternative approaches. The three main regulatory approaches to pricing have been: (1) prior approval (regulators must approve rates before they go into effect); (2) use and file (companies set rates, but regulators can disallow them subsequently if they are found excessive); and (3) open competition (a market-based system in which rates are deemed not excessive as long as there is competition).
Empirical studies conflict as to whether regulation leads to lower prices. Government participates far more in insurance markets than in typical markets. The two great dangers with government participation in insurance arise when, as is common, the goals for participation remain vague (e.g., promoting the insured activity, redistributing income, or spreading risk effectively), or when its expected cost is not recognized in budgets. With insurance, as with all government endeavors, the citizenry deserves to know both the rationale and the cost.

Conclusion

The traditional role of insurance remains the essential one recognized centuries ago: that of spreading risk among similarly situated individuals. Insurance works most effectively when losses are not under the control of individuals (thus avoiding moral hazard) and when the losses are readily determined (lest significant transactions costs associated with lawsuits become a burden). Individuals and firms insure against their greatest risks—high health costs, the inability to pay depositors—which often are politically salient issues as well. Not surprisingly, government participation—as a setter of rates, as a subsidizer, and as a direct provider of insurance services—has become a major feature in insurance markets. Its highly subsidized terrorism reinsurance provides a dramatic example. Political forces may sometimes triumph over sound insurance principles, but such victories are Pyrrhic. In a sound market, we must recognize that with insurance, as with bread and steel, the cost of providing it must be paid.

About the Author

Richard Zeckhauser is the Frank P. Ramsey Professor of Political Economy at Harvard University’s John F. Kennedy School of Government. He writes frequently on risk-related issues. Practicing what he preaches, in 2003 and 2004 he came in second and third in two different U.S. national bridge championships.


International Capital Flows

International capital flows are the financial side of international trade.1 When someone imports a good or service, the buyer (the importer) gives the seller (the exporter) a monetary payment, just as in domestic transactions. If total exports were equal to total imports, these monetary transactions would balance at net zero: people in the country would receive as much in financial flows as they paid out in financial flows. But generally the trade balance is not zero. The most general description of a country’s balance of trade, covering its trade in goods and services, income receipts, and transfers, is called its current account balance. If the country has a surplus or deficit on its current account, there is an offsetting net financial flow consisting of currency, securities, or other real property ownership claims. This net financial flow is called its capital account balance. When a country’s imports exceed its exports, it has a current account deficit. Its foreign trading partners who hold net monetary claims can continue to hold their claims as monetary deposits or currency, or they can use the money to buy other financial assets, real property, or equities (stocks) in the trade-deficit country. Net capital flows comprise the sum of these monetary, financial, real property, and equity claims. Capital flows move in the opposite direction to the goods and services trade claims that give rise to them. Thus, a country with a current account deficit


Inflation

Economists use the term “inflation” to denote an ongoing rise in the general level of prices quoted in units of money. The magnitude of inflation—the inflation rate—is usually reported as the annualized percentage growth of some broad index of money prices. With U.S. dollar prices rising, a one-dollar bill buys less each year. Inflation thus means an ongoing fall in the overall purchasing power of the monetary unit. Inflation rates vary from year to year and from currency to currency. Since 1950, the U.S. dollar inflation rate, as measured by the December-to-December change in the U.S. Consumer Price Index (CPI), has ranged from a low of −0.7 percent (1954) to a high of 13.3 percent (1979). Since 1991, the rate has stayed between 1.6 percent and 3.3 percent per year. Since 1950 at least eighteen countries have experienced episodes of hyperinflation, in which the CPI inflation rate has soared above 50 percent per month. In recent years, Japan has experienced negative inflation, or “deflation,” of around 1 percent per year, as measured by the Japanese CPI. Central banks in most countries today profess concern with keeping inflation low but positive. Some specify a target range for the inflation rate, typically 1–3 percent. Although economies on silver and gold standards sometimes experienced inflation, inflation rates in such economies seldom exceeded 2 percent per year, and the overall experience over the centuries was inflation of close to zero. Economies on paper-money standards, which all economies have today, have displayed much more inflation. As Peter Bernholz (2003, p. 1) points out, “the worst excesses of inflation occurred only in the 20th century” in countries where metallic standards were no longer in force. In 1971 the U.S. government cut the U.S. dollar’s last link to gold, ending its commitment to redeem dollars for gold at a fixed rate for foreign central banks. 
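The arithmetic behind these figures is simple percentage growth. Below is a minimal Python sketch (the function names are illustrative, not drawn from any source cited here) that computes an inflation rate from two price-index readings and compounds a constant monthly rate into an annual one, as with the 50-percent-per-month hyperinflation threshold:

```python
def inflation_rate(cpi_start, cpi_end):
    """Percentage change in a price index between two dates."""
    return (cpi_end / cpi_start - 1) * 100

def annualized_from_monthly(monthly_pct):
    """Compound a constant monthly inflation rate over twelve months."""
    return ((1 + monthly_pct / 100) ** 12 - 1) * 100

# A CPI that rises from 100 to 103.3 over a year implies 3.3% inflation.
print(inflation_rate(100, 103.3))

# At the 50%-per-month hyperinflation threshold, prices rise more than
# 100-fold over a year: an annualized rate above 12,000 percent.
print(annualized_from_monthly(50))
```

Compounding is why hyperinflations are so destructive: 50 percent per month is not 600 percent per year but nearly 13,000 percent.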
Even among countries that have avoided hyperinflation, inflation rates have generally been higher in the period after 1971. But inflation rates in most countries have been lower since 1985 than they were in 1971–1985.

Measuring Inflation

In the United States, the inflation rate is most commonly measured by the percentage rise in the Consumer Price Index, which is reported monthly by the Bureau of Labor Statistics (BLS). A CPI of 120 in the current period means that it now takes $120 to purchase a representative basket


Industrial Revolution and the Standard of Living

Between 1760 and 1860, technological progress, education, and an increasing capital stock transformed England into the workshop of the world. The industrial revolution, as the transformation came to be known, caused a sustained rise in real income per person in England and, as its effects spread, in the rest of the Western world. Historians agree that the industrial revolution was one of the most important events in history, marking the rapid transition to the modern age, but they disagree vehemently about many aspects of the event. Of all the disagreements, the oldest one is over how the industrial revolution affected ordinary people, often called the working classes. One group, the pessimists, argues that the living standards of ordinary people fell, while another group, the optimists, believes that living standards rose. At one time, behind the debate was an ideological argument between the critics (especially Marxists) and the defenders of free markets. The critics, or pessimists, saw nineteenth-century England as Charles Dickens’s Coketown or poet William Blake’s “dark, satanic mills,” with capitalists squeezing more surplus value out of the working class with each passing year. The defenders, or optimists, saw nineteenth-century England as the birthplace of a consumer revolution that made more and more consumer goods available to ordinary people with each passing year. The ideological underpinnings of the debate eventually faded, probably because, as T. S. Ashton pointed out in 1948, the industrial revolution meant the difference between the grinding poverty that had characterized most of human history and the affluence of the modern industrialized nations. No economist today seriously disputes the fact that the industrial revolution began the transformation that has led to extraordinarily high (compared with the rest of human history) living standards for ordinary people throughout the market industrial economies. 
The standard-of-living debate today is not about whether the industrial revolution made people better off, but about when. The pessimists claim no marked improvement in standards of living until the 1840s or 1850s. Most optimists, by contrast, believe that living standards were rising by the 1810s or 1820s, or even earlier. The most influential recent contribution to the optimist position (and the center of much of the subsequent standard-of-living debate) is a 1983 paper by Peter Lindert and Jeffrey Williamson that produced new estimates of real wages in England for the years 1755 to 1851. These estimates are based on money wages for workers in several broad categories, including both blue-collar and white-collar occupations. The authors’ cost-of-living index attempted to represent actual working-class budgets. Lindert’s and Williamson’s analyses produced two striking results. First, they showed that real wages grew slowly between 1781 and 1819. Second, after 1819, real wages grew rapidly for all groups of workers. For all blue-collar workers—a good stand-in for the working classes—the Lindert-Williamson index number for real wages rose from 50 in 1819 to 100 in 1851. That is, real wages doubled in just thirty-two years. Other economists challenged Lindert’s and Williamson’s optimistic findings. Charles Feinstein produced an alternative series of real wages based on a different price index. In the Feinstein series, real wages rose much more slowly than in the Lindert-Williamson series. Other researchers have speculated that the largely unmeasured effects of environmental decay more than offset any gains in well-being attributable to rising wages. Wages were higher in English cities than in the countryside, but rents


Industrial Concentration

“Industrial concentration” refers to a structural characteristic of the business sector. It is the degree to which production in an industry—or in the economy as a whole—is dominated by a few large firms. Once assumed to be a symptom of “market failure,” concentration is, for the most part, seen nowadays as an indicator of superior economic performance. In the early 1970s, Yale Brozen, a key contributor to the new thinking, called the profession’s about-face on this issue “a revolution in economics.” Industrial concentration remains a matter of public policy concern even so.

The Measurement of Industrial Concentration

Industrial concentration was traditionally summarized by the concentration ratio, which simply adds the market shares of an industry’s four, eight, twenty, or fifty largest companies. In 1982, when new federal merger guidelines were issued, the Herfindahl-Hirschman Index (HHI) became the standard measure of industrial concentration. Suppose that an industry contains ten firms that individually account for 25, 15, 12, 10, 10, 8, 7, 5, 5, and 3 percent of total sales. The four-firm concentration ratio for this industry—the most widely used number—is 25 + 15 + 12 + 10 = 62, meaning that the top four firms account for 62 percent of the industry’s sales. The HHI, by contrast, is calculated by summing the squared market shares of all of the firms in the industry: 25² + 15² + 12² + 10² + 10² + 8² + 7² + 5² + 5² + 3² = 1,366. The HHI has two distinct advantages over the concentration ratio. It uses information about the relative sizes of all of an industry’s members, not just some arbitrary subset of the leading companies, and it weights the market shares of the largest enterprises more heavily. In general, the fewer the firms and the more unequal the distribution of market shares among them, the larger the HHI.
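Both measures are easy to compute directly. A short Python sketch (variable names are illustrative) reproducing the ten-firm example above:

```python
# Market shares, in percent, of the ten firms in the example industry.
shares = [25, 15, 12, 10, 10, 8, 7, 5, 5, 3]

# Four-firm concentration ratio: sum of the four largest shares.
cr4 = sum(sorted(shares, reverse=True)[:4])

# Herfindahl-Hirschman Index: sum of squared shares of all firms.
hhi = sum(s ** 2 for s in shares)

print(cr4)  # 62
print(hhi)  # 1366
```

Note how squaring weights the large firms: the 25 percent firm alone contributes 625 points, nearly half the industry total.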
Two four-firm industries, one containing equal-sized firms each accounting for 25 percent of total sales, the other with market shares of 97, 1, 1, and 1, have the same four-firm concentration ratio (100) but very different HHIs (2,500 versus 9,412). An industry controlled by a single firm has an HHI of 100² = 10,000, while the HHI for an industry populated by a very large number of very small firms would approach the index’s theoretical minimum value of zero.

Concentration in the U.S. Economy

According to the U.S. Department of Justice’s merger guidelines, an industry is considered “concentrated” if the HHI exceeds 1,800; it is “unconcentrated” if the HHI is below 1,000. Since 1982, HHIs based on the value of shipments of the fifty largest companies have been calculated and reported in the manufacturing series of the Economic Census.1 Concentration levels exceeding 1,800 are rare. The exceptions include glass containers (HHI = 2,959.9 in 1997), motor vehicles (2,505.8), and breakfast cereals (2,445.9). Cigarette manufacturing also is highly concentrated, but its HHI is not reported owing to the small number of firms in that industry, the largest four of which accounted for 89 percent of shipments in 1997. At the other extreme, the HHI for machine shops was 1.9 the same year. Whether an industry is concentrated hinges on how narrowly or broadly it is defined, both in terms of the product it produces and the extent of the geographic area it serves. The U.S. footwear manufacturing industry as a whole is very unconcentrated (HHI = 317 in 1997); the level of concentration among house slipper manufacturers is considerably higher, though (HHI = 2,053.4). Similarly, although


Information

Since about 1970, an important strand of economic research, sometimes referred to as information economics, has explored the extent to which markets and other institutions process and convey information. Many of the problems of markets and other institutions result from costly information, and many of their features are responses to costly information. Many of the central theories and principles in economics are based on assumptions about perfect information. Among these, three stand out: efficiency, full employment of resources, and uniform prices.

Efficiency

At least since Adam Smith, most economists have believed that competitive markets are efficient, and that firms, in pursuing their own interests, enhance the public good “as if by an invisible hand.” A major achievement of economic science during the first half of the twentieth century was finding the precise sense in which that result is true. This result, known as the Fundamental Theorem of Welfare Economics, provides a rigorous analytic basis for the presumption that competitive markets allocate resources efficiently. In the 1980s economists made clear the hidden information assumptions underlying that theorem. They showed that in a wide variety of situations where information is costly (indeed, almost always), government interventions could make everyone better off if government officials had the right incentives. At the very least these results have undermined the long-standing presumption that markets are necessarily efficient.

Full Employment of Resources

A central result (or assumption) of standard economic theory is that resources are fully employed. The economy has a variety of mechanisms (savings and inventories provide buffers; price adjustments act as shock absorbers) that are supposed to dampen the effects of any shocks the economy experiences. In fact, for the past two hundred years economies have experienced large fluctuations, and there has been massive unemployment in the slumps.
Though the Great Depression of the 1930s was the most recent prolonged and massive episode, the American economy suffered major recessions from 1979 to 1982, and many European economies experienced prolonged high unemployment rates during the 1980s. Information economics has explained why unemployment may persist and why fluctuations are so large. The failure of wages to fall so that unemployed workers can find jobs has been explained by efficiency wage theories, which argue that the productivity of workers increases with higher wages (both because employees work harder and because employers can recruit a higher-quality labor force). If information about their workers’ output were costless, employers would not pay such high wages because they could costlessly monitor output and pay accordingly. But because monitoring is costly, employers pay higher wages to give workers an incentive not to shirk. While efficiency wage theory helps explain why unemployment may persist, other theories that focus on the implications of imperfect information in the capital markets can help explain economic volatility. One strand of this theory focuses on the fact that many of the market’s mechanisms for distributing risk, which are critical to an economy’s ability to adjust to economic shocks, are imperfect because of costly information. Most notable in this respect is the failure of equity markets. In recent years less than 10 percent of new capital has been raised via equity markets. Information economics explains why. First, issuers of equity generally know more about the value of the shares than buyers do, and are more inclined to sell when they think buyers are overvaluing their shares. But most potential buyers know that this incentive exists and, therefore, are wary of buying. Second, shareholders have only limited control over managers. Information about what management is doing, or should be doing, to maximize shareholder value is costly.
Thus, shareholders often limit the amount of “free cash” managers have to play with by imposing sufficient debt burdens to put managers’ “backs to the wall.” Managers must then exert strong efforts to meet those debt obligations and lenders will carefully scrutinize firms’ behavior. The fact that firms cannot (or choose not to) raise capital via equity markets means that if firms wish to invest more than their cash flow allows—or if they wish to produce more than they can finance out of their current working capital—they must turn to credit markets, and to banks in particular. From the firm’s perspective, borrowing has one major disadvantage: it imposes a fixed obligation on the firm. If it fails to meet that obligation, the firm can go bankrupt. (By contrast, an all-equity firm cannot go bankrupt.) Firms normally take actions to reduce the likelihood of bankruptcy by acting in a risk-averse manner. Risk-averse behavior, in turn, has two important consequences. First, it means that a firm’s behavior is affected by its net-worth position. When its financial position is adversely affected, it cuts back on all its activities (since there is some risk associated with virtually all activities);


Information and Prices

Modern economists excel at identifying theoretical reasons why markets might fail. While these theories may temper uncritical views of the market, it is important to note that markets do, in fact, work incredibly well. Indeed, markets work so thoroughly and quietly that their success too often goes unnoticed. Consider that the number of different ways to arrange, even in a single dimension, a mere twenty items is far greater than the number of seconds in ten billion years. Now consider that the world contains trillions of different resources: my labor, iron ore, Hong Kong harbor, the stage at the Met, countless stands of pine trees, fertile Russian plains, orbiting satellites, automobile factories—the list is endless. The number of different ways to use, combine, and recombine these resources is unimaginably colossal. And almost all of these ways are useless. It would be a mistake, for example, to combine Arnold Schwarzenegger with medical equipment and have him perform brain surgery. Likewise, it would be a genuine shame to use the fruit of Chateau Petrus’s vines to make grape juice. Only a tiny fraction of all the possible ways to allocate resources is useful. How can we discover these ways? Random chance clearly will not work. Nor will central planning—which is really just a camouflaged method of relying on random chance. It is impossible for a central planning body even to survey the full set of possible resource arrangements, much less to rank these according to how well each will serve human purposes. That citizens of modern market societies eat and bathe regularly; wear clean clothes; drive automobiles; fly to Rome, Italy, or Branson, Missouri, for holidays; and chat routinely on cell phones is powerful evidence that our economy is amazingly well arranged. 
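The claim about twenty items is easily checked. A quick Python sketch comparing the number of orderings of twenty items in a row with the number of seconds in ten billion years (using a 365.25-day year):

```python
import math

# Number of distinct orderings of 20 items arranged in a single line.
arrangements = math.factorial(20)

# Seconds in ten billion years, at 365.25 days per year.
seconds = 10**10 * 365.25 * 24 * 60 * 60

print(arrangements)            # about 2.4 x 10^18
print(arrangements > seconds)  # True: several times as many
```

Twenty items already yield roughly 2.4 quintillion orderings, against roughly 3.2 x 10^17 seconds in ten billion years; with trillions of resources instead of twenty items, exhaustive search is hopeless.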
An effective means must be at work to ensure that some of the relatively very few patterns of resource use that are beneficial are actually used (rather than any of the 99.9999999+ percent of resource-use patterns that would be either useless or calamitous). The decentralized price system is that means. Critical to its functioning is the institution of private property with its associated duties and rights, including the duty to avoid physically harming and taking other people’s property, and the right to exchange property and its fruits at terms agreed on voluntarily. Each person seeks to use every parcel of his property in ways that yield him maximum benefit, either by consuming it most effectively according to his own subjective judgment or by employing it most effectively (“profitably”) in production. Market prices are vital to making such decisions.

Vital Role of Prices

Market prices are vital because they condense, in as objective a form as possible, information on the value of alternative uses of each parcel of property. Nearly every parcel of property has alternative uses. For example, a plot of land can be used to site a pumpkin patch, a restaurant, a suite of physicians’ offices, or any of many other things. If this plot of land is to be used beneficially rather than wastefully, those responsible for deciding how it will be used must be able to determine the likely worth of each possible alternative. Making such determinations requires reliable information. And market prices are a marvelously compact and reliable source of such information. Offers on the land from potential buyers or renters combine with the current owner’s assessment of the value of the land to him to create a price for the land. Each potential user values the land by at least as much as he is willing to bid. The more intense the bidding, the more likely that each bid will reflect the maximum value each bidder places on the land.
Of course, the market prices of goods or services that can be produced with the land are an especially important source of information exploited by potential users of the land to determine how much each will bid. If the land’s current owner cannot use it in a way that promises him as much value as he can get by selling it, he will sell to the buyer offering the highest price. If a commercial developer purchases the land as a site for doctors’ offices, it is because this buyer observed that the rents for office space currently paid by physicians are sufficiently high to justify his purchase of the land, construction of the buildings, and purchase and assembly of all other inputs necessary to create a suite of medical offices.


Innovation

“Innovation”: creativity; novelty; the process of devising a new idea or thing, or improving an existing idea or thing. Although the word carries a positive connotation in American culture, innovation, like all human activities, has costs as well as benefits. These costs and benefits have preoccupied economists, political philosophers, and artists for centuries.

Nature and Effects

Innovation can turn new concepts into realities, creating wealth and power. For example, someone who discovers a cure for a disease has the power to withhold it, give it away, or sell it to others.1 Innovations can also disrupt the status quo, as when the invention of the automobile eliminated the need for horse-powered transportation. Joseph Schumpeter coined the term “creative destruction” to describe the process by which innovation causes a free market economy to evolve.2 Creative destruction occurs when innovations make long-standing arrangements obsolete, freeing resources to be employed elsewhere, leading to greater economic efficiency. For example, when a business manager installs a new machine that replaces manual laborers, the laborers who lose their jobs are now free to put their labor into another enterprise, resulting in more productivity. In fact, in many cases, the number of jobs available will actually increase because the machinery is introduced. Henry Hazlitt provides the example of cotton-spinning machinery introduced in England in the 1760s.3 At the time, the English textile industry employed some 7,900 people, and many workers protested the introduction of machinery out of fear for their livelihoods. But in 1787 there were 320,000 workers in the English textile industry. Although the introduction of machinery caused temporary discomfort to some workers, the machinery increased the aggregate wealth of society by decreasing the cost of production. Amazingly, concerns over technology and job loss in the textile industry continue today.
One report notes that the introduction of new machinery in American textile mills between 1972 and 1992 coincided with a greater than 30 percent decrease in the number of textile jobs. However, that decrease was offset by the creation of new jobs. The authors conclude that “there is substantial entry into the industries, job creation rates are high, and productivity dynamics suggest surviving plants have emerged all the stronger while it has been the less productive plants that have exited.”4 According to Schumpeter, the process of technological change in a free market consists of three parts: invention (conceiving a new idea or process), innovation (arranging the economic requirements for implementing an invention), and diffusion (whereby people observing the new discovery adopt or imitate it). These stages can be observed in the history of several famous innovations. The Xerox photocopier was invented by Chester Carlson,5 a patent attorney frustrated by the difficulty of copying legal documents.6 After several years of tedious work, Carlson and a physicist friend successfully photocopied a phrase on October 22, 1938. But industry and government were not interested in further development of the invention. In 1944, the nonprofit Battelle Corporation,7 dedicated to helping inventors, finally showed interest. It and the Haloid Company (later called Xerox) invested in further development. Haloid announced the successful development of a photocopier on October 22, 1948, but the first commercially available copier was not sold until 1950. After another $16 million was invested in developing the photocopier concept, the Xerox 914 became the first simple push-button plain-paper copier. An immense success, it earned Carlson more than $150 million.8 In the following years, competing firms began selling copiers, and other inventions, such as the fax machine, adapted the technology.
