
Anthropocene: Three Economic Thoughts

Does it make sense to identify a geological age of Earth called the Anthropocene or “the recent age of man,” in Dutch scientist Paul Crutzen’s terms? (Anthropos is the Greek term for “man” and includes all persons of both sexes.) The argument for doing so is that, starting in the mid-twentieth century, humans have been permanently transforming the Earth’s geological record, with traces of nuclear energy in sediments, for example. The Economist discusses this topic in its July 13 issue under the alarming title “What Matters About the Anthropocene is Not When It Began, but How it Might End.” As usual, the magazine espouses (sometimes carefully) an environmentalist viewpoint, and writes:

The idea of the Anthropocene is a striking expression of a profound truth. Human activity is having effects that will be visible for periods of time far longer than recorded history. Humans are responsible for physical, chemical and biological changes previously brought about only by the great forces of nature.

A paradox if not a contradiction seems to underlie the last sentence, and this problem mars much environmental activism. The statement implies that humans are not part of “the great forces of nature.” If, on the one hand, man is part of nature, he is certainly part of “the great forces of nature.” Man being a rational animal, it is to be expected that he will transform the rest of nature in some noticeable ways. From this perspective, it seems nonsensical to set humans in opposition to nature. If, on the other hand, man is not part of nature because his reason stands over and above nature—or perhaps because he has an immortal soul—then his dominion over nature is to be expected and celebrated. Certainly, “nature” cannot object and argue that man is just biological slime. Blaming mankind for transforming “nature” makes little sense.

We (any of us) may be concerned for the future of our species, of those individuals who will be our descendants. A thinking creature certainly has philosophical or esthetic reasons, or reasons of vanity, to react this way. It is true that an all-out nuclear war or perhaps another sort of ecological catastrophe could wipe out mankind; dinosaurs have been there before. But it is also true that mankind could prevent some catastrophes, such as an asteroid hit, from wiping out all human life. Already, the social, economic, and political institutions developed in Western countries have dramatically reduced poverty in our world.

Some humility is also warranted. Whatever humans do, it is unlikely that the Earth or the solar system or the Milky Way will still be around in, say, 30 billion years (more than twice the current age of the universe). “We” may have been swallowed by a black hole, perhaps our own black hole at the center of our galaxy.

More practically, economics is relevant to a reflection on shorter time horizons. The economic way of thinking suggests at least three thoughts.

First, mankind is composed of the total set of distinct human individuals; it is not a big organism (whose brain, as Hitler thought, is in Vienna).

Second, this implies that the standard problem of collective action will appear in any common action that could be necessary to save the planet (on collective action, see Mancur Olson, The Logic of Collective Action, 1971). Who will pay, and how much, for destroying or deflecting an incoming asteroid? Who will pay for preventing, or adapting to, climate change, if this is a serious threat for everybody? Will the workers employed or conscripted on related projects be paid the Davis-Bacon wage? Will they be allowed to go on strike? And—a question seldom asked—how many years or centuries of tyranny are individuals likely to suffer as a consequence of their conscription by the state in collective green projects? The answer that this is a slippery-slope argument is vacuous; besides history, the economic analysis of politics does show that submission to political authority is indeed very slippery.

Third, to what extent is organized collective action necessary? A great 18th-century discovery must not be forgotten: a spontaneous social order can generate prosperity without any individual or group of individuals overruling other individuals’ preferences. This spontaneous order was first theorized by Adam Smith and is especially well represented in the thought of Friedrich Hayek (see my review of his Law, Legislation, and Liberty, especially Volume 1 and Volume 2). In other words, collective action led by government is generally not required. But if and when it is, on the basis of which rules should it be organized? A major strand in James Buchanan’s work was to explore this question in a radically different way than cost-benefit analysis does, given the ethical requirement of each individual’s consent to the rules that limit his or her liberty.

I claim that any environmental reflection must consider these economic ideas.


Killing Your Cultural Baggage

In this episode, Annie Duke discusses her book, Quit: The Power of Knowing When to Walk Away, with EconTalk host Russ Roberts, who, of course, asks her great questions. “The opposite of a great virtue is also great virtue” is a recurring theme that is grappled with as the two discuss examples of grit, perseverance, and when, why, and how to quit. It’s impossible to listen to this conversation and not consider your own experiences. Can you easily identify “kill criteria” when evaluating a new idea? How does cultural baggage influence your ability to push through pain? What is the opportunity cost of saying no? We hope you start, finish, and enjoy this episode! Perhaps you will share with us what you’ve quit lately?

1- Why does Annie Duke view quitting as a calibration issue? How do you determine the right time to quit or to stick with things?

2- To what extent do the Everest hiker and the startup employer stories justify quitting as a moral imperative? Are there obligations beyond self-preservation and owing others that might be considered when calling it quits? Explain.

3- Through her story of the ants, Duke suggests that we might too easily say no, or worse, fill our time with hobbies and more that don’t allow us to be available for new opportunities. Are you concerned that you might have missed out on some fortunate stroke of serendipity?

4- Richard Thaler claims that we do not like to close mental accounts in the losses. How do Roberts’ Dad’s words, “Don’t quit, finish what you plan,” support this line of thinking while challenging Duke’s premise about the problem with short-term goals?

5- We take into account the resources that we’ve already spent in deciding whether to continue and spend more. Duke claims that amateur poker players really care about money already spent (in the pot). Do we get better at recognizing the sunk cost effect as we progress in life?


Would drug legalization increase overdose fatalities?

In response to a question about libertarianism, Matt Yglesias recently had this to say:

The right to have full-scale commercial production, marketing, distribution, and sales of fentanyl is compatible with a similar system of liberty for all. But it would also be a huge catastrophe for human welfare. I think we’re all familiar with the flaws and shortcomings of the so-called “War on Drugs.” At the same time, we can also see the very significant harm that the sale and distribution of fentanyl are causing in the United States, even as it continues to be illegal. If manufacturing, distribution, and sale were allowed, it would be much cheaper and more widely available — we’d have more addicts, more ruined lives, and more overdose deaths. But even worse, we’d have something like the advertising boom we’re currently seeing for sports gambling with large, sophisticated players investing in creating the largest possible new cohort of addicts.

This might be correct, but I am not at all convinced by the argument. Instead, I suspect that the boom in fentanyl use (and the resulting drug overdoses) is more of a consequence of the War on Drugs. Here are a few facts to consider:

1. In the 2000s and early 2010s, there was an increase in the abuse of legal opioids such as OxyContin. The federal government responded by cracking down on OxyContin prescriptions. As this legal medication became more difficult to obtain, addicts switched to illegal alternatives such as fentanyl. The new policy turned out to be a spectacular failure, as opioid deaths skyrocketed much higher in the years after the crackdown on OxyContin use. A graph in a National Affairs article shows the increase.

2. There is other evidence that the rise in opioid deaths is linked to the illegal status of narcotics. The same National Affairs article describes the consequence of a black market in drugs, where quality controls are nonexistent:

Late last year, the Wall Street Journal reported on three high-status casual drug users — an investment banker, a lawyer, and a social worker — who ordered cocaine from the same New York delivery service. The drugs were laced with fentanyl; all three died. Such stories — of overdose death among people who are not the conventional “faces of addiction” — have grown increasingly common.

Some might argue that they got what they deserved, as they recklessly chose to consume an illegal product. I don’t believe that’s the right way to think about the issue. I view these tragedies as “collateral damage” in the War on Drugs. If cocaine were legal, all three people would likely still be alive today.

3. I don’t know how many people would use narcotics if they were legal. But I suspect the answer is not “most people”. Look at the death rate from overdoses during the early 1900s, a time when opioids were legal in the US. Admittedly, modern products like fentanyl are more dangerous than opium or heroin. But even back in the pre-fentanyl 1990s, the US overdose death rate was comparable to the early 1900s (a time when hospital emergency room treatments for overdoses were presumably pretty primitive).

As is so often the case in public policy, this is a question of elasticities. How much would legalizing drugs increase the rate of drug addiction? I don’t know anyone who would decide to go out and consume fentanyl, but I don’t doubt that in a country as large as the US the effect of legalization would be to substantially boost drug use, perhaps by millions of people.
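The next two paragraphs lean on a simple piece of arithmetic: total overdose deaths equal the number of users times the per-user fatality risk. Here is a minimal sketch in Python of that calculation, with purely hypothetical numbers chosen only to match the illustration below:

```python
# Back-of-the-envelope sketch of the elasticity argument, with purely
# hypothetical numbers (not estimates of actual drug use or risk).
users_now = 1_000_000          # assumed current number of users
risk_now = 0.010               # assumed annual overdose fatality risk per user

users_legal = users_now * 2.0          # suppose legalization doubles use...
risk_legal = risk_now * (1 - 0.60)     # ...but cuts per-user risk by 60%

deaths_now = users_now * risk_now          # 10,000
deaths_legal = users_legal * risk_legal    # 8,000

print(deaths_now, deaths_legal)            # total deaths fall by 20%
```

Under these made-up numbers, doubling use while cutting per-user risk by 60% lowers the total death toll by 20%; everything turns on the relative sizes of those two effects.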
At the same time, full legalization would dramatically improve the safety of drugs in two ways. First, those who chose to consume opioids would know exactly what dose they were getting, which would reduce the risk of accidental overdose. Most fatalities seem to be due to people consuming more fentanyl than they anticipated. Second, if drugs were legal then people might choose to consume less deadly drugs. That investment banker ended up dying from fentanyl not because he wished to consume fentanyl, but because in a market where cocaine is illegal the product will often be sold in an adulterated form. Thus if legalization doubled drug use, but also led to a 60% decline in overdose deaths for any given drug user, then total deaths from drug use would decline.

Of course there are other side effects to consider, both positive (fewer people in prison and fewer gang wars) and negative (more drug use, which has negative consequences beyond overdose deaths).

One could also envision intermediate options, such as legalizing “natural” products such as marijuana, hallucinogenic mushrooms, cocaine and opium, while continuing to ban more deadly synthetic drugs like fentanyl and methamphetamine. Here the goal would be “harm reduction”. It’s not that consuming the milder drugs is a good idea, but if people are determined to consume some sort of narcotic, then providing a less dangerous way of getting high will lead to fewer overdose deaths than under a regime where all drugs are illegal, and hence people have no idea what they are consuming.

Another intermediate option would be to fully legalize drugs only for those who do not violate the law or rely on welfare to survive. People convicted of crimes and/or reliant on public welfare because they are unable to hold a job could be required to enter rehab programs.

To summarize, I believe that ideas like drug legalization are dismissed too quickly. It certainly might be a bad idea, but the arguments I’ve seen are not persuasive. In particular, it does no good to cite the horrors of fentanyl use when the fentanyl crisis was created by drug cartels in response to the federal government’s war on less deadly drugs. We need to do more than simply think about where to go next—we need to think deeply about how we got to this position in the first place.

PS. This post is not about libertarianism, and hence I’m not going to comment on Yglesias’s remarks about legal marketing of narcotics, which I regard as an entirely separate issue. My focus here is on the legalization of drug production and consumption.

PPS. Some people view decriminalization as the sensible compromise between prohibition and full legalization. To me, decriminalization seems like the worst of both worlds. You still have the illegal drug trade with all the crime and accidental overdoses, but you also have increased consumption.


Kenneth Arrow’s Overrated “Impossibility Theorem”

Kenneth Arrow’s impossibility theorem, a pillar of neoclassical economic theory, holds that under certain “seemingly reasonable” conditions, no social welfare function can adequately represent the preferences of society’s members. A social welfare function is a mathematical formula that combines individual preferences in some manner to reflect societal preferences. The impossibility theorem is sometimes colloquially understood as implying that aggregating individual preferences to form a social preference ordering is impossible. However, the restrictions Arrow imposed for his theorem to hold are far from reasonable, undermining the validity of his theorem. Let’s take a closer look.

One of the constraints Arrow includes in his theorem is the “independence of irrelevant alternatives” (IIA). This supposes that a person’s preference between two options shouldn’t be influenced by other choices. Essentially, it posits that preferences shouldn’t be dependent on any other alternatives becoming available. Some economists have likened the IIA assumption to choosing amongst flavors of ice cream. Whether I prefer chocolate to vanilla ice cream shouldn’t depend on whether strawberry is available, or so it is argued.

The ice cream metaphor is a poor one, however, given the trivial nature of the alternatives. So let’s break down the IIA assumption with another, more appropriate, example. Suppose you’re torn between going out to party tonight and staying home to study for an exam. At face value, partying appears to be the more enjoyable option. However, a third option, like “getting into a good college,” may depend on your decision. Clearly, this third choice—the future consequence—ranks higher among your preferred alternatives than the immediate option to party. This is despite the fact that partying, when taken independently of future consequences, seems to be the preferred choice over studying.

Herein lies the problem with Arrow’s impossibility theorem: it overlooks long-term consequences and focuses too much on immediate gratification. This is troubling because much of today’s welfare analysis, including cost-benefit analysis, is based, at least indirectly, on Arrow’s theorem, leading economic theory broadly and public policy specifically to take a short-sighted approach.

Rejecting Arrow’s theorem shouldn’t be controversial. Nor should it be a politically divisive issue. Many on the political left already contest Arrow’s theorem and its rigid IIA assumption. However, those on the political right have been slower to disavow it, perhaps because of a natural skepticism toward a social planner dictating societal values through a social welfare function. This is a mistake. Ironically, rejecting the use of any social welfare function at all, as some libertarians do, also implies rejecting the market process—which itself is guided by a social welfare function of sorts. Arrow himself acknowledges as much in the book that presents the impossibility theorem, Social Choice and Individual Values, when he concludes that “the market mechanism does not create a rational social choice.” Strangely, most libertarians have failed to heed the lesson.

Without the IIA condition, the impossibility theorem falls apart. The key takeaway here is that economists need to revisit some of their foundational theories. Indeed, much of modern welfare economics needs a fresh look and reevaluation. The mid-20th century, sometimes regarded as a golden age of economic theory, may well have been a breeding ground for economic errors.
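To make the IIA condition concrete, here is a minimal sketch in Python. It uses the Borda count, a stock textbook aggregation rule (my own illustration, not an example from Arrow’s book), to show how dropping a supposedly irrelevant alternative can flip the social ranking of the two that remain:

```python
def borda(profile, candidates):
    """Borda count: a candidate earns (k - 1 - rank) points per ballot,
    where k is the number of candidates and rank 0 is first place."""
    scores = {c: 0 for c in candidates}
    k = len(candidates)
    for ballot, n_voters in profile:
        for rank, c in enumerate(ballot):
            scores[c] += n_voters * (k - 1 - rank)
    return scores

# Hypothetical electorate: 3 voters rank A > B > C, 2 voters rank B > C > A.
full = [(("A", "B", "C"), 3), (("B", "C", "A"), 2)]
print(borda(full, ["A", "B", "C"]))  # {'A': 6, 'B': 7, 'C': 2}: B beats A

# Drop the "irrelevant" alternative C from every ballot. Under IIA, the
# social ranking of A versus B should not change, but here it flips.
reduced = [(tuple(c for c in b if c != "C"), n) for b, n in full]
print(borda(reduced, ["A", "B"]))    # {'A': 3, 'B': 2}: A now beats B
```

Whatever one thinks of imposing IIA on individuals’ preferences, this is the kind of sensitivity the condition is meant to rule out at the social level.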
We need to rectify these errors, and a good starting point would be to revisit Arrow’s impossibility theorem.

James Broughel is a Senior Fellow at the Competitive Enterprise Institute with a focus on innovation and dynamism.


Labor Reform in Colombia Threatens Employability

Gustavo Petro, president of Colombia, has proposed a new labor reform, which has caused significant discontent among citizens. The principal changes include:

Digital platforms in the gig economy (Article 29, duty to verify social security): Digital platforms must verify that their workers are affiliated with social security or must otherwise assume 100% of the payment themselves.

Dismissal of vulnerable employees (Article 4, job stability): The workers listed below could be dismissed only if there is a just or legal cause, and only after the respective procedure has been exhausted:

- Mothers or fathers who are heads of household and whose family income depends on their salary.
- Disabled workers.
- Pregnant women, and mothers for up to six months after childbirth.
- Pre-pensioners, i.e., those who are three (3) years or less away from fulfilling the requirements for the old-age pension.

Holidays and Sunday work (Article 18): Employers already must pay overtime for work on a holiday or Sunday, but the proposal would pay holiday and Sunday work at a 100% surcharge, and night hours would start at 6:00 pm, so all work after that time would fall under the night-pay rule as well.

Contracting models (Article 47, indefinite-term contracts as the general rule): The reform gradually puts an end to independent-contractor arrangements in cases where they are not used for genuinely temporary jobs but mask a real employment relationship, so that the payments required by law are recognized.

The reform has caused significant discontent, particularly among Rappi workers, who decided to protest against its implementation. The President’s response to the protests has been indifferent, and his recent tweet ridiculing protesting Rappi workers has done little to calm the situation.

What motivates these workers to protest against such a labor reform? The CEO of Rappi, Simon Borrero, has stated that between 80 and 85 percent of their workers work occasionally, and many have other jobs, study, or devote time to their families. They would not be able to continue working under the proposed model. The Rappi workers’ protests are thus understandable, as the proposed measures would initially affect their ability to manage their own time. Beyond that, the proposed reforms threaten the freedoms and flexibility that have enabled the company to thrive and create employment opportunities.

The current Minister of Labor, Gloria Ramirez, has indicated that the labor reform “does not seek to generate employment, but to improve working conditions,” which makes the situation very worrisome, especially for people who work in the informal sector and those who are unemployed. In view of this, figures such as the president of Fenalco, Jaime Alberto Cabal, have voiced their concern and called for a new labor reform project, since it is clear that the current proposal would seriously harm the economy and the employability of the country. A labor reform that negatively affects workers should not be implemented.

The proposed labor reform in Colombia would have negative consequences for workers in the gig economy. To illustrate my point, let’s consider the case of two workers named Juan and María. Under Rappi’s flexible labor model, María earns more money than Juan because she works more hours delivering goods, whereas Juan only works a few afternoons a week because he also has to help run his father’s business.
However, if the labor laws changed to require a set schedule of 40 hours per week, Juan would not be able to comply because of his obligations to his family’s business, and María would not be able to work in the mornings, because she is attending university. If Juan and María were both required to work a set schedule and were paid the same amount regardless of the number of deliveries they made, it would be unfair to María, who can deliver twice as many goods as Juan. This would also decrease productivity, as it would not give workers an incentive to be more efficient. Therefore, the flexible labor model, which pays workers based on the number of deliveries they make, is more beneficial for both workers and employers.

It is important to consider the reasons why employees are protesting the labor reform and to acknowledge their right to do so. Any reform that would leave many people without jobs and worsen the country’s economic conditions should not be implemented. The government should not impede economic growth but rather promote economic freedom by providing incentives for workers to be more productive and for entrepreneurs to innovate and satisfy societal needs.

Omar Camilo Hernández Mercado is a law student at the Universidad Libre de Colombia, Senior Coordinator of Students for Liberty in Colombia, and a seminarist in “The Austrian School of Economics” at the International Bases Foundation.


The Surprising Beneficiaries of American Slavery

Slavery has never been legal in California. But that didn’t stop the California Reparations Task Force. In its final report, issued on June 29, it suggested that the state government pay $1.3 million to Californians who can demonstrate that they are descendants of a slave or of a freed black person living in the U.S. prior to 1900. This payment is to compensate those whose ancestors suffered from chattel slavery and its downstream effects, such as racism and lower life expectancies.

Here’s the problem. The reparations being proposed will take money from people, the vast majority of whom gained nothing from slavery, and give it to people who benefited immensely from slavery.

Who suffered from slavery? The slaves themselves. They were brought from Africa against their will, and they were forced to work without receiving the full value of their labor.

Who gained nothing from slavery? Except for the rare person who inherited an estate that slavery enriched, every contemporary non-black American gained nothing from slavery.

Who gained from slavery? Americans of African descent. The late economist Walter E. Williams said that slavery was the worst thing ever to happen to his ancestors, but the best thing ever to happen to him. Why? Because instead of growing up in Guinea-Bissau, Angola, Senegal, Mali, or the Democratic Republic of Congo, he enjoyed the opportunities, wealth, health, security, and freedom of the United States.

This is from David R. Henderson and Charles L. Hooper, “The Surprising Beneficiaries of American Slavery,” American Institute for Economic Research, July 14, 2023.

Read the whole thing. It’s quite short.


Why Planners Can’t Plan Every Detail And Why That Matters

In a recent post, I expressed skepticism about the ability of anyone to meaningfully predict the outcome of military engagements. Many factors make accurately predicting this sort of thing all but impossible, but one worth keeping in mind is that military planners cannot possibly have all the relevant information.

As one studies military history, one thing that jumps out is how major events turned on tiny details. In the Revolutionary War, the Battle of Trenton, while not a large engagement, was still a pivotal moment. The victory for the American forces was a major morale booster and encouraged more Americans to join the war effort. A crucial component of this victory was that George Washington’s forces held the element of surprise, with the famous crossing of the Delaware River. They caught the Hessian forces completely off guard and were able to secure a quick victory with minimal losses. How differently might history have gone if, instead of holding the element of surprise, Washington and his forces had been spotted, and had attempted to attack a garrison that was fully expecting them?

Except, Washington and his forces absolutely were spotted and seen coming. A British sympathizer saw their approach and promptly reported it. This report was carried to the garrison and given to the commander of the outpost who, engrossed in the card game he was playing, simply put it in his pocket and forgot about it. Had the commander, in that one brief moment, decided to read the report rather than pocket it, Washington and his forces wouldn’t have been moving towards a successful surprise attack – they would have been walking straight into an ambush. And how differently might that battle have gone, and from there, the entire course of the Revolutionary War? One brief moment, one small decision, and the future of the world is forever changed.

These kinds of lynchpin moments can only be identified in hindsight – and even then, we have no idea just how many equally crucial moments have occurred but never made it into any history books. Others are simply unknowable by their nature. Imagine another world where the garrison commander was in a more professional mood that night and would have promptly opened the letter and prepared for the attack – but in which the British sympathizer who sent out the initial warning had died of smallpox as a nine-year-old. That small child’s death would equally have been such a lynchpin moment, but there is no way for historians to know this and record “if this little boy had grown into an adult, he would have spotted the forces of the rebellion and alerted the nearby garrison, forever changing the direction of the war.”

An entertaining book filled with these kinds of moments is Tiny Blunders/Big Disasters: Thirty-Nine Tiny Mistakes That Changed the World Forever. It chronicles a series of small, seemingly insignificant moments and decisions that ultimately led to massive consequences, such as when a “soldier accidentally kicks a helmet off of the top of a wall and causes an empire to collapse.” The more you study history and learn how many major, world-changing events turned on tiny details, the more you appreciate how utterly impossible it is for large-scale social planning to be carried out effectively.

These aren’t cases where you can say “if only someone wiser had been in charge, if only the planners had been better informed, these circumstances would have been anticipated and the outcomes kept under control.” History turns on events that nobody can or ever could anticipate or see coming, and that nobody can possibly plan or control for.

This doesn’t just apply to large-scale, society-level planning. Even in our own lives, there are seemingly tiny moments where small, trivial decisions later turned out to be the moments where everything changed – and only in retrospect can we see what those moments were. And we’ll never, ever know how many such moments occurred that we can’t identify – or how our lives would look today if any of those moments had gone differently. I’ll give one personal example. I never would have met my wife, and thus my children would not exist today, if, six months before she and I met, and while she was thousands of miles away on a different continent, on one specific night, I hadn’t decided to stop at the smoke pit in front of the barracks briefly before heading off base to get something to eat.

There is no planner in the world who can identify and integrate these kinds of moments into their plan – and thus there is no planner in the world who can craft a plan that will turn out as they expect. As I once heard George Will quip, Elizabeth Warren’s favorite catchphrase is “I have a plan for that” for any and all social issues – but there is much wisdom in the old saying “If you want to make God laugh, tell him your plans.”


The Fed doesn’t fight inflation

Ramesh Ponnuru has an excellent article in the Washington Post, pushing back at arguments for raising the inflation target to 3%:

Zandi is right that relatively steady, predictable and low inflation matters more than the exact target rate. A steady 3 percent inflation rate would even have some advantages over a lower one, such as raising nominal interest rates and thus giving central banks more room to maneuver. That’s why 22 prominent economists wrote a letter to the Fed in 2017 urging it to consider a higher inflation target. And if we were designing a monetary policy from scratch, we might well pick a 3 percent target. But we’re not. There’s a history we have to consider.

Ponnuru argues that a higher inflation target at this time could lead to a loss of confidence in policy, and points to the 1970s as an example of what happens when central banks lose credibility. I share his view that this isn’t a good time to raise the inflation target. Indeed, I don’t favor a higher inflation target at any time. But if we did decide to eventually transition to a 3% target, the best time to do so would be when inflation is below the 2% target and the Fed is wrestling with the “zero lower bound” problem of interest rates. When at the zero lower bound, a higher inflation target makes it easier to adopt an expansionary monetary policy.

If Fed officials ever decided that a 3% target would be better in the long run, they should make the following announcement long before the change was needed: “We will continue to target inflation at 2% until such time as interest rates fall to zero and we are having trouble raising inflation up to 2%. At that time, we will permanently switch to a 3% target.”

Raising the inflation target now would be solving a nonexistent problem, as unemployment is only 3.6% and the Fed has plenty of room to cut rates if needed. If we are planning to do this, do it at a time when it would actually do some good.

This also caught my eye:

[T]he Federal Reserve is already hearing calls to declare its own victory over inflation, rather than continue to raise interest rates until it achieves its stated goal of bringing the inflation rate down to 2 percent.

Ponnuru rejects this argument, but I’d go even further. I get annoyed when people suggest the Fed is “fighting inflation”, as if it’s engaged in a battle with an enemy. The Fed doesn’t battle inflation, it creates inflation. On occasion, inflation can also be generated by supply shocks, but that sort of inflation is transitory. The inflation we’ve experienced over the past few years is almost entirely the creation of a highly expansionary monetary policy, which drove up nominal GDP. The inflation overshoot is pretty similar to the NGDP overshoot.

Prior to 1913, the average inflation rate in America was zero. Since the Fed was created, we’ve had far more inflation than deflation. The average rate of inflation is determined by monetary policy. The Fed isn’t trying to declare “victory” over inflation; they are trying to target a specific inflation rate—2%, to be specific. And over the past two years they’ve done a poor job of hitting their target. They can never declare victory and rest on their laurels; they must continually strive to create 2% inflation.

Imagine a marksman shooting at a target on the side of a barn. He keeps missing badly to the right. His buddy has a bright idea: “Just paint a target around the bullet holes and declare victory.”


A great tribute to Milan Kundera

When the dust settles, I am sure that Milan Kundera will have a place among the greatest novelists of the last century. His prose was magnificent, his stories well built, his sensibility unique.

Kundera defended the novel as the cornerstone of European culture but was quite alert to a vice practiced by most literati: their tendency to take sides, to use culture as a Trojan horse for some “view” or “opinion”, their eagerness to celebrate political power, even in its cruelest moments. Kundera saw the novel as a way of doing justice to the present, since as human beings we have difficulty living in the present: we look forward to the future, we long for the past, we play with our memories. This meant that the novel for him was the quintessential space of freedom: it should not be twisted into propaganda, of any kind, nor be appreciated because of the views it conveys.

This may explain one of the features of Kundera’s work that I find most attractive. His characters may be voluptuous, or crazy, or weak, or stupidly bold, or mad. They may be lying, betraying, running away. Yet he described all their imperfections with a light touch, a true sign of appreciation for humanity as it is. He didn’t like labels, and insisted that none be attached to him. But, in this sense, he had a sensibility which I think should be particularly attractive to classical liberals – or at least it is to this classical liberal.

I read a few tributes to Kundera, who died on July 12 at 94 years old. The best one is this truly excellent piece by David Samuels for Unherd.com. Here’s a bit:

Kundera was never particularly interested in or engaged by politics. Instead, his work was a passionate defence of the right to pursue one’s own individual desires and lusts against bureaucratic maniacs of whatever stripe who wished to colonise individual experience on behalf of the state. To his critics on both the Right and the Left, Kundera’s stance was borderline immoral, not to mention hopelessly bourgeois. While the Left preferred Che and the Right preferred Solzhenitsyn, Kundera insisted on the human right to be left alone.

As alienating as Cold War ideologues found Kundera’s bourgeois anti-politics, there were also plenty of writers and critics who objected to the qualities of his prose. The intense musicality of his sentences could seem like an artefact of a romantic moment that had passed. His world-class talent for aphorisms could seem similarly dated, a parlour trick that impressed college students on their junior year abroad: “there is no perfection only life” (The Unbearable Lightness of Being); “to laugh is to live profoundly” (The Book of Laughter and Forgetting). He objectified women, in a way that grew increasingly detached from dominant Anglo-Saxon sensibilities. Surely the world had better things to do with its time than to vicariously wallow in the lusts of yet another ageing male writer.

Kundera never saw himself as a political man, as a moralist, a liberal, a conservative, or as an author of texts whose highest destiny was to become movie scripts. He was, quite simply, a novelist. For him, the novel was the highest form of aesthetic endeavour, a kind of anti-scripture representing the sensibility of the individual, containing “an outlook, a wisdom, a position; a position that would rule out identification with any politics, any religion, any ideology, any moral doctrine, any group”. The faith he placed in the novel as a compass that can be used to negotiate life’s big questions can seem hopelessly antiquated. Yet if you don’t believe that, why bother to write one?

Read the whole thing; it’s worth it.


Shells and Casings: A Systemic Media Problem

You will tell me that this is not the worst problem in either the media or our societies, and I will agree. Although it may be related to more serious problems, or inspire some inquiry into the economics of language, you may consider this post a light midsummer piece.

Reporting on a murder mystery, the Wall Street Journal writes, speaking of a sheriff’s deputy (“A Hiker Died With a Bullet in His Chest. Why Did Police Say He Was Stabbed by a Stick?” July 12, 2023):

He didn’t see any bullet wounds in the puppy, and after searching the area for 25 minutes, he couldn’t find any shell casings.

As the story’s headline says, not only had the puppy been shot, but his master too. Here, I am focusing on the muddled terminology. The deputy sheriff would not be looking for “shell casings” unless he had already seen a wound or wounds typical of a shotgun blast. Only shotgun shells have “shell casings,” because the whole cartridge is called a “shell.” A pistol or a rifle typically fires a single bullet propelled from the end of a metal (usually brass) “case” or simply “casing” containing the powder; together, the bullet and the casing are called a “cartridge.” A shotgun shell casing, mainly made of plastic, contains a large number of pellets on top of the powder. True, there is the exception of shotgun shells that contain only one “bullet,” commonly called a “slug.” The other exception is revolver shotshells, designed for snakes and unlikely to kill a dog or a man.

It would be surprising if the deputy sheriff had sloppily spoken of “shell casings” while he was looking for all sorts of casings. If the deputy sheriff really said he was looking for “shell casings,” it would suggest that he was not exactly on top of his job, as his failure to identify a bullet wound on the dead hiker would confirm. By definition, of course, murder mysteries raise many questions.

This being said, the reporter should not be allowed to go scot-free. Ignorance of firearm basics (it is not rocket science) seems to be systemic in the media and, alas, not only in European or Canadian media—where we should not be overly surprised to find that they can’t distinguish a rifle from a broomstick. I suppose that ordinary individuals, as opposed to state agents or their war conscripts, should not know about these things. In a previous EconLog post on a related topic, I wrote:

Perhaps it should be a condition of the job, even in America sadly, that journalists and their editors own and shoot guns.
