Alex Richey

Software Engineer @ Amazon

Against Pessimism

My article about pessimism originally published on 3QuarksDaily.

Pessimism is on the rise among members of the older generation. According to a 2011 Gallup poll, only 36% of Americans aged 50 to 64 believe that today's youth will have better lives than their parents. And another poll conducted in 2013 by Rasmussen says that just over half of Americans think that their country's best days are in the past.

There are two ways of explaining this kind of negativity. According to the first view, it is understandable that such attitudes have formed, given both the political and economic turbulence of the last decade, and other long-term social and economic trends.

Recent literature is replete with explanations of this sort. In Thomas Frank's article "Storybook Plutocracy," he classifies more than 30 recent books as members of what he has dubbed the "social-disintegration genre." This genre includes George Packer's The Unwinding, Charles Murray's Coming Apart, and Hedrick Smith's Who Stole the American Dream?, among many others.

Although the authors of these books may differ in political orientation and policy prescriptions, they agree in matters of methodology and share a basis of facts. Moreover, they tend to agree that, with the right policies, America's situation can be improved and that the general mood of the country can be ameliorated.

The second type of explanation is bleaker. Its proponents argue that the worsening mood of the country is not due to transient events such as the Great Recession or to reversible political policies, but rather to permanent and essential elements of modernity itself.

Because of the cynicism intrinsic to this sort of view, its written expressions are comparatively rare among professional writers; its cultural manifestations, however, are prominent.

Members of the so-called Prepper Movement, for example, carefully pack and maintain "bug-out bags," receptacles whose contents are intended to "see them through the collapse of civilization." Preppers, as the movement's adherents call themselves, preach the virtues of preparedness, and some of their more extreme members – people who build underground bunkers and stockpile things like gasoline, guns, ammunition, and Meals Ready to Eat – have been featured on National Geographic's reality TV show "Doomsday Preppers." Many members of this movement believe that civilization itself is unsustainable and that the apocalypse is likely to occur in our lifetimes.

Until recently, it has been difficult to apprehend the reasons that motivate such activities; however, in the last few months, authors Jonathan Franzen and David Mamet have published essays that express some of the reasoning which seems to inform this and other Malthusian endeavors.

Franzen's "What's Wrong with the Modern World," which appeared in The Guardian a few months ago, bemoans the age of information, calling it a "media-saturated, technology-crazed, [and] apocalypse-haunted historical moment," while Mamet's "Entropy," which was published in Playboy's 60th Anniversary issue, predicts the "dismantling" of the West. Both of these essays articulate the standard rationale for being pessimistic about the future, but in a way that is confused and uninformed.

One often-heard criticism, expressed by Franzen, is that we are becoming too dependent on technology.

[N]ow it's hard to get through a meal with friends without somebody reaching for an iPhone to retrieve the kind of fact it used to be the brain's responsibility to remember. The techno-boosters, of course, see nothing wrong here. They point out that human beings have always outsourced memory – to poets, historians, spouses, books. But I'm enough of a child of the 60s to see a difference between letting your spouse remember your nieces' birthdays and handing over basic memory function to a global corporate system of control.

Franzen's notion that using an iPhone or other technologies means buying into "a global corporate system of control" is, on more careful consideration, pretty laughable. The only thing these corporations – Facebook, Twitter, Google, etc. – want from people is that they spend more money. The suspicion that they have some special interest in individuals beyond their patronage stems from an inflated sense of self-importance or from paranoia.

As for the notion that outsourcing certain mental faculties to computers is somehow dangerous, there is ample research that suggests otherwise. Franzen's worry here is reminiscent of the claim that outsourcing computational ability to calculators impedes mathematical proficiency. A 2011 research brief from the National Council of Teachers of Mathematics, which synthesized nearly 200 studies from 1976 to 2009, found that "the use of calculators in the teaching and learning of mathematics does not," in contrast to what Franzen might predict, "contribute to any negative outcomes for skill development or procedural proficiency, but instead enhances the understanding of mathematics concepts and student orientation toward mathematics."

Clive Thompson's new book Smarter Than You Think sets out a similar conclusion about "human-computer symbiosis." He asks, "What's the line between our own, in-brain knowledge and the sea of information around us? Does it make us smarter when we can dip in so instantly? Or dumber with every search?" Walter Isaacson writes in his review of the book that "[Thompson's] answer is that our creative minds are being strengthened rather than atrophied by the ability to interact easily with the Web and Wikipedia. ‘Not only has transactive memory not hurt us,' [Thompson] writes, ‘it's allowed us to perform at higher levels, accomplishing acts of reasoning that are impossible for us alone.'"

One observation that Thompson offers in support of this conclusion is that our culture is now more literate than it was in the past. "Before the Internet came along," he writes, "most people rarely wrote anything at all for pleasure or intellectual satisfaction after graduating from high school or college." But now, of course, with the advent of blogs, texting, and constant status updates on Facebook and Twitter, it is the norm.

Franzen, as one might expect, disapproves of this increase in cultural participation. "One of the worst things about the internet," he says, "is that it tempts everyone to be a sophisticate – to take positions on what is hip and to consider, under pain of being considered unhip, the positions that everyone else is taking." But where Franzen finds reason for contempt and pessimism, Thompson finds reason to be hopeful. "It's easy (and not altogether incorrect) to denigrate much of the blathering that occurs each day in blogs and tweets," paraphrases Isaacson. "But that misses a more significant phenomenon: the type of people who 50 years ago were likely to be sitting immobile in front of television sets all evening are now expressing their ideas, tailoring them for public consumption and getting feedback."

David Mamet's criticisms of the modern age are less ornery than Franzen's but far more apocalyptic. Mamet attempts to explain what he sees as a nearly religious "return to chaos" by appealing to the second law of thermodynamics, which says that, in closed systems, entropy increases with time. Entropy is, roughly, a measure of disorder. Speaking somewhat metaphorically, the second law can explain why an egg dropped on the kitchen floor cracks and splatters, creating disorder where there was previously order, and why the reverse never happens on its own.
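The statistical content of the second law can be seen in a toy model. The sketch below (my illustration, not Mamet's or Carroll's) tracks a "gas" of particles free to hop between two chambers of a box: started in the ordered state with every particle on one side, the system drifts toward the disordered 50/50 split, which is the state of maximum entropy.

```python
import math
import random

def entropy(n_left, n_total):
    """Boltzmann entropy (in units of k_B) of a two-chamber gas:
    the log of the number of ways to choose which particles sit on the left."""
    return math.log(math.comb(n_total, n_left))

random.seed(0)
N = 1000
n_left = N  # ordered start: every particle in the left chamber

# Each step, one randomly chosen particle hops to the other chamber.
# A particle on the left is chosen with probability n_left / N.
for step in range(20000):
    if random.random() < n_left / N:
        n_left -= 1
    else:
        n_left += 1

# From the all-left start, the gas relaxes toward a 50/50 split,
# where the entropy is largest; it never drifts back to all-left.
print(n_left, round(entropy(n_left, N), 1))
```

The point of the toy model is the one Carroll makes below: the drift toward disorder is a counting fact about closed systems, and it says nothing about subsystems that exchange energy with their surroundings.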

Mamet's idea of appropriating this concept from physics and applying it to social phenomena is not original. "All we have to do is look out the window/internet to see disorder running rampant all around us," says the Caltech physicist Sean Carroll in his blog post "Social Entropy":

So people from Henry Adams and Oswald Spengler to Thomas Pynchon and Norbert Wiener have suggested (with different degrees of seriousness) that maybe the social chaos around us is merely the inevitable outcome of some grand dynamical principle.

Mamet's own appropriation of the concept fits this mold. In the space of just a couple thousand words, he gives a reading of history from the beginning of time through the present day and concludes that "Life then, human and otherwise, may be understood not primarily as the desire to perpetuate life (which just begs the question "Why?"), but as an attempt to maximize [the dispersal of energy or entropy]."

In this way, he supports his conclusion that the western world – or humanity in general – is in an inevitable decline.

This argument fails for two reasons. First, the second law of thermodynamics applies only to closed systems, and human civilization is not one: the Earth constantly takes in low-entropy sunlight and radiates high-entropy heat into space. What's more, even when entropy does increase in a system as a whole, chaos need not reign everywhere. "The Earth radiates lots of high-entropy radiation into space, but its own entropy can easily decrease," says Carroll. "It's not just allowed — it happens quite readily. Order is spontaneously generated in subsystems as the larger world increases in entropy."

The second reason the argument fails is that it is baldly contradicted by evidence. In his book The Better Angels of Our Nature, the Harvard psychologist Steven Pinker argues that violence, which is surely a relevant measure of social decay, "has been in decline for thousands of years, and today we may be living in the most peaceable era in the existence of our species." This conclusion may sound surprising given the bad news we receive every night on the evening news. But even cursorily recalling "the genocides in the Old Testament and the crucifixions in the New, the gory mutilations in Shakespeare's tragedies and Grimm's fairy tales, the British monarchs who beheaded their relatives and the American founders who dueled with their rivals" – all this suggests that violence, at the very least, is condoned to a lesser extent in our society.

What's more, by nearly every metric, quality of life has been improving. Over the past hundred years, many horrific diseases such as polio, tuberculosis, and measles have been largely eradicated in developed countries. In America, segregation and many other racist policies have been abolished; and, for newborns, average life expectancy has increased from just 49 years in 1900 to nearly 78 in 2008. Average income has also increased from roughly $15,000 in 1914 to a little over $50,000 in 2004 (in 2004 dollars), while, at the same time, working conditions have improved and child labor has been eliminated.

I do not mean to suggest that the western world is doing fine or to minimize the significance of our current environmental and economic problems. On the contrary, these problems are substantial, if not epoch-defining. What I mean to say is that, even though these problems are enormous, they do not justify the kind of pessimism expressed by Franzen, Mamet, and others. All of our problems are soluble and very likely will be solved. They're not going to stop human history in its tracks and bring about the apocalypse.

It seems to me that, in writing his article, Mamet started with a certain feeling and then formed a reading of history to justify it, picking out the facts that supported his view. That's shoddy reasoning. Sound reasoning moves in the opposite direction: You start by investigating the facts, form an interpretation that fits them, and then adjust your feeling accordingly.

If the trends of history are in any way prognostic of the future, then optimism is the sounder view.


What the Science of Happiness Can Tell Us About Taxes

My article about the relationship between happiness and income, originally published on PolicyMic.

Over the past few decades, a remarkable field of inquiry that seeks to understand the relation between income and happiness has developed in economics and psychology departments. It has revealed that happiness and income are in fact correlated and that the fiscal policies a country enacts can have a considerable effect on the happiness and wellbeing of its citizens.

In light of the recent and widely reported 30% increase in the suicide rate for Americans ages 35 to 54, which occurred against the backdrop of stagnating wages and the widening inequality caused by the Great Recession, the findings of this new field are especially germane. What’s more, if its findings are right, we ought to enact more liberal fiscal policies.

According to a 2010 study by Daniel Kahneman and Angus Deaton, two Princeton professors, which draws on data from over 450,000 participants, there are two aspects of happiness with which family annual income correlates. There is emotional wellbeing, which refers to “the frequency and intensity of experiences of joy, stress, sadness, anger, and affection that make one’s life pleasant or unpleasant,” and there is also life evaluation, which refers to the thoughts that one has when she thinks about her life as a whole. The former aspect of happiness captures our day-to-day experience, whereas the latter aspect captures how satisfied we are with our lives more generally.

Perhaps not surprisingly, Kahneman and Deaton found that both emotional wellbeing and life evaluation correlate positively with family annual income. But when income exceeds approximately $75,000 a year, emotional wellbeing plateaus. Thus the old adage that money can’t buy happiness is true if you make more than $75,000, but false otherwise.

Life evaluation, on the other hand, continues to correlate logarithmically with income beyond $75,000. What this means is that a doubling of any two incomes, wherever they fall on the spectrum, will produce equivalent gains in life evaluation.

Moreover, “in the context of income, a $100 raise does not have the same significance for a financial services executive as for an individual earning the minimum wage, but a doubling of their respective incomes might have a similar impact on both.”
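The doubling property is just a feature of the logarithm, which a small sketch makes concrete. The model below is illustrative only; the coefficients are made up and are not Kahneman and Deaton's fitted values.

```python
import math

def life_evaluation(income, a=1.0, b=0.0):
    """Illustrative log model of life evaluation: each doubling of income
    adds the same fixed amount (a * ln 2) to the score, wherever the
    starting income falls. The coefficients a and b are hypothetical."""
    return a * math.log(income) + b

# A doubling near the bottom of the income spectrum...
gain_low = life_evaluation(30_000) - life_evaluation(15_000)

# ...and a doubling near the top produce the same gain: ln 2 ≈ 0.693.
gain_high = life_evaluation(400_000) - life_evaluation(200_000)

print(round(gain_low, 6) == round(gain_high, 6))
```

This is exactly why a $100 raise means so much more to a minimum-wage worker than to an executive: what matters under a log curve is the proportional change, not the dollar amount.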

If the findings of Kahneman and Deaton are true, then when income flows uninhibitedly to the top, we experience wasted utility: a large portion of the money in our economy is distributed in such a way that it improves the quality of no one’s life. Through a more progressive tax system, or through limitations on executive pay (like tethering the highest-paid employee’s salary to a multiple of the lowest-paid employee’s salary), this money could be used to substantially enhance vast numbers of lives at the middle and bottom of the spectrum, without at all lessening the emotional wellbeing, and only slightly lessening the life evaluations, of the few at the very top.

For each American family that earns more than $10 million a year, for instance, there is roughly $9.925 million that does not produce greater emotional wellbeing for anyone, and although this extra $9.925 million will produce a higher score in life evaluation, such a family could sustain a very high score even if a couple million dollars of that income were distributed elsewhere. Meanwhile, if 1,000 people near the bottom of the spectrum, making only $15,000, say, were each to receive a raise of $2,000 a year from that $2 million, then their quality of life would measurably and significantly improve.
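The arithmetic above checks out, and the asymmetry it describes can be sketched under the same illustrative log model used earlier (again, a hypothetical curve, not the study's fit):

```python
import math

# A family earning $10 million sits $9,925,000 above the
# $75,000 emotional-wellbeing plateau.
surplus = 10_000_000 - 75_000

# Redistributing $2 million of that surplus as $2,000 raises
# covers exactly 1,000 minimum-tier workers.
n_raises = 2_000_000 // 2_000

# Under a log model of life evaluation (illustrative assumption),
# the donor family's loss from giving up $2 million...
donor_loss = math.log(10_000_000) - math.log(8_000_000)

# ...is dwarfed by the combined gain of 1,000 workers
# each going from $15,000 to $17,000.
recipient_gain = n_raises * (math.log(17_000) - math.log(15_000))

print(surplus, n_raises, donor_loss < recipient_gain)
```

The donor's proportional loss is a single ln(1.25), while the recipients collectively gain a thousand ln(17/15)'s, which is the wasted-utility argument in miniature.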

In order to bend the curve of the suicide rate back down, I believe we ought to distribute more money at the top downward in some way, given the huge gains of total happiness and quality of life that would result — this would make more people’s lives worth living and make our country a better place. But many people, I believe, will disagree with me on various grounds.

One objection to the program I advocate is that the policies involved would damage the economy. If Congress passes legislation that increases the minimum wage or that limits executive pay by tethering the highest paid employee’s salary to a multiple of the lowest paid employee’s salary, for example, then the costs of running a business would increase and many employees would have to be terminated.

I believe that this can be a sound way of thinking under many market conditions. But the conditions of today’s market do not support this reasoning. There are, of course, scales to be balanced here. We want it to be easy for entrepreneurs to start businesses and we want there to be incentives for people to rise through the ranks and become more productive. At the same time, we do not want those at the bottom and middle to live in perpetual penury. The fact that CEOs of America’s largest companies make 354 times as much as the average American worker and the fact that the bottom 80% of Americans hold only 7% of the country’s wealth — these facts tell us that the scales are unbalanced and that our CEOs could take home less of the pie so that there would be more for the rest of us. In 1980, for comparison, CEO pay was only 42 times that of the average worker.

Another objection to my proposal stems from the notion that it would probably require raising taxes on our top earners. If we impose higher taxes on top earners or limitations on executive pay, many people think, we would penalize productivity and hence damage the economy. The argument is that businesspeople will not be motivated to work harder and entrepreneurs will not be motivated to start new businesses because, if taxes are higher, their payoffs will be less. But this way of thinking is not borne out by history. Our economy grew during the 1950s when the marginal tax rate was in some cases 91% and it continued to grow through the ‘60s and ‘70s when the marginal rate was 70%. What’s more, even “under those burdensome rates,” writes Warren Buffett in an Op-Ed article, “both employment and gross domestic product increased at a rapid clip.”

More often than not I find the above arguments serve more to mask monetary selfishness than they do to solve the social and economic problems at hand. If Congress passes legislation that directs some income from top earners downward, the economy would not suffer and the quality of many American lives would improve.

If the findings of Kahneman and Deaton are right, then the fiscal policies we enact affect our own happiness, and we ought to pass legislation that produces a society that is both more equitable and happier. The recent spike in the suicide rate ought to remind us of what’s really at stake.

Minimum Wage Bill: Obama's $9 Proposal Won't Increase Unemployment

My article on minimum wage, originally published on PolicyMic.

In the months following the State of the Union address, in which Obama called to raise the federal minimum wage from $7.25 to $9 an hour, a series of conservative arguments about the negative effects of minimum wage hikes has appeared.

Most of these arguments are false or highly confused. They fail to accurately weigh the positive and negative effects of the minimum wage or they suffer from unawareness of the relevant facts.

One often-heard claim is that minimum wage hikes increase unemployment. The standard argument for this, as the editors at the Wall Street Journal put it, is that workers whose skills do not merit a wage increase “will be priced out of the job market and their pay won’t rise to $9. It will be zero.”

This argument is remarkably simple and this gives it some intuitive appeal. But in a field as complex as economics, such simplicity should warrant more suspicion than confidence.

The fact of the matter is that raising the minimum wage does not increase unemployment. According to a study by John Schmitt from the Center for Economic Policy Research published this February, there is “little or no employment response to modest increases in the minimum wage.”

Another influential study from 2010 by Dube, Lester, and Reich found “no adverse employment effects” to minimum wage increases. And many other studies make similar claims. What’s more, even studies with conservative biases admit that the negative employment effects of minimum wage increases are small.

Why is this so? It turns out that when minimum wage goes up, employers react in more subtle and effective ways rather than by simply firing their workers. In response to minimum wage hikes, employers tend to improve efficiency, give smaller bonuses to more highly paid employees, or else absorb the costs of minimum wage increases by accepting smaller margins.

Another common response of employers is to pass the cost of higher wages on to consumers by raising prices. Many people point to this fact in arguing against minimum wage increases. Christina Romer, a professor at Berkeley, for example, writes that the burden of higher prices “may harm the very people whom a minimum wage increase is supposed to help.”

But a wider investigation tells a different story. One study by Sara Lemos that reviewed over 30 academic papers on the price effects of the minimum wage showed that most studies find that “a 10% US minimum wage increase raises food prices by no more than 4% and overall prices by no more than 0.4%.” So the negative effects of higher prices are strongly offset.

Another well documented benefit of minimum wage increases is reduced labor turnover. Dube, Lester, and Reich’s study found that turnover rates “fall substantially following a minimum wage increase.” And what’s more, the cost savings associated with reduced turnover “may compensate for some or all of the increased wage costs,” comments Dr. Schmitt.

Indeed, it is because the positive effects of minimum wage increases so strongly outweigh the negative that another study, by Doug Hall and David Cooper, estimates that raising the minimum wage to $9.80 an hour by July 2014 would actually add over $20 billion to the economy and create approximately 100,000 new jobs.

The rationale for this result is that poor people spend more of their incomes than their affluent counterparts. So when their income is increased through minimum wage hikes, they spend more money, and this added spending could boost demand and stimulate employment as a result.

This does not mean that raising the minimum wage is always a good idea. In response to some of the arguments I’ve given, many conservatives ask the loaded question, “Why not raise the minimum wage to $100 an hour?” An increase that substantial would cause the negative effects of the minimum wage to outweigh the positive, and nearly all economists agree that it would do catastrophic damage to the economy. That is not what we should do, nor is it what anyone is suggesting. What I mean to say is that, given our current economic conditions, an appropriately sized increase to the minimum wage would be beneficial.

Moreover, by any reasonable standard, the current minimum wage is too low. If the minimum wage had kept up with inflation since 1968, it would be $10.67 today.

Still, some conservative extremists go so far as to say that there should be no minimum wage at all. Milton Friedman said in an interview that when you impose a minimum wage, you “assure that people whose skills are not sufficient to justify that kind of a wage will be unemployed" — a claim echoed by innumerable conservative columnists over the generations.

But I believe this claim is confused. People who work for the minimum wage are paid just enough to avoid starvation and to have somewhat consistent shelter. Many of them are one accident away from total financial ruin. If we really believe that there are people whose skills are so low that they do not warrant a wage that provides consistent food and shelter, then we ought to offer remedial education or some other program for those people, because the notion that there are Americans whose potential skills are so low that they do not deserve enough pay to meet basic physiological needs is absurd.