The Prospect is proud to exclusively release the book Take Back Our Party: Restoring the Democratic Legacy by James Kwak. We will release one chapter every other day over the next two weeks. Read the Introduction and Chapter 1.
There are only two families in the world, my old grandmother used to say, the Haves and the Have-nots.
—Sancho, in Don Quixote
Things have not changed so much since the days when Miguel de Cervantes wrote Don Quixote. There are exceptionally wealthy families, and then there is everyone else. As a society, we clearly have the capacity to produce enough goods and services to enable everyone to live in material comfort, with decent food and clothing, a safe place to live, a comprehensive education from preschool through college, and even a reasonable amount of health care. The total income of all Americans comes to more than $54,000 per person—that’s more than $210,000 for a family of four. Yet many people can barely get by. Even in what we often call the richest country in human history, almost 40 million people live in households that struggle to obtain enough food.
Inequality and economic hardship have long been characteristics of human civilization, at least since the advent of agriculture more than ten thousand years ago. At first glance, it might seem that wealthy people, who need money the least, should share some of their riches with everyone else. In fact, both Social Security and Medicare—the most popular federal government programs in existence—rely on a modest degree of redistribution. But this is not how the American political system works today. Instead, Republicans and most Democrats agree that competitive markets and economic growth are the best way to help both the 99 percent and the 1 percent—that “a rising tide lifts all boats” (a saying commonly attributed to President John F. Kennedy). As long as the total pie is getting bigger, the logic goes, each person will get a larger slice; therefore, what we should care about is overall economic growth.
This principle was the basis for the accommodation between labor and capital in the Western bloc after World War II. Workers’ parties abandoned the ideas of violent revolution and state control of the means of production, settling instead for workers’ rights, nationalization of a few key industries (in some European countries), and a generous welfare state. For the business sector, a robust social safety net was a fair price to pay for political stability and the preservation of capitalism. Instead of the government organizing production and then sharing everything more or less equally (“To each according to his needs,” in the words of Karl Marx), the private sector would be modestly regulated, people would earn what they could in the market, and then tax-funded government programs would make sure no one was completely destitute. Even with a large degree of inequality, economic growth would make all social classes continually better off over time. The assembly-line worker in the automobile factory would earn only a fraction of what the company CEO took home, but his standard of living would rise over his lifetime, and he knew that the inexorable march of prosperity promised a better future for his children.
The rising tide was the theoretical basis for the “trickle-down economics” of President Ronald Reagan and the conservative revolution. In their view, the postwar American economy already suffered from too much redistribution; cutting taxes for the rich would encourage them to work, save, and invest, accelerating growth and therefore benefiting all people. But it was also the justification for the opposition’s New Democrat narrative of growth and opportunity. “This isn’t the time to get caught up in distributional politics,” Democratic Leadership Council Chair Charles Robb said in 1986—“it’s time to make the economic pie grow.”
Greg Gibson/AP Photo
Senator Charles Robb, left, with President Clinton in 1994
Bill Clinton agreed. “The Democratic Party’s fundamental mission,” he wrote, is “to expand opportunity, not government; to recognize that economic growth is a prerequisite for expanding opportunity; to invest in the skills and ingenuity of our people in order to build shared prosperity.” In short, the key to making everyone better off is to encourage private-sector growth, while “expanding opportunity” to ensure that no one is left behind. Clinton’s primary domestic policy achievements included lower deficits, welfare reform, and financial deregulation, all traditional Republican goals; yet as long as economic growth was good for rich and poor alike, he could still claim that he was a champion of ordinary men and women.
The fundamental economic fact of our lifetimes is that this premise no longer holds. Growth no longer benefits all strata of society. Instead, the overwhelming majority of the gains from material progress are being claimed by people who are already very rich, while everyone else is left to share the crumbs.
To see what has gone wrong with the American economy, it’s first important to understand what growth is and why it makes people better off. We say an economy grows when the total amount of goods and services it produces—often called gross domestic product, or GDP—increases from year to year. A country’s GDP is mainly determined by the number of people in the economy (population) and the average amount that each person can produce (productivity). Over the long term, growth in GDP usually results either from an increase in population or from an increase in productivity. The former, however, makes people no better off on average because more stuff has to be divided among more people. It is only productivity growth that leads to higher standards of living. When we get better at making things, we end up with more things per person.
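The decomposition above can be sketched with a few lines of arithmetic (the numbers below are purely illustrative, not the chapter's data):

```python
# Toy decomposition: GDP = population x productivity (output per person).
# All numbers here are hypothetical, chosen only to illustrate the logic.
population = 1_000_000
productivity = 54_000  # dollars of output per person per year

gdp = population * productivity

# Population growth alone: total GDP rises, but output per person does not.
gdp_bigger_pop = (population * 2) * productivity
per_person_bigger_pop = gdp_bigger_pop / (population * 2)

# Productivity growth alone: output per person rises with it.
gdp_more_productive = population * (productivity * 1.02)
per_person_more_productive = gdp_more_productive / population

print(per_person_bigger_pop == productivity)      # no better off on average
print(per_person_more_productive > productivity)  # living standards rise
```

The point of the sketch is just that per-person output cancels population out of the numerator and denominator, so only the productivity term can lift living standards.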
In other words, a rising standard of living is only made possible by long-term increases in productivity. Fortunately for us, productivity has been climbing higher since the beginning of the Industrial Revolution, and in particular since World War II. From 1948 until 1972, as shown in Figure 2.1, productivity rose at an average rate of 2.7 percent per year, and wages grew at exactly the same rate (after adjusting for inflation). As people got better at making stuff, they were paid more to make it. This makes intuitive sense, because if a worker can produce twice as much as before, her labor is worth twice as much to her company. When this is the case, economic growth does make most people better off, because ordinary workers are getting a fair share of its benefits.
Since 1972, however, the story has completely changed. Productivity has continued to grow, albeit more slowly (at an average annual rate of 1.3 percent). But in contrast to previous decades, workers have received only a small fraction of those gains in the form of higher wages, which have increased at a rate of less than 0.3 percent per year. People got better at making stuff, and therefore the economy got bigger—but most of them did not get a fair share of that growth. On average, a worker in 2017 could produce 82 percent more per hour than a worker in 1972, but was only paid 13 percent more.
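Those cumulative figures follow from compounding the annual rates. A quick back-of-the-envelope check, using the chapter's rounded rates (which is why the totals land near, rather than exactly on, the 82 and 13 percent cited):

```python
# Compound the chapter's (rounded) average annual growth rates over 1972-2017.
years = 2017 - 1972  # 45 years

def compound(annual_rate: float, years: int) -> float:
    """Cumulative growth from compounding an annual rate over a span of years."""
    return (1 + annual_rate) ** years - 1

# Rates are rounded, so these land in the ballpark of the cited 82% and 13%.
productivity_gain = compound(0.013, years)  # roughly 0.79
wage_gain = compound(0.003, years)          # roughly 0.14

print(f"productivity up ~{productivity_gain:.0%}, wages up ~{wage_gain:.0%}")
```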
So where instead did the benefits of higher productivity and economic growth go? The answer is simple: to the very rich.
Figure 2.2 shows how market forces have distributed the benefits of growth across different social strata between 1980 and 2016. (Figures 2.2 and 2.3 are from research by Thomas Piketty, Emmanuel Saez, and Gabriel Zucman, probably the most prominent economists studying income and wealth inequality today. Their latest series end in 2016; all figures are adjusted for inflation.) Incomes for the bottom 50 percent of the population (blue line) have barely budged. In other words, half of our country gained virtually no material benefit from 36 years of economic growth. In the meantime, the incomes of the top 1 percent (green line) have tripled; the groups in between have done modestly better than the bottom 50 percent, but have not enjoyed anything like the rewards flowing to the very rich. (People in the top 0.1 percent did even better; their incomes almost quadrupled.) It is by now well known that the 1 percent have gained a much larger share of national income in recent decades; in the process, as this figure shows, they have left almost nothing for much of the population.
But things are actually worse than they seem. Figure 2.3 shows the change in average income since 1980 for people in the bottom 50 percent of the distribution—that is, the half of the country that was once associated with the Democratic Party. Within this group, working-age adults, particularly younger ones, are making less than they did in 1980; only the elderly are better off, because of rising Social Security and pension payouts. If we look instead at after-tax income—taking into account taxes and government spending—the picture is only slightly improved. Average after-tax income for the poorer half of the population increased by 25 percent (about 0.8 percent per year), but all of this growth was enjoyed by the elderly, thanks in large part to the rising dollar value of Medicare.
There are many reasons why the wealthy have been the primary beneficiaries of increased productivity in recent decades. The most important explanation is a shift in the balance of power between labor and capital. If companies can make 82 percent more stuff while only paying workers 13 percent more, their per-unit costs are lower. Lower costs result in some combination of lower prices, which help everyone, and higher profits, which benefit the owners of those businesses. Those owners are largely rich people, virtually by definition; their equity holdings in companies, either directly or through stock portfolios, are what make them rich. And lower prices on many consumer goods have been more than canceled out by steep increases in the cost of health care, education, and housing.
A second obvious reason why the rich have done so much better than everyone else has been tax cuts that favor the wealthy—particularly tax cuts on capital income. These were primarily the work of Republicans, including the 1981 Reagan tax cut and the 2001 and 2003 Bush tax cuts, but Democrats had a part to play as well—the Clinton administration with the 1997 capital gains tax cut, and the Obama administration with the 2013 deal that made permanent most of the Bush tax cuts. Other factors include globalization and technology, which have enabled highly educated, highly skilled people to earn more money by serving larger markets more efficiently than before.
Growth has not been widely shared in the contemporary U.S. economy—and neither has opportunity, the other pillar of the New Democrat ideology. The economic importance of a college education has never been greater, yet children from affluent families are more than four times as likely to earn a bachelor’s degree as children from poor families. Large gaps in educational outcomes exist even between students with similar levels of academic preparation, as indicated by standardized assessments. More disturbing still, when comparing people born in the 1980s to those born in the 1970s, the advantages of wealth have only grown. Even as more children of lower-income families have attended college (in part thanks to the greater availability of student loans), their chances of earning a degree have actually declined; by contrast, rich children have become much more likely to successfully graduate.
In short, since the 1970s, the tide has continued to rise, but it has not lifted all boats—only the luxury yachts of the super-rich. In the 1970s, the richest 1 percent of the population owned 20–25 percent of total wealth; today they own about 40 percent of everything that can be traced—and considerably more if you include the likely assets hidden in offshore tax havens. The economy does a decent job of increasing productivity and therefore generating growth in total output, but leaves ordinary people little better off than they were 40 years ago. For example, the typical man working a full-time, year-round job earned $51,640 in 2016—after adjusting for inflation, only $16 more than in 1976. Overall, median household income in 2017 was only 19 percent higher than in 1973—an increase of less than 0.5 percent per year—and has been essentially flat since 1999. The Great Recession wiped out all wealth gains for the bottom 50 percent of the population since the 1950s; by 2016, this group was only as rich as it had been in the late 1960s.
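The sub-half-percent figure can be checked by annualizing the cumulative growth (the inputs below are the chapter's rounded totals):

```python
# Convert a cumulative growth figure into a constant annual rate.
def annualized(total_growth: float, years: int) -> float:
    """Annual rate that compounds to the given cumulative growth."""
    return (1 + total_growth) ** (1 / years) - 1

# Median household income: up 19% over the 44 years from 1973 to 2017.
income_rate = annualized(0.19, 2017 - 1973)
assert income_rate < 0.005  # indeed less than 0.5 percent per year

print(f"~{income_rate:.2%} per year")
```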
Looking at the most intuitive measure of the so-called American dream, there is no longer any reason to be confident that children will do better than their parents. The rate of absolute mobility—making more money than one’s parents—was more than 90 percent for people born in 1940, but only 50 percent for those born in 1980. And growth alone cannot be the answer; given the way economic rewards are distributed today, we would need annual growth rates exceeding 6 percent—a virtual impossibility—to restore the levels of absolute mobility enjoyed by people born in the 1940s.
With Democrats in the White House for 16 of the past 27 years, we cannot simply blame this situation on the Republicans. Since the Clinton administration, the overarching economic strategy of the Democratic establishment has been to foster overall economic growth rather than aiding struggling families directly, on the premise that everyone would benefit in the long run. But even to the extent those policies helped the economy expand—for example, low deficits in the 1990s may have contributed to that decade’s boom—it is clear in retrospect that they only had the effect of making a thin stratum of rich people even richer. In addition, many of the signature policies of the Clinton and Obama administrations turned out to be directly harmful to poor and working-class people.
Marcy Nighswander/AP Photo
With the Treasury Department in the background, a homeless man pulls his belongings across Lafayette Square in Washington, December 1993.
Welfare reform was the core domestic policy achievement of the Clinton presidency. Tightened eligibility requirements for cash assistance and lifetime benefit caps did have the intended result of reducing the number of people on welfare, which fell by more than half. But contrary to the facile predictions of the bill’s supporters, people who lost government benefit checks did not magically become productive workers able to support themselves. Those with superior skills or educational attainment were able to find decent jobs, especially in the booming economy of the late 1990s, and many also benefited from contemporaneous increases in the Earned Income Tax Credit. But another group of people left welfare for low-wage jobs that barely made up for the cash assistance they had lost, while many others—about 40 percent of former recipients—were unable to find jobs and were forced to struggle without either paying jobs or cash assistance. The number of people in deep poverty—those in households with incomes below half of the poverty threshold—increased after the 1996 bill, largely because of its restrictions on cash assistance.
The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 made life much harder for the most vulnerable populations—people without the education, skills, or stable health necessary to find and keep steady jobs. In addition to imposing a five-year cap on cash assistance, the bill gave states the ability to impose harsher restrictions. The Arizona legislature, for example, recently set a one-year limit on welfare benefits—which were already only $278 per month for a family of three. Across the country, the result has been a sharp rise in families that have virtually no cash income—less than $2 per person, per day. The number of children living in these extremely poor households almost tripled from 415,000 in 1995 to 1.2 million in 2012 (even after adjusting for underreporting of income); the increase was even more rapid in single-mother households. Poor people who used to be able to rely on a small check from the government now must resort to selling blood plasma (paid plasma donations have almost tripled since the late 1990s), food pantries (whose clients have more than doubled), collecting recyclable containers, and selling food stamps at a deep discount.
As welfare reform consigned some poor families to destitution and economic hopelessness, it also left both federal and state governments free to ignore their plight. Out of every dollar spent under the Temporary Assistance for Needy Families (TANF) umbrella, only 26 cents provide cash assistance to poor families, and another 24 cents fund programs to help poor people get and keep jobs. The other 50 cents (or much more in some states) is diverted to other uses ranging from educational scholarships to the child welfare system to marriage counseling (with a full 8 cents dedicated to “preventing out-of-wedlock pregnancies”). The consequence is that only 23 percent of families in poverty actually get any financial support from TANF. The meagerness of our welfare system was particularly apparent during the Great Recession, when the number of families receiving assistance barely budged even as the unemployment rate more than doubled—not a surprising outcome, given that TANF is funded as a block grant that grows more slowly than inflation. Turning to market incentives to address the challenge of poverty had exactly the outcome we should have expected: Those people better able to sell their skills in the market economy became better off, while those without the ability to compete—often through no fault of their own—suffered.
The flaws of welfare reform have gone largely unnoticed because few people pay much attention to the plight of the poor. By contrast, the Democratic Party’s embrace of financial deregulation turned out to be a spectacular catastrophe. Clinton-era financial policy rested on two basic premises: First, relaxing constraints on the banking and securities markets would unleash innovation, expanding access to capital and turbocharging the economy; and second, market forces and the self-interest of sophisticated institutions would protect consumers from abuse and ensure the safety of the overall system. These assumptions were calamitously wrong.
Deregulation had exactly the immediate results that its supporters predicted. There was a huge increase in concentration in the financial sector, as banks combined with each other in mergers of ever-increasing scale and scope. To take just one example, NationsBank bought Boatmen’s Bancshares in 1996 and Barnett Bank in 1997, becoming the biggest bank in the country; in 1998, NationsBank merged with Bank of America (which had absorbed Security Pacific in 1992); and in 2004, the new Bank of America bought FleetBoston (itself the merger of the three largest banks in New England).
At the same time, the Gramm-Leach-Bliley Act of 1999, along with earlier rulings by the Federal Reserve, made possible a new generation of mega-institutions that did everything from taking deposits and making commercial loans to structuring and underwriting complex securities, trading customized derivatives, and placing massive proprietary bets on the direction of markets. In virtually every case, the risk-loving culture of the investment banks (where people talked about “ripping the face off” their customers) dominated the once more conservative outlook of the commercial bankers. And deregulation certainly fostered financial innovation. Of the many brilliant ideas churned out by the wizards of Wall Street, the most important was the structured asset-backed security, epitomized by the mortgage-backed security (MBS) and collateralized debt obligation (CDO). These new creations allowed banks to pool together a large number of mortgages and repackage them as multiple flavors of MBSs with customized risk and return characteristics—and then repeat the process, using those MBSs as inputs to create CDOs. By selling these MBSs, CDOs, and even third-generation CDO-squared products to investors, the banks vastly increased the amount of money available for the housing market, spurring a rapid escalation in home construction.
This influx of capital did lower interest rates for homeowners. But the steady stream of profits from structured securities spurred demand for loans, particularly the subprime mortgages that made the best raw material for MBSs. A rapidly growing industry of mortgage lenders and brokers targeted millions of largely unsuspecting borrowers, pushing toxic products such as the option adjustable rate mortgage, whose monthly payments could explode upward after a teaser period of two or three years. Lenders falsified loan applications or simply agreed not to verify borrowers’ income; the investment banks packaging mortgages together overlooked evidence of fraud or shoddy underwriting, eager to get their hands on higher-cost loans that could yield more in the secondary market.
The result of this frenzy of lending and securitization was the largest housing bubble in modern history. Even after it peaked in 2006, the large banks continued squeezing profits out of a housing industry they knew was heading for a crash, selling synthetic CDOs based not on actual MBSs, but on side bets for and against existing securities. When the bubble finally imploded in 2008, many securities that were supposed to be utterly safe turned out to be worthless. The megabanks that a generation of policymakers had hailed as the engine of capitalism were worthless as well, their balance sheets weighed down by toxic assets they had concocted themselves. Hundreds of acres of unfinished developments were left to rot in the sun in Florida and the Southwest, and the shock waves of a collapsing financial system paralyzed the real economy, producing the worst downturn since the Great Depression. Most galling, the banks that had for decades been lobbying to be left alone by Washington came begging for federal support. Both the Bush and Obama administrations obliged, bailing them out on generous terms, rescuing both their CEOs and their shareholders from their mistakes.
The bottom line of the financial crisis is virtually incalculable: nine million families forced out of their homes, almost nine million jobs lost, record levels of long-term unemployment, trillions of dollars in lost output, and an economy whose overall capacity has been damaged for the foreseeable future. But those losses were not spread evenly. Although rich people lost paper wealth when the stock market plummeted at the height of the crisis, by 2019 the S&P 500 index had more than quadrupled from its lows—and almost doubled since its earlier peak in 2007. The median family in the top 10 percent of the income distribution was 27 percent richer in 2016 than before the crash, and is worth considerably more today. Wall Street bonuses rebounded to their pre-crisis levels, reaching $184,000 on average by 2017. Highly skilled, well-paid people living in winner-take-all cities, such as San Francisco, New York, Washington, and Boston, have thrived as the economy continues to grow.
On the other hand … Employees with mid-level skills were replaced by more educated workers or by machines. Rural communities hemorrhaged businesses and jobs that will probably never return. People who lost their homes to foreclosure, their credit records ruined, were unable to benefit from the housing recovery. Because minority borrowers were more likely to have taken out high-cost mortgages, the financial crisis and Great Recession exacerbated differences in wealth across racial and ethnic groups. The percentage of households owning their own homes fell to its lowest point in 50 years; even after a modest recovery, homeownership rates for all groups except the elderly are lower than in the 1980s. Long-neglected government programs for public or subsidized rental housing have been overrun by the need for affordable rentals in expensive cities; the proportion of eligible families receiving any housing assistance has fallen to 25 percent. In 2016, the median family had a net worth 29 percent lower than in 2007. In the long run, financial deregulation only deepened the growing divide between the haves and the have-nots in America.
Paul Sancya/AP Photo
Waiting in line at a job fair in Southfield, Michigan, June 2011
The verdict on Democratic health care reform is not quite so bleak. The Affordable Care Act undoubtedly helped many poor and middle-class families, particularly by expanding eligibility for Medicaid (at least in states whose Republican governments did not reject federal money out of spite) and by helping people without employer-sponsored coverage buy health insurance. But the cracks in Obamacare were already visible before the Trump administration set out to undermine it, for the simple reason that the program could never be a solution to our fundamental health care problem: Modern medicine in the United States is expensive. In aggregate, we spend more than $10,000 per person on health care—almost twice as much as in other rich countries—and the sector continues to grow faster than the economy as a whole. Obamacare makes it easier for people to shop for health plans, but it relies on private, profit-seeking insurers, so we have to let them pass rising prices through to all of us—or they will simply pull out of the market. (The Affordable Care Act eliminated insurers’ other option, which was to stop covering sick people.) Even after a slight dip in 2019, benchmark premiums for policies sold on the new health insurance exchanges have grown at an average annual rate of 12 percent since they were introduced for the 2014 coverage year. Premiums in the employer-sponsored market have also increased faster than inflation; in 2019, the average family plan costs $20,576, of which the employee pays $6,015 directly.
Premiums tell only part of the story. We are also living through a major change in what it means to have health insurance in the first place. More and more policies today demand high levels of cost sharing, also known as out-of-pocket payments, in the form of deductibles, co-payments, and coinsurance. There are two reasons for this shift. One is the idea that, for markets to function properly, consumers have to have the right incentives; forcing people to pay a significant share of their health care costs will make them buy less. The second reason is that insurers can offer lower up-front premiums by shifting the real cost of insurance to the time of service; families on a tight budget may only be able to afford plans with low premiums now that will require high out-of-pocket payments later.
Both of these factors have contributed to the rise of the high-deductible health plan (HDHP), which requires people to pay a large deductible before their coverage kicks in—in other words, health insurance that many people cannot afford to use. These policies essentially did not exist before the George W. Bush administration; by 2018, 30 percent of workers with employer-sponsored insurance were enrolled in HDHPs, with an average deductible for family coverage well over $4,000. As for Obamacare, the vast majority of the plans that people buy on the exchanges have high deductibles.
This is why the headline problem with our health care system is no longer that people don’t have coverage at all (although the proportion of people who are uninsured has crept back up to 13.7 percent in recent years), but that too many nominally insured people can’t afford the care they need. For example, even after subsidies, a family of four earning $60,000 would have to pay $4,608 per year for a “silver” plan, and then could have to pay up to $13,000 in deductibles and co-payments. That’s a lot of money. And it doesn’t even include charges for out-of-network services, which usually are not covered by Obamacare plans. (One way that insurers control prices is by negotiating preferred rates with a narrow network of doctors and hospitals, but then refusing to cover services by other providers.)
Since the Affordable Care Act was passed in 2010, the decrease in the uninsured has been almost perfectly balanced by an increase in the underinsured—people whose premiums or out-of-pocket spending place a heavy burden on their finances. As of 2018, 37 percent of adults reported trouble paying medical bills, and 35 percent said they declined needed care because of costs. Of working- and middle-class adults (those making less than 2.5 times the poverty line), half are not confident that they could afford to pay for a serious illness, and two-thirds would be unable to pay an unexpected $1,000 medical bill. At the end of the day, insurers have responded to rising health care costs simply by providing less coverage for more money.
Unfortunately, there is reason to think things will only get worse. As long as the cost of medical care continues to increase, we either need to get less of it or pay more. If we insist on keeping premiums “affordable,” then the cost of coverage—whether for employer group plans or individual policies bought on exchanges—has to be shifted to out-of-pocket payments. People who can’t afford those payments, or who couldn’t afford the premiums in the first place, will have to go without needed care. Obamacare was a heroic effort to solve the problem of the uninsured, and it made things better in the short term. But any system based on competition between private insurers will lack the market power to rein in an out-of-control health care industry. In the meantime, the overall landscape is becoming more and more unequal: People with steady jobs at large employers have the best plans, while those working for small businesses and the casually employed are more likely to be stuck with policies that turn out to be inadequate when emergency strikes.
In short, the signature economic policies of the past two Democratic presidents have done little to help, and have in some cases harmed, the working- and middle-class families that their party once represented. Welfare reform, financial deregulation, and Obamacare were technocratic, market-oriented programs backed up by reams of white papers mass-produced by think tanks and government agencies. They were popular with the Democratic policy elite because they promised to benefit everyone without offending the corporate sector’s desire for profits, in contrast to an imagined past of naked redistribution from rich to poor. Their legacy, however, was an increase in extreme poverty, a ruinous financial crisis and recession, and a health care system destined to collapse under the weight of mounting costs.
More important still is what the most recent Democratic regimes did not do. Confident that a rising tide would lift all boats, they did little to confront rising inequality, even as President Obama—who could certainly talk a good game—called it the “defining challenge of our time.” Inequality has many causes, ranging from globalization and technology to school resegregation and changes in the tax code. But the economic platform introduced by the New Democrats, with its market-based solutions and its focus on overall growth (rather than who benefits from that growth), provided a convenient excuse to do nothing about inequality itself, nothing to help the people getting left behind with the wrong job skills or in the wrong part of the country. Both Presidents Clinton and Obama pointed with pride to the long economic booms during their tenures—overlooking the fact that it was precisely during those times that the very rich opened up the tremendous gap that separates them from everyone else.
Too much of what passes for Democratic economic policy still suffers the same basic flaw: wanting to believe that some clever, market-oriented, unmistakably capitalist policy idea will help low- and middle-income families. In recent years, the two main proposals that most Democrats could agree on have been infrastructure spending and investment in clean technology. Both of these are undeniably good ideas. Better infrastructure, both physical and digital, should promote innovation and economic activity. Research, development, and commercialization of renewable-energy technologies are urgently needed if we are to limit the impact of climate change to any kind of tolerable level. But these proposals are too small and too poorly targeted to do much to address wage stagnation and inequality. Most infrastructure spending plans would simply provide a short-term boost to the construction sector and otherwise leave intact the economic forces that allow the rich to appropriate the benefits of economic growth. Clean-technology investment would flow through the same venture capital network that turns select members of the educational elite into billionaires while providing enviable returns to well-connected institutional investors. There is no reason to believe that either of these ideas would change the fundamental dynamic of the past four decades: When good things happen to the economy, it is the wealthy few who benefit.
The crucial problem we must face is how the gains from economic progress are divided. Ideas that would actually address that problem arise only on the left of the Democratic Party, and then are usually resisted stubbornly (though often quietly, behind the scenes) by the establishment. A few manage to break through by sheer force of popularity, such as the campaign for a $15 minimum wage, but they are the exception. Even Medicare for All—which has majority support among the entire population, let alone Democrats—faces bitter resistance from centrists still terrified of being tagged as tax-and-spend liberals, hiding behind the Clintonite cloak of fiscal responsibility.
Growing awareness of inequality has prompted a new wave of thinking about how to affect the “predistribution” of income—that is, how the benefits of economic activity are divided before government intervention via taxes and spending programs. The minimum wage can be thought of as a policy tool aimed at predistribution, because it generally increases employees’ wages at the expense of company owners and customers. Rules that make it easier for unions to organize similarly enable workers to take home a larger share of the surplus that they create. Breaking up or aggressively regulating the large companies that dominate many sectors of our economy would begin to reverse the harms caused by decades of increasing market concentration, including lower wages for employees, higher prices for customers, and reduced innovation. Thinking about the distributional outcomes produced by our economic system is undeniably better than focusing solely on overall growth. Increasing labor’s share of national income is a valuable first step toward more fairly sharing the bounty generated by our economy.
At the same time, however, improving the way the market allocates income between the shop floor, the executive suite, and the owner's mansion will not be enough to undo the damage done by decades of bipartisan market boosterism. We enter the economic arena with vastly unequal endowments of wealth, education, and social capital. Improving the negotiating position of workers and limiting the market power of giant companies will help, but in a system that leaves people to sink or swim on the basis of their ability to sell their own labor, too many will struggle to stay above water. We should certainly try to influence the way markets function so that ordinary people realize a larger share of the benefits of growth, but this will not be a complete solution for the economic insecurity faced by working people today.
In the end, a Democratic Party traumatized by the Reagan Revolution and defined by its aversion to the words "socialist" and even "liberal" has proven powerless against the economic and political forces that have created this Second Gilded Age of monumental inequality. Flattered by its growing proximity to the economic elite and unwilling to do anything that might smack of class warfare, the party that was supposed to stand up for the working class instead took the side of markets and the overall economy, protesting that this was, in fact, the smarter, more sophisticated way to help all Americans. It wasn't.