A homeless encampment outside City Hall in San Francisco, on May 22, 2020, at the height of the Covid-19 lockdown. | Josh Edelson/AFP via Getty Images
Housing expert Matthew Desmond argues poverty has stagnated in America, but misses something big.
Matthew Desmond, the acclaimed Princeton sociologist and author of Evicted: Poverty and Profit in the American City, thinks that poverty has barely improved in the United States over the past 50 years — and he has a theory why. Laid out in a long essay for the New York Times Magazine that is adapted from his forthcoming book Poverty, by America, Desmond’s theory implicates “exploitation” in the broadest sense, from a decline in unions and worker power to a proliferation of bank fees and predatory landlord practices, all of which combine to keep the American underclass down.
Desmond, who won a Pulitzer Prize in 2017 for Evicted, is an original and nuanced thinker and I cannot do his 6,000-word argument justice in a short article. But I do know a little bit about how we measure poverty, and I want to back up briefly and interrogate Desmond’s fundamental premise: Has poverty in America persisted? Is it true that in recent decades, as Desmond writes, “On the problem of poverty … there has been no real improvement — just a long stasis”? Is it true, as he posits, that the large increase in government spending on antipoverty programs in recent decades (a 130 percent increase from 1980 to 2018, by his numbers) hasn’t made a dent in poverty?
There is widespread disagreement, including among experts, about how to define “poverty.” But contrary to Desmond’s claim that the stagnation “cannot be chalked up to how the poor are counted,” I would insist the answer to whether poverty has fallen or stagnated in America depends entirely on how the poor are counted.
One set of approaches gives a clear answer: Poverty has plummeted dramatically since the 1960s due to a huge increase in government spending on programs that help lower-income people. Another set of approaches suggests that poverty has, as Desmond insists, stagnated (and would have risen absent that government spending).
Both these approaches have useful, distinct stories to tell us about poverty in America. One point they agree on, though, is that safety net programs like food stamps, Medicaid, Social Security, and the earned income tax credit have played an important role in reducing poverty. That is, Desmond’s core premise, that expanding safety net programs haven’t slashed poverty, is wrong. They have. You just need to measure poverty carefully.
How to measure poverty
To come up with a poverty measure, one generally needs two things: a threshold at which a household becomes “poor” and a definition of income. For instance, in 2023, a family of four is defined by the government as officially in poverty in the US if they earn $30,000 or less. That’s the Official Poverty Measure’s threshold, and weirdly it’s the same for 48 states and DC, but higher in Alaska and Hawaii, supposedly due to their higher cost of living.
But what does it mean to earn $30,000 or less? Should we just count cash from a job? What about pensions and retirement accounts? What about Social Security, which is kind of like a pension? What about resources like food stamps that aren’t money but can be spent in some ways like money? What about health insurance?
These aren’t simple questions to answer, and scholars like the late, great Rebecca Blank devoted much of their careers to trying to answer them. But I think it’s fair to say there’s a broad consensus among researchers that income should be defined very broadly. It should at the very least include things like tax refunds and food stamps that are close to cash, and simpler to include than benefits like health insurance.
That’s why there’s also near-unanimous consensus among poverty researchers that the official poverty measure (OPM) in the United States is a disaster. I have written about poverty policy for over a decade and have never heard even one expert argue it is well-designed. I was frankly a little shocked to see Desmond cite it without qualification in his article.
Its biggest flaw is that it uses a restrictive and incoherent definition of income. Some government benefits, like Social Security, Supplemental Security Income (SSI), and Temporary Assistance for Needy Families (TANF), count. But others, like tax credits, food stamps, and health care, don’t count at all. Many programs designed to cut poverty, like food stamps, Medicaid, or the earned income tax credit, therefore by definition cannot reduce the official poverty rate, because they do not count as income.
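To make the mechanics concrete, here is a minimal, hypothetical sketch (in Python) of how the income definition alone can flip whether a household counts as poor. The dollar amounts, and the use of a single shared threshold for both income definitions, are illustrative assumptions rather than actual program or Census rules:

```python
# Toy illustration with made-up numbers: the same household can be "poor" or
# "not poor" depending on which resources the poverty measure counts as income.

THRESHOLD_FAMILY_OF_4 = 30_000  # the 2023 threshold for a family of four cited above

household = {
    "wages": 24_000,         # cash earnings: counted by the official measure
    "snap_benefits": 4_500,  # food stamps: not counted by the official measure
    "eitc_refund": 3_000,    # earned income tax credit: not counted either
}

opm_income = household["wages"]          # narrow, official-measure-style definition
broad_income = sum(household.values())   # broad definition that also counts near-cash benefits

print(f"Narrow income: ${opm_income:,} -> poor: {opm_income <= THRESHOLD_FAMILY_OF_4}")
print(f"Broad income:  ${broad_income:,} -> poor: {broad_income <= THRESHOLD_FAMILY_OF_4}")
```

Under the narrow definition, this household is poor no matter how much food stamp or tax credit money it receives; under the broad definition, those benefits lift it over the line.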
The Census Bureau now publishes a supplemental poverty measure (SPM), which uses a much more comprehensive definition of income that includes the social programs the OPM excludes. It also varies thresholds regionally to account for different costs of living, rather than simply breaking off Alaska and Hawaii. That’s a clear improvement.
Some experts, notably economists Bruce D. Meyer and James X. Sullivan, argue that looking for a definition of income is itself a mistake: Poverty is most usefully defined in terms of consumption, the resources people actually buy and consume. They argue this makes it easier to account for benefits like Medicaid. Getting Medicaid is hard to think of as “income,” but enrollees are definitely “consuming” things like doctor’s visits and prescription drugs that they would struggle to obtain without those benefits.
But overall, disputes among poverty experts about how to define income or consumption or “resources” tend, in my experience, to be muted compared to disputes over where to draw the thresholds: where to set the poverty line and how to adjust it over time.
The simplest way to approach this is to do what the official poverty measure does: Take a set amount of money and adjust it for inflation over time. Specifically, the poverty line was devised in 1963 by Mollie Orshansky, an economist at the Social Security Administration, based on the US Department of Agriculture’s 1961 estimate (itself based on 1955 data) of how much money a family of four would need for food if they were really pinching pennies. Orshansky tripled this estimate, since families of three or more typically spent about a third of their income on food at the time. (Americans now spend only about 10 percent of income on food, though the subset of families Orshansky was looking at may spend more.)
That was the poverty line, and it has not changed since, with the exception of annual adjustments according to the Consumer Price Index.
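In other words, the official line is just a bare-bones food budget multiplied by three and carried forward by inflation ever since. A rough sketch of that arithmetic, using approximate placeholder values for the 1963 food budget and the price indexes:

```python
# Rough sketch of how the official threshold is constructed (approximate,
# placeholder numbers): a minimal food budget times three, then updated
# each year with the Consumer Price Index.

economy_food_plan_1963 = 1_033                # assumed annual food budget, family of four
threshold_1963 = 3 * economy_food_plan_1963   # Orshansky's tripling rule

cpi_1963, cpi_2023 = 30.6, 304.7              # placeholder CPI index levels
threshold_2023 = threshold_1963 * (cpi_2023 / cpi_1963)

print(f"1963 threshold: ${threshold_1963:,.0f}")
print(f"2023 threshold (CPI-adjusted): ${threshold_2023:,.0f}")
```

With numbers in that ballpark, the tripled food budget comes out near $3,100 in 1963 and roughly $30,000 today, in line with the current threshold for a family of four.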
That is, of course, an incredibly arbitrary place to draw the line, and it’s almost a cliché at this point to note how dumb it is. There’s an episode of The West Wing with a subplot about how old and dumb and outdated the poverty line is, and that episode is itself now over 21 years old.
But experts are split on what a better line to draw would be.
Absolute versus relative poverty
The official poverty measure is what’s sometimes known as an “absolute” poverty measure. Measures like this generally only adjust their thresholds for inflation. Many are based on less arbitrary numbers than “what people spent on food in 1955,” and many use different measurements of inflation, since a lot of economists think the Consumer Price Index overstates price increases compared to the Personal Consumption Expenditures (PCE) or chained CPI measures. But they fundamentally have a lot in common with the OPM’s approach: They set a dollar threshold for who is and isn’t poor and stick to it.
Absolute poverty measures are crystal clear about what has happened to poverty since the 1960s: It plummeted. The chart below shows three different absolute measures, all of which use expansive income definitions, unlike the official rate. All three have fallen dramatically.
(Many thanks to economist Kevin Corinth for passing along this series from his working paper with Richard Burkhauser, James Elwell, and Jeff Larrimore.)
The primary case for absolute measures like these is that they’re easy to interpret. Because the thresholds only change due to inflation, changes in the poverty rate only happen because people near the bottom get richer or poorer. If poverty falls, it’s because some low-income people gained more money or resources. If it increases, it’s because some low-income people lost out. Insofar as those kinds of material changes at the bottom are the main thing one cares about, absolute measures can be helpful. As a group of Columbia researchers argued in 2016, absolute measures are “more useful for establishing how families’ resources have changed against a fixed benchmark.”
Applied to the US, the takeaway is that many fewer people are living on a very small amount of money than was the case in the 1960s.
But many poverty scholars prefer to use what are called “relative” measures. Such measures set the threshold as a percentage of the country in question’s median income (usually 50 or 60 percent). Most rich countries other than the US define poverty in this way. The European Union, for instance, uses what it calls an “at risk of poverty” rate, defined as the share of residents in a country living on less than 60 percent of the median disposable income. The United Kingdom uses a “households below average income” (HBAI) statistic, with the main threshold set to 60 percent of median income.
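Computationally, a relative measure is straightforward: find the median income, take 50 or 60 percent of it as the line, and count how many people fall below it. A minimal sketch, using a made-up income distribution:

```python
import statistics

def relative_poverty_rate(incomes, cutoff=0.6):
    """Share of people living below `cutoff` times the median income
    (the EU-style "at risk of poverty" definition described above)."""
    line = cutoff * statistics.median(incomes)
    return sum(income < line for income in incomes) / len(incomes)

# Made-up disposable incomes for a tiny ten-person "country"
incomes = [9_000, 14_000, 21_000, 28_000, 35_000, 41_000, 50_000, 62_000, 80_000, 120_000]
print(f"Relative poverty rate (60% of median): {relative_poverty_rate(incomes):.0%}")  # 30%
```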
The case for relative measures is that poverty is socially defined, and “being in poverty” is usually thought of as people not being able to exist with the level of comfort that is normal in the society in which they live. A common definition, from the British scholar Peter Townsend, posits that poverty is “the absence or inadequacy of those diets, amenities, standards, services and activities which are common or customary in society.” Commonness or customariness are relative attributes, not absolute ones. Some, like sociologist David Brady, have also argued for relative measures on the grounds that they correlate better with self-reported mental and physical health and well-being.
Looked at in relative terms, poverty hasn’t fallen in the US in recent decades. It’s stagnated:
Advocates of absolute measures counter that relative poverty measures inequality rather than actual deprivation. Bruce Meyer, for instance, cites the experience of Ireland in the 2000s, which experienced “real growth in incomes throughout the distribution including the bottom. However, because the middle grew a bit faster than the bottom, a relative poverty measure shows an increase in poverty. Thus, we have a situation of nearly everyone being better off, but poverty nonetheless rising.” The reverse can happen in recessions, where if median incomes fall faster than incomes at the bottom, poverty can fall, even though everyone’s worse off.
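Meyer’s Ireland example is easy to reproduce with toy numbers: give everyone a raise, but a larger one in the middle than at the bottom, and the measured relative poverty rate rises even though no one is materially worse off. A hypothetical illustration:

```python
import statistics

# Hypothetical boom: every income grows, but the middle grows faster than the bottom.
before = [12_000, 13_000, 20_000, 30_000, 40_000]
after  = [13_000, 14_000, 26_000, 40_000, 52_000]

for label, incomes in [("before", before), ("after", after)]:
    line = 0.6 * statistics.median(incomes)                # EU-style relative line
    rate = sum(x < line for x in incomes) / len(incomes)
    print(f"{label}: line = {line:,.0f}, relative poverty rate = {rate:.0%}")
# before: line = 12,000, relative poverty rate = 0%
# after:  line = 15,600, relative poverty rate = 40%
```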
Some measures, sometimes called “quasi-relative” or “semi-relative,” split the difference between the two approaches. They don’t merely vary with inflation, but they’re not a simple percentage of average incomes, either. The US supplemental poverty measure is a good example: It’s based on the 33rd percentile of spending on “food, clothing, shelter, and utilities” (FCSU). That is, researchers rank households by the amount they spend on those categories, find the point such that a third of households are below it and two-thirds are above, and use that as the basis for the SPM line. Because spending on these goods varies year to year, the thresholds change year to year, and not just based on inflation, but the change tends to be minimal compared to the changes in pure relative measures.
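The threshold construction behind the SPM can be sketched in a few lines: rank household spending on those four categories and find the value a third of the way up the distribution. A simplified illustration with simulated spending data (the real SPM applies further adjustments, for family size and housing status among other things, that are omitted here):

```python
import random
import statistics

random.seed(0)

# Simulated annual FCSU spending (food, clothing, shelter, utilities) for 1,000 households
fcsu_spending = [random.lognormvariate(10, 0.5) for _ in range(1_000)]

# The point with one-third of households below it and two-thirds above:
# the 33rd percentile of the spending distribution.
spm_basis = statistics.quantiles(fcsu_spending, n=100)[32]  # cut point at the 33rd percentile
print(f"Basis for the SPM-style threshold: ${spm_basis:,.0f}")
```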
Government taxing and spending has become more important in fighting poverty
So … who’s right? The boring but correct answer is that these measures capture different things and each tells us something interesting. The fall in absolute poverty tells us that fewer people are living on very low cash incomes than were in, say, 1980. One estimate suggests that, thanks to the fall in absolute poverty since 1967, 55 million fewer people lived in poverty in 2020 than would have if the absolute poverty rate had stayed at its 1967 level.
The stagnation in relative poverty tells us that income growth at the bottom hasn’t outpaced growth at the middle, and that a substantial share of Americans still live on incomes well below the national median — with 23.1 percent of Americans living in poverty under the definition used by the EU and UK (compared to 15.5 percent in the UK and 16.5 percent in the EU).
I do, however, want to highlight a point where absolute and relative poverty measures align: Government spending on social programs plays an important role in reducing poverty, and such spending does more to fight poverty now than it did in the recent past.
One highly cited absolute poverty measure is the “anchored” supplemental poverty measure, produced by Columbia researchers Christopher Wimer, Liana Fox, Irwin Garfinkel, Neeraj Kaushal, and Jane Waldfogel. This measure simply takes the Supplemental Poverty Measure thresholds from 2012 and extends them back to 1967, adjusting only for inflation.
This measure shows a substantial decline in poverty — but more importantly, it shows that government transfer programs are the only reason poverty has substantially declined. Before taxes and transfers, the poverty rate by this metric was 26.4 percent in 1967 and 22.5 percent in 2019. In the pandemic year of 2020, it shot up to 24.9 percent, barely different from 53 years earlier. But after taxes and transfers, poverty fell from 25 percent in 1967 to 11.2 percent in 2019 — and to 8.4 percent amid the flood of stimulus money in 2020. The big story here is that government programs are doing much more than they did in the 1960s or 1980s to slash poverty.
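The trend is easiest to see if you compute the percentage-point reduction that taxes and transfers deliver in each of those years, using the figures reported above:

```python
# Anchored-SPM poverty rates cited above, in percent
rates = {
    1967: {"pre": 26.4, "post": 25.0},
    2019: {"pre": 22.5, "post": 11.2},
    2020: {"pre": 24.9, "post": 8.4},
}

for year, r in rates.items():
    reduction = r["pre"] - r["post"]
    print(f"{year}: taxes and transfers cut poverty by {reduction:.1f} percentage points")
# 1967: 1.4 points; 2019: 11.3 points; 2020: 16.5 points
```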
One sees the same pattern in relative poverty. A 2019 paper by researchers Koen Caminada, Jinxian Wang, Kees Goudswaard, and Chen Wang for LIS, an international research center for income and poverty issues, estimates that in 1985, taxes and transfers in the US reduced relative poverty by 6.2 percentage points. In 2013, the reduction was 9.7 points. Without government intervention, relative poverty would have increased from 1985 to 2013; instead, it merely stagnated.
Desmond, in his essay, spends some time marveling that “federal investments in means-tested programs increased by 130 percent from 1980 to 2018,” a fact he finds hard to square with the official poverty rate remaining flat. Surely that spending should have reduced poverty!
The answer here is simple: It did reduce poverty. The escalation of government investment made a difference no matter which reputable poverty data you look at, whether absolute or relative. The only data series where it doesn’t make a difference is the official poverty measure, which simply does not count most of this spending and so acts as if it does not exist.
The points Desmond makes about forces of exploitation in the markets poor people interact with — from payday lenders, to bosses who can take advantage of their monopoly power and weakened unions to set low wages, to the landlords he profiled in his breakout book — are well-taken. These forces could very well help explain why poverty would have stagnated or risen without government intervention, and addressing them might prove effective at fighting poverty. But there’s no need to couple this argument with claims that government spending has done nothing to reduce poverty. It has done a tremendous amount.
Much of the confusion in Desmond’s piece is not his fault, exactly. It’s the fault of the US government and its official poverty measure. Congress and the Department of Health and Human Services urgently need to abolish the OPM. It’s a bad number that tells a misleading story about poverty in America, and acting to replace it would do a lot of good.