U.S. net debt hits $4 trillion in 2011—the cumulative toll of a generation of trade deficits

The U.S. Bureau of Economic Analysis (BEA) recently announced that the U.S. net international investment position (NIIP) was -$4 trillion at year-end 2011 (see figure, below). The NIIP stood at -$2.5 trillion at year-end 2010. The $1.6 trillion increase in the net debt was largely caused by price changes of -$802 billion (on domestic and foreign holdings of stocks and bonds) and by net financial flows of -$556 billion. Net financial flows were largely explained by the financing of the $466 billion U.S. current account deficit in 2011. The current account is the broadest measure of the U.S. trade deficit. While the costs of financing the NIIP were relatively small in 2011, they could rise rapidly if interest rates return to more normal levels in the future.

The United States has been borrowing hundreds of billions of dollars per year for more than a decade to finance its growing trade deficits. However, until 2011, the U.S. NIIP had not declined proportionately, as shown in the figure below, primarily because of gains in the prices of foreign stocks, the decline of the dollar (which made foreign currency holdings more valuable), and frequent accounting revisions (which have found more and more U.S. investments abroad).

Last year, several of those factors moved against the United States as the NIIP declined $1.6 trillion to -$4 trillion. That’s real money. Foreign investors (primarily foreign central banks) held $5.7 trillion in treasuries and other government securities at the end of 2011. The United States paid, on average, about 2.3 percent in interest on all of those securities. These low rates are caused by the still-depressed U.S. economy operating far below potential, and they are unlikely to rise unless the U.S. economy begins operating much closer to full employment. But if this recovery happens and the NIIP remains roughly as large as it is today, then debt service costs could rise significantly. For example, if the average cost of government debt rises to 4.5 percent, it would add another $124 billion to the U.S. government deficit. Furthermore, if this rise in U.S. borrowing costs were not matched by a rise in global interest rates, it would actually cause a net decline in U.S. GDP, as income flowing out of the country to service debt increased without being matched by increased inflows to U.S. owners of foreign assets.1
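A quick back-of-the-envelope check of that debt-service arithmetic (a sketch using the rounded figures above; the $124 billion figure in the text reflects unrounded rates):

```python
# Rough debt-service arithmetic using the rounded figures cited above.
foreign_held_securities = 5.7e12   # Treasuries and other gov't securities held abroad, end of 2011
current_rate = 0.023               # approximate average interest rate paid in 2011
higher_rate = 0.045                # assumed "more normal" rate, roughly the 2000-2007 average

extra_interest = (higher_rate - current_rate) * foreign_held_securities
print(f"Added annual interest cost: ${extra_interest / 1e9:.0f} billion")
# ~$125 billion with these rounded inputs (the text cites $124 billion)
```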

The U.S. NIIP represents a potential claim against future national income, and the size of this potential claim is growing dramatically, as shown in the figure above. Each year that we allow large trade deficits to continue is another year that adds to this claim on future incomes—yet this actual intergenerational transfer is often ignored, while a non-existent intergenerational transfer (the one allegedly caused by rising federal budget deficits) attracts much attention from pundits and economic commentators.2


Sources:

Board of Governors of the Federal Reserve System. 2012. “Selected Interest Rates (Daily) – H.15: Historical Data.”

U.S. Bureau of Economic Analysis (BEA). 2012a. “International Economic Accounts: Balance of Payments.”

U.S. Bureau of Economic Analysis (BEA). 2012b. “International Economic Accounts: International Investment Position.”

Endnotes

1. Average rate of return on U.S. government securities in 2011 calculated from data in the current account (BEA 2012a) and the NIIP (BEA 2012b). Return on seven-year treasury securities used for comparison. The average return on seven-year treasuries was 2.16 percent in 2011 (Board of Governors of the Federal Reserve System 2012). Their average return in the pre-recession period of 2000-2007 was 4.52 percent.

2. Interest payments on government debt owed to U.S. citizens only reallocate income from taxpayers to domestic bondholders. Foreign holdings of U.S. securities represent claims on future income, which are qualitatively different. Interest payments on foreign holdings reduce U.S. GDP, while interest paid to domestic holdings does not. Given the existence of substantial unemployment and the predominance of deficit opponents in Congress, increases in the government debt due to financial outflows could result in further spending cuts, which would cause a further decline in U.S. GDP.

Apple’s shine is fading

Apple is rapidly becoming the symbol of what’s wrong with our economy: a highly profitable enterprise where all the gains go to those at the top and the vast majority, including those with college degrees, struggle to get by. Saturday’s New York Times article by David Segal deepens the story beyond Apple’s complicity in exploiting Chinese manufacturing workers. According to Segal, “About 30,000 of the 43,000 Apple employees in this country work in Apple Stores, as members of the service economy, and many of them earn about $25,000 a year.”

That $25,000 annual salary works out to $12.02 an hour for someone working full-time for one year (2,080 hours paid, either for work hours or paid leave). That’s pretty low; it is only about $1 above the “poverty-level wage” (the poverty line for a family of four in 2011 was about $23,000, equivalent to an hourly wage of $11.07). Segal’s article starts off talking about a former Apple employee, Jordan Golson, who earned just $11.25 an hour. Many of these Apple store workers are young, so one wonders how Apple wages compare with those of other young college graduates. The short answer is “not so good,” or even “terrible.” The hourly wage of young college graduates (those ages 23-29) in 2011 was $21.68 for men and $18.80 for women. To be fair, Segal notes that, “The company also offers very good benefits for a retailer, including health care, 401(k) contributions and the chance to buy company stock, as well as Apple products, at a discount,” so including benefits may offset some of the discrepancy between pay at Apple and pay at other companies. The information necessary to calculate this offset is unavailable, but it is not believable that these benefits fully or even significantly make up such a large shortfall in wages.
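A quick check of that arithmetic (a sketch using the rounded figures above; the $11.07 poverty-level wage cited in the text uses the exact 2011 poverty threshold rather than the rounded $23,000):

```python
# Converting the cited $25,000 Apple Store salary to an hourly wage and comparing
# it with the poverty-level wage described above (figures as cited in the post).
annual_salary = 25000
hours_per_year = 2080                    # full-time, full-year (paid work plus paid leave)
poverty_line_family_of_four = 23000      # approximate 2011 threshold cited above

hourly_wage = annual_salary / hours_per_year
poverty_level_wage = poverty_line_family_of_four / hours_per_year

print(f"Apple Store hourly wage: ${hourly_wage:.2f}")          # ~$12.02
print(f"Poverty-level wage:      ${poverty_level_wage:.2f}")   # ~$11.06 with the rounded threshold
print(f"Difference:              ${hourly_wage - poverty_level_wage:.2f} per hour")
```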

How do Apple store wages compare to those of all college graduates? As the table below shows, $12.02 is far below the 20th percentile wage of college graduates, the wage that 80 percent of college graduates earn more than and 20 percent earn less than. That’s right, Apple’s store employees’ wages are in the bottom 20 percent of all college graduates. In fact, $12.02 is $2.24, or 16 percent, less than the 20th percentile college wage in 2011. For college-educated men, $12.02 an hour is roughly on par with the 10th percentile wage, $11.87, meaning 90 percent of college-educated men earned more than that in 2011.

Table 1

Hourly wage for college graduates, selected percentiles, 2011

Percentile*       All        Men        Women
10                $10.80     $11.87     $10.12
20                $14.26     $15.49     $13.09
Median (50)       $23.07     $25.96     $20.25
*The Xth percentile wage is the wage at which X percent of wage earners earn less and (100-X) percent earn more.

Source: Author’s analysis of Current Population Survey Outgoing Rotation Group files
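The comparisons above can be reproduced directly from Table 1 (a quick check using the figures as cited):

```python
# Reproducing the wage comparisons from Table 1 and the $25,000 salary cited earlier.
apple_hourly = 25000 / 2080          # ~$12.02
p20_all = 14.26                      # 20th percentile, all college graduates
p10_men = 11.87                      # 10th percentile, college-educated men

gap = p20_all - apple_hourly
print(f"${gap:.2f} below the 20th percentile ({gap / p20_all:.0%} less)")  # ~$2.24, ~16%
print(apple_hourly > p10_men)  # True: $12.02 is just above the men's 10th-percentile wage of $11.87
```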

It is already well-known that Apple benefits from the extremely low wages and harsh working conditions of the Chinese workers who manufacture its products. As EPI’s Ross Eisenbrey and Isaac Shapiro recently wrote, “Apple workers in China endure extraordinarily long hours (in violation of Chinese law and Apple’s code of conduct), meager pay, and coercive discipline.” Together with the mediocre pay for Apple’s U.S. store employees, even compared with other retailers, it is clear that Apple’s success does not translate into high or rising living standards for the workers who, one would hope, would benefit from that success. Apple could readily afford to pay the Chinese Foxconn workers building iPhones more, because their labor costs are a minuscule part of the phone’s cost. Raising pay is not that heavy a lift for Apple: In 2011, Apple’s nine-person executive leadership team received total compensation of $441 million, equivalent to the estimated compensation of 95,000 Foxconn factory workers assembling Apple products.
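To put the executive-pay comparison in per-worker terms, here is the rough implication of the two figures just cited (not a separately reported number):

```python
# Rough per-worker implication of the executive-pay comparison above,
# derived only from the two figures cited in the text.
executive_team_comp = 441e6    # total 2011 compensation of Apple's nine-person executive team
equivalent_workers = 95000     # Foxconn assembly workers with the same estimated total compensation

implied_annual_comp = executive_team_comp / equivalent_workers
print(f"Implied average annual Foxconn compensation: ${implied_annual_comp:,.0f}")  # ~$4,642
```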

The discrepancy between Apple’s profits/executive pay and its compensation to its workers is a particularly glaring example of what is occurring in the wider economy. The gap between CEO compensation and that of a typical worker is now 231-to-one, up from 58.5-to-one in 1989. Corporate profits are now higher as a share of corporate-sector income than in any year since the early 1940s, when we had a War Labor Board consciously suppressing wage growth. And this all contributes to the phenomenon that productivity—the ability to produce more goods and services per hour—has been rising rapidly while the hourly compensation of both high school- and college-educated workers has been essentially flat. It does not look like much will change soon unless there’s a broad change of thinking among policymakers and a mobilized workforce. After all, current outcomes have been dictated by persistent high unemployment, low and weakly enforced labor standards (witness the failure of Apple to abide by California’s wage and hour law mandate of two 10-minute breaks a day, reported in the Times story), the inability of unions to set high labor standards, and the dominant political/policy influence of the wealthy and the business community. Apple’s labor practices and the overall failings of the economy have not been dictated by any economic laws. Rather, they are the result of eminently changeable public-sector policies and private-sector practices.

Supreme Court contorts itself to deny overtime protection to 90,000 pharmaceutical employees

In a 5-4 decision issued this week in Christopher v. SmithKline Beecham Corp., the Supreme Court, in its eagerness to reach a result favoring the pharmaceutical industry over its employees, abandoned the legal straight and narrow for some very sketchy shortcuts. The case concerned the application of overtime protection to medical detailers, also known as pharmaceutical representatives, employees who visit physicians and promote prescription drugs. If the detailers are “outside salesmen,” they are exempt employees and are not entitled to overtime pay.

Ignoring the plain meaning of key words, the “ordinary usage” which Justice Antonin Scalia elsewhere has claimed to favor, the court declared medical detailers to be outside salesmen because—even though they never make a sale of pharmaceuticals to anyone—they come as close to selling as the law governing their industry allows. The best the court could do in terms of identifying sales that these supposed salesmen make is to find that the detailers induce “non-binding commitments” from physicians to prescribe the drugs their pharmaceutical companies are promoting or marketing. The court found that the fact the detailers almost get commitments from these physician “gatekeepers”—without whom no one could sell the prescription drugs being promoted—is enough to treat the “transaction” as a sale. Whew, talk about bootstrapping and judicial activism! A justice could get a hernia with that kind of lifting!

But who in reality buys prescription drugs? Certainly, in any normal economic sense, it’s not the prescribing physician. There are, in fact, two parties that purchase them, and the detailers don’t sell (or even make binding commitments) to either: the retail drug stores like CVS and Walgreens, and the patients who are the end users. The court deals with sales to the drug stores in a most unsatisfactory way: It says that the people who actually make those sales are so few (2,000 sales agents vs. 90,000 detailers), and their function is so rote, that we should ignore them.

The persons who make sales (exchanging money for a product) to patients are pharmacists, but the court argues that there would be no sales without the prescribing physicians, who deal with the medical detailers and have a completed transaction when they make a non-binding commitment, not to buy, but only to prescribe the drugs for appropriate patients. According to the court, this is “tantamount” to a sale.

An unfortunate lesson this case teaches is that no one knows what the law means until the Supreme Court decides the result it wants and then stretches the meaning of the statutory or regulatory language to (more or less) fit the result.

The other lesson from this decision is for the Labor Department, which had never in 60 years brought an enforcement action against a pharmaceutical company in a way that gave the industry notice that its widespread practice of denying overtime pay to detailers was unlawful. The medical detailers are relatively well paid and loosely supervised employees whose employers do not closely monitor their work time—not the classic employees we think of when we talk about overtime pay. Although there is no excuse for the tortured logic of the majority opinion, if the Labor Department had given fair notice that it disapproved of the exemption of detailers, either by bringing enforcement actions over the years or by issuing consistent guidance that made its interpretation of the statute and its regulations clear, the court might have found that the exemption did not apply.

In other words, if we don’t enforce our rights, we can lose them.

Wealth losses by race and ethnicity

The Federal Reserve’s report on family wealth released last Monday illustrates how severely the Great Recession has hurt middle-class families. Median family net worth (assets minus debt) fell to levels not experienced since 1992. While all groups but the richest 10 percent of families saw declines in wealth, there was variation in the percentage decline by race.

In the Federal Reserve’s report, it is difficult to identify the specific trends for African Americans and Hispanics. While the net worth of white, non-Hispanic families is presented, all nonwhites and Hispanics are lumped together in the family net worth table. However, the report has a sentence detailing the net worth changes specifically for African American families (p. 21). By using the past few reports, we can see the recent trends for wealth in black America.

First, it is important to note the median black family only has a small fraction of the wealth of the median white family (Figure A). (The family data discussed here differs from our reported household data because families are a subset of households and the data are inflated to different years.) In 2010, the median black family only had 12 cents for every dollar of wealth the median white family had.

When one examines the percent decline in wealth from 2007 to 2010, it appears that whites have seen a greater percentage decline in wealth than blacks. White family net worth declined 27 percent over this period while black family net worth declined 13 percent (Figure B). But in the data, while white wealth peaked in 2007, black wealth peaked in 2004. As white wealth continued to grow from 2004 to 2007, black wealth had already declined significantly.

If we compare the white and black wealth declines from their most recent high points, we see white net worth down 27 percent (from 2007) and black net worth down 40 percent (from 2004). A 40 percent decline is a large drop for a population with very little wealth even at their peak.

The trend for black net worth is probably following the trend for black homeownership. For most middle-class families, their home is their primary source of wealth. African Americans have seen a steep decline in homeownership since their rate peaked in 2004 (Figure C). Homeownership rates for black families are projected to drop to between 40 and 42 percent—which would erase 15 years of gains in homeownership. If this occurs, it could also mean a continued decline in black wealth.

It is not possible to determine the trends in Hispanic net worth precisely from the published Federal Reserve data. We can deduce, however, that from 2007 to 2010, Hispanic net worth probably declined about 45 percent. This decline is significantly larger than the 27 percent for whites over the same period. Even at their recent peak net worth, Hispanics, like blacks, only had a tiny fraction of the wealth that whites had. (In 2010, the median family for nonwhite and Hispanic families combined only had 16 cents for every dollar of wealth the median white family had.)

In terms of wealth, only the richest American families have come out of the Great Recession relatively unscathed. Significant declines in wealth have been broadly felt. But the losses to black and Hispanic families are particularly damaging because they are quite large, and they were experienced by groups that had very low levels of wealth even before the recession hit.

— Research assistance provided by Johnny Huynh

NLRB uses new tool to help us understand our rights

Not long ago, I blogged about the fact that our key labor law, the National Labor Relations Act, protects workers even if they don’t have a union or seek to have one represent them. When workers join together to protest working conditions, to petition management for raises or plead against pay cuts, or to report unsafe conditions to government agencies, the National Labor Relations Board backs them up. The NLRB can protect workers against retaliation by the employer, can order reinstatement for fired workers, and can obtain back pay.

It isn’t widely known, but since its inception, the National Labor Relations Act has given employees the right “to engage in … concerted activities for the purpose of collective bargaining or other mutual aid or protection.”

Now, for the first time, the NLRB has a nice-looking, somewhat interactive webpage devoted to this issue of “other mutual aid or protection.” Visitors to the site can read some heartening stories about how employers overreacted (almost always by firing someone) to employees organizing to protest or to make a problem known to management, and how the NLRB intervened to restore the job or lost wages of the workers.

It’s great to see the government helping people understand their rights and how to enforce them.

Failure to stimulate recovery is costing trillions in lost national income

In a recent blog post on the (negligible, if not nonexistent) long-run economic cost of deficit-financed fiscal stimulus at present, I noted in passing that the Congressional Budget Office (CBO) has downwardly revised potential economic output for 2017 by 6.6 percent since the start of the recession. This may sound abstract, but for an economy that currently produces about $15 trillion a year, this markdown translates into roughly $1.3 trillion in lost income in 2017 alone, on top of years of cumulative forgone income (already at roughly $3 trillion and counting). The level of potential output projected for 2017 before the recession is now expected to be reached between 2019 and 2020—representing roughly two-and-a-half years of forgone potential income. This represents a failure of economic policy and merits considerably more attention than it has received, especially when weighing the benefits of near-term fiscal stimulus versus deficit reduction.

Potential output is the estimated level of economic activity that would occur if the economy’s productive resources were fully utilized—in the case of labor, this means something like a 5 percent unemployment rate rather than today’s 8.2 percent. Potential output is not a pure ceiling for economic activity, but the level of economic activity above which resource scarcity is believed to build inflationary pressures. As of the first quarter of 2012, the U.S. economy was running $861 billion (or 5.3 percent) below potential output—the shortfall known as the “output gap.” This has a number of implications for federal fiscal policy:

  1. Deficit-financed fiscal stimulus will have a very high bang-per-buck while large output gaps persist. The government spending multiplier is much larger in recessions than expansions (see Figure 3 of Auerbach and Gorodnichenko 2011) and the U.S. remains mired in recessionary conditions, where economic growth is insufficient to restore full employment.
  2. Deficit-financed fiscal stimulus is largely self-financing because each dollar of increased output is associated with a cyclical $0.37 reduction in the budget deficit, and this feedback effect is greatly amplified by the large government spending multiplier (see the rough sketch after this list).
  3. There is so much slack in the U.S. economy—i.e., supply of resources in excess of demand—that government borrowing will not “crowd-out” productive private investment; this can be seen in the near record-low 1.6 percent yield on 10-year U.S. Treasuries.
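A rough sketch of what these figures imply, using only the numbers cited above (the potential-output level is backed out from the stated gap; the deficit feedback simply applies the $0.37 parameter from item 2):

```python
# Back out potential output from the stated output gap, then apply the
# cyclical-feedback parameter from item 2. Illustrative arithmetic only.
output_gap = 861e9   # Q1 2012 output gap, in dollars
gap_share = 0.053    # the same gap expressed as a share of potential output
clawback = 0.37      # cyclical deficit reduction per dollar of additional output

potential_output = output_gap / gap_share
print(f"Implied potential output: ${potential_output / 1e12:.1f} trillion")       # ~$16.2 trillion

# If demand rose enough to close the entire gap, the cyclical improvement
# in the budget deficit would be roughly:
print(f"Cyclical deficit reduction: ${clawback * output_gap / 1e9:.0f} billion")  # ~$319 billion
```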

So deficit-financed fiscal stimulus is highly cost-effective, largely self-financing, has a very low opportunity cost, and poses no risk of inflation. But there is another potential benefit: Closing today’s output gap can increase potential future output (thereby also increasing the ability to repay the debt incurred). The reason is simple—if long bouts of inactivity leave permanent “scars” on the economy’s productive resources (and they do), then the longer the economy operates below potential, the more future potential is damaged. Concretely, factories aren’t built because firms can’t even sell what existing factories are producing. Children’s educational outcomes are damaged as economic distress forces their families to move and as they lose access to decent nutrition and health care. Desirable early-career jobs for recent graduates that could impart valuable skills throughout their working lives aren’t available to them, so lifetime earnings suffer. And so on.

The CBO certainly is worried about this scarring—look at the annual revisions to real potential GDP that it has made since the onset of the recession: Estimates have consistently been revised downward except between Jan. 2009 and Jan. 2010, when the deficit-financed $831 billion Recovery Act arrested the economic contraction and began shrinking the output gap.

The Recovery Act, however, was nowhere near large enough to restore full employment and close the output gap—the 10-year cost of the stimulus, after all, was smaller than the annual output gaps that have persisted since 2009. As fiscal support waned and the economy slowed, CBO’s potential output forecasts have withered as well. So why did Congress pivot from job creation (i.e., stimulus) to deficit reduction at the start of the 112th Congress?

The whole point of long-term deficit reduction, after all, is to raise future income. But failure to restore full employment decreases potential future income. Worse, while the economy remains depressed below potential output, near-term deficit reduction—particularly spending cuts—greatly exacerbates the output gap because the government spending multiplier is so high. (We’ve seen this play out across much of Europe, where government “austerity” programs have cut spending, pushed economies back into recession, and pushed up unemployment, while the resulting cyclical deterioration in budget deficits has rendered the spending cuts entirely counterproductive.)

The downward revisions to potential output in CBO’s forecast reflect a failure of Congress to resuscitate the economy and restore full employment, but it’s a policy failure that can still be reversed. Fiscal stimulus can increase employment and industrial capacity utilization today and actually “crowd in” private investment, thereby increasing today’s capital stock and future potential output. With respect to fiscal tradeoffs, cost-effective deficit-financed fiscal stimulus will actually decrease the near-term debt-to-GDP ratio (the relevant metric for fiscal sustainability), whereas deficit reduction cannot raise future income until the output gap is closed and the private sector is competing with government for savings instead of plowing cash into Treasuries. The full cost of Congress’ misguided pivot from job creation to austerity goes well beyond today’s mass underemployment—trillions of dollars of potential future income will also be lost unless we pivot back to addressing the real crisis at hand.

New Fed data shows families falling even farther behind in retirement saving

The Federal Reserve just published findings from the 2010 Survey of Consumer Finances, a triennial survey of household finances. Though it’s no surprise that household finances took a dive with the collapse of the housing and stock bubbles, the extent of the plunge is still shocking: The median family saw its net worth fall by 39 percent between 2007 and 2010.1

By 2010, the economy had begun its slow recovery. Housing prices had leveled off and stocks rebounded, recouping about half their losses by the end of the year. But this wasn’t just a temporary setback. Households—especially younger households—were in serious trouble long before the twin asset bubbles burst.

Families headed by someone age 35 to 44—the age when workers typically start getting serious about saving for retirement—had seen declines in net worth in the wake of two previous recessions (1990-91 and 2001) without fully regaining the lost ground in the intervening years (see chart below). So the financial meltdowns that precipitated the Great Recession only exacerbated an existing problem. As a result, GenXers had only accumulated $42,100 in 2010, less than half what the Baby Boomers had accumulated at the same age adjusted for inflation (in the chart, Depression and War Babies are indicated by squares, Early Boomers by triangles, Late Boomers by circles, and GenXers by an X).2

The fact that net worth declined for younger age groups even before the Great Recession is remarkable when you consider that the economy grew by a third on a per capita inflation-adjusted basis between 1989 and 2010, though this growth was not widely shared. Furthermore, families should have been saving more to make up for declines in pension coverage and Social Security benefits. As a result, the Center for Retirement Research has estimated that the average family in the broad 35-64 age range had a Retirement Income Deficit of $90,000 in 2010, a measure of how far behind they were in saving and accumulating benefits for retirement.

Even a generation that fared relatively well—the cohort born during the last years of the Great Depression and World War II—had only accumulated $227,000 as it approached retirement in 2001. This is roughly four times the median income for that age group in 2001, or enough to purchase a 20-year annuity worth $3,750 a year at a 3 percent real interest rate.3 As these Depression and War Babies began tapping their retirement savings during the boom and bust years of the new millennium, their net worth fell to $206,700 in 2010, whereas the preceding generation had seen increases in net worth during their early retirement years.

Baby Boomers fared much worse than the Depression and War Babies, lulled into complacency by asset bubbles that inflated during their prime earning years and popped as the leading edge of the Boomer generation approached retirement. Early Boomers born in the late 1940s and early 1950s saw their net worth increase by around $69,000 between 1989 and 2001 (a 4.6 percent annual rate), but only by a meager $14,500 between 2001 and 2010 (a 0.9 percent annual rate). Late Boomers fared no better, and, like GenXers, are now far behind where earlier generations had been at the same age.

Though it may be tempting to chastise families for not saving enough for retirement, most of the blame lies with former Federal Reserve Chairman Alan Greenspan and others in positions of responsibility who watched asset bubbles inflate without warning that these paper gains weren’t real, and promoted homeownership and 401(k)s as the path to a secure retirement without acknowledging the extent of the risks involved.


1. A special 2009 survey that re-interviewed families who had participated in the 2007 survey found a much smaller 19 percent drop in median net worth between 2007 and 2009.

2. The published survey results don’t allow precise tracking of generational cohorts because demographic breakdowns are by 10-year age group and the survey is conducted every three years. However, the 45-54 “Depression and War Baby” cohort in 1989 approximately corresponds to the 55-64 age group in 2001 and to the 65-74 age group in 2010, etc.

3. In practice, the typical household holds most of its wealth in the form of home equity and doesn’t annuitize liquid assets.


Another suicide at Apple’s key supplier in China

The latest suicide of a worker at Apple Computer’s Foxconn supplier plant in Chengdu, China may be another indication that Apple has not appreciably improved conditions for its manufacturing workers. Apple and Foxconn, working with the Fair Labor Association, announced that they would make changes in grueling overtime work schedules and in working conditions, including a promise to gradually come into compliance with China’s overtime laws. Yet this suicide, in conjunction with recent worker protests and new reports, suggests that needed reforms have not been made.

There are mixed reports from SACOM and China Labor Watch about whether work schedules have been reduced in any systematic way at Foxconn. Problematically, it appears that when the schedules are reduced, the reductions are not adequately balanced with hourly pay increases. So the already-inadequate monthly pay drops, leaving workers—72 percent of whom at the Chengdu plant told the FLA they could not meet their basic needs—in a desperate situation.

Ultimately, Apple has the power and moral responsibility to improve wages and conditions for Foxconn workers in Chengdu and elsewhere. Certainly, Apple and its executives can afford to do the right thing.

Congress should fix Postal Service pension problem it created

The Heritage Foundation’s latest attack on the Postal Service is a convoluted collection of half-truths and untruths. The author, David John, doesn’t want the Postal Service to benefit from $11.6 billion in overpayments it made for its pension obligations even though he grudgingly admits “this surplus appears to exist.” The overpayment should be refunded to the Postal Service to help it meet its operating costs, but Heritage wants those funds locked up in the pension plan, which it claims would “follow the private-sector practice of using the current surplus—whatever it is—to defray future retirement payments.” This is baloney. When a private corporation overfunds its pension plan, it can transfer excess funds to pay retiree health obligations. In the case of USPS, it could use the funds to pay both current obligations ($2.4 billion a year) and the congressionally mandated pre-funding for future obligations ($5.6 billion a year).

When it’s inconvenient, Heritage abandons its suggestion that the Postal Service should be treated like the rest of the private sector. Private-sector employers are not required to pre-fund their retiree health benefits, and most of them fund retiree health benefits on a pay-as-you-go basis. If USPS “followed the private-sector practice,” it wouldn’t contribute a nickel toward future retiree health obligations; it would pay them as they came due. Yet Heritage supports a requirement that USPS “fully prefund this benefit.”

Heritage also glosses over the findings of two independent agencies that the Postal Service was treated unfairly by Congress and the Office of Personnel Management in the allocation of its pension obligations. EPI published a report in 2010 that took the same position as the Postal Service’s Office of Inspector General and the Postal Rate Commission: USPS and its ratepayers were overcharged approximately $75 billion for past service obligations, and taxpayers were undercharged the same amount. But for Congress’ misallocation of costs, the Postal Service’s short-term finances would be manageable despite the Great Recession and the growth of electronic communication and payments.

Heritage shades the truth in its claim that the Government Accountability Office “bluntly rejected” the agencies’ claims that the Postal Service had been treated unfairly. In fact, GAO admitted that the cost allocation methodology is “a policy choice” whose fairness is debatable:

“Although the USPS OIG [Office of Inspector General] and PRC [Postal Rate Commission] reports present alternative methodologies for determining the allocation of pension costs, this determination is ultimately a policy choice rather than a question of accounting or actuarial standards. Some have referred to “overpayments” that USPS has made to the CSRS fund, which can imply an error of some type—mathematical, actuarial, or accounting. We have not found evidence of error of these types. While the USPS OIG and PRC reports make judgments about fairness, the 1974 law also implicitly reflected fairness.”

GAO does not dispute that the PRC and USPS OIG methodologies for allocating the pension costs are sound; it simply prefers a different policy choice, one that burdens the Postal Service:

“All three methodologies (current, PRC, and USPS OIG) fall within the range of reasonable actuarial methods for allocating cost to time periods. However, the allocation of costs between two entities is ultimately a business or policy decision.”

In its ideological zeal to see the Postal Service destroyed or dismembered, Heritage has been careless with its facts and inconsistent in its arguments.


Job chart in Romney’s economic plan still seems funky

UPDATE, June 15, 11:37 a.m.: Ah, the mystery of the funky-seeming Mitt Romney jobs numbers is revealed (see below for my puzzlement)—it’s a measure of full-time jobs reported in the household survey. I guess half of this is my fault—they do reference the “full-time” aspect when talking about data from the 1970s—but the rest of the chart and paragraph just talk about “job growth.”

But I will note that this is the first time I’ve ever seen full-time jobs from the household survey used to measure job market performance over business cycles. And I’m not convinced it’s a useful innovation; in fact, I think it’s pretty obvious cherry-picking.

Say five people get brand-new jobs that provide 30 hours of work per week while five more see their hours cut from 40 to 34 hours. I’d say this is 120 hours of net new work being demanded in the economy; but counting full-time jobs from the household survey would simply say that it’s five “jobs” lost. This just doesn’t seem useful to me.
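Here is that example as a toy calculation (the 35-hour cutoff is the household survey’s standard full-time threshold):

```python
# Toy illustration of why counting only full-time jobs (35+ hours in the
# household survey) can mislead. Five new 30-hour jobs are created and five
# workers are cut from 40 to 34 hours.
new_jobs_hours = [30] * 5
hour_cuts = [(40, 34)] * 5

net_new_hours = sum(new_jobs_hours) - sum(before - after for before, after in hour_cuts)
print(net_new_hours)  # 120 hours of net new weekly work demanded

FULL_TIME = 35  # household-survey full-time threshold, in hours per week
full_time_change = sum(1 for h in new_jobs_hours if h >= FULL_TIME) \
                   - sum(1 for before, after in hour_cuts
                         if before >= FULL_TIME and after < FULL_TIME)
print(full_time_change)  # -5 "full-time jobs," despite more total work
```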

Also, since the Romney chart ends in June 2011, it might be useful to know what happened to their preferred number in the 11 months since then: 2.25 million jobs added. The industry standard among economists for measuring recessions and recoveries—the payroll survey—shows 1.7 million jobs added over those same 11 months, so I do wonder which number the campaign would cite if asked.

Lastly, I’d note that there is an obvious sector, full of full-time jobs, that has seen a particularly hard time since the June 2009 beginning of recovery: the public sector. Since June 2009, 600,000 state and local jobs have been lost, and in 2009, about three-fourths of these jobs were full-time.


I was asked to comment on the speech Mitt Romney made in front of the Business Roundtable, so I decided to do some light background reading: Believe in America: Mitt Romney’s Plan for Jobs and Economic Growth.

I noticed something odd in the jobs section of the plan—this chart (ripped directly from the Romney PDF):

I know jobs numbers and recoveries, and these looked wrong to me. For one, the absolute peak-to-trough employment loss following 2007’s Great Recession was 8.8 million jobs (between Jan. 2008 and Feb. 2010), not the 8.9 million that the chart claims.

And given that this is the peak job loss, this means, by definition, that anything measured after this trough couldn’t be negative, as the chart implies. I also know that the U.S. economy didn’t begin adding jobs after the 2001 recession until the second half of 2003, so the 2001 numbers looked off, too.

So I decided to do the chart correctly—actually show job losses during the official recessions (i.e., not just employment peak to trough) and the 24 months following. And sure enough:

Romney’s numbers are all slightly off, which is odd.

Odder still is that the respective performances of the recoveries following the 2001 and 2007-2009 recessions are reversed. Look closely at the last two sets of bars in the respective figures.

The Romney chart has jobs growing in the first 24 months of recovery following the 2001 recession, but shrinking in the first 24 months following the 2007-2009 recession. That’s the opposite of the pattern that actually occurred—jobs shrank for the first two years after the 2001 recession and grew modestly in the first two years after the 2007-2009 recession.

I’ll note that we also tried to match the Romney numbers with quarterly data, with household-survey employment counts, and with the household survey adjusted to payroll concepts … nothing worked.

I’m a little curious as to what’s going on here.

And since there’s been lots of discussion about the relative health of the private and public sectors, here’s the correct graph for private-sector jobs only.

Update to yesterday’s blog post “Fiscal hawks’ double standard for Social Security cuts vs. tax cuts”

This is an update to yesterday’s blog post “Fiscal hawks’ double standard for Social Security cuts vs. tax cuts.”

The Committee for a Responsible Federal Budget (CRFB) subsequently updated the table in their blog post, adding a column with average scheduled (i.e., promised) initial Social Security benefits for 2050. This is certainly an improvement, but their revised table still only depicts the relative comparison between initial benefits under the Bowles-Simpson plan and payable benefits. Here’s what their table would show with the additional relative comparison between initial benefits under the Bowles-Simpson plan and scheduled benefits (the lightly shaded column).

Under the Bowles-Simpson plan, medium earners reaching the normal retirement age in 2050 would see an initial benefit cut of 6 percent relative to scheduled benefits. And as CRFB duly notes in their blog post, the Bowles-Simpson proposal to use a “chained” consumer price index for cost-of-living adjustments would further reduce all beneficiaries’ benefits in subsequent years relative to scheduled benefits—a benefit cut that compounds annually, as explained in this EPI Briefing Paper.

Claims about the efficacy of fiscal stimulus in a depressed economy are based on evidence as flimsy as the Laffer Curve’s?! Seriously false equivalence

Peter Orszag calls the claim that the debt-to-GDP ratio can be lowered by providing a fiscal boost to a depressed economy the “Laffer curve of the left.” For those who have real lives and may not get the reference, the “Laffer curve” refers to the theoretical possibility that one can raise overall tax revenues by cutting tax rates. The intuition is that cutting tax rates provides incentives for working longer and saving more. In turn, this will boost economic growth sufficiently to bring in more revenue despite rates having been cut. The claim that it is relevant to the U.S. economy has been discredited empirically (and a long time ago).

In light of this, Orszag’s claim that the “Laffer curve of the left seems to have as much empirical relevance as the original Laffer curve” is not only odd but also flat wrong.

Orszag’s target is clearly a recent paper by DeLong and Summers that shows fiscal stimulus in a depressed economy has multiple salutary effects, not just on economic growth but even on long-run budget measures (like the debt-to-GDP ratio). The paper shows stimulus boosts near-term growth directly by relieving the constraint of insufficient demand; it boosts productive investments by giving firms an incentive (i.e., more customers coming in the door) to expand capacity; and it keeps chronic long-term unemployment from turning into a permanent erosion of workers’ skills (i.e., economic “scarring”). The assumptions about the strength of each of these effects that are needed to make fiscal stimulus debt-improving in a depressed economy are probably pretty close to real-life parameters.

Let’s do some simple math with widely agreed-upon parameters, even ignoring some of the supply-side measures DeLong and Summers examine. I’m going to round very aggressively here, but it doesn’t affect the results much.

Today’s publicly-held debt is about 70 percent of GDP (call it $10.5 trillion on a base of GDP that is $15 trillion). Let’s say we decided to undertake fiscal stimulus in the form of $150 billion spent on high-multiplier activities like extending unemployment insurance, giving aid to states, or investing in infrastructure (we actually need more than this, but it’s a nice round 1 percent of overall GDP, so we’ll stick with it).

The “fiscal multipliers” on these activities are roughly 1.5, meaning they generate $1.50 in economic activity for every dollar spent on them (actually, it may be quite a bit higher, but we’ll take 1.5 as given).

So, (roughly) a year from now, this stimulus has increased the level of GDP by $225 billion (i.e., the $150 billion stimulus multiplied by 1.5). This extra GDP does indeed lower the budget deficit by bringing in more revenue. A reasonable estimate, based on CBO data, is that when the economy is operating below potential, each 1 percent increase in GDP yields a cyclical reduction in the budget deficit of about 0.35 percent of GDP. So, this $225 billion in additional output leads to a $79 billion improvement in the budget deficit, making the “net” fiscal cost of the stimulus just $71 billion ($150 billion minus the $79 billion offset from higher growth).

This $71 billion “net” cost of stimulus increases the public debt by roughly 0.7 percent ($71 billion divided by the current $10.5 trillion public debt). But GDP has increased by 1.5 percent. Because the debt grows by 0.7 percent while GDP grows by 1.5 percent, the debt-to-GDP ratio (currently 70 percent) actually declines.
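The same arithmetic as a short script, restating exactly the rounded parameters above:

```python
# Stimulus arithmetic from the paragraphs above, using the same rounded inputs.
debt = 10.5e12        # publicly held debt (~70% of GDP)
gdp = 15e12           # current GDP
stimulus = 150e9      # deficit-financed stimulus (~1% of GDP)
multiplier = 1.5      # fiscal multiplier on high-multiplier spending
clawback = 0.35       # cyclical deficit reduction per dollar of extra output

extra_output = multiplier * stimulus                 # ~$225 billion
deficit_offset = clawback * extra_output             # ~$79 billion
net_cost = stimulus - deficit_offset                 # ~$71 billion

old_ratio = debt / gdp
new_ratio = (debt + net_cost) / (gdp + extra_output)
print(f"debt-to-GDP: {old_ratio:.3f} -> {new_ratio:.3f}")
# 0.700 -> 0.694: the ratio falls even though the debt level rises
```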

None of these parameters, by the way, are particularly contested.1 And let’s say they’re slightly wrong, and that instead of outright improving the debt-to-GDP ratio, providing fiscal stimulus in today’s depressed economy actually makes it slightly worse – say it’s only 80 percent self-financing in terms of its impact on debt-to-GDP ratios. Would this really justify calling claims that providing fiscal stimulus in depressed economies does not damage public finances “the Laffer Curve of the left”? Not by my read of the evidence.


1. For those who like analytical solutions, all of the preceding boils down to: So long as the initial debt/GDP ratio is higher than [(1/multiplier) – fiscal clawback ratio], fiscal stimulus reduces the debt/GDP ratio. The “fiscal clawback ratio” is simply how much a 1 percent boost to GDP reduces the budget deficit (also measured as a share of GDP). For the arithmetic above, the multiplier of 1.5 and a clawback ratio of 0.35 mean that fiscal stimulus would reduce debt/GDP for any initial debt ratio above 32%.

Take much more conservative assumptions – a multiplier of 1 and a clawback ratio of just 0.25. Then, stimulus is debt/GDP reducing for all initial debt ratios above 75%.

Also note that this means the calculus for whether or not stimulus reduces the debt/GDP ratio gets more favorable as the initial debt ratio rises, a perhaps counter-intuitive result.
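A quick numerical check of this condition and of the two parameter sets above (GDP is normalized to 1 and the stimulus to 1 percent of GDP):

```python
# Check of the analytic condition in footnote 1: stimulus lowers the
# debt-to-GDP ratio whenever the initial ratio exceeds (1/multiplier - clawback).
def breakeven_debt_ratio(multiplier, clawback):
    """Initial debt/GDP ratio above which stimulus lowers the ratio."""
    return 1 / multiplier - clawback

print(breakeven_debt_ratio(1.5, 0.35))  # ~0.32, i.e., 32% (the case in the text)
print(breakeven_debt_ratio(1.0, 0.25))  # 0.75, i.e., 75% (the conservative case)

# Spot-check against the direct calculation at a 70% initial debt ratio:
def new_ratio(debt_ratio, multiplier, clawback, stimulus_share=0.01):
    """Debt/GDP after a stimulus of `stimulus_share` of GDP (GDP normalized to 1)."""
    new_debt = debt_ratio + stimulus_share * (1 - clawback * multiplier)
    new_gdp = 1 + multiplier * stimulus_share
    return new_debt / new_gdp

print(new_ratio(0.70, 1.5, 0.35) < 0.70)   # True: 70% is above the ~32% threshold
print(new_ratio(0.70, 1.0, 0.25) < 0.70)   # False: 70% is below the 75% threshold
```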

Fiscal hawks’ double standard for Social Security cuts vs. tax cuts

The Committee for a Responsible Federal Budget (CRFB) has taken sides in a scuffle between Social Security advocates and former Senator Alan Simpson. This scuffle concerns Simpson’s colorful defense of Social Security proposals within the report he co-authored with fellow Fiscal Commission co-chair Erskine Bowles—a report CRFB has gone to great lengths to champion.

CRFB was responding to a letter signed by young budget and social insurance experts—myself and others at EPI included—disagreeing with Simpson’s claim that the Bowles-Simpson proposals would strengthen the program for our generation. The merits of these proposals aside, CRFB is shamelessly cherry-picking baselines in response to the letter. Whereas CRFB and other fiscal hawks use a current policy baseline for almost all budget projections—e.g., assuming the continuation of the Bush tax cuts past their scheduled expiration—CRFB doesn’t adopt the same convention when it comes to Social Security. This is hypocritical and reveals what can only be described as a biased policy agenda.

In order to minimize the severity of the Bowles-Simpson cuts, CRFB’s defense of the Bowles-Simpson Social Security plan revolves around a comparison of the future benefits proposed by Bowles-Simpson with benefits payable under current law. However, comparing benefits under Bowles-Simpson to payable benefits assumes that Congress will allow an abrupt 25 percent reduction in Social Security benefits when the trust fund is exhausted in 2033, since Social Security is prohibited from borrowing and benefits are generally funded through a dedicated payroll tax rather than general revenue.1

Social Security’s finances are routinely analyzed using scheduled rather than payable benefits—if for no other reason than the system would always appear to be in actuarial balance if projections were based on payable benefits. On a more practical level, it is inconceivable that Congress would allow draconian cuts to fall on elderly retirees. Unlike active workers, who can theoretically save more (or put off retirement) when benefits are cut, elderly retirees are usually viewed as having few other financial recourses. Thus, even in the unlikely event that nothing is done to shore up the system before the trust fund is exhausted, Congress would almost certainly use general revenues to pay promised benefits. Similarly, Congress routinely prevents scheduled cuts to Medicare physician reimbursements (the so-called “doc fix”). In other words, the difference between the current policy baseline and the current law baseline reflects the difference between what budget analysts assume future Congresses are likely to do versus what is currently set in legislation, including scheduled or automatic tax increases and benefit cuts.

Fiscal hawks—including CRFB—overwhelmingly use a current policy baseline to advocate staunch deficit reduction measures because these baselines show a much larger rise in public debt over the long-term, largely due to assumptions about the continuation of temporary tax cuts and the inability of Congress to contain health care cost growth. If CRFB wants to deviate from past practice and score the Bowles-Simpson plan relative to current law, they should also acknowledge that the plan proposes cutting taxes by $1.4 trillion relative to current law, all in the name of deficit reduction.2 Indeed, the plan “saved” $4.1 trillion over a decade relative to an adjusted current policy baseline, whereas continuing the Bush-era tax cuts will cost $4.4 trillion relative to current law.3 (Without the Bush tax cuts, there would not have been a fiscal commission.) Likewise, CRFB should argue in favor of leaving Social Security out of deficit discussions entirely, since by their definition Social Security is in long-run actuarial balance.

Using a current policy baseline when analyzing tax policies or clamoring for near- and long-term deficit reduction while cherry picking a current law baseline to justify Social Security benefit cuts is a gimmicky double standard that reflects a bias toward cutting social insurance programs.

 

 

1.  Exceptions to this rule include the current payroll tax holiday and income taxes levied on Social Security benefits for high-income beneficiaries, which revert to Social Security.

2. Estimate based on CRFB’s Moment of Truth Project July 2011 re-estimate of the Bowles-Simpson plan relative to CBO’s March 2011 current law baseline for an apples-to-apples comparison over FY2012-21.

3. This is not an apples-to-apples comparison because the Bowles-Simpson adjusted current policy baseline assumed the Bush tax cuts would expire for households with adjusted gross income above $200,000 (above $250,000 for married couples), for a revenue increase of roughly $700 billion relative to full continuation. But even adjusting accordingly, the two are very much in the same ballpark.

The long-term budget outlook has improved dramatically over the last three years

Yesterday, the Congressional Budget Office (CBO) released its annual Long-Term Budget Outlook (LTBO), which projects federal spending, revenues, deficits, and debt over the next 75 years. There are many points of controversy with regard to the LTBO, not the least of which is that it’s pretty ridiculous for CBO to pretend it knows what health care costs will look like in 2087. Personally, I think that CBO’s LTBO provides a lot more heat than light, and I would be the first to applaud if CBO decided to release only ten-year budget projections (themselves subject to a huge margin of error).

Nevertheless, there is still value in looking at the change in projections from one year to the next. The figure below clearly shows that over the past three years CBO’s extended current-law budget projections—which assume no changes are made to the law—have improved drastically.

2009: CBO projected that debt held by the public would rise from around 60 percent of GDP to just over 300 percent of GDP in 75 years.

2010: CBO markedly improves its 75-year outlook, which now shows debt rising to just over 110 percent of GDP. This improvement largely reflected passage of the Affordable Care Act (ACA), which prioritized reducing long-run deficits and slowing the rate of health care cost growth (the predominant driver of long-run deficits).

2011: CBO again improves its outlook, now projecting debt rising to 87 percent of GDP in the first 30 years but then actually falling to 75 percent over the next 45 years. This improvement was largely due to three changes in CBO’s assumptions and projections: (1) lower costs for the new ACA health insurance exchange subsidies; (2) higher taxable wages due to the excise tax on employer-sponsored health insurance (which pushes worker compensation away from tax-free health coverage); and (3) a slightly higher long-run economic growth rate.

The ultimate goal of budget reform is to reach “fiscal sustainability,” a point at which public debt is growing no faster than the economy (stabilizing debt relative to national income, i.e., ability to pay).  According to 2011 LTBO projections, the federal government had already achieved long-run “fiscal sustainability.”

2012: For the third straight year, CBO favorably revises its long-run budget outlook: Starting in 2014, public debt is projected to fall by 0-3 percentage points each year. The public debt is shown to be fully paid down by 2070, and within 75 years the federal government is projected to have accrued reserve surpluses equal to about a third of the economy.

This improvement is primarily due to two factors. First, the Budget Control Act (the result of last summer’s debt ceiling crisis) cuts spending by over $2.1 trillion through 2021, and because of the way CBO indexes discretionary spending for inflation in its projections, it continues to reduce deficits in subsequent years. And second, CBO changed the way it projects health care cost growth. In the past, it used the average growth rate over the last 25 years, but in this report it calculated a weighted 25-year average that puts more weight on recent years. This new methodology does a better job of taking into account the fact that health care cost growth has been slowing recently, possibly evidence that the ACA has exceeded expectations.
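As a purely hypothetical illustration of how a recency-weighted average differs from a simple one (the growth rates and weights below are made up for illustration; they are not CBO’s actual figures or weighting scheme):

```python
# Toy illustration (not CBO's actual weighting scheme) of how a recency-weighted
# 25-year average differs from a simple 25-year average of growth rates.
growth_rates = [0.07] * 20 + [0.04] * 5   # hypothetical: slower cost growth in the last 5 years

simple_avg = sum(growth_rates) / len(growth_rates)

weights = range(1, len(growth_rates) + 1)  # hypothetical linear weights; most recent year weighted 25x
weighted_avg = sum(w * g for w, g in zip(weights, growth_rates)) / sum(weights)

print(f"simple 25-year average:   {simple_avg:.3f}")    # 0.064
print(f"recency-weighted average: {weighted_avg:.3f}")  # ~0.059: lower, because recent slow years count more
```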

Budget wonks will rightly point out that the projections in question are CBO’s extended baseline, which assumes no changes to current law. This means that the Bush-era tax cuts expire next year, the sequestration cuts also go into full effect next year, the Alternative Minimum Tax will apply to more upper middle-income households, and Medicare reimbursements to doctors will be allowed to fall dramatically. But with the exception of the sequestration trigger, all those other factors were also present when CBO made its projections in 2009, 2010, and 2011. The fact is that the fiscal outlook of the federal government has improved dramatically in the last three years.

More importantly, this report clearly shows that the path toward fiscal sustainability includes allowing some—if not all—of the Bush-era tax cuts to expire and fully implementing and protecting the Affordable Care Act.

Not all debt is created equal, David Brooks

New York Times columnist David Brooks went all out in promoting the “debt is evil” stigma in his column yesterday. Regrettably, this blanket condemnation of borrowing as intemperate, immoral intergenerational theft is all too pervasive among Washington’s policymaking elite, and all too wrong: Not all debt is created equal, and suggesting otherwise impedes sound fiscal policy.

Economic actors borrow money for a wide array of activities, and both businesses and households know better than to apply a universal value judgment to debt. Borrowing money for college tuition allows for human capital accumulation, which will hopefully yield a high rate of return; borrowing money to take to the casino is widely viewed as imprudent, as the expected rate of return at any casino is negative. Businesses borrow money to build factories, buy equipment, finance research and development, and engage in other productive activities that add value to the economy. Financial firms leveraging themselves the way Long-Term Capital Management did (using debt to proportionally magnify both risk and potential returns), on the other hand, add systemic financial risk and zero—more likely negative—economic value. Similarly, there are good and bad reasons alike to run federal budget deficits. What matters much more than the accumulation of nominal debt is the purpose of the borrowing and the ability to repay the amount borrowed.

Brooks laments that the “federal government has borrowed more than $6 trillion in the last four years alone, trying to counteract the effects of the [dotcom and housing] bubbles.” Yes, the implosion of the housing market and the ensuing financial crisis and recession forced Congress to borrow heavily as the cyclical portion of the budget deficit ballooned and fiscal policy was used to arrest a steep economic contraction, propping up aggregate demand and the financial sector alike. The alternative, however, was a depression that would have swollen budget deficits regardless, while greatly impeding our ability to repay debt because of lost income and economic scarring reducing future potential income. Indeed, policymakers’ failure to restore full employment—which still necessitates much more deficit-financed stimulus—is producing such scarring effects: The U.S. economy is still running $861 billion—or 5.3 percent—below potential output and the Congressional Budget Office has downwardly revised projected potential output for 2017 by 6.6 percent since the onset of the recession. That is real, welfare-reducing economic waste resulting from insufficient public borrowing—borrowing that could have put productive resources to use instead of allowing them to atrophy.

Economists Lawrence Summers and Brad DeLong compellingly argue that given present U.S. economic conditions (where the Fed cannot singlehandedly stabilize the economy), deficit-financed stimulus is actually self-financing. Essentially, if nominal interest rates are below long-run trend real GDP growth, adjusted for reduced economic scarring effects and improvements in the cyclical budget deficit resulting from stimulus, a dollar of debt more than pays for itself in the long run. CBO projects real GDP growth will average 2.4 percent over the next 25 years, whereas the yield on 10-year Treasuries is only 1.55 percent (hovering around a record low); high bang-per-buck fiscal stimulus passes any reasonable cost-benefit test so long as the economy remains mired well below potential in a liquidity trap.
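In its simplest form, the condition described above is just a comparison of the two numbers cited in this paragraph (ignoring the scarring and cyclical-feedback adjustments, which only strengthen the case):

```python
# Simplest version of the financing condition described above: stimulus tends to
# pay for itself when the government's borrowing rate is below trend GDP growth.
treasury_yield_10yr = 0.0155   # ~1.55% yield on 10-year Treasuries, as cited
trend_real_growth = 0.024      # CBO's projected average real GDP growth, as cited

print(treasury_yield_10yr < trend_real_growth)  # True: the condition holds
```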

What Brooks misses entirely is that any value judgment regarding debt boils down to the opportunity cost of debt and the value added of the tax or spending program being deficit-financed—particularly in ways that affect the ability to repay debt.

Example 1: The Bush-era tax cuts were entirely deficit-financed, adding some $2.6 trillion to the public debt between 2001 and 2010, while failing to produce even mediocre economic performance (the 2001-2007 Bush economic expansion was the weakest since World War II). Numerous economists believe that, between their dismal efficacy and the reduction in national savings they induced, the Bush tax cuts decreased long-run potential output.

Example 2: If the rate of return on infrastructure spending exceeds the cost of financing, it makes sense to borrow money to build a bridge, or better yet repair a bridge (the cost of repair increases with time and preventative maintenance is much more cost effective than rebuilding infrastructure from scratch). As my colleague Ethan Pollack points out, the case with infrastructure is a clear cut “win-win-win” because it raises potential future output, making the incurred borrowing relatively easier to pay back, and infrastructure spending increases actual present output and employment (reducing cyclical deficits). And today, the opportunity cost of infrastructure investment is at historic lows.

There is good debt and wasteful debt alike, just as both constructive editorializing and gibberish can be found scrawled across op-ed pages. Brooks’ failure to recognize any economic context or nuance only feeds the misguided debt hysteria that has pushed most of Europe back into recession and encouraged U.S. policymakers to give up job creation in favor of premature, counterproductive austerity.

‘Simplistic Keynesians’ still right about the economy

Brad DeLong links to what he calls a “DeLong-Summers ‘Simplistic Keynesians’ Smackdown Watch”—a piece by Ken Rogoff calling “dangerously facile” those who argue for the “simplistic Keynesian remedy that assumes that government deficits don’t matter when the economy is in deep recession; indeed, the bigger the better.”

Since “simplistic Keynesianism” is a pretty good description of my diagnosis and remedy for today’s U.S. economic troubles, and since I don’t want to ever be “dangerously facile,” I read both the Rogoff commentary and the Reinhart, Reinhart, and Rogoff (2012) paper that it links to.  

I did learn one thing—it turns out that my earlier post about the likely provenance of a Rogoff claim about the potential damage from high public debt isn’t quite right—but the new provenance of this claim isn’t right either.

There’s not much particularly new in either piece. Instead, they recycle the finding that, looked at over several centuries, there is an odd threshold of debt-to-GDP ratios—90 percent—that sees growth beneath the threshold run about 1 percentage point higher per year than growth above the threshold. They then do the arithmetic and argue that every year that the public debt-to-GDP ratio is over 90 percent is a year of GDP growth 1 percentage point lower than it would otherwise be and voila, the damage from high debt has been documented.
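To make that compounding arithmetic explicit, here is a small illustration; the baseline growth rate and the horizon are assumptions chosen only to show how quickly a persistent 1-percentage-point penalty accumulates, not figures from the Reinhart, Reinhart, and Rogoff paper.

# Illustration of how a persistent 1-percentage-point growth penalty compounds.
baseline_growth = 0.025   # assumed annual growth below the 90 percent threshold
penalty = 0.01            # the claimed growth penalty above the threshold
years_above = 20          # assumed years spent above the threshold

shortfall = 1 - ((1 + baseline_growth - penalty) / (1 + baseline_growth)) ** years_above
print(f"GDP level shortfall after {years_above} years above the threshold: {shortfall:.1%}")

Run out over a couple of decades, the claimed penalty implies a very large loss in the level of GDP, which is exactly why the question of causality taken up below matters so much.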

Or not. We’ve already noted why we think this threshold, while it might be an interesting (if odd and deeply atheoretical) curiosity, has no relevance to current U.S. policy debates (and yet somehow the 90 percent scare-mongering won’t stop—see David Brooks’ latest invocation of it).

The main reason for this judgment is that the causality between slow growth and high public debt runs strongly in both directions. There have almost surely been times when exogenous decisions to add to public debt have hampered countries’ growth. But there have also surely been times (and, in my estimation, many more times) when slow growth has led directly to rising debt-to-GDP ratios. And when this is the case, noting a simple negative correlation between GDP growth and a particular debt-to-GDP threshold tells us nothing about how dangerous—or, more likely, useful—a policy of further fiscal support would be.

And, there is no doubt that the increase in public debt over the past four years in the U.S. is directly the result of the Great Recession, and not a cause of it. Further, adding to this public debt going forward (so long as it was intelligently spent on job creation) would not only not harm the economy, it would reduce the debt/GDP ratio.

To be blunter, applying results gleaned from a high-debt sample in which more than 80 percent of the country-years come from episodes that began before World War II, as well as from the other clear-as-day cases where high debt was driven by slow growth (Japan in the 1990s and 2000s), does nothing to aid policy analysis about fiscal support in the here and now.

The authors even miss an obvious clue regarding those episodes in their data where high debt is driven by slow growth—the failure of elevated public debt to lead to upward pressure on interest rates. High public debt-to-GDP ratios combined with no upward pressure on interest rates is a key tell that it’s likely that below-potential growth is driving the debt ratio and not vice-versa.

Further, if interest rates are not pushed up by rising debt-to-GDP ratios, there is no mechanism for rising debt to impede growth. The authors gloss over this—just noting that “the growth-reducing effects of public debt are apparently not transmitted exclusively through high real interest rates.” More likely, the growth-reducing effects of public debt are simply non-existent when economies are deeply depressed.

Lastly, the paper makes a mistake that I think is key to understanding why policymakers keep getting blindsided by bad news (like the last two months’ poor job growth) that just should not be that surprising: it assumes that economies naturally heal themselves from recessions, and quite quickly.

Union decline and rising inequality in two charts

One hallmark of the first 30 years after World War II was the “countervailing power” of labor unions (not just at the bargaining table but in local, state, and national politics) and their ability to raise wages and working standards for members and non-members alike. There were stark limits to union power—which was concentrated in some sectors of the economy and in some regions of the country—but the basic logic of the postwar accord was clear: Into the early 1970s, both median compensation and labor productivity roughly doubled. Labor unions both sustained prosperity, and ensured that it was shared. The impact of all of this on wage or income inequality is a complex question (shaped by skill, occupation, education, and demographics) but the bottom line is clear: There is a demonstrable wage premium for union workers. In addition, this wage premium is more pronounced for lesser skilled workers, and even spills over and benefits non-union workers. The wage effect alone underestimates the union contribution to shared prosperity. Unions at midcentury also exerted considerable political clout, sustaining other political and economic choices (minimum wage, job-based health benefits, Social Security, high marginal tax rates, etc.) that dampened inequality. And unions not only raise the wage floor but can also lower the ceiling; union bargaining power has been shown to moderate the compensation of executives at unionized firms.

Over the second 30 years post-WWII—an era highlighted by an impasse over labor law reform in 1978, the Chrysler bailout in 1979 (which set the template for “too big to fail” corporate rescues built around deep concessions by workers), and the Reagan administration’s determination to “zap labor” into submission—labor’s bargaining power collapsed. The consequences are driven home by the two graphs below. Figure 1 simply juxtaposes the historical trajectory of union density and the income share claimed by the richest 10 percent of Americans. Early in the century, the share of the American workforce which belonged to a union was meager, barely 10 percent. At the same time, inequality was stark—the share of national income going to the richest 10 percent of Americans stood at nearly 40 percent. This gap widened in the 1920s. But in 1935, the New Deal granted workers basic collective bargaining rights; over the next decade, union membership grew dramatically, followed by an equally dramatic decline in income inequality. This yielded an era of broadly shared prosperity, running from the 1940s into the 1970s. After that, however, unions came under attack—in the workplace, in the courts, and in public policy. As a result, union membership has fallen and income inequality has worsened—reaching levels not seen since the 1920s.

By most estimates, declining unionization accounted for about a third of the increase in inequality in the 1980s and 1990s. This is underscored by Figure 2, which plots income inequality (Gini coefficient) against union coverage (the share of the workforce covered by union contracts) by state, for 1979, 1989, 1999, and 2009. The relationship between union coverage and inequality varies widely by state. In 1979, union stalwarts in the northeast and Rust Belt combined high rates of union coverage and relatively low rates of inequality, while just the opposite held true for the southern “right to work” states. A large swath of states—including the upper Midwest, the mountain west, and the less urban industrialized states of the northeast—showed lower-than-national rates of inequality at union coverage rates a bit above or a bit below that of the nation. More importantly, as we plot the same relationship in 1989, 1999, and 2009, those states move as a group towards the less-union coverage, higher-inequality corner of the graph. The relationship between declining union coverage and rising inequality is starkest in the earlier years (between 1979 and 1989). After 1999, union coverage has bottomed out in most states and changes in the Gini coefficient at the state level are clearly driven by other factors, such as financialization and the real estate bubble.

MORE: View interactive graphic of union decline and rising inequality in the U.S.

Colin Gordon is Professor of History at the University of Iowa and a Senior Research Consultant at the Iowa Policy Project.

Adding to Joe Nocera’s piece: A revival of the labor movement is necessary to preserve our democracy

It was good to see Joe Nocera’s column today affirming Tim Noah’s recent call for a revival of the labor movement, saying “if liberals really want to reverse income inequality, they should think seriously about rejoining labor’s side.” I would add that such a revival is necessary to rebuild the middle class and to preserve our democracy.

I’m proud that EPI has provided a lot of great research addressing the role of unions in the economy, including: the impact on firms and competitiveness; the impact on the wages and benefits of union and nonunion workers; the impact on wage inequality; the flawed nature of the current process for choosing union representation; and much more. Here’s a brief guide:

  • See a talk by Paul Krugman addressing the problem of income inequality, including the problem of eroded unionization. Krugman expresses some of the same sentiment as Nocera, paraphrasing “we didn’t know what we were missing until they were gone.” Pieces by Tom Kochan and Beth Shulman, and by Harley Shaiken, echo his arguments.
  • Testimony by me, and another by Rutgers professor Paula Voos, articulate the importance of unions for American workers and the role unionism can play in rebuilding the middle class.
  • Matt Vidal and David Kusnet provide 12 case studies from a variety of industries, including nursing, meatpacking, and janitorial, to show how unions can benefit workers and communities while making companies more productive. They also illustrate the damage inflicted when union representation is removed.
  • Professor John DiNardo of the University of Michigan describes his and others’ research showing that unionization does not cause businesses to fail. Using a ‘regression discontinuity’ technique, DiNardo compares workplaces where unions narrowly won representation elections with those where they narrowly lost and finds that the two groups fare very similarly: The near-losers are a very good “control group” for firms where the workers have just won the right to bargain collectively. DiNardo says: “This research provides evidence that this causal effect of union recognition is zero and has been zero since at least the 1960s, which is how far back we can go with the available data. In short, the biggest fear voiced by employer groups regarding unionization—that it will inevitably drive them out of business—has no evidentiary basis.”
  • EPI Research and Policy Director Josh Bivens  shows why unions are not to blame for the loss of U.S. manufacturing jobs, and that in fact, the real culprits are manipulated currency rates that make U.S.-made goods overly expensive. A dysfunctional health care system that burdens responsible employers with outsized costs, and high executive and managerial salaries, also contribute to any lack of competitiveness.
  • Richard Freeman of Harvard University, perhaps the world’s leading labor market economist (I think so at least), writes that an overwhelming majority of workers say in surveys that they want a stronger collective voice on the job, and believe that a union would be good for their firm as well. Freeman’s findings “suggest that if workers were provided the union representation they desired in 2005, then the overall unionization rate would have been about 58%.”
  • To get a picture of the broken process of union representation elections where employers freely intimidate workers, read Kate Bronfenbrenner’s report. Private-sector employer opposition to workers’ efforts to form unions has intensified and become more punitive than in the past. Employers are more than twice as likely to use 10 or more tactics—including threats of and actual firings—in their campaigns to thwart workers’ organizing efforts.
  • Last, see the statement in support of the Employee Free Choice Act by me, along with Richard Freeman of Harvard and Frank Levy of MIT, citing the recent unprecedented growth of inequality in household income and the urgent need to give workers more bargaining power. Forty prominent economists signed the original statement, including three Nobel Prize winners, agreeing that the reform would be an overall benefit to the economy, and would provide a boost to workers when they need it most. Other economists later added their voices by signing the same statement, which resulted in close to 200 more signatories. The statement is available for download in both its original and updated versions.

We still have a long way to go to achieve racial equality

Washington Post columnist Richard Cohen recently illustrated how much overt racial bigotry against blacks has been reduced. He used the case of Wesley A. Brown, the first African American graduate of the United States Naval Academy. Brown was the first to “successfully endure the racist hazing that had forced the others to quit.” When Brown joined the Naval Academy, black students who dared to enroll were harassed in an effort to force them out. Today, there is a building in the Naval Academy named in Brown’s honor.

Cohen is correct. Today, black children know that there is no occupation that is categorically off limits to them. They can grow up to be president, an idea that seemed farfetched just a few years ago.

On the other hand, the picture Cohen painted would have looked starkly different had he focused less on interpersonal discrimination and more on institutional discrimination. By “institutional discrimination,” I am referring to the ways that the normal policies and practices of social institutions like the educational system, the labor market, and the criminal justice system serve to maintain racial inequality.

Cohen celebrates the end of legally enforced segregation, but fails to acknowledge that we still live with a great deal of de facto racial segregation. A large number of our neighborhoods are racially segregated, which means that many of our schools are racially segregated. Segregation concentrates black children not merely in majority-black schools, but also in schools where a majority of students are in poverty. While, in theory, there are no limits facing black children, children born into economically disadvantaged families, in economically disadvantaged communities, who then attend economically disadvantaged schools have the odds stacked against them.

One reason black families are disproportionately economically disadvantaged is because blacks are still about twice as likely as whites to be unemployed. This was the case in the 1960s, and it remains true today. This basic relationship holds true at all education levels. Black high school dropouts are about twice as likely to be unemployed as white high school dropouts. Black college graduates are about twice as likely to be unemployed as white college graduates. Research shows that employers still have a preference for hiring whites over blacks.

Our criminal justice system is another site where policies and practices systematically disadvantage blacks. As the book Dorm Room Dealers illustrates, white middle-class youth use illicit drugs and sell illicit drugs, but this population is much, much less likely to be incarcerated for these offenses than are poor black youth engaging in the same activities. Michelle Alexander’s The New Jim Crow goes into greater detail about how our illicit drug policies and practices produce institutional discrimination against African Americans.

Cohen is correct. There is no better time to be black in America than today. While this is a true statement, we also still have a long way to go before there is equal opportunity for all.

Center for Public Integrity makes a strong case for more regulation and better enforcement

Business groups and conservatives constantly attack the federal government for overregulating. They claim that businesses are “drowning in a sea of regulations” and that job creation and profitability are being sacrificed in favor of a nanny state. Workplace safety rules, in particular, have been a favorite target of the Chamber of Commerce and other business associations, but the fact is that the federal government regulates too little, not too much. Most of the 4,500 workplace fatalities and 50,000 occupational disease deaths each year could be prevented with better rules, more diligent employers, and better enforcement by the Occupational Safety and Health Administration.

The Center for Public Integrity has begun publishing Hard Labor, a series of articles exploring this reality, and the first two stories make for compelling reading. One describes the consequences of OSHA’s inability to issue a combustible dust standard to protect against the kind of fires and explosions that have killed 130 workers since 1980, injured another 800-plus, and caused more than 450 accidents. Factory managers ignore hazards in plain sight—for example, piles of metallic dust that crackle with static electricity and ignite into small fires every week. Nothing is done to prevent the build-up, despite the past occurrence of catastrophic explosions at the same company that left some workers dead and others with gruesome, debilitating injuries. Finally, the critical elements come together and instead of a small fire, another terrible explosion occurs as airborne dust ignites, and more workers die from horrendous burns.

OSHA has no standard that addresses this hazard in spite of the pleas of union representatives and the urgings of the federal Chemical Safety Board, which has jurisdiction to investigate explosions and recommend preventive standards but has no power to issue them. OSHA hasn’t regulated, and workers continue to be burned, disfigured and killed unnecessarily.

Industry representatives resist any new standard, reflexively making the same tired arguments about flexibility and cost they always make. But as the story points out, in the case of grain dust explosions, an industry that fought OSHA’s efforts to issue a standard now realizes that the standard has saved workers’ lives and saved the companies money. The National Grain and Feed Association, which at one point sued OSHA to block the grain dust rule, recognizes today that the standard was win-win regulation, and that the grain industry is financially better off as a result of the rule and the unprecedented reduction in deaths and injuries it achieved.

The second Hard Labor story focused on the weakness of OSHA’s enforcement of the rules it already has on the books. Violations that cause the death of a worker result in an average fine of less than $9,000, and companies contest every citation, no matter how justified. The chances of an executive being indicted as a criminal for intentional or recklessly indifferent acts or omissions that kill their employees are infinitesimal, and the penalties are tougher for someone who harasses a wild burro on federal land than for an employer who sends a worker into a known hazard that causes the worker’s death.

The Center for Public Integrity is doing a real service by publishing these stories that reveal just how weak OSHA’s standards and enforcement are, and how light the regulatory burden that OSHA imposes really is. In the case of workplace safety and health, we need more regulation, not less.


New York Times pension reporter ignores inconvenient truths

Just once, I wish Mary Williams Walsh would write a story about public employee pensions that included key information that isn’t convenient to an agenda of doing away with or greatly reducing public employee pensions. Every story she writes, including her most recent, seems designed to scare the public, make public employees look bad, their unions look greedy, and government administrators seem weak or stupid.

In her most recent piece, Walsh lends great support to those claiming that public pension plans are erring (or even dissembling) in using assumptions about annual rates of return for their assets that are unrealistically high. The further claim is that using more “reasonable” rates of return (i.e., lower ones) will show the “true” crisis in public pensions.

Walsh writes: “The typical public pension plan assumes its investments will earn average annual returns of 8 percent over the long term, according to the Center for Retirement Research at Boston College. Actual experience since 2000 has been much less, 5.7 percent over the last 10 years, according to the National Association of State Retirement Administrators.”

This may seem like bloodless analysis, but it’s not—it’s giving great aid to a bogus argument forwarded by ideologues that are deeply hostile to public pension plans on principle. Because most plans look to be in decent shape based on current actuarial standards that justify assuming 8 percent rates of return, these ideologues have to claim that these assumptions are somehow wrong. But pointing to returns over the past 10 years as evidence of this is ridiculous because it completely ignores the fact that the U.S. and world economies experienced the biggest financial downturn in 80 years! How can Walsh be surprised that returns over a period that included two recessions have been subpar? This isn’t front-page news or news at all. It would be news if returns over that period had met expectations.

Even more serious is Walsh’s distortion of the National Association of State Retirement Administrators’ report, which was a very positive statement about the returns public employee plans have achieved:

Although public pension funds, along with most other investors, have experienced sub-par returns over the past decade, median public pension fund returns over longer periods exceed the assumed rates used by most plans. As shown in Figure 1, median annualized investment returns for the 20- and 25-year periods ended June 30, 2011, exceed the most-used investment return assumption of 8.0 percent. For example, for the 25-year period ended June 30, 2011, the median annualized return was 8.5 percent.

Walsh quotes the professed doubts of Edward McMahon, a fellow at the anti-government Empire Center for New York State Policy, that even a 7 percent return on investment can be safely assumed. But McMahon is not a neutral observer; he’s a right-wing, anti-union ideologue with an agenda to do away with public employee defined benefit pensions altogether. It is not news to me that the Empire Center has long wanted to cut public employee benefits and compensation, but Walsh would have done her readers a service by mentioning it.

Just once I wish Walsh would cite Dean Baker’s opposing analysis, which is based on the fact that the stock market is currently priced low enough, as measured by the ratio of prices to earnings, to justify expected returns of 8 percent or more. As Baker, the co-director of the Center for Economic and Policy Research, points out, individuals who sold Social Security privatization with visions of never-ending 8-10 percent stock market returns back when price-to-earnings ratios were at historic highs (hence making inflated returns hugely unlikely) now have the gall to attack pension plans that expect returns of 7.5 percent when price-to-earnings ratios have returned to historic norms (norms generally consistent with long-run returns of 8 percent).
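For readers who want the intuition behind the price-to-earnings point, here is a generic, textbook-style approximation. It is not Baker’s actual calculation; the payout share and the growth figure are assumptions of mine, chosen only to show how the valuation at which you buy anchors the return you can expect.

# Generic illustration of how the P/E ratio at purchase anchors expected stock
# returns. Not Dean Baker's methodology; payout share and nominal growth are
# assumptions for illustration.
def rough_nominal_return(pe_ratio, payout_share=0.6, nominal_growth=0.05):
    """Cash returned to shareholders (payout times earnings yield) plus assumed earnings growth."""
    earnings_yield = 1.0 / pe_ratio
    return payout_share * earnings_yield + nominal_growth

for pe in (15, 25, 35):
    print(f"P/E of {pe}: rough expected nominal return {rough_nominal_return(pe):.1%}")

The same arithmetic that makes an 8 percent assumption defensible at price-to-earnings ratios near their historical norm made the 8-10 percent promises of the privatization era look implausible at bubble-era valuations.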

What does this detour into what people claimed during fights over Social Security privatization have to do with the attack on public pensions? Earlier, I referred to ideologues like McMahon pushing the claim that projected returns of 8 percent for public pension plans are unrealistically high. How do I know this claim is ideology instead of professional judgment? Well, because in 2003 McMahon claimed that replacing public pensions with 401(k) plans would be fair for employees because they could expect returns of 9.75 percent.

In short, what at first seem like wonky debates over appropriate rates of return have actually degenerated into misinformation campaigns waged by committed opponents of public pensions. The rates these plans are assuming today are in line with actuarial practice and (much more importantly) economically reasonable. I’m sorry that this view doesn’t advance the much juicier story of a coming fiscal crisis, but it’s based on the facts.


How’s that immigrant-bashing thing workin’ for ya?

A majority of Alabama’s politicians apparently believe that they can improve their state economy by chasing away the undocumented workers who live there. By making them criminals (turning them into illegal aliens), denying them basic services like water and electricity, and terrifying their families, they hope to rid the state of people they see as a burden on taxpayers and competitors for scarce jobs. Well, after a year’s application of this medicine (June marks the first anniversary of the passage of HB 56), how’s the experiment coming along? Has the economy been jump-started or even improved?

Apparently not.

Let’s start with job creation. Has Alabama created more jobs than its neighbors over the last year? No; in fact, it’s both below the regional average and well below the national average. Alabama’s employment growth has been only one-seventh the national average (0.2 percent vs. 1.4 percent). The United States has regained about 43 percent of the jobs lost at the bottom of the recession; Alabama has only recovered about 9 percent of the jobs it lost.

Figure 1: Source: EPI analysis of Local Area Unemployment Statistics public data sets

Has it made the state or its workers richer or better off? No, apparently not. Even with fewer workers, personal income per worker fell in Alabama during the two quarters that followed enactment of HB 56, while in the neighboring states, it was unchanged.

Figure 2: Source: EPI analysis of Current Employment Statistics and Bureau of Economic Analysis National Income and Product Accounts public data

How about unemployment? Has chasing away all of those immigrants opened up tens of thousands of existing jobs for native Alabamans and cut the number of unemployed more than Alabama’s neighbors? No, not exactly. Compared to all four of its neighboring states (Tennessee, Georgia, Mississippi, and Florida), Alabama’s unemployment fell a little faster over the past year—2 percentage points vs 1.7 percentage points—but Alabama lost 52,000 workers from its labor force in less than a year while the labor force grew in the four neighboring states. Alabama doesn’t have a positive story to tell.

Figure 3: Source: EPI analysis of Local Area Unemployment Statistics public data series

Far from being an economic panacea, the early returns suggest that HB 56 has not been good for Alabamans in terms of job creation or personal income. Immigrant-bashing isn’t the path to prosperity.

Conservatives say CEO compensation levels are fine now that it takes 10 hours to earn a typical worker’s annual compensation

There have been some interesting responses by conservatives to the new data Natalie Sabadish and I have released on the CEO-to-worker pay ratio. Apparently, our study reporting that CEO pay has fallen during the fiscal crisis and is far down from the dizzying heights of the tech bubble in 2000 is taken to mean that any concern about the growth of top incomes is now out-of-date and inappropriate.

Conservative columnist Wynton Hall at Breitbart.com writes:

“A graph by the Economic Policy Institute  shows that while the relative pay of CEOs shot up in the 1990s, it has since fallen by nearly half, a trajectory that hardly supports the class warfare rhetoric of Occupy Wall Street and the Obama Administration.”

And Greg Mankiw also touted our findings, writing, “The relative pay of CEOs skyrocketed during the 1990s and has since fallen by about half.”

The attention and the recognition of the accuracy of our empirical work are much appreciated. A few comments are in order. First, it seems that these folks are celebrating that a non-problem, at least in their view, has been solved. After all, I don’t recall conservatives being upset by the roughly $20 million CEO pay packages in 2000 or the $18 million CEO packages in 2007. So, it is hard to understand why they feel so gratified by CEO compensation packages averaging $11 or $12 million in 2011.

Second, while it is true that the CEO-to-worker compensation ratio fell from 411.3 in 2000 to 209.4 in 2011, CEO compensation remains spectacularly high: at that ratio, the average CEO earns in 10 hours what a typical worker earns in an entire year. Moreover, as we reported in our study (page 4):

“CEO compensation in 2011 is very high by any metric, except when compared with its own peak in 2000, after the 1990s stock bubble. From 1978–2011, CEO compensation grew more than 725 percent, substantially more than the stock market [which grew less than 400 percent] and remarkably more than worker compensation, at a meager 5.7 percent.”

The trend in CEO compensation since 1965 is shown in the figure below. Two measures are presented, one where stock options granted are included and the other where stock options exercised are included. By either measure of CEO compensation, the growth between 1978 ($1.3 or $1.4 million), 1989 ($2.5 or $2.6 million), or 1995 ($5.6 or $6.2 million) and 2011 ($11.1 or $12.1 million) is pretty astounding and very hard to justify. Exactly how does one justify/explain that CEO compensation has doubled since 1995?
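Both of those claims are easy to verify with quick arithmetic; the only input below that is not quoted in this post is the assumed 2,080-hour work year (52 weeks times 40 hours).

# Arithmetic behind the '10 hours' and 'doubled since 1995' statements above.
ratio_2011 = 209.4     # CEO-to-worker compensation ratio in 2011 (from the post)
hours_per_year = 2080  # assumed full-time work year

print(f"hours a CEO needs to earn a typical worker's annual pay: {hours_per_year / ratio_2011:.1f}")

ceo_1995 = 5.6e6       # 1995 average CEO pay, options-granted measure (from the post)
ceo_2011 = 11.1e6      # 2011 average CEO pay, options-granted measure (from the post)
print(f"growth in CEO pay since 1995: {ceo_2011 / ceo_1995 - 1:.0%}")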


Figure 1: Note: “Options granted” compensation series includes salary, bonus, restricted stock grants, options granted, and long-term incentive payouts for CEOs at the top 350 firms ranked by sales. “Options exercised” compensation series includes salary, bonus, restricted stock grants, options exercised, and long-term incentive payouts for CEOs at the top 350 firms ranked by sales. Sources: Authors’ analysis of data from Compustat ExecuComp database, Bureau of Labor Statistics Current Employment Statistics program, and Bureau of Economic Analysis National Income and Product Accounts Tables

One more time: Public debt incurred when the economy is depressed does not damage the economy

I was on PBS’ NewsHour last night, talking austerity. I’m against it. Ken Rogoff from Harvard was also on, and he’s actually against it too. One point of disagreement came up, though, when I made the argument that public debt incurred when the economy is depressed causes no economic damage (in fact, it acts instead as a useful palliative).

Rogoff disagreed in principle and then said something kind of startling—that increases in deficits and debt could lead to incomes in the near-ish future (i.e., less than 30 years from now) that are “20 percent lower.”

I’m assuming this claim has some relation to a Congressional Budget Office estimate of the effect of one particular fiscal scenario (the “alternative fiscal scenario,” or AFS) that projects the effects of large increases in budget deficits in coming decades on economic growth (see table below from the CBO report (p. 28)). The mechanism is that rising deficits increase interest rates which lead to lower private investment and a stronger dollar, which leads in turn to higher trade deficits and rising foreign debt.

http://s2.epi.org/files/2012//cbo-estimate.png

Set aside for a second whether or not there are some problems with these calculations—both in relying on the AFS to make predictions and in how to apportion the impact of higher interest rates between crowded-out domestic investment versus increased trade deficits. The more salient point is simply that there is nothing in the CBO analysis that rebuts my larger point: Potential damage from increased public debt does not materialize when this debt is taken on when the economy is depressed. Here’s the CBO on the issue (p. 21 in the linked report):

“… when the economy has substantial unemployment and unused factories, offices, and equipment, federal budget deficits—and thus additional debt—generally boost demand, thereby increasing output and employment relative to what would occur with a balanced budget. … CBO’s estimates in this chapter [ed: estimates about the output-depressing effects of budget deficits and extra public debt] do not take those short-run effects on demand into account. Indeed, the estimates reflect the assumption that over the long run, output is always at its potential level.”

In short, the potential output-depressing effects stemming from budget deficits that the CBO is estimating only hold when “output is at its potential level.” Or to say it another way, the exact way I said it earlier, extra public debt incurred when the economy is depressed (i.e., output is not “at its potential level”) causes no economic damage.

And in fact, when extra public debt is incurred when the economy is depressed, the boost it gives (if spent wisely) to economic output can easily be large enough to actually reduce the overall debt/GDP ratio by boosting the denominator and by spurring enough additional tax collections to actually self-finance part of the extra debt. How can we be so sure that the extra debt incurred in recent years hasn’t led to any of the downsides from crowding out or upward pressure on the value of the dollar? Simple—interest rates have not risen. And remember, the entire economic chain wherein incurring public debt leads to crowding-out and trade deficits runs through upward pressure on interest rates. And, since the depressed economy is putting ferocious downward pressure on interest rates, there is no damage done.
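A stylized numerical example of that denominator effect follows; every input is an assumption chosen for illustration, not an estimate of actual U.S. magnitudes.

# Stylized illustration of how debt-financed stimulus can lower the debt/GDP
# ratio in a depressed economy. All values are assumptions for illustration.
debt = 11.0        # public debt, trillions of dollars (assumed)
gdp = 15.5         # GDP, trillions of dollars (assumed)
stimulus = 0.3     # deficit-financed stimulus, trillions of dollars (assumed)
multiplier = 1.5   # fiscal multiplier in a depressed economy (assumed)
tax_share = 0.33   # revenue recouped per dollar of extra GDP (assumed)

extra_gdp = multiplier * stimulus
extra_debt = stimulus - tax_share * extra_gdp   # part of the borrowing finances itself

print(f"debt/GDP before: {debt / gdp:.1%}")
print(f"debt/GDP after:  {(debt + extra_debt) / (gdp + extra_gdp):.1%}")
# The ratio falls whenever the added debt per dollar of added GDP is smaller
# than the existing debt/GDP ratio.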

Until the economy recovers and this downward pressure on interest rates relents, additional increments of public debt do not hurt, and scare-stories about the 20 percent income loss possible 25 years from now because of deficits do not change this calculus at all.

Four disturbing consequences of Pelosi’s tax retreat

For the past few months, I (and others in favor of allowing the Bush-era tax cuts for the rich to expire) had worried that once we got into the lame duck session, congressional Democrats would let their position slip and start supporting extending tax cuts for couples with income above $250,000 (individuals above $200,000). But I thought at the very least it would happen in the last month or two, when the pressure was really being brought to bear.

Turns out, it happened a lot sooner than that. Yesterday, House Minority Leader Nancy Pelosi (D-Calif.) signaled support for extending the Bush-era tax cuts on income under $1 million, letting only the cuts above that threshold expire. Pelosi explained, “It is unacceptable to hold tax cuts for the middle class hostage to extending multi-billion dollar tax breaks for millionaires, Big Oil, special interests, and corporations that ship jobs overseas.”

Yes, I understand that “tax breaks for millionaires” sounds better in a press release than “tax breaks for households with income over $200,000, or $250,000 for couples.” And perhaps she felt forced into this, worrying that she might not be able to hold her caucus at the $250,000 mark, opting instead to retreat to more defensible terrain before the battle royal later this year.

But this shift has a number of very disturbing consequences:

1) Slipping to the right. This will now be the left pole of the debate. The Democratic Party has moved from opposing the Bush-era tax cuts to supporting 80 percent of them, to now supporting nearly 90 percent of them. And yet these concessions have been given for free, without any countervailing progressive demands. This is just more evidence that the tax debate is shifting further to the right. Pelosi may have done this for short-term advantage, but in the long run, these shifts tend to be very difficult to reverse.


2) More spending cuts. Given that the Bush-era tax cuts cost $2.6 trillion over the last decade and will cost over $4 trillion in the next decade, this concession will put even greater pressure on the budgets of vital safety net and public investment programs.

3) The definition of “middle class” is losing relevance. The previous definition of the middle class as being anyone under the $250,000 threshold was already a severe stretch. After all, you’re talking about people who (1) are making five times that of the typical American household (which makes closer to $50,000 a year in combined income), and (2) whose incomes are higher than those of 98 percent of American households. To now extend the definition of middle class to people who make 20 times that of the average household and whose incomes are higher than those of over 99 percent of households is to define away the entire concept of the middle class.

4) Bigger tax cuts to the highest-income Americans. This shift isn’t just a huge boon to upper-income households making between $250,000 and $1 million in income. In fact, about half of these additional tax cuts would go toward households with over $1 million in income. This is because the cut-off—be it $250,000 or $1 million—determines how much of a taxpayer’s income remains subject to the continued tax cut. So the previous Democratic position—which remains President Obama’s public position—is that if you make over $250,000, you still get to keep your tax cuts for all your income below that threshold and only have to pay higher rates on income above that threshold. Revising the threshold up to $1 million basically means that all income between $250,000 and $1 million also retains its tax cuts, and as it turns out, about half of those tax cuts will go to people with income over $1 million.
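To make point 4 concrete, here is a stylized sketch of the threshold mechanics; the 4 percent “extra cut” rate and the sample incomes are hypothetical values chosen only to show how the math works, not estimates of the actual rate schedule.

# Stylized illustration of why moving the threshold from $250,000 to $1 million
# delivers its largest dollar benefit to people above $1 million. The rate and
# the sample incomes are hypothetical.
OLD_THRESHOLD = 250_000
NEW_THRESHOLD = 1_000_000
EXTRA_CUT_RATE = 0.04   # assumed average rate reduction on income newly kept in the cut

def additional_cut(income):
    """Extra tax cut kept when the threshold rises from $250,000 to $1 million."""
    newly_protected = max(0, min(income, NEW_THRESHOLD) - OLD_THRESHOLD)
    return newly_protected * EXTRA_CUT_RATE

for income in (300_000, 600_000, 1_000_000, 5_000_000):
    print(f"income ${income:>9,}: additional cut ${additional_cut(income):>7,.0f}")

Every household above $1 million keeps the cut on the entire $250,000-to-$1 million band, which is why so much of the revenue given up lands there.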

As Jared Bernstein said, we’ll let the game theorists argue over whether this helps or hurts the Democrats’ negotiating position. But even if this does give the Democrats the upper hand in negotiations, what then? The whole point is to enact a tax code that can adequately fund the social safety net and public investments that we need to create a stronger economy with equal opportunity for all. Retaining the tax cuts for most people making over $250,000 and reducing the tax increase that people making over $1 million would be subject to makes that job significantly more difficult, if not impossible.

Increasing New Jersey’s minimum wage helps the economy and the state’s lower-income earners

With New Jersey joining several other states in considering raising its minimum wage, it is appropriate to do some myth-busting around the minimum wage, in particular addressing myths around who comprises the minimum-wage workforce, and what the employment impact of increasing the minimum wage would be.

EPI’s analysis of the New Jersey proposal shows that 307,000 workers will be directly helped by raising the New Jersey minimum wage from $7.25 an hour to $8.50 an hour (because their current wages fall between those points), with another 233,000 workers benefiting indirectly (those whose wages are slightly above the new minimum wage who would see their wages increased modestly). The prevailing misconception is that most minimum-wage workers are middle-class teenagers working part-time for extra spending money. The facts tell a different story. Of those workers benefiting from the proposed minimum wage increase, slightly over half (55 percent) are female, more than four in five (85 percent) are 20 years of age or older, and nearly four in five (79 percent) work more than part-time (29 percent work mid-time, between 20-35 hours a week, and 50 percent work full-time, 35-plus hours a week). More than 3 in 4 workers (76 percent) benefiting from an increase in the minimum wage have a high school diploma or more—and nearly half (46 percent) of workers affected are white non-Hispanic (as seen in Figure 1). Over 282,000 New Jersey children have a parent who would benefit from increasing the minimum wage.


Figure 1: Source: EPI Analysis of 2011 Current Population Survey, ORG data

GDP impact and job creation

The EPI analysis of the impact of the proposed minimum wage increase shows that those workers benefiting (both directly and indirectly) from increased wages will see an additional $439 million in wages in the first year following the increase. For those benefiting, the average increase in their annual income would be $810.

In the first year following the increase in the minimum wage, we estimate that increased spending by workers who see a raise will boost GDP by $278 million. Wage increases resulting from indexing to inflation would result in further GDP boosts in future years. Economists widely recognize the relationship between GDP growth and employment growth. Our model shows that 2,420 full-time equivalent (FTE) jobs would be created in the first year as a result of the GDP boost resulting from New Jersey’s proposed minimum wage increase. These jobs would be concentrated within New Jersey, since lower-income workers disproportionately spend their wages locally to meet the immediate needs of their families.
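A quick check of the arithmetic in the two paragraphs above; the only derived figure is the implied GDP boost per supported job, which is simply backed out of the numbers reported here rather than taken from EPI’s model.

# Arithmetic check of the New Jersey figures above.
directly_affected = 307_000
indirectly_affected = 233_000
extra_wages = 439_000_000   # first-year increase in wages (from the post)
gdp_boost = 278_000_000     # first-year GDP impact (from the post)
fte_jobs = 2_420            # first-year full-time-equivalent jobs supported (from the post)

avg_raise = extra_wages / (directly_affected + indirectly_affected)
print(f"average annual raise per affected worker: ${avg_raise:,.0f}")   # roughly the $810 cited above

print(f"implied GDP boost per FTE job supported:  ${gdp_boost / fte_jobs:,.0f}")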

The minimum wage as defense against further erosion of wages 

As seen in Figure 2, New Jersey saw significant erosion of low wages (those at the 20th percentile) between 2009 and 2011. New Jersey’s $0.60 erosion of wages at the 20th percentile was the 11th greatest wage loss of all states, exceeding the national low-wage erosion by $0.14. With unemployment rates remaining high, employers do not have to provide wage increases to get and keep the workers they need. Since well over half (56 percent) of those receiving additional income as a result of the proposed minimum wage increase fall in the bottom two income quintiles, it is clear that slowing this wage erosion would significantly help lower-income workers.


Figure 2: Source: EPI analysis of Current Population Survey, ORG data. Note, “Low Wage” = wage at the 20th percentile. Figure shows change in the 20th percentile real wage between 2009 and 2011.

Increasing New Jersey’s minimum wage is the right thing to do for several reasons. It is smart economics, giving a boost to a state economy still mired in a weak recovery, and it improves the well-being of working families still reeling from the effects of the Great Recession.

Getting to a better Fed is about more than just Jamie Dimon

Does J.P. Morgan Chairman and Chief Executive Jamie Dimon belong on the board of the New York Federal Reserve? Of course not. And there’s actually a petition demanding his resignation or removal, being pushed by former IMF Chief Economist Simon Johnson.

But it’s also important to note that he didn’t belong on the board two months ago either—before the large trading loss J.P. Morgan suffered made news. And it’s not just Dimon, it’s the whole structure of Federal Reserve banks that needs reform.

The problem is that the boards of directors for regional Federal Reserve banks are composed of financial-sector executives—and these boards then get to choose five of the 12 voting members of the Federal Open Market Committee (FOMC), the body that makes monetary policy decisions. So, essentially 42 percent of the committee that controls the single most important lever of macroeconomic policy for the country is picked by banking executives. Oh, and the New York Fed, while technically a regional bank, occupies a permanent seat on the FOMC.

This is all made more ironic by the fact that any attempt by outsiders to criticize the Fed often leads to distressed hand-wringing by the Beltway elite about the sanctity of Fed “independence.” But of course, they are not talking about “independence” in any normal sense of the word, but rather independence from having to consider the views of those in the economy who might have different interests from finance.

So, sign the Dimon petition. But also, and much more importantly, support the efforts by legislators in Congress like Barney Frank and Bernie Sanders to undertake more comprehensive reform of the Fed. Because none of this is personal, it’s just business.

Does just arguing over the debt ceiling damage recovery? Maybe

Given that Speaker of the House John Boehner (R-Ohio) essentially promised last week to engineer a replay of last summer’s debt ceiling fight at the first opportunity, people have been wondering if that previous fight could be tied directly to subsequent economic damage in the form of lost output or jobs.

The short answer: maybe.

Before going into this question, however, it is worth noting that there is absolutely zero economic reason to believe that current levels of public debt (and its expansion in recent years) are damaging to the economy. We can say this with confidence for a couple of reasons, based on pretty boring textbook macroeconomics. First, interest rates remain historically low, meaning that lenders are not just willing, but intensely eager, to keep financing budget deficits. Second, these low interest rates are not based on capricious market sentiment that could turn on a dime; instead they’re based on economic fundamentals.

To put it quickly, the same thing that is keeping interest rates low is what is keeping the economy weak: an excess of desired savings from households and businesses over demand for new borrowing and investments. So there will be upward pressure on interest rates if and only if this fundamental economic weakness is resolved, and not before.

But while economic fundamentals regarding public debt pose no threat to the U.S. economy, political brinksmanship might. As the nation approached the (arbitrary, unuseful, and dangerous) statutory debt ceiling last summer, what had been historically a pro forma vote to keep the federal government from defaulting on its obligations became this time a chance for GOP members of Congress to extort passage of some of their own pet policies in exchange for not causing an economic crisis. Eventually, the debt ceiling was raised in a deal to cut more than $1 trillion in spending over the next decade.

Given Boehner’s threat/promise of last week, this brings us back to the main question: Did last summer’s political wrangling over the debt ceiling inflict tangible harm on the economy?

Exhibit A in the circumstantial evidence is that economic growth decelerated markedly in the middle of the year (see figure below on year-over-year GDP growth), roughly as the debt ceiling debate came to a head.


Source: Author’s analysis of Bureau of Economic Analysis National Income and Product Accounts public data series

Further, a paper by Scott Baker, Nicholas Bloom, and Steven Davis (2012) has attempted to construct an empirical measure of economic uncertainty and estimate the association of rising uncertainty with economic performance. While the paper has often been cited by critics of the Obama administration who think that it estimates the effect of regulatory uncertainty on growth, there is actually almost nothing in the paper to support this reading.

But the paper does show a very large increase in economic uncertainty associated with the debt ceiling fight (see the figure below, which is lifted directly from the paper, including the association of the last spike with the fight over the debt ceiling). In fact, the index of economic uncertainty rose more in the midst of last summer’s debt ceiling fight than it did during the financial meltdown of fall 2008 (when Lehman Brothers collapsed).

Lastly, Baker, Bloom and Davis (2012) estimate that a rise in economic uncertainty as large as the total rise that occurred between the index’s trough in 2006 and its peak in 2011 is associated with a contraction in GDP of more than two percentage points and a reduction in employment of more than 2 million jobs. Eyeballing their chart, the fight over the debt ceiling accounts for nearly half of this 2006 to 2011 rise. If one believes their estimates of the effect of a rise in uncertainty of roughly this size and one is willing to make very strong assumptions about causality (more on this below), this means that the fight over the debt ceiling could have cost the economy a percentage point of GDP and a million jobs (Baker, Bloom and Davis say that these effects manifest between nine and 24 months later).
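The scaling in that last step is simple enough to spell out; the one-half share is the eyeballed figure mentioned above, and the whole exercise inherits the strong causality caveats discussed next.

# Proportional scaling described above. The 0.5 share is eyeballed from the
# paper's chart, as noted in the post.
gdp_effect_full_rise = 2.0          # percentage points of GDP for the full 2006-2011 rise in uncertainty
jobs_effect_full_rise = 2_000_000   # jobs lost for the full rise
debt_ceiling_share = 0.5            # assumed share of the rise attributable to the debt ceiling fight

print(f"implied GDP cost:  {gdp_effect_full_rise * debt_ceiling_share:.1f} percentage points")
print(f"implied jobs cost: {jobs_effect_full_rise * debt_ceiling_share:,.0f} jobs")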

How hard should we lean on an estimate like this? Not very—the causal relationship between measures of economic uncertainty and performance clearly runs both ways. As Baker, Bloom and Davis note, “So, for example, it could be that policy uncertainty causes recessions, or that policy uncertainty is a forward-looking variable that rises in advance of anticipated recessions.”

Does fighting about the statutory debt ceiling in and of itself damage the economy? Maybe—and it’s clear that the current economy needs no further drags on growth in the coming year.

What’s much clearer is that an actual financial crisis caused by debt ceiling brinksmanship would have real economic consequences, and would also be the first purely self-inflicted sovereign debt crisis in history. Even flirting with this is unspeakably stupid.

Management—bad management—crippled the auto industry’s Big Three, not the UAW

Many in the media have accepted the notion put forward by conservatives and business associations that unions make businesses uncompetitive by raising wages and benefits irresponsibly. The poster child for this view of the world is the auto industry, where the United Auto Workers supposedly drove the “Big Three” (Chrysler, Ford, and General Motors) into the ground while foreign competitors ate their lunch.

This is false history. As Case Western Reserve University manufacturing scholar Sue Helper has helped me understand, the auto industry’s problem stemmed from decades of mismanagement, and regardless of the UAW contracts, the Big Three made choices that doomed them to lose market share and the ability to compete.

The biggest element of mismanagement was designing and selling poor products. Anyone who lived in Michigan in the 1970s remembers when Detroit began building truly terrible cars, like the Chevy Vega, the AMC Gremlin, the Chrysler Imperial, and the Ford Pinto; it was the beginning of what became a slow-moving train wreck. As the Economist put it in its May 2009 story, “A Giant Falls,” Detroit began making cars that were both dull and unreliable:

“Only in the 1970s, after the first oil shock, did faults start to become visible. The finned and chromed V8-powered monsters beloved of Americans were replaced by dumpy, front-wheel-drive boxes designed to meet new rules (known as CAFE standards) limiting the average fuel economy of carmakers’ fleets and to compete with Japanese imports. As well as being dull to look at, the new cars were less reliable than equivalent Japanese models.

By the early 1980s it had begun to dawn on GM that the Japanese could not only make better cars but also do so far more efficiently. A joint venture with Toyota to manufacture cars in California was an eye-opener. It convinced GM’s management that “lean” manufacturing was of the highest importance. Unfortunately, that meant still less attention being paid to the quality of the cars GM was turning out. Most were indistinguishable, badge-engineered nonentities.”1

Bad design and engineering were accompanied by disastrous pricing decisions, which further jeopardized quality:

“As the appeal of its products sank, so did the prices GM could ask. New ways had to be found to cut costs further, making the cars still less attractive to buyers.”

Autoworker wages didn’t make the Big Three uncompetitive by driving prices up; poor value drove prices down. As prices and quality fell together, consumers fled. The UAW’s contracts were almost irrelevant. One way to show this is to compare the pricing of the competitors’ vehicles with the size of the labor cost differential bargained by the UAW.  Labor costs make up only 10 percent of the cost of a typical automobile. Before the auto rescue, the Big Three paid $55 an hour in compensation per auto worker while the Japanese paid only $46 an hour. (Company lobbyists and publicists inflated the total Big Three labor cost to $71 by attributing the unfunded pension and health benefit costs for decades of retired workers to the much smaller currently employed workforce2; the legacy costs for Japanese transplants were only $3 an hour.)3 But even if, for the sake of argument, we accept the unfairly inflated $71 figure, the difference in the cost of a vehicle attributable to the UAW (the UAW premium) would be 30 percent of the average 10 percent labor cost, or 3 percent of total cost.

In 2008, according to Edmunds, GM sold its average large car for $21,518. Assuming GM sold its cars at cost, the UAW premium would have been only $645 (3 percent of $21,518). Did the UAW premium raise the selling price so high as to make GM cars uncompetitive with Toyotas? Not exactly. Toyota sold its comparably equipped average large car for $31,753—$10,000 more than GM.4 It wasn’t price that made GM cars uncompetitive, it was the quality of the product and the customers’ perception of quality.5
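The vehicle-cost arithmetic above, laid out step by step; the inputs are the figures quoted in this post, and the 30 percent figure follows the post’s own rounding.

# The UAW-premium arithmetic above. Inputs are the figures quoted in the post.
big3_hourly = 71              # inflated Big Three hourly labor cost, accepted for the sake of argument
transplant_hourly = 49        # $46 plus roughly $3 an hour in legacy costs at the Japanese transplants
labor_share = 0.10            # labor's share of the cost of a typical vehicle
gm_large_car_price = 21_518   # average 2008 GM large-car price, per Edmunds

cost_gap = (big3_hourly - transplant_hourly) / big3_hourly   # about 31 percent; the post rounds to 30
uaw_premium_share = 0.30 * labor_share                       # 3 percent of total vehicle cost
uaw_premium_dollars = uaw_premium_share * gm_large_car_price

print(f"labor-cost gap as a share of Big Three labor cost: {cost_gap:.0%}")
print(f"UAW premium as a share of vehicle cost: {uaw_premium_share:.0%}")
print(f"UAW premium on the average GM large car: ${int(uaw_premium_dollars):,}")
# about $645, set against a roughly $10,000 price gap in Toyota's favor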

For nearly 30 years, the Big Three’s market share fell steadily, from 77 percent in 1980 to 45 percent in 2009, almost entirely because the U.S. companies built cars that were noisier and less comfortable, had poorer fit and finish, poorer gas mileage, more defects, and a poorer repair record and resale value.6 Helper has documented the hostile relationships the Big Three developed with their suppliers,7 which led to the provision and assembly of parts that did not work well together, did not fit seamlessly, and whose inherent quality was sometimes substandard.8 In 2006, before the auto industry collapsed (and before gas prices skyrocketed), economists Kenneth Train and Clifford Winston did a careful econometric analysis of buyer preferences and concluded that:

“… the U.S. automakers’ loss in market share during the past decade can be explained almost entirely by the difference in the basic attributes that measure the quality and value of their vehicles. Recent efforts by U.S. firms to offset this disadvantage by offering much larger incentives than foreign automakers offer have not met with much success. In contrast to the numerous hypotheses that have been proffered to explain the industry’s problems, our findings lead to the conclusion that the only way for the U.S. industry to stop its decline is to improve the basic attributes of their vehicles as rapidly as foreign competitors have been able to improve the basic attributes of theirs.”

The authors conducted a simulation to determine “how much U.S. manufacturers would have to reduce their prices in 2000 to attain the same market share in 2000 that they had in 1990” and found that prices would have to fall more than 50 percent. In other words, reducing the cars’ price by the UAW premium would have had no discernible effect on market share. The Big Three were building cars that most people simply didn’t want to buy, and only by cutting the price in half could they have retained their market share.

What happens to a corporation that sells its products at a low price while losing market share for 30 years? It goes bankrupt.

Fundamental mismanagement and building cars that customers didn’t want doomed the Big Three, not the UAW.

Alan Simpson isn’t ‘saving Social Security’

Alan Simpson is at it again. Launching another off-color attack on people who oppose the Bowles-Simpson plan to bomb Social Security in order to save it, Simpson claims he is saving it for young people who would otherwise be “gutted.” In fact, Simpson’s overarching desire to protect rich Americans from paying their fair share of Social Security taxes (if wealthy earners paid FICA taxes on all of their income, most of Social Security’s solvency problems would be solved) leads him to propose cuts in Social Security almost as large as the automatic benefit reductions that will occur in 2033 under current assumptions.

According to an analysis of the Bowles-Simpson plan by Social Security’s chief actuary, middle-class workers with average earnings over the course of their careers (around $43,084 in 2010) would see a 22 percent cut in benefits by 2080, not significantly different from the 23.5 percent cut in benefits these workers would face if nothing were done to shore up Social Security’s finances. Our children and grandchildren will lose critical benefits under Simpson’s plan, while seniors like him are mostly protected.

Notwithstanding Simpson’s crocodile tears for young people, under the Bowles-Simpson plan, if someone born in 2015 retires at age 65 with a middle-class income in 2080, Social Security will replace only 28 percent of their pre-retirement earnings. By contrast, a 65-year-old who retired in 1980 had 49 percent of pre-retirement earnings replaced. It is Simpson himself who wants to gut the Social Security of coming generations.