China Labor Watch just released a new report investigating working conditions at 10 of Apple’s suppliers in China, including the Foxconn factory in Shenzhen. The New York-based group was able to collect this information even though local authorities in China sometimes literally kicked its investigators out of town. As others have also determined, including the Fair Labor Association in a study sponsored by Apple, CLW found working conditions at the Foxconn factory to be severe, with workers employed long hours at low pay under harsh living conditions. The CLW report also breaks new ground in three areas. The report finds:
- Deplorable labor practices are not just characteristic of Foxconn factories, but exist in factories throughout Apple’s supply chain. The report documents, for instance, that employees in most of the factories typically work 11 hours a day and can only take one day off a month (low wage levels and management pressure compel them to work such hours); that employee dorms are frequently overcrowded, dirty and lacking in facilities; and that there is little ability for workers at Apple suppliers to push for reasonable working conditions on their own.
- As bad as working conditions at Foxconn are, they are even worse at some of the other factories in China that supply Apple. The report flags the three Riteng factories investigated as particularly difficult places to work. The table below includes key findings from the report. It indicates: Riteng workers typically work 12 hours per day nearly every day of the year (including weekends and holidays), compared to 10 hours per day at the Foxconn factories, with some days off. The average wage for the Riteng workers amounts to $1.28 per hour, or well below the already quite low average hourly wage of $1.65 for Foxconn workers. Health and safety conditions are much worse at the Riteng factories than at the Foxconn factory, and living conditions are worse for the Riteng workers as well.
Riteng vs. Foxconn
| | Riteng (Shanghai) | Foxconn (Shenzhen) |
| --- | --- | --- |
| Approximate number of workers | | |
| Percent of workers that are dispatched | | |
| Average number of hours worked per day | 12 | 10 |
| Average number of days worked per month | | |
| Average hourly wage (RMB) | | |
| Average hourly wage in U.S. dollars | $1.28 | $1.65 |
| Percent rating factory’s performance on work safety and health as ‘bad’ | | |
| Percent rating dorm conditions as ‘bad’ or ‘very bad’ | | |
| Percent indicating food is unsanitary | | |
Source: China Labor Watch
- Certain serious labor problems have so far been neglected in the discussion of work practices at Apple suppliers in China. In particular, the new report documents the troubling yet common practice by Apple suppliers of using dispatched labor. This practice enables factories to reduce the compensation and benefits they provide to their workers, makes it even easier to compel workers to work exceptionally long overtime hours, and creates damaging uncertainty over who is responsible for any worker injuries.
In recent months, stories about when the next iPhone will be released or whether Apple will add a television to its product line have helped push the troubling issues concerning how Apple’s products are made to the sidelines. The new CLW report is a needed reminder that those issues should not be forgotten. Apple has the responsibility to ensure that basic labor standards are met not just at Foxconn factories, but also at the factories of other suppliers that have received less media attention. And, as I summarized previously, Apple easily has the resources to advance any necessary changes.
Following the Supreme Court’s ruling in favor of the Patient Protection and Affordable Care Act (ACA) and its lynchpin—the individual mandate—my colleague Josh Bivens noted all the ways conservatives have tried to keep health care from being delivered efficiently, notably by blocking government from using its monopsony power and economies of scale wisely. This, of course, is difficult to square with conservatives’ professed concerns about public debt, because rapidly rising health costs are, by far, the single biggest impediment to stabilizing long-run public debt (if the economy operates at full potential over this long-run). Political opportunism aside, reasonable policy should unequivocally aim to lower health care cost-growth; so here’s some evidence worth revisiting on the comparative efficiency of public versus private provision of health care.
The United States has a patchwork health care system of universal single-payer insurance for seniors (Medicare), publicly funded health coverage for the disabled and poor children and seniors (Medicaid and SCHIP), a rapidly unraveling system of employer-sponsored health insurance, fragmented private self-insurance markets, and 49 million non-elderly Americans (those under age 65) without any health insurance. It’s important to note that the ACA was already a preemptive compromise with those opposed to a much more expansive role of government in directly financing health care. This, of course, doesn’t stop its opponents from lambasting it as a “government takeover,” but the ACA actually preserved the basic (inelegant) structure of American health care, seeking to fill in its gaps rather than overhaul it entirely. This makes its cost-containment provisions subject to much variability—some may work very well to restrain growth while others might not. And it also means that a clear, evidence-based tool for restraining these costs was left on the table: direct public provision of care and financing of costs.
By using their monopsony power and the economies of scale gained by insuring tens of millions of people, public health programs have done a better job of restraining costs than private insurers. For example, since 1970, cost growth in inflation-adjusted Medicare spending per beneficiary has averaged 4.5 percent annually, versus 5.7 percent for private insurers.1 This underlying trend has been remarkably consistent over time: The 10-year rolling average of annual per enrollee cost growth for all benefits provided by private health insurers has exceeded that of Medicare in 28 of the past 31 years.
This divergent rate of cost growth compounds markedly over time. Since 1969, cumulative growth in private insurance spending per beneficiary has increased 60.8 percent more than that of Medicare.
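The compounding described above is easy to verify. The sketch below uses the average growth rates cited in this post (4.5 percent for Medicare, 5.7 percent for private insurers) over the roughly four-decade period since 1969; the exact cumulative figure of 60.8 percent depends on the precise year-by-year data, so this simple constant-rate calculation should land only in the same neighborhood.

```python
# Illustrative check: how a modest annual gap in cost growth compounds.
# Rates are the per-beneficiary averages cited above; the period is approximate.
medicare_growth = 0.045   # ~4.5% average annual growth, Medicare
private_growth = 0.057    # ~5.7% average annual growth, private insurers
years = 41                # roughly 1969 through 2010

medicare_index = (1 + medicare_growth) ** years
private_index = (1 + private_growth) ** years

# How much more cumulative growth private insurance saw relative to Medicare
excess = private_index / medicare_index - 1
print(f"Private spending grew {excess:.1%} more than Medicare over {years} years")
```

A constant 1.2-percentage-point annual gap compounds to roughly 60 percent over 41 years, in line with the 60.8 percent divergence cited above.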
And as I noted a while back, the Congressional Budget Office has estimated that Medicare is 11 percent cheaper than an actuarially equivalent private insurance plan, an efficiency premium that will similarly compound with time: Fee-for-service Medicare is projected to be at least 29 percent cheaper than an equivalent private insurance plan by 2030 (relative to CBO’s alternative fiscal scenario for the long-term budget outlook).
The ACA is projected to expand coverage to some 30-33 million additional non-elderly Americans by the end of the decade, a critical step for risk-pooling, increasing cost-saving preventive care, and decreasing uncompensated care costs passed along to providers and policy holders. It also included ambitious reforms to control costs (particularly the Independent Payment Advisory Board, or IPAB), but too many provisions leveraging the public sector’s ability to directly contain costs—notably offering a public insurance option (e.g., Medicare buy-in) and negotiating Medicare Part D prescription drug prices with pharmaceutical companies (as is done for Medicaid)—were lobbied out of the bill. Even though stronger cost-containments could have been included, the Supreme Court’s ruling in favor of the ACA is a major victory for long-run fiscal sustainability, as health reform is projected to reduce annual long-run budget deficits by roughly half-a-percentage point of GDP.
The ACA is a momentous step toward more efficient and comprehensive health care coverage in the United States, but reform will undoubtedly remain a work in progress—particularly as the various cost-containment provisions in the ACA are evaluated and successes merit replication. Our experience over the last 40 years should guide policymakers as they inevitably go back to the drawing board on health care reform; and the evidence over this time overwhelmingly suggests that public provision of health care is more effective at containing excess cost growth and more efficient than private insurance provision.
The individual mandate lives! Excellent.
For uninsured Americans anyway. But for those of us who had comments ready in case it was struck down, it’s kind of inconvenient.
So, in the interest of recycling, I do want to keep something front-and-center about this particular conservative attack (opposition to the mandate) on health reform: Whatever it’s premised upon, the practical impact of opposing the mandate (and since this is true of all recent conservative ideas on health care one might be forgiven for thinking that it’s a strategy, not a quirk) is simply to make health care more expensive.
And why are conservatives dedicated to making sure Americans pay too much for health insurance? Sometimes, it’s just the price of shoveling subsidies to corporations as part of any health reform. Other times, it’s making sure that Americans don’t see government doing things too efficiently and outperforming the private sector (witness the fevered desire to “reform” Medicare by privatizing it—which will predictably make it more expensive). In the end, I guess you don’t need to believe me when I say that that’s the goal of conservative health reform; but when it’s the practical impact of everything they propose, then I think my argument is looking pretty good.
Anyway, here’s my quick primer on the mandate and why opposing it was simply another exercise in making sure Americans paid too much for health insurance.
A key barrier to individuals gaining coverage if they’re not employed by a large company (which has the clout and the legal protections to force insurance companies to cover all their employees as a group, rather than just cherry-pick the healthy ones) is insurance companies refusing to cover those with pre-existing conditions—or even just those that may become sick (and hence expensive to insure) sometime in the future. The Affordable Care Act (ACA) dealt with this by mandating insurance companies offer coverage to everybody who comes to their door (“guaranteed issue,” in the jargon of reform), and to make this a real, not just a notional “offer,” mandating that these companies charge each beneficiary the same premium (“community rating,” in the jargon, with some variation allowed by age and smoking status). These provisions, again, keep insurance companies from being able to cherry-pick just the healthy to cover.
But, if I could get insured whenever I wanted and at the same rate as everybody else, shouldn’t I just choose to not pay premiums while I’m healthy and then buy coverage after I’m already sick? This would be a big problem for insurance companies, as their pool of covered beneficiaries would be a pretty unhealthy group. And since the ACA provides subsidies to help make coverage affordable, this means that the per-beneficiary level of subsidy would be pretty high, as only unhealthy people would be receiving subsidies.
The answer to this “free-rider” problem? Make sure people carry insurance even while healthy, to make for a larger, more predictable, and healthier insurance pool to keep costs down. This is what the mandate is for.
Essentially, the ACA imposes some restrictions on insurance companies (guaranteed issue and community rating), but then gives them something in return—the mandate—to make sure these restrictions don’t leave them covering an unhealthy pool of beneficiaries and facing rising costs as a result.
So the mandate makes reform more efficient. This means it must be opposed by conservatives, because they have all along been determined to make any health reform as inefficient as possible. Remember the 2003 Medicare Part D legislation that cost way too much because it barred the government from bargaining with pharmaceutical companies over drug prices? And which subsidized private HMOs to cover Medicare beneficiaries? Remember the public option, which would’ve saved the public money but was taken out of the ACA in the early stages? Remember the voucherization of Medicare called for in the Ryan budget, which would ensure that Americans spend far more to cover health costs in the future?
This was no grand constitutional issue, this was just conservatives doing what they reflexively do when it comes to health reform: trying to make sure it’s as inefficient as possible.
The Affordable Care Act (ACA) is valuable legislation for a host of reasons, but most notably because it provides coverage for millions of Americans who would otherwise not have been able to secure insurance, and therefore health care when they need it. The Supreme Court decision to uphold the ACA was also important because it gives states and private industry the clarity and certainty to start preparing for the law’s main provisions to kick in in 2014. It resolves the uncertainty felt by the important players throughout the country, and provides the necessary push for implementation.
The expansion of insurance is particularly important now as a growing share of Americans are without health coverage. Historically, Americans under age 65 have received insurance through the workplace, but since 2000, that valuable source of coverage has declined every year for 11 years running, a total decline of over 10 percentage points, as shown below.
These statistics are already bleak, but without this valuable health care legislation, the situation could have gotten much worse. Because of the ACA, more than 30 million people will get health insurance in the coming years who would not otherwise have received it—making them more likely to get needed medical care and less likely to come under severe financial distress when they do.
Specifically, the Supreme Court’s decision to uphold the individual mandate is one of the reasons so many more people will get insured, making the law more cost-effective. The effect of the decision with regard to Medicaid is unclear, but it could potentially lead to fewer of the most vulnerable Americans getting access to affordable health care.
In sum, the Supreme Court decision today reaffirms the constitutionality of the health care legislation and its valuable provisions, providing a necessary safety net for millions of Americans. It also provides the added motivation for the implementation of health reform to move full-speed ahead.
The U.S. Bureau of Economic Analysis (BEA) recently announced that the U.S. net international investment position (NIIP) was -$4 trillion at year-end 2011 (see figure, below). The NIIP stood at -$2.5 trillion at year-end 2010. The $1.6 trillion increase in the net debt was largely caused by price changes of -$802 billion (on domestic and foreign holdings of stocks and bonds) and by net financial flows of -$556 billion. Net financial flows were largely explained by the financing of the $466 billion U.S. current account deficit in 2011. The current account is the broadest measure of the U.S. trade deficit. While the costs of financing the NIIP were relatively small in 2011, they could rise rapidly if interest rates return to more normal levels in the future.
The United States has been borrowing hundreds of billions of dollars per year for more than a decade to finance its growing trade deficits. However, until 2011, the U.S. NIIP had not declined proportionately, as shown in the figure below, primarily because of gains in the prices of foreign stocks, the decline of the dollar (which made foreign-currency holdings more valuable), and frequent accounting revisions (which have found more and more U.S. investments abroad).
Last year, several of those factors moved against the United States as the NIIP declined $1.6 trillion to -$4 trillion. That’s real money. Foreign investors (primarily foreign central banks) held $5.7 trillion in treasuries and other government securities at the end of 2011. The United States paid, on average, about 2.3 percent in interest on all of those securities. These low rates are caused by the still-depressed U.S. economy operating far below potential, and are unlikely to rise unless the U.S. economy begins operating much closer to full employment. But, if this recovery happens and the NIIP remains roughly as large as it is today, then debt service costs could rise significantly. For example, if the average cost of government debt rises to 4.5 percent, it would add another $124 billion to the U.S. government deficit. If this rise in U.S. borrowing costs, furthermore, were not matched by a rise in global interest rates, then this would actually cause a net decline in U.S. GDP, as income flows out of the country to service debt increased and were not matched by increased inflows that paid U.S. owners of foreign assets.1
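The debt-service figure above is a straightforward back-of-the-envelope calculation, sketched below using the stock of foreign-held securities and the interest rates cited in this post. The result differs slightly from the cited $124 billion because the published figures are rounded.

```python
# Back-of-the-envelope check on the added debt-service cost cited above.
# Inputs are the rounded figures from the post, so treat the output as approximate.
foreign_held = 5.7e12      # $5.7 trillion in foreign-held treasuries and other gov't securities
current_rate = 0.023       # ~2.3% average interest paid in 2011
higher_rate = 0.045        # a hypothetical return to "more normal" rates

added_cost = foreign_held * (higher_rate - current_rate)
print(f"Added annual debt service: ${added_cost / 1e9:.0f} billion")
```

With these rounded inputs the calculation yields roughly $125 billion a year, consistent with the ~$124 billion figure in the text.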
The U.S. NIIP represents a potential claim against future national income, and the size of this potential claim is growing dramatically, as shown in the figure above. Each year that we allow large trade deficits to continue is another year that adds to this claim on future incomes—yet this actual intergenerational transfer is often ignored while a non-existent intergenerational transfer (the one allegedly caused by rising federal budget deficits) attracts much attention from pundits and economic commentators.2
Board of Governors of the Federal Reserve System. 2012. “Selected Interest Rates (Daily) – H.15: Historical Data.”
U.S. Bureau of Economic Analysis (BEA). 2012. “International Economic Accounts: Balance of Payments.”
U.S. Bureau of Economic Analysis. 2012. “International Economic Accounts: International Investment Position.”
1. Average rate of return on U.S. government securities in 2011 calculated from data in the current account (BEA 2012a) and the NIIP (BEA 2012b). Return on seven-year treasury securities used for comparison. The average return on seven-year treasuries was 2.16 percent in 2011 (Board of Governors of the Federal Reserve System 2012). Their average return in the pre-recession period of 2000-2007 was 4.52 percent.
2. Interest payments on government debt owed to U.S. citizens only reallocate income from taxpayers to domestic bondholders. Foreign holdings of U.S. securities represent claims on future income, which are qualitatively different. Interest payments on foreign holdings reduce U.S. GDP, while interest paid to domestic holdings does not. Given the existence of substantial unemployment and the predominance of deficit opponents in Congress, increases in the government debt due to financial outflows could result in further spending cuts, which would cause a further decline in U.S. GDP.
Apple is rapidly becoming the symbol of what’s wrong with our economy: a highly profitable enterprise where all the gains go to those at the top and the vast majority, including those with college degrees, struggle to get by. Saturday’s New York Times article by David Segal deepens the story beyond Apple’s complicity in exploiting Chinese manufacturing workers. According to Segal, “About 30,000 of the 43,000 Apple employees in this country work in Apple Stores, as members of the service economy, and many of them earn about $25,000 a year.”
That $25,000 annual salary works out to $12.02 an hour for someone working full-time for one year (2,080 hours paid, either for work hours or paid leave). That’s pretty low; about $1 above the “poverty-level wage” (the poverty line for a family of four in 2011 was about $23,000, equivalent to an hourly wage of $11.07). Segal’s article starts off talking about a former Apple employee, Jordan Golson, who earned just $11.25 an hour. Many of these Apple store workers are young, so one wonders how Apple wages compare with those of other young college graduates. The short answer is “not so good,” or even “terrible.” The hourly wage of young college graduates (those ages 23-29) in 2011 was $21.68 for men and $18.80 for women. To be fair, Segal notes that, “The company also offers very good benefits for a retailer, including health care, 401(k) contributions and the chance to buy company stock, as well as Apple products, at a discount,” so including benefits may offset some of the discrepancy between pay by Apple and pay by other companies. The information necessary to calculate this offset is unavailable, but it is not believable that these benefits fully or even significantly make up such a large shortfall in wages.
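The hourly figures above follow directly from the standard full-time work year, as this short sketch shows. (The poverty line is given in the text as "about $23,000"; the exact $11.07 hourly equivalent implies a threshold slightly above that round number.)

```python
# Deriving the hourly figures cited above from annual amounts.
FULL_TIME_HOURS = 2080            # 52 weeks x 40 hours of paid time

apple_salary = 25_000             # typical Apple Store annual pay, per the Times
poverty_line = 23_000             # approximate 2011 poverty line, family of four

apple_hourly = apple_salary / FULL_TIME_HOURS
poverty_hourly = poverty_line / FULL_TIME_HOURS

print(f"Apple Store hourly wage: ${apple_hourly:.2f}")
print(f"Poverty-level hourly wage: ${poverty_hourly:.2f}")
```

The first line prints $12.02, matching the figure in the text; the second prints $11.06, a penny off the cited $11.07 because the poverty line is rounded here to $23,000.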
How do Apple store wages compare to those of all college graduates? As the table below shows, $12.02 is far below the 20th percentile wage of college graduates, the wage that 80 percent of college graduates earn more than and 20 percent earn less than. That’s right, Apple’s store employees’ wages are in the bottom 20 percent of all college graduates. In fact, $12.02 is $2.24, or 16 percent, less than the 20th percentile college wage in 2011. For college-educated men, $12.02 hourly is on par with the wage earned at the 10th percentile, $11.87, meaning 90 percent of college-educated men earned more than that in 2011.
Hourly wage for college graduates, selected percentiles, 2011
| Percentile* | All | Men | Women |
| --- | --- | --- | --- |
| 10 | $10.80 | $11.87 | $10.12 |

*The Xth percentile wage is the wage at which X percent of wage earners earn less and (100-X) percent earn more
Source: Author's analysis of Current Population Survey Outgoing Rotation Group files
It is already well-known that Apple benefits from the extremely low wages and harsh working conditions of the Chinese workers who manufacture its products. As EPI’s Ross Eisenbrey and Isaac Shapiro recently wrote, “Apple workers in China endure extraordinarily long hours (in violation of Chinese law and Apple’s code of conduct), meager pay, and coercive discipline.” Combined with the mediocre pay of its U.S. store employees, even compared with other retailers, it is clear that Apple’s success does not translate into high or rising living standards for the workers one would hope would benefit from it. Apple could readily afford to pay the Chinese Foxconn workers building iPhones more, because their labor costs are a minuscule part of the phone’s total cost. Raising pay is not that heavy a lift for Apple: In 2011, Apple’s nine-person executive leadership team received total compensation of $441 million, equivalent to the estimated compensation of 95,000 Foxconn factory workers assembling Apple products.
The discrepancy between Apple’s profits and executive pay, on the one hand, and its workers’ compensation, on the other, is a particularly glaring example of what is occurring in the wider economy. The gap between CEO compensation and that of a typical worker is now 231-to-one, up from 58.5-to-one in 1989. Corporate profits are now higher as a share of corporate-sector income than in any year since the early 1940s, when a War Labor Board consciously suppressed wage growth. All of this contributes to the phenomenon that productivity—the ability to produce more goods and services per hour—has been rising rapidly while the hourly compensation of both high school- and college-educated workers has been flat. It does not look like much will change soon absent a broad change of thinking among policymakers and a mobilized workforce. After all, current outcomes have been dictated by persistent high unemployment, low and weakly enforced labor standards (witness Apple’s failure to abide by California’s wage-and-hour mandate of two 10-minute breaks a day, reported in the Times story), the inability of unions to set high labor standards, and the dominant political and policy influence of the wealthy and the business community. Apple’s labor practices and the overall failings of the economy have not been dictated by any economic laws. Rather, they are the result of eminently changeable public-sector policies and private-sector practices.
In a 5-4 decision issued this week in Christopher v. SmithKline Beecham Corp., the Supreme Court, in its eagerness to reach a result favoring the pharmaceutical industry over its employees, abandoned the legal straight and narrow for some very sketchy shortcuts. The case concerned the application of overtime protection to medical detailers, also known as pharmaceutical representatives, employees who visit physicians and promote prescription drugs. If the detailers are “outside salesmen,” they are exempt employees and are not entitled to overtime pay.
Ignoring the plain meaning of key words, the “ordinary usage” which Justice Antonin Scalia elsewhere has claimed to favor, the court declared medical detailers to be outside salesmen because—even though they never make a sale of pharmaceuticals to anyone—they come as close to selling as the law governing their industry allows. The best the court could do in terms of identifying sales that these supposed salesmen make is to find that the detailers induce “non-binding commitments” from physicians to prescribe the drugs their pharmaceutical companies are promoting or marketing. The court found that the fact the detailers almost get commitments from these physician “gatekeepers”—without whom no one could sell the prescription drugs being promoted—is enough to treat the “transaction” as a sale. Whew, talk about bootstrapping and judicial activism! A justice could get a hernia with that kind of lifting!
But who in reality buys prescription drugs? Certainly, in any normal economic sense, it’s not the prescribing physician. There are, in fact, two parties that purchase them, and the detailers don’t sell (or even make binding commitments) to either: the retail drug stores like CVS and Walgreens, and the patients who are the end users. The court deals with sales to the drug stores in a most unsatisfactory way: It says that the people who actually make those sales are so few (2,000 sales agents vs. 90,000 detailers), and their function so rote, that we should ignore them.
The persons who make sales (exchanging money for a product) to patients are pharmacists, but the court argues that there would be no sales without the prescribing physicians, who deal with the medical detailers and have a completed transaction when they make a non-binding commitment—not to buy, but only to prescribe the drugs for appropriate patients. According to the court, this is “tantamount” to a sale.
An unfortunate lesson this case teaches is that no one knows what the law means until the Supreme Court decides the result it wants and then stretches the meaning of the statutory or regulatory language to (more or less) fit the result.
The other lesson from this decision is for the Labor Department, which had never in 60 years brought an enforcement action against a pharmaceutical company in a way that gave the industry notice that its widespread practice of denying overtime pay to detailers was unlawful. The medical detailers are relatively well paid and loosely supervised employees whose employers do not closely monitor their work time—not the classic employees we think of when we talk about overtime pay. Although there is no excuse for the tortured logic of the majority opinion, if the Labor Department had given fair notice that it disapproved of the exemption of detailers, either by bringing enforcement actions over the years or by issuing consistent guidance that made clear its interpretation of the statute and its regulations, the court might have found that the exemption did not apply.
In other words, if we don’t enforce our rights, we can lose them.
The Federal Reserve’s report on family wealth released last Monday illustrates how severely the Great Recession has hurt middle-class families. Median family net worth (assets minus debt) fell to levels not experienced since 1992. While all groups but the richest 10 percent of families saw declines in wealth, there was variation in the percentage decline by race.
In the Federal Reserve’s report, it is difficult to identify the specific trends for African Americans and Hispanics. While the net worth of white, non-Hispanic families is presented separately, all nonwhite and Hispanic families are lumped together in the family net worth table. However, the report has a sentence detailing the net worth changes specifically for African American families (p. 21). By using the past few reports, we can see the recent trends for wealth in black America.
First, it is important to note that the median black family has only a small fraction of the wealth of the median white family (Figure A). (The family data discussed here differ from our reported household data because families are a subset of households and the data are inflated to different years.) In 2010, the median black family had only 12 cents for every dollar of wealth the median white family had.
When one examines the percent decline in wealth from 2007 to 2010, it appears that whites have seen a greater percentage decline in wealth than blacks. White family net worth declined 27 percent over this period while black family net worth declined 13 percent (Figure B). But in the data, while white wealth peaked in 2007, black wealth peaked in 2004. As white wealth continued to grow from 2004 to 2007, black wealth had already declined significantly.
If we compare the white and black wealth declines from their most recent high points, we see white net worth down 27 percent (from 2007) and black net worth down 40 percent (from 2004). A 40 percent decline is a large drop for a population with very little wealth even at their peak.
The trend in black net worth probably follows the trend in black homeownership. For most middle-class families, a home is their primary source of wealth. The African American homeownership rate has declined sharply since it peaked in 2004 (Figure C). Homeownership rates for black families are projected to drop to between 40 and 42 percent—which would erase 15 years of gains in homeownership. If this occurs, it could also mean a continued decline in black wealth.
It is not possible to determine the trends in Hispanic net worth precisely from the published Federal Reserve data. We can deduce, however, that from 2007 to 2010, Hispanic net worth probably declined about 45 percent. This decline is significantly larger than the 27 percent for whites over the same period. Even at their recent peak net worth, Hispanics, like blacks, only had a tiny fraction of the wealth that whites had. (In 2010, the median family for nonwhite and Hispanic families combined only had 16 cents for every dollar of wealth the median white family had.)
In terms of wealth, only the richest American families have come out of the Great Recession relatively unscathed. Significant declines in wealth have been broadly felt. But the losses to black and Hispanic families are particularly damaging because they are quite large, and they were experienced by groups that had very low levels of wealth even before the recession hit.
— Research assistance provided by Johnny Huynh
Not long ago, I blogged about the fact that our key labor law, the National Labor Relations Act, protects workers even if they don’t have a union or seek to have one represent them. When workers join together to protest working conditions, to petition management for raises or plead against pay cuts, or to report unsafe conditions to government agencies, the National Labor Relations Board backs them up. The NLRB can protect workers against retaliation by the employer, can order reinstatement for fired workers, and can obtain back pay.
It isn’t widely known, but since its inception, the National Labor Relations Act has given employees the right “to engage in … concerted activities for the purpose of collective bargaining or other mutual aid or protection.”
Now, for the first time, the NLRB has a nice-looking, somewhat interactive webpage devoted to this issue of “other mutual aid or protection.” Visitors to the site can read some heartening stories about how employers overreacted—almost always by firing someone—to employees organizing to protest or to bring a problem to management’s attention, and how the NLRB intervened to restore the workers’ jobs or lost wages.
It’s great to see the government helping people understand their rights and how to enforce them.
In a recent blog post on the (negligible, if not nonexistent) long-run economic cost of deficit-financed fiscal stimulus at present, I noted in passing that the Congressional Budget Office (CBO) has downwardly revised potential economic output for 2017 by 6.6 percent since the start of the recession. That may sound small, but for a $15 trillion economy the dip reflects roughly $1.3 trillion in lost future income in a single year, on top of years of cumulative forgone income (already at roughly $3 trillion and counting). The level of potential output projected for 2017 before the recession is now expected to be reached between 2019 and 2020—representing roughly two-and-a-half years of forgone potential income. This represents a failure of economic policy and merits considerably more attention than it has received, especially when weighing the benefits of near-term fiscal stimulus versus deficit reduction.
Potential output is the estimated level of economic activity that would occur if the economy’s productive resources were fully utilized—in the case of labor, this means something like a 5 percent unemployment rate rather than today’s 8.2 percent. Potential output is not a pure ceiling for economic activity, but the level of economic activity above which resource scarcity is believed to build inflationary pressures. As of the first quarter of 2012, the U.S. economy was running $861 billion (or 5.3 percent) below potential output—the shortfall known as the “output gap.” This has a number of implications for federal fiscal policy:
- Deficit-financed fiscal stimulus will have a very high bang-per-buck while large output gaps persist. The government spending multiplier is much larger in recessions than expansions (see Figure 3 of Auerbach and Gorodnichenko 2011) and the U.S. remains mired in recessionary conditions, where economic growth is insufficient to restore full employment.
- Deficit-financed fiscal stimulus is largely self-financing because every dollar of increased output relative to potential output is associated with a cyclical $0.37 reduction in budget deficits, and this feedback effect is greatly amplified by the large government spending multiplier.
- There is so much slack in the U.S. economy—i.e., supply of resources in excess of demand—that government borrowing will not “crowd-out” productive private investment; this can be seen in the near record-low 1.6 percent yield on 10-year U.S. Treasuries.
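The magnitudes in these bullets can be checked with quick back-of-the-envelope arithmetic; a minimal sketch, using only the figures cited above (the $0.37 feedback ratio is the one from the second bullet):

```python
# Output gap arithmetic using the figures cited in the post.
gap = 861e9          # Q1 2012 output gap, in dollars
gap_share = 0.053    # the gap as a share of potential output

potential = gap / gap_share   # implied potential output (~$16.2 trillion)
actual = potential - gap      # implied actual output

# Per the second bullet: each dollar of output regained relative to
# potential cuts the cyclical budget deficit by roughly $0.37.
deficit_offset = 0.37 * gap   # deficit reduction if the full gap closed

print(round(potential / 1e12, 2))   # 16.25
print(round(deficit_offset / 1e9))  # 319
```

So closing the gap entirely would, by itself, shave roughly $320 billion off the annual deficit before counting any multiplier effects.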
So deficit-financed fiscal stimulus is highly cost-effective, largely self-financing, has a very low opportunity cost, and poses no risk to inflation. But there is another potential benefit: closing today’s output gap can increase potential future output (thereby also increasing the ability to repay debt incurred). The reason is simple—if long bouts of inactivity leave permanent “scars” on the potentially productive resources (and they do), then the longer the economy operates below potential, the more future potential is damaged. Concretely, factories aren’t built because firms can’t even sell what existing factories are producing. Children’s educational outcomes are damaged as economic distress forces their families to move and as they lose access to decent nutrition and health. Desirable early-career jobs for recent graduates that could impart valuable skills throughout their working lives aren’t available to them, so lifetime earnings suffer. And so on.
The CBO certainly is worried about this scarring—look at the annual revisions to real potential GDP made by them since the onset of the recession: Estimates have consistently been revised downwards except between Jan. 2009 and Jan. 2010, when the deficit-financed $831 billion Recovery Act arrested economic contraction and began shrinking the output gap.
The Recovery Act, however, was nowhere near large enough to restore full employment and close the output gap—the 10-year cost of the stimulus, after all, was smaller than the annual output gaps that have persisted since 2009. As fiscal support waned and the economy slowed, CBO’s potential output forecasts have withered as well. So why did Congress pivot from job creation (i.e., stimulus) to deficit reduction at the start of the 112th Congress?
The whole point of long-term deficit reduction, after all, is to raise future income. But failure to restore full employment decreases potential future income. Worse, while the economy remains depressed below potential output, near-term deficit reduction—particularly spending cuts—greatly exacerbates the output gap because the government spending multiplier is so high. (We’ve seen this play out across much of Europe, where government “austerity” programs have cut spending, pushed economies back into recession, and pushed up unemployment; the resulting cyclical deterioration in budget deficits has rendered the spending cuts entirely counterproductive.)
The downward revisions to potential output in CBO’s forecast reflect a failure of Congress to resuscitate the economy and restore full employment, but it’s a policy failure that can still be reversed. Fiscal stimulus can increase employment and industrial capacity utilization today and actually “crowd-in” private investment, thereby increasing today’s capital stock and future potential output. With respect to fiscal tradeoffs, cost effective deficit-financed fiscal stimulus will actually decrease the near-term debt-to-GDP ratio (the relevant metric for fiscal sustainability), whereas deficit reduction cannot raise future income until the output gap is closed and the private sector is competing with government for savings instead of plowing cash into Treasuries. The full cost of Congress’ misguided pivot from job creation to austerity is larger than even just today’s mass underemployment—trillions of dollars of potential future income will also be lost unless we pivot back to addressing the real crisis at hand.
The Federal Reserve just published findings from the 2010 Survey of Consumer Finances, a triennial survey of household finances. Though it’s no surprise that family finances took a dive with the collapse of the housing and stock bubbles, the extent of the plunge is still shocking: The median family saw its net worth fall by 39 percent between 2007 and 2010.1
By 2010, the economy had begun its slow recovery. Housing prices had leveled off and stocks rebounded, recouping about half their losses by the end of the year. But this wasn’t just a temporary setback. Households—especially younger households—were in serious trouble long before the twin asset bubbles burst.
Families headed by someone age 35 to 44—the age when workers typically start getting serious about saving for retirement—had seen declines in net worth in the wake of two previous recessions (1990-91 and 2001) without fully regaining the lost ground in the intervening years (see chart below). So the financial meltdowns that precipitated the Great Recession only exacerbated an existing problem. As a result, GenXers had accumulated only $42,100 in net worth by 2010, less than half what the Baby Boomers had accumulated at the same age, adjusted for inflation (in the chart, Depression and War Babies are indicated by squares, Early Boomers by triangles, Late Boomers by circles, and GenXers by an X).2
The fact that net worth declined for younger age groups even before the Great Recession is remarkable when you consider that the economy grew by a third on a per capita inflation-adjusted basis between 1989 and 2010, though this growth was not widely shared. Furthermore, families should have been saving more to make up for declines in pension coverage and Social Security benefits. As a result, the Center for Retirement Research has estimated that the average family in the broad 35-64 age range had a Retirement Income Deficit of $90,000 in 2010, a measure of how far behind they were in saving and accumulating benefits for retirement.
Even a generation that fared relatively well—the cohort born during the last years of the Great Depression and World War II—had only accumulated $227,000 as it approached retirement in 2001. This is roughly four times the median income for that age group in 2001, or enough to purchase a 20-year annuity worth $3,750 a year at a 3 percent real interest rate.3 As these Depression and War Babies began tapping their retirement savings during the boom and bust years of the new millennium, their net worth fell to $206,700 in 2010, whereas the preceding generation had seen increases in net worth during their early retirement years.
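The annuity conversion above uses the standard level-payment (present-value) formula. A minimal sketch, with an illustrative $100,000 balance rather than the post's own figures:

```python
def annuity_payment(balance, rate, years):
    """Level annual payment that exhausts `balance` over `years`
    at real interest rate `rate` (standard present-value formula)."""
    return balance * rate / (1 - (1 + rate) ** -years)

# Illustrative only: a $100,000 balance, 3 percent real rate, 20 years.
print(round(annuity_payment(100_000, 0.03, 20), 2))  # 6721.57
```

Note that, per footnote 3, the typical household holds most of its wealth as home equity, so the balance actually available to annuitize is far smaller than total net worth.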
Baby Boomers fared much worse than the Depression and War Babies, lulled into complacency by asset bubbles that inflated during their prime earning years and popped as the leading edge of the Boomer generation approached retirement. Early Boomers born in the late 1940s and early 1950s saw their net worth increase by around $69,000 between 1989 and 2001 (a 4.6 percent annual rate), but only by a meager $14,500 between 2001 and 2010 (a 0.9 percent annual rate). Late Boomers fared no better, and, like GenXers, are now far behind where earlier generations had been at the same age.
Though it may be tempting to chastise families for not saving enough for retirement, most of the blame lies with former Federal Reserve Chairman Alan Greenspan and others in positions of responsibility who watched asset bubbles inflate without warning that these paper gains weren’t real, and promoted homeownership and 401(k)s as the path to a secure retirement without acknowledging the extent of the risks involved.
2. The published survey results don’t allow precise tracking of generational cohorts because demographic breakdowns are by 10-year age group and the survey is conducted every three years. However, the 45-54 “Depression and War Baby” cohort in 1989 approximately corresponds to the 55-64 age group in 2001 and with the 65-74 age group in 2010, etc.
3. In practice, the typical household holds most of their wealth in the form of home equity and doesn’t annuitize liquid assets.
The latest suicide of a worker at Apple Computer’s Foxconn supplier plant in Chengdu, China may be another indication that Apple has not appreciably improved conditions for its manufacturing workers. Apple and Foxconn, working with the Fair Labor Association, announced that they would make changes in grueling overtime work schedules and in working conditions, including a promise to gradually come into compliance with China’s overtime laws. Yet this suicide, in conjunction with recent worker protests and new reports, suggests that needed reforms have not been made.
There are mixed reports from SACOM and China Labor Watch about whether work schedules have been reduced in any systematic way at Foxconn. Problematically, it appears that when the schedules are reduced, the reductions are not adequately balanced with hourly pay increases. So the already-inadequate monthly pay drops, leaving workers—72 percent of whom at the Chengdu plant told the FLA they could not meet their basic needs—in a desperate situation.
Ultimately, Apple has the power and moral responsibility to improve wages and conditions for Foxconn workers in Chengdu and elsewhere. Certainly, Apple and its executives can afford to do the right thing.
The Heritage Foundation’s latest attack on the Postal Service is a convoluted collection of half-truths and untruths. The author, David John, doesn’t want the Postal Service to benefit from $11.6 billion in overpayments it made for its pension obligations even though he grudgingly admits “this surplus appears to exist.” The overpayment should be refunded to the Postal Service to help it meet its operating costs, but Heritage wants those funds locked up in the pension plan, which it claims would “follow the private-sector practice of using the current surplus—whatever it is—to defray future retirement payments.” This is baloney. When a private corporation overfunds its pension plan, it can transfer excess funds to pay retiree health obligations. In the case of USPS, it could use the funds to pay both current obligations ($2.4 billion a year) and the congressionally mandated pre-funding for future obligations ($5.6 billion a year).
When it’s inconvenient, Heritage abandons its suggestion that the Postal Service should be treated like the rest of the private sector. Private sector employers are not required to pre-fund their retiree health benefits, and most of them fund retiree health benefits on a pay-as-you-go basis. If USPS “followed the private-sector practice,” it wouldn’t contribute a nickel to the future retiree health obligations; it would pay them as they came due, yet Heritage supports a requirement that USPS “fully prefund this benefit.”
Heritage also glosses over the findings of two independent agencies that the Postal Service was treated unfairly by Congress and the Office of Personnel Management in the allocation of its pension obligations. EPI published a report in 2010 that took the same position as the Postal Service’s Office of Inspector General and the Postal Rate Commission: USPS and its ratepayers were overcharged approximately $75 billion for past service obligations, and taxpayers were undercharged the same amount. But for Congress’ misallocation of costs, the Postal Service’s short-term finances would be manageable despite the Great Recession and the growth of electronic communication and payments.
Heritage shades the truth in its claim that the Government Accountability Office “bluntly rejected” the agencies’ claims that the Postal Service had been treated unfairly. In fact, GAO admitted that the cost allocation methodology is “a policy choice” whose fairness is debatable:
“Although the USPS OIG [Office of Inspector General] and PRC [Postal Rate Commission] reports present alternative methodologies for determining the allocation of pension costs, this determination is ultimately a policy choice rather than a question of accounting or actuarial standards. Some have referred to “overpayments” that USPS has made to the CSRS fund, which can imply an error of some type—mathematical, actuarial, or accounting. We have not found evidence of error of these types. While the USPS OIG and PRC reports make judgments about fairness, the 1974 law also implicitly reflected fairness.”
GAO does not dispute that the PRC and USPS OIG methodologies for allocating the pension costs are sound, it simply prefers a different policy choice, which burdens the Postal Service:
“All three methodologies (current, PRC, and USPS OIG) fall within the range of reasonable actuarial methods for allocating cost to time periods. However, the allocation of costs between two entities is ultimately a business or policy decision.”
In its ideological zeal to see the Postal Service destroyed or dismembered, Heritage has been careless with its facts and inconsistent in its arguments.
UPDATE, June 15, 11:37 a.m.: Ah, mystery of the funky-seeming Mitt Romney jobs numbers revealed (see below for my puzzlement)—it’s a measure of full-time jobs reported in the household survey. I guess half of this is my fault—they do reference the “full-time” aspect when talking about data from the 1970s—but the rest of the chart and paragraph just talk about “job growth.”
But I will note that this is the first time I’ve ever seen full-time jobs from the household survey used to measure job market performance over business cycles. And I’m not convinced it’s a useful innovation; in fact, I think it’s pretty obvious cherry-picking.
Say five people get brand-new jobs that provide 30 hours of work per week while five more see their hours cut from 40 to 34 hours. I’d say this is 120 hours of net new work being demanded in the economy; but the full-time jobs measure from the household survey would simply say that five “jobs” were lost. This just doesn’t seem useful to me.
Also, since the Romney chart ends in June 2011, it might be useful to know what happened to their preferred number in the 11 months since then: 2.25 million jobs added. The industry standard for measuring recessions and recoveries—the payroll survey—shows 1.7 million jobs added over those same 11 months, so I do wonder which number the campaign would cite if asked.
Lastly, I’d note that there is an obvious sector, full of full-time jobs, that has seen a particularly hard time since the June 2009 beginning of recovery: the public sector. Since June 2009, 600,000 state and local jobs have been lost, and in 2009, about three-fourths of these jobs were full-time.
I was asked to comment on the speech Mitt Romney made in front of the Business Roundtable, so I decided to do some light background reading: Believe in America: Mitt Romney’s Plan for Jobs and Economic Growth.
I noticed something odd in the jobs section of the plan—this chart (ripped directly from the Romney PDF):
I know jobs numbers and recoveries, and these looked wrong to me. For one, the absolute peak-to-trough employment loss following 2007’s Great Recession was 8.8 million jobs (between Jan. 2008 and Feb. 2010), not the 8.9 million that the chart claims.
And given that this is the peak job loss, this means, by definition, that anything measured after this trough couldn’t be negative, as the chart implies. I also know that the U.S. economy didn’t begin adding jobs after the 2001 recession until the second half of 2003, so the 2001 numbers looked off, too.
So I decided to do the chart correctly—actually show job losses during the official recessions (i.e., not just employment peak to trough) and the 24 months following and sure enough:
Romney’s numbers are all slightly off, which is odd.
Odder is that the relative performance of the recoveries following the 2001 and 2007-2009 recessions is reversed. Look closely at the last two sets of bars in the respective figures.
The Romney chart has jobs growing in the first 24 months of recovery following the 2001 recession, but shrinking in the first 24 months following the 2007-2009 recession. That’s the opposite pattern of what actually occurred—jobs shrank for the first two years after the 2001 recession and grew modestly in the first two years after the 2007-2009 recession.
I’ll note that we also tried to match the Romney numbers with quarterly data, with household-survey employment counts, with household-adjusted-for-payroll concepts survey data … nothing worked.
A little curious as to what’s going on here.
And since there’s been lots of discussion about the relative health of the private and public sectors, here’s the correct graph for private-sector jobs only.
Claims about the efficacy of fiscal stimulus in a depressed economy are based on evidence as flimsy as the Laffer curve?! Seriously false equivalence
Peter Orszag calls the claim that the debt-to-GDP ratio can be lowered by providing a fiscal boost to a depressed economy the “Laffer curve of the left.” For those who have real lives and may not get the reference, the “Laffer curve” refers to the theoretical possibility that one can raise overall tax revenue by cutting tax rates. The intuition is that cutting tax rates provides incentives to work longer and save more; in turn, this will boost economic growth sufficiently to bring in more revenue despite rates having been cut. The claim that it is relevant to the U.S. economy was discredited empirically (and a long time ago).
In light of this, Orszag’s claim that the “Laffer curve of the left seems to have as much empirical relevance as the original Laffer curve” is not only odd but also flat wrong.
Orszag’s target is clearly a recent paper by DeLong and Summers that shows fiscal stimulus in a depressed economy has multiple salutary effects, not just on economic growth but even on long-run budget measures (like the debt-to-GDP ratio). The paper shows stimulus boosts near-term growth directly by relieving the constraint of insufficient demand; it boosts productive investments by giving firms an incentive (i.e., more customers coming in the door) to expand capacity; and it keeps chronic long-term unemployment from turning into a permanent erosion of workers’ skills (i.e., economic “scarring”). The assumptions about the strength of each of these effects that are needed to make fiscal stimulus debt-improving in a depressed economy are probably pretty close to real-life parameters.
Let’s do some simple math with widely-agreed upon parameters, even ignoring some of the supply-side measures DeLong and Summers examine. I’m going to round very aggressively here, but it doesn’t affect results much.
Today’s publicly-held debt is about 70 percent of GDP (call it $10.5 trillion on a base of GDP that is $15 trillion). Let’s say we decided to undertake fiscal stimulus in the form of $150 billion spent on high-multiplier activities like extending unemployment insurance, giving aid to states, or investing in infrastructure (we actually need more than this, but it’s a nice round 1 percent of overall GDP, so we’ll stick with it).
The “fiscal multipliers” on these activities are roughly 1.5, meaning they generate $1.50 in economic activity for every dollar spent on them (actually, it may be quite a bit higher, but we’ll take 1.5 as given).
So, (roughly) a year from now, this stimulus has increased the level of GDP by $225 billion (i.e., the $150 billion stimulus multiplied by 1.5). This extra GDP does indeed lower the budget deficit by bringing in more revenue. A reasonable estimate, based on CBO data, is that when the economy is operating below potential, each 1 percent increase in GDP yields a cyclical reduction in the budget deficit of about 0.35 percent of GDP. So, this $225 billion in additional output leads to a $79 billion improvement in the budget deficit, making the “net” fiscal cost of the stimulus just $71 billion ($150 billion minus the $79 billion offset from higher growth).
This $71 billion “net” cost of stimulus increases debt by roughly 0.7 percent ($71 billion divided by the current $10.5 trillion public debt), while GDP has increased by 1.5 percent. Because debt grows more slowly than GDP, the debt-to-GDP ratio—currently 70 percent—actually declines.
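The back-of-the-envelope math in the preceding paragraphs can be reproduced directly; a minimal sketch using the post's round numbers:

```python
# Reproduce the post's round numbers.
debt = 10_500e9      # publicly held debt (~70 percent of GDP)
gdp = 15_000e9       # GDP
stimulus = 150e9     # 1 percent of GDP in high-multiplier spending
multiplier = 1.5     # $1.50 of activity per stimulus dollar
clawback = 0.35      # deficit reduction per dollar of extra output

output_gain = multiplier * stimulus            # $225 billion
net_cost = stimulus - clawback * output_gain   # ~$71 billion "net" cost

debt_growth = net_cost / debt    # ~0.7 percent rise in debt
gdp_growth = output_gain / gdp   # 1.5 percent rise in GDP

# Debt grows more slowly than GDP, so the debt-to-GDP ratio falls.
print(round(net_cost / 1e9))     # 71
print(debt_growth < gdp_growth)  # True
```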
None of these parameters, by the way, are particularly contested.1 And let’s say they’re slightly wrong, and that instead of outright improving the debt-to-GDP ratio, providing fiscal stimulus in today’s depressed economy actually makes it slightly worse – say it’s only 80 percent self-financing in terms of its impact on debt-to-GDP ratios. Would this really justify calling claims that providing fiscal stimulus in depressed economies does not damage public finances “the Laffer Curve of the left”? Not by my read of the evidence.
1. For those who like analytical solutions, all of the preceding boils down to: So long as the initial debt-to-GDP ratio is higher than [(1/multiplier) – fiscal clawback ratio], fiscal stimulus reduces the debt-to-GDP ratio. The “fiscal clawback ratio” is simply how much a 1 percent boost to GDP reduces the budget deficit (also measured as a share of GDP). For the arithmetic above, a multiplier of 1.5 and a clawback ratio of 0.35 mean that fiscal stimulus would reduce the debt-to-GDP ratio for any initial debt ratio above 32 percent.
Take much more conservative assumptions – a multiplier of 1 and a clawback ratio of just 0.25. Then, stimulus is debt/GDP reducing for all initial debt ratios above 75%.
Also note that this means the calculus for whether or not stimulus reduces the debt/GDP ratio gets more favorable as the initial debt ratio rises, a perhaps counter-intuitive result.
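The analytical condition in footnote 1 is easy to verify numerically; a minimal sketch:

```python
def reduces_debt_ratio(debt_ratio, multiplier, clawback):
    """Footnote 1's condition: stimulus lowers the debt-to-GDP ratio
    when the initial ratio exceeds (1 / multiplier) - clawback."""
    return debt_ratio > 1 / multiplier - clawback

# Thresholds stated in the footnotes:
print(round(1 / 1.5 - 0.35, 2))  # 0.32: helps above ~32 percent debt-to-GDP
print(round(1 / 1.0 - 0.25, 2))  # 0.75: conservative case, helps above 75 percent

# Today's ~70 percent ratio clears the first threshold easily.
print(reduces_debt_ratio(0.70, 1.5, 0.35))  # True
```

The counterintuitive result in the last footnote falls out immediately: the right-hand side of the condition is fixed by the multiplier and clawback, so the higher the initial debt ratio, the more easily the condition is satisfied.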
Yesterday, the Congressional Budget Office (CBO) released its annual Long Term Budget Outlook (LTBO), which projects federal spending, revenues, deficits, and debt over the next 75 years. There are many points of controversy with regard to the LTBO, not the least of which is that it’s pretty ridiculous for CBO to pretend it knows what health care costs will look like in 2087. Personally, I think that CBO’s LTBO provides a lot more heat than light, and I would be the first to applaud if CBO decided to only release ten-year budget projections (in themselves subject to a huge margin of error).
Nevertheless, there is still value in looking at the change in projections from one year to the next. The figure below clearly shows that over the past three years CBO’s extended current law budget projections—which assume no changes are made to the law—have improved drastically.
2009: CBO projected that debt held by the public would rise from around 60 percent of GDP to just over 300 percent of GDP in 75 years.
2010: CBO markedly improves its 75-year outlook, which now shows debt rising to just over 110 percent of GDP. This improvement largely reflected passage of the Affordable Care Act (ACA), which prioritized reducing long-run deficits and slowing the rate of health care cost growth (the predominant driver of long-run deficits).
2011: CBO again improves its outlook, now projecting debt rising to 87 percent of GDP in the first 30 years but then actually falling to 75 percent over the next 45 years. This improvement was largely due to three changes in CBO’s assumptions and projections: (1) lower costs for the new ACA health insurance exchange subsidies; (2) higher taxable wages due to the employer-sponsored health insurance excise tax (pushing worker compensation away from the tax-free health coverage); and (3) a slightly higher long-run economic growth rate.
The ultimate goal of budget reform is to reach “fiscal sustainability,” a point at which public debt is growing no faster than the economy (stabilizing debt relative to national income, i.e., ability to pay). According to 2011 LTBO projections, the federal government had already achieved long-run “fiscal sustainability.”
2012: For the third straight year, CBO favorably revises its long-run budget outlook: Starting in 2014, public debt is projected to fall by 0-3 percentage points each year. The public debt is shown to be fully paid down by 2070, and within 75 years the federal government is projected to have accrued reserve surpluses equal to about a third of the economy.
This improvement is primarily due to two factors. First, the Budget Control Act (the result of last summer’s debt ceiling crisis) cuts spending by over $2.1 trillion through 2021, and because of the way CBO indexes discretionary spending for inflation in its projections, it continues to reduce deficits in subsequent years. And second, CBO changed the way it projects health care cost growth. In the past, it used the average growth rate over the last 25 years, but in this report it calculated a weighted 25-year average that puts more weight on recent years. This new methodology does a better job of taking into account the fact that health care costs have been slowing recently, possibly evidence that the ACA has exceeded expectations.
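The effect of CBO's methodological change is easy to see in miniature. The sketch below is purely illustrative: the growth rates and the decay factor are hypothetical assumptions, not CBO's actual figures or weighting scheme, which the post does not specify.

```python
def weighted_growth(rates, decay=0.9):
    """Average of `rates`, weighting each year by decay**age so the
    most recent year (rates[-1]) counts most heavily.
    The decay factor is an assumption for illustration."""
    weights = [decay ** (len(rates) - 1 - i) for i in range(len(rates))]
    return sum(w * r for w, r in zip(weights, rates)) / sum(weights)

# Hypothetical series: 7 percent health cost growth for 20 years,
# then a slowdown to 4 percent over the most recent 5 years.
rates = [0.07] * 20 + [0.04] * 5
simple = sum(rates) / len(rates)   # plain 25-year average
weighted = weighted_growth(rates)  # recent-weighted average

print(weighted < simple)  # True: the recent slowdown pulls the projection down
```

Whenever recent growth runs below the long-run average, the recent-weighted projection comes in lower than the simple average, which is exactly the direction of CBO's revision.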
Budget wonks will rightly point out that the projections in question are CBO’s extended baseline, which assumes no changes to current law. This means that the Bush-era tax cuts expire next year, the sequestration cuts also go into full effect next year, the Alternative Minimum Tax will apply to more upper middle-income households, and Medicare reimbursements to doctors will be allowed to fall dramatically. But with the exception of the sequestration trigger, all those other factors were also present when CBO made its projections in 2009, 2010, and 2011. The fact is the fiscal outlook of the federal government has improved dramatically in the last three years.
More importantly, this report clearly shows that the path toward fiscal sustainability includes allowing some—if not all—of the Bush-era tax cuts to expire and fully implementing and protecting the Affordable Care Act.
New York Times columnist David Brooks went all out in heralding the “debt is evil” stigma in his column yesterday. Regrettably, this blanket condemnation of borrowing as intemperate, immoral intergenerational theft is all too pervasive among Washington’s policymaking elite, and all too wrong: Not all debt is created equal, and suggesting otherwise impedes sound fiscal policy.
Economic actors borrow money for a wide array of activities, and both businesses and households know better than to apply a universal value judgment to debt. Borrowing money for college tuition allows for human capital accumulation, which will hopefully yield a high rate of return; borrowing money to take to the casino is widely viewed as imprudent, as the expected rate of return at any casino is negative. Businesses borrow money to build factories, buy equipment, finance research and development, and engage in other productive activities that add value to the economy. Financial firms leveraging themselves the way Long-Term Capital Management did (using debt to proportionally magnify both risk and potential returns), on the other hand, add systemic financial risk and zero—more likely negative—economic value. Similarly, there are good and bad reasons alike to run federal budget deficits. What matters much more than the accumulation of nominal debt is the purpose of the borrowing and the ability to repay the amount borrowed.
Brooks laments that the “federal government has borrowed more than $6 trillion in the last four years alone, trying to counteract the effects of the [dotcom and housing] bubbles.” Yes, the implosion of the housing market and the ensuing financial crisis and recession forced Congress to borrow heavily as the cyclical portion of the budget deficit ballooned and fiscal policy was used to arrest a steep economic contraction, propping up aggregate demand and the financial sector alike. The alternative, however, was a depression that would have swollen budget deficits regardless, while greatly impeding our ability to repay debt because of lost income and economic scarring reducing future potential income. Indeed, policymakers’ failure to restore full employment—which still necessitates much more deficit-financed stimulus—is producing such scarring effects: The U.S. economy is still running $861 billion—or 5.3 percent—below potential output and the Congressional Budget Office has downwardly revised projected potential output for 2017 by 6.6 percent since the onset of the recession. That is real, welfare-reducing economic waste resulting from insufficient public borrowing—borrowing that could have put productive resources to use instead of allowing them to atrophy.
Economists Lawrence Summers and Brad DeLong compellingly argue that given present U.S. economic conditions (where the Fed cannot singlehandedly stabilize the economy), deficit-financed stimulus is actually self-financing. Essentially, if nominal interest rates are below long-run trend real GDP growth adjusted for reduced economic scarring effects and improvements in the cyclical budget deficit resulting from stimulus, a dollar of debt more than pays for itself in the long-run. CBO projects real GDP growth will average 2.4 percent over the next 25 years, whereas the yield on 10-year Treasuries is only 1.55 percent (hovering around a record low); high bang-per-buck fiscal stimulus passes any reasonable cost-benefit analysis test so long as the economy remains mired well below potential in a liquidity trap.
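The Summers/DeLong logic quoted here boils down to comparing the government's borrowing rate with trend growth. A minimal sketch with the two figures in the paragraph (this ignores the multiplier and clawback channels and, as the paragraph does, sets a nominal yield against real growth):

```python
# The two figures quoted above.
growth = 0.024         # CBO's projected average real GDP growth, next 25 years
treasury_10y = 0.0155  # 10-year Treasury yield

# When the borrowing rate runs below trend growth, debt incurred today
# shrinks relative to income over time.
print(treasury_10y < growth)  # True

# A dollar borrowed today and rolled over at the Treasury rate for 25
# years weighs less against income that compounds at trend growth:
relative_burden = ((1 + treasury_10y) / (1 + growth)) ** 25
print(round(relative_burden, 2))  # 0.81
```

Under these assumptions, a dollar of debt loses roughly a fifth of its weight relative to national income over 25 years before counting any of the growth and scarring-avoidance effects the paper emphasizes.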
What Brooks misses entirely is that any value judgment regarding debt boils down to the opportunity cost of debt and the value added of the tax or spending program being deficit-financed—particularly in ways that affect the ability to repay debt.
Example 1: The Bush-era tax cuts were entirely deficit-financed, adding some $2.6 trillion to the public debt between 2001 and 2010, while failing to produce even mediocre economic performance (the 2001-2007 Bush economic expansion was the weakest since World War II). Numerous economists believe that, between their dismal efficacy and the reduction in national savings they induced, the Bush tax cuts decreased long-run potential output.
Example 2: If the rate of return on infrastructure spending exceeds the cost of financing, it makes sense to borrow money to build a bridge, or better yet repair a bridge (the cost of repair increases with time and preventative maintenance is much more cost effective than rebuilding infrastructure from scratch). As my colleague Ethan Pollack points out, the case with infrastructure is a clear cut “win-win-win” because it raises potential future output, making the incurred borrowing relatively easier to pay back, and infrastructure spending increases actual present output and employment (reducing cyclical deficits). And today, the opportunity cost of infrastructure investment is at historic lows.
There is good debt and wasteful debt alike, just as both constructive editorializing and gibberish can be found scrawled across op-ed pages. Brooks’ failure to recognize any economic context or nuance only feeds the misguided debt hysteria that has pushed most of Europe back into recession and encouraged U.S. policymakers to give up job creation in favor of premature, counterproductive austerity.
Brad DeLong links to what he calls a “DeLong-Summers ‘Simplistic Keynesians’ Smackdown Watch”—a piece by Ken Rogoff calling “dangerously facile” those who argue for the “simplistic Keynesian remedy that assumes that government deficits don’t matter when the economy is in deep recession; indeed, the bigger the better.”
Since “simplistic Keynesianism” is a pretty good description of my diagnosis and remedy for today’s U.S. economic troubles, and since I don’t want to ever be “dangerously facile,” I read both the Rogoff commentary and the Reinhart, Reinhart, and Rogoff (2012) paper that it links to.
I did learn one thing—it turns out that my earlier post about the likely provenance of a Rogoff claim about the potential damage from high public debt isn’t quite right—but the new provenance of this claim isn’t right either.
There’s not much particularly new in either piece. Instead, they recycle the finding that, looked at over several centuries, there is an odd threshold of debt-to-GDP ratios—90 percent—that sees growth beneath the threshold run about 1 percentage point higher per year than growth above the threshold. They then do the arithmetic and argue that every year the public debt-to-GDP ratio is over 90 percent is a year of GDP growth 1 percentage point lower than it would otherwise be, and voilà, the damage from high debt has been documented.
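The arithmetic being recycled is trivial to reproduce. A minimal sketch, assuming an illustrative 23-year high-debt episode and a baseline growth rate of roughly 2.4 percent (the episode length is a hypothetical chosen for illustration):

```python
def output_gap_after(years, g=0.024, penalty=0.01):
    """Ratio of output on a path with growth lower by `penalty`
    (1 percentage point) to output on the baseline path."""
    return ((1 + g - penalty) / (1 + g)) ** years

# Compounding 1 point lower growth over an illustrative 23-year
# high-debt episode leaves output roughly 20 percent below baseline.
print(round(1 - output_gap_after(23), 2))  # 0.2
```

The compounding is what makes the claimed damage sound large; the correlation-versus-causation problem discussed below is what makes it unpersuasive.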
Or not. We’ve already noted why we think this threshold, while it might be an interesting (if odd and deeply atheoretical) curiosity, has no relevance to current U.S. policy debates (and yet somehow the 90 percent scare-mongering won’t stop—see David Brooks’ latest invocation of it).
The main reason for this judgment is that the causality between slow growth and high public debt runs strongly in both directions. There have almost surely been times when exogenous decisions to add to public debt have hampered countries’ growth. But there have also surely been times (and many more of them, I would guess) when slow growth has led directly to rising debt-to-GDP ratios. And when this is the case, noting a simple negative correlation between GDP growth and a particular debt-to-GDP threshold tells us nothing about how dangerous—or, more likely, useful—a policy of further fiscal support would be.
And, there is no doubt that the increase in public debt over the past four years in the U.S. is directly the result of the Great Recession, and not a cause of it. Further, adding to this public debt going forward (so long as it was intelligently spent on job creation) would not only not harm the economy, it would reduce the debt/GDP ratio.
To be blunter, applying results gleaned from the more than 80 percent of the country-years in their high-debt sample that began before World War II, as well as from the other clear-as-day cases where high debt was driven by slow growth (Japan in the 1990s and 2000s), does nothing to aid policy analysis about fiscal support in the here-and-now.
The authors even miss an obvious clue regarding those episodes in their data where high debt is driven by slow growth—the failure of elevated public debt to lead to upward pressure on interest rates. High public debt-to-GDP ratios combined with no upward pressure on interest rates is a key tell that it’s likely that below-potential growth is driving the debt ratio and not vice-versa.
Further, if interest rates are not pushed up by rising debt-to-GDP ratios, there is no mechanism for rising debt to impede growth. The authors gloss over this—just noting that “the growth-reducing effects of public debt are apparently not transmitted exclusively through high real interest rates.” More likely, the growth-reducing effects of public debt are simply non-existent when economies are deeply depressed.
Lastly, the paper makes a mistake that I think is key to understanding why policymakers keep getting blindsided by bad news (like the last two months’ poor job growth) that just should not be that surprising: it assumes that economies naturally heal themselves from recessions, and quite quickly.
One hallmark of the first 30 years after World War II was the “countervailing power” of labor unions (not just at the bargaining table but in local, state, and national politics) and their ability to raise wages and working standards for members and non-members alike. There were stark limits to union power—which was concentrated in some sectors of the economy and in some regions of the country—but the basic logic of the postwar accord was clear: Into the early 1970s, both median compensation and labor productivity roughly doubled. Labor unions both sustained prosperity, and ensured that it was shared. The impact of all of this on wage or income inequality is a complex question (shaped by skill, occupation, education, and demographics) but the bottom line is clear: There is a demonstrable wage premium for union workers. In addition, this wage premium is more pronounced for lesser skilled workers, and even spills over and benefits non-union workers. The wage effect alone underestimates the union contribution to shared prosperity. Unions at midcentury also exerted considerable political clout, sustaining other political and economic choices (minimum wage, job-based health benefits, Social Security, high marginal tax rates, etc.) that dampened inequality. And unions not only raise the wage floor but can also lower the ceiling; union bargaining power has been shown to moderate the compensation of executives at unionized firms.
Over the second 30 years post-WWII—an era highlighted by an impasse over labor law reform in 1978, the Chrysler bailout in 1979 (which set the template for “too big to fail” corporate rescues built around deep concessions by workers), and the Reagan administration’s determination to “zap labor” into submission—labor’s bargaining power collapsed. The consequences are driven home by the two graphs below. Figure 1 simply juxtaposes the historical trajectory of union density and the income share claimed by the richest 10 percent of Americans. Early in the century, the share of the American workforce which belonged to a union was meager, barely 10 percent. At the same time, inequality was stark—the share of national income going to the richest 10 percent of Americans stood at nearly 40 percent. This gap widened in the 1920s. But in 1935, the New Deal granted workers basic collective bargaining rights; over the next decade, union membership grew dramatically, followed by an equally dramatic decline in income inequality. This yielded an era of broadly shared prosperity, running from the 1940s into the 1970s. After that, however, unions came under attack—in the workplace, in the courts, and in public policy. As a result, union membership has fallen and income inequality has worsened—reaching levels not seen since the 1920s.
By most estimates, declining unionization accounted for about a third of the increase in inequality in the 1980s and 1990s. This is underscored by Figure 2, which plots income inequality (Gini coefficient) against union coverage (the share of the workforce covered by union contracts) by state, for 1979, 1989, 1999, and 2009. The relationship between union coverage and inequality varies widely by state. In 1979, union stalwarts in the northeast and Rust Belt combined high rates of union coverage and relatively low rates of inequality, while just the opposite held true for the southern “right to work” states. A large swath of states—including the upper Midwest, the mountain west, and the less urban industrialized states of the northeast—showed lower-than-national rates of inequality at union coverage rates a bit above or a bit below that of the nation. More importantly, as we plot the same relationship in 1989, 1999, and 2009, those states move as a group towards the less-union coverage, higher-inequality corner of the graph. The relationship between declining union coverage and rising inequality is starkest in the earlier years (between 1979 and 1989). After 1999, union coverage has bottomed out in most states and changes in the Gini coefficient at the state level are clearly driven by other factors, such as financialization and the real estate bubble.
Adding to Joe Nocera’s piece: A revival of the labor movement is necessary to preserve our democracy
It was good to see Joe Nocera’s column today affirming Tim Noah’s recent call for a revival of the labor movement, saying “if liberals really want to reverse income inequality, they should think seriously about rejoining labor’s side.” I would add that such a revival is necessary to rebuild the middle class and to preserve our democracy.
I’m proud that EPI has provided a lot of great research addressing the role of unions in the economy, ranging from: the impact on firms and competitiveness; the impact on the wages and benefits of union and nonunion workers; the impact on wage inequality; the flawed nature of the current process for choosing union representation; and much more. Here’s a brief guide:
- See a talk by Paul Krugman addressing the problem of income inequality, including the problem of eroded unionization. Krugman expresses some of the same sentiment as Nocera, paraphrasing “we didn’t know what we were missing until they were gone.” Pieces by Tom Kochan and Beth Shulman, and by Harley Shaiken, echo his arguments.
- Testimony by me, and another by Rutgers professor Paula Voos, articulate the importance of unions for American workers and the role unionism can play in rebuilding the middle class.
- Matt Vidal and David Kusnet provide 12 case studies from a variety of industries, including nursing, meatpacking, and janitorial, to show how unions can benefit workers and communities while making companies more productive. They also illustrate the damage inflicted when union representation is removed.
- Professor John DiNardo of the University of Michigan describes his and other research showing that unionization does not cause businesses to fail. Using a ‘regression discontinuity’ technique, DiNardo compares firms where workers barely voted to unionize with firms where they barely voted against it; because such close elections turn out essentially at random, the near-losers are a very good “control group” for firms where the workers have just won the right to bargain collectively. DiNardo says: “This research provides evidence that this causal effect of union recognition is zero and has been zero since at least the 1960s, which is how far back we can go with the available data. In short, the biggest fear voiced by employer groups regarding unionization—that it will inevitably drive them out of business—has no evidentiary basis.”
- EPI Research and Policy Director Josh Bivens shows why unions are not to blame for the loss of U.S. manufacturing jobs, and that in fact, the real culprits are manipulated currency rates that make U.S.-made goods overly expensive. A dysfunctional health care system that burdens responsible employers with outsized costs, and high executive and managerial salaries, also contribute to any lack of competitiveness.
- Richard Freeman of Harvard University, perhaps the world’s leading labor market economist (I think so at least), writes that an overwhelming majority of workers say in surveys that they want a stronger collective voice on the job, and believe that a union would be good for their firm as well. Freeman’s findings “suggest that if workers were provided the union representation they desired in 2005, then the overall unionization rate would have been about 58%.”
- To get a picture of the broken process of union representation elections where employers freely intimidate workers, read Kate Bronfenbrenner’s report. Private-sector employer opposition to workers’ efforts to form unions has intensified and become more punitive than in the past. Employers are more than twice as likely to use 10 or more tactics—including threats of and actual firings—in their campaigns to thwart workers’ organizing efforts.
- Last, see the statement in support of the Employee Free Choice Act by me, along with Richard Freeman of Harvard and Frank Levy of MIT, citing the recent unprecedented growth of inequality in household income and the urgent need to give workers more bargaining power. Forty prominent economists signed the original statement, including three Nobel Prize winners, agreeing that the reform would be an overall benefit to the economy, and would provide a boost to workers when they need it most. Other economists later added their voices by signing the same statement, which resulted in close to 200 more signatories. The statement is available for download in both its original and updated versions.
Washington Post columnist Richard Cohen recently illustrated how much overt racial bigotry against blacks has been reduced. He used the case of Wesley A. Brown, the first African American graduate of the United States Naval Academy. Brown was the first to “successfully endure the racist hazing that had forced the others to quit.” When Brown joined the Naval Academy, if blacks dared to enroll, they were harassed to force them out. Today, there is a building in the Naval Academy named in Brown’s honor.
Cohen is correct. Today, black children know that there is no occupation that is categorically off limits to them. They can grow up to be president, an idea that seemed farfetched just a few years ago.
On the other hand, the picture Cohen painted would have looked starkly different had he focused less on interpersonal discrimination and more on institutional discrimination. By “institutional discrimination,” I am referring to the ways that the normal policies and practices of social institutions like the educational system, the labor market, and the criminal justice system serve to maintain racial inequality.
Cohen celebrates the end of legally enforced segregation, but fails to acknowledge that we still live with a great deal of de facto racial segregation. A large number of our neighborhoods are racially segregated, which means that many of our schools are racially segregated. Segregation concentrates black children not merely in majority-black schools, but also in schools where a majority of students are in poverty. While, in theory, there are no limits facing black children, children born into economically disadvantaged families, in economically disadvantaged communities, who then attend economically disadvantaged schools have the odds stacked against them.
One reason black families are disproportionately economically disadvantaged is because blacks are still about twice as likely as whites to be unemployed. This was the case in the 1960s, and it remains true today. This basic relationship holds true at all education levels. Black high school dropouts are about twice as likely to be unemployed as white high school dropouts. Black college graduates are about twice as likely to be unemployed as white college graduates. Research shows that employers still have a preference for hiring whites over blacks.
Our criminal justice system is another site where policies and practices systematically disadvantage blacks. As the book Dorm Room Dealers illustrates, white middle-class youth use illicit drugs and sell illicit drugs, but this population is much, much less likely to be incarcerated for these offenses than are poor black youth engaging in the same activities. Michelle Alexander’s The New Jim Crow goes into greater detail about how our illicit drug policies and practices produce institutional discrimination against African Americans.
Cohen is correct. There is no better time to be black in America than today. While this is a true statement, we also still have a long way to go before there is equal opportunity for all.
Business groups and conservatives constantly attack the federal government for overregulating. They claim that businesses are “drowning in a sea of regulations” and that job creation and profitability are being sacrificed in favor of a nanny state. Workplace safety rules, in particular, have been a favorite target of the Chamber of Commerce and other business associations, but the fact is that the federal government regulates too little, not too much. Most of the 4,500 workplace fatalities and 50,000 occupational disease deaths each year could be prevented with better rules, more diligent employers, and better enforcement by the Occupational Safety and Health Administration.
The Center for Public Integrity has begun publishing Hard Labor, a series of articles exploring this reality, and the first two stories make for compelling reading. One describes the consequences of OSHA’s inability to issue a combustible dust standard to protect against the kind of fires and explosions that have occurred more than 450 times since 1980, killing 130 workers and injuring more than 800. Factory managers ignore hazards in plain sight—for example, piles of metallic dust that crackle with static electricity and ignite into small fires every week. Nothing is done to prevent the build-up, despite the past occurrence of catastrophic explosions at the same company that left some workers dead and others with gruesome, debilitating injuries. Finally, the critical elements come together and instead of a small fire, another terrible explosion occurs as airborne dust ignites, and more workers die from horrendous burns.
OSHA has no standard that addresses this hazard in spite of the pleas of union representatives and the urgings of the federal Chemical Safety Board, which has jurisdiction to investigate explosions and recommend preventive standards but has no power to issue them. OSHA hasn’t regulated, and workers continue to be burned, disfigured and killed unnecessarily.
Industry representatives resist any new standard, reflexively making the same tired arguments about flexibility and cost they always make. But as the story points out, in the case of grain dust explosions, an industry that fought OSHA’s efforts to issue a standard now realizes that the standard has saved workers’ lives and saved the companies money. The National Grain and Feed Association, which at one point sued OSHA to block the grain dust rule, recognizes today that the standard was win-win regulation, and that the grain industry is financially better off as a result of the rule and the unprecedented reduction in deaths and injuries it achieved.
The second Hard Labor story focused on the weakness of OSHA’s enforcement of the rules it already has on the books. Violations that cause the death of a worker result in an average fine of less than $9,000, and companies contest every citation, no matter how justified. The chances of an executive being indicted as a criminal for intentional or recklessly indifferent acts or omissions that kill their employees are infinitesimal, and the penalties are tougher for someone who harasses a wild burro on federal land than for an employer who sends a worker into a known hazard that causes the worker’s death.
The Center for Public Integrity is doing a real service by publishing these stories that reveal just how weak OSHA’s standards and enforcement are, and how light the regulatory burden that OSHA imposes really is. In the case of workplace safety and health, we need more regulation, not less.
Just once, I wish Mary Williams Walsh would write a story about public employee pensions that included key information that isn’t convenient to an agenda of doing away with or greatly reducing public employee pensions. Every story she writes, including her most recent, seems designed to scare the public, make public employees look bad, their unions look greedy, and government administrators seem weak or stupid.
In her most recent piece, Walsh lends great support to those claiming that public pension plans are erring (or even dissembling) in using assumptions about annual rates of return for their assets that are unrealistically high. The further claim is that using more “reasonable” rates of return (i.e., lower ones) will show the “true” crisis in public pensions.
Walsh writes: “The typical public pension plan assumes its investments will earn average annual returns of 8 percent over the long term, according to the Center for Retirement Research at Boston College. Actual experience since 2000 has been much less, 5.7 percent over the last 10 years, according to the National Association of State Retirement Administrators.”
This may seem like bloodless analysis, but it’s not—it’s giving great aid to a bogus argument forwarded by ideologues that are deeply hostile to public pension plans on principle. Because most plans look to be in decent shape based on current actuarial standards that justify assuming 8 percent rates of return, these ideologues have to claim that these assumptions are somehow wrong. But pointing to returns over the past 10 years as evidence of this is ridiculous because it completely ignores the fact that the U.S. and world economies experienced the biggest financial downturn in 80 years! How can Walsh be surprised that returns over a period that included two recessions have been subpar? This isn’t front-page news or news at all. It would be news if returns over that period had met expectations.
Even more serious is Walsh’s distortion of the National Association of State Retirement Administrators’ report, which was a very positive statement about the returns public employee plans have achieved:
Although public pension funds, along with most other investors, have experienced sub-par returns over the past decade, median public pension fund returns over longer periods exceed the assumed rates used by most plans. As shown in Figure 1, median annualized investment returns for the 20- and 25-year periods ended June 30, 2011, exceed the most-used investment return assumption of 8.0 percent. For example, for the 25-year period ended June 30, 2011, the median annualized return was 8.5 percent.
Walsh quotes the professed doubts of Edward McMahon, a fellow at the anti-government Empire Center for New York State Policy, that even a 7 percent return on investment can be safely assumed. But McMahon is not a neutral observer; he’s a right-wing, anti-union ideologue with an agenda to do away with public employee defined benefit pensions altogether. It is not news to me that the Empire Center has long wanted to cut public employee benefits and compensation, but Walsh would have done her readers a service by mentioning that agenda.
Just once I wish Walsh would cite Dean Baker’s opposing analysis, which is based on the fact that the stock market is currently priced low enough, as measured by the ratio of prices to earnings, to justify expected returns of 8 percent or more. As Baker, the co-director of the Center for Economic and Policy Research, points out, individuals who sold Social Security privatization with visions of never-ending 8-10 percent stock market returns back when price-to-earnings ratios were at historic highs (hence making inflated returns hugely unlikely) now have the gall to attack pension plans that expect returns of 7.5 percent when price-to-earnings ratios have returned to historic norms (norms generally consistent with long-run returns of 8 percent).
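A rough way to see Baker’s logic is a Gordon-growth-style calculation, in which long-run expected stock returns move inversely with the price-to-earnings ratio. The payout share and nominal growth rate below are illustrative assumptions of mine, not Baker’s actual inputs:

```python
def expected_nominal_return(pe_ratio, payout=0.6, nominal_growth=0.05):
    """Gordon-growth-style estimate: expected long-run nominal return
    ~= (payout share * earnings yield) + nominal growth."""
    return payout / pe_ratio + nominal_growth

# At a roughly historic-norm P/E of 15 this yields about 9 percent;
# at a bubble-era P/E of 30, only about 7 percent.
print(round(expected_nominal_return(15), 3))  # 0.09
print(round(expected_nominal_return(30), 3))  # 0.07
```

The point of the sketch: the same arithmetic that made 8-10 percent projections implausible at bubble-era valuations makes 7.5-8 percent projections reasonable at historically normal ones.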
What does this detour into what people claimed during fights over Social Security privatization have to do with the attack on public pensions? Earlier, I referred to ideologues like McMahon pushing the claim that projected returns of 8 percent for public pension plans are unrealistically high. How do I know this claim is ideology instead of professional judgment? Well, because in 2003 McMahon claimed that replacing public pensions with 401(k) plans would be fair for employees because they could expect returns of 9.75 percent.
In short, what at first seem like wonky debates over appropriate rates of return have actually degenerated into misinformation campaigns waged by committed opponents of public pensions. The rates these plans are assuming today are in line with actuarial practice and (much more importantly) economically reasonable. I’m sorry that this view doesn’t advance the much juicier story of a coming fiscal crisis, but it’s based on the facts.
A majority of Alabama’s politicians apparently believe that they can improve their state economy by chasing away the undocumented workers who live there. By making them criminals (turning them into illegal aliens), denying them basic services like water and electricity, and terrifying their families, they hope to rid the state of people they see as a burden on taxpayers and competitors for scarce jobs. Well, after a year’s application of this medicine (June marks the first anniversary of the passage of HB 56), how’s the experiment coming along? Has the economy been jump-started or even improved?
Let’s start with job creation. Has Alabama created more jobs than its neighbors over the last year? No; in fact, it’s both below the regional average and well below the national average. Alabama’s employment growth has been only one-seventh the national average (0.2 percent vs. 1.4 percent). The United States has regained about 43 percent of the jobs lost at the bottom of the recession; Alabama has only recovered about 9 percent of the jobs it lost.
Figure 1: Source: EPI analysis of Local Area Unemployment Statistics public data sets
Has it made the state or its workers richer or better off? No, apparently not. Even with fewer workers, personal income per worker fell in Alabama during the two quarters that followed enactment of HB 56, while in the neighboring states, it was unchanged.
Figure 2: Source: EPI analysis of Current Employment Statistics and Bureau of Economic Analysis National Income and Product Accounts public data
How about unemployment? Has chasing away all of those immigrants opened up tens of thousands of existing jobs for native Alabamans and cut the number of unemployed more than Alabama’s neighbors? No, not exactly. Compared to all four of its neighboring states (Tennessee, Georgia, Mississippi, and Florida), Alabama’s unemployment fell a little faster over the past year—2 percentage points vs 1.7 percentage points—but Alabama lost 52,000 workers from its labor force in less than a year while the labor force grew in the four neighboring states. Alabama doesn’t have a positive story to tell.
Figure 3: Source: EPI analysis of Local Area Unemployment Statistics public data series
Far from being an economic panacea, the early returns suggest that HB 56 has not been good for Alabamans in terms of job creation or personal income. Immigrant-bashing isn’t the path to prosperity.
Conservatives say CEO compensation levels are fine now that it takes 10 hours to earn a typical worker’s annual compensation
There have been some interesting responses by conservatives to the new data Natalie Sabadish and I have released on the CEO-to-worker pay ratio. Apparently, our study reporting that CEO pay has fallen during the fiscal crisis and is far down from the dizzying heights of the tech bubble in 2000 is taken to mean that any concern about the growth of top incomes is now out-of-date and inappropriate.
Conservative columnist Wynton Hall at Breitbart.com writes:
“A graph by the Economic Policy Institute shows that while the relative pay of CEOs shot up in the 1990s, it has since fallen by nearly half, a trajectory that hardly supports the class warfare rhetoric of Occupy Wall Street and the Obama Administration.”
And Greg Mankiw also touted our findings, writing, “The relative pay of CEOs skyrocketed during the 1990s and has since fallen by about half.”
The attention and the recognition of the accuracy of our empirical work are much appreciated. A few comments are in order. First, it seems that these folks are celebrating that a non-problem, at least in their view, has been solved. After all, I don’t recall conservatives being upset by the roughly $20 million CEO pay packages in 2000 or the $18 million CEO packages in 2007. So, it is hard to understand why they feel so gratified by CEO compensation packages averaging $11 or $12 million in 2011.
Second, while it is true that the CEO-to-worker compensation ratio fell from 411.3 in 2000 to 209.4 in 2011, CEO compensation remains spectacularly high: at that ratio, the average CEO earns in about 10 hours what a typical worker earns in an entire year. Moreover, as we reported in our study (page 4):
“CEO compensation in 2011 is very high by any metric, except when compared with its own peak in 2000, after the 1990s stock bubble. From 1978–2011, CEO compensation grew more than 725 percent, substantially more than the stock market [which grew less than 400 percent] and remarkably more than worker compensation, at a meager 5.7 percent.”
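The “10 hours” figure is simple arithmetic, assuming a standard 2,080-hour (52 weeks at 40 hours) work year:

```python
ratio_2011 = 209.4         # CEO-to-worker compensation ratio from the study
work_year_hours = 52 * 40  # assumed standard full-time work year

# Hours a CEO must work to earn a typical worker's annual compensation.
hours_for_ceo_to_earn_worker_year = work_year_hours / ratio_2011
print(round(hours_for_ceo_to_earn_worker_year, 1))  # 9.9
```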
The trend in CEO compensation since 1965 is shown in the figure below. Two measures are presented, one including the value of stock options granted and the other the value of stock options exercised. By either measure, the growth between 1978 ($1.3 or $1.4 million), 1989 ($2.5 or $2.6 million), or 1995 ($5.6 or $6.2 million) and 2011 ($11.1 or $12.1 million) is pretty astounding and very hard to justify. Exactly how does one justify or explain that CEO compensation has doubled since 1995?
Figure 1: Note: “Options granted” compensation series includes salary, bonus, restricted stock grants, options granted, and long-term incentive payouts for CEOs at the top 350 firms ranked by sales. “Options exercised” compensation series includes salary, bonus, restricted stock grants, options exercised, and long-term incentive payouts for CEOs at the top 350 firms ranked by sales. Sources: Authors’ analysis of data from Compustat ExecuComp database, Bureau of Labor Statistics Current Employment Statistics program, and Bureau of Economic Analysis National Income and Product Accounts Tables
I was on PBS’ NewsHour last night, talking austerity. I’m against it. Ken Rogoff from Harvard was also on, and he’s actually against it too. One point of disagreement came up, though, when I made the argument that public debt incurred when the economy is depressed causes no economic damage (in fact, it acts instead as a useful palliative).
Rogoff disagreed in principle and then said something kind of startling—that increases in deficits and debt could lead to incomes in the near-ish future (i.e., less than 30 years from now) that are “20 percent lower.”
I’m assuming this claim has some relation to a Congressional Budget Office estimate of the effect of one particular fiscal scenario (the “alternative fiscal scenario,” or AFS) that projects the effects of large increases in budget deficits in coming decades on economic growth (see the table below from the CBO report, p. 28). The mechanism is that rising deficits increase interest rates, which lead to lower private investment and a stronger dollar, which leads in turn to higher trade deficits and rising foreign debt.
Set aside for a second whether there are problems with these calculations (both in relying on the AFS to make predictions and in how to apportion the impact of higher interest rates between crowded-out domestic investment and increased trade deficits). The more salient point is simply that there is nothing in the CBO analysis that rebuts my larger point: Potential damage from increased public debt does not materialize when the debt is taken on while the economy is depressed. Here’s the CBO on the issue (p. 21 in the linked report):
“… when the economy has substantial unemployment and unused factories, offices, and equipment, federal budget deficits—and thus additional debt—generally boost demand, thereby increasing output and employment relative to what would occur with a balanced budget. … CBO’s estimates in this chapter [ed: estimates about the output-depressing effects of budget deficits and extra public debt] do not take those short-run effects on demand into account. Indeed, the estimates reflect the assumption that over the long run, output is always at its potential level”
In short, the potential output-depressing effects of budget deficits that the CBO is estimating hold only when output is at its potential level. Or to say it another way (the exact way I said it earlier): extra public debt incurred when the economy is depressed, i.e., when output is not “at its potential level,” causes no economic damage.
And in fact, when extra public debt is incurred while the economy is depressed, the boost it gives to economic output (if spent wisely) can easily be large enough to actually reduce the overall debt/GDP ratio, both by boosting the denominator and by spurring enough additional tax collections to partly self-finance the extra debt. How can we be so sure that the extra debt incurred in recent years hasn’t led to any of the downsides from crowding out or upward pressure on the value of the dollar? Simple: interest rates have not risen. And remember, the entire economic chain wherein incurring public debt leads to crowding-out and trade deficits runs through upward pressure on interest rates. And since the depressed economy is putting ferocious downward pressure on interest rates, there is no damage done.
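The debt/GDP arithmetic can be illustrated with a stylized example. Every number below (the debt and GDP levels, the fiscal multiplier, the tax share) is hypothetical, chosen only to show that when the multiplier is large enough, the ratio can fall even as the debt level rises:

```python
# Stylized (hypothetical) numbers illustrating the debt/GDP point:
# in a depressed economy, debt-financed spending can raise GDP enough
# that the debt-to-GDP ratio falls even as the debt level rises.
debt, gdp = 10.0, 15.0   # trillions, illustrative only
stimulus = 0.5           # extra debt-financed spending
multiplier = 1.5         # assumed fiscal multiplier in a depressed economy
tax_share = 0.2          # share of extra output collected as taxes

extra_output = stimulus * multiplier
extra_taxes = extra_output * tax_share
new_debt = debt + stimulus - extra_taxes   # part of the debt self-finances
new_gdp = gdp + extra_output

print(f"before: {debt/gdp:.1%}  after: {new_debt/new_gdp:.1%}")
# before: 66.7%  after: 65.7%
```

With these assumed parameters, taking on $0.5 trillion of extra debt leaves the debt/GDP ratio lower than before, because the denominator grows faster than the numerator.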
Until the economy recovers and this downward pressure on interest rates relents, additional increments of public debt do not hurt, and scare-stories about the 20 percent income loss possible 25 years from now because of deficits do not change this calculus at all.
For the past few months, I (and others in favor of allowing the Bush-era tax cuts for the rich to expire) had worried that once we got into the lame duck session, congressional Democrats would let their position slip and start supporting extending tax cuts for couples with income above $250,000 (individuals above $200,000). But I thought that, at the very least, it wouldn’t happen until the last month or two, when the pressure was really being brought to bear.
Turns out, it happened a lot sooner than that. Yesterday, House Minority Leader Nancy Pelosi (D-Calif.) signaled support for extending the Bush-era tax cuts for income under $1 million, letting only the cuts above that threshold expire. Pelosi explained, “It is unacceptable to hold tax cuts for the middle class hostage to extending multi-billion dollar tax breaks for millionaires, Big Oil, special interests, and corporations that ship jobs overseas.”
Yes, I understand that “tax breaks for millionaires” sounds better in a press release than “tax breaks for households with income over $200,000, or $250,000 for couples.” And perhaps she felt forced into this, worrying that she might not be able to hold her caucus at the $250,000 mark, opting instead to retreat to more defensible terrain before the battle royal later this year.
But this shift has a number of very disturbing consequences:
1) Slipping to the right. This will now be the left pole of the debate. The Democratic Party has moved from opposing the Bush-era tax cuts to supporting 80 percent of them, to now supporting nearly 90 percent of them. And yet these concessions have been given for free, without any countervailing progressive demands. This is just more evidence that the tax debate is shifting further to the right. Pelosi may have done this for short-term advantage, but in the long run, these shifts tend to be very difficult to reverse.
2) More spending cuts. Given that the Bush-era tax cuts cost $2.6 trillion over the last decade and will cost over $4 trillion in the next decade, this concession will put even greater pressure on the budgets of vital safety net and public investment programs.
3) The definition of “middle class” is losing relevance. The previous definition of the middle class as anyone under the $250,000 threshold was already a severe stretch. After all, you’re talking about people who (1) make five times what the typical American household makes (closer to $50,000 a year in combined income), and (2) have incomes higher than those of 98 percent of American households. To now extend the definition of middle class to people who make 20 times what the average household makes, and whose income is greater than that of over 99 percent of households, is to define away the entire concept of the middle class.
4) Bigger tax cuts for the highest-income Americans. This shift isn’t just a huge boon to upper-income households making between $250,000 and $1 million; in fact, about half of these additional tax cuts would go to households with over $1 million in income. This is because the cut-off, be it $250,000 or $1 million, marks how much of a taxpayer’s income keeps the tax cut. Under the previous Democratic position (which remains President Obama’s public position), if you make over $250,000, you still keep your tax cuts on all your income below that threshold and only pay higher rates on income above it. Revising the threshold up to $1 million means that all income between $250,000 and $1 million also retains its tax cuts, and as it turns out, about half of those tax cuts go to people with income over $1 million.
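The threshold mechanics described in point 4 lend themselves to a quick illustration. The sketch below is hypothetical: it uses a single illustrative rate increase (roughly the 35 to 39.6 percent top-rate change) rather than the actual multi-bracket Bush-cut schedule, so the dollar figures are not real estimates. It only shows why a household with $2 million of income gains far more from moving the threshold to $1 million than a household making $300,000 does:

```python
# Hypothetical illustration of why raising the expiration threshold from
# $250k to $1M also cuts taxes for millionaires: rates rise only on the
# slice of income ABOVE the threshold, so everyone keeps the cuts below it.
def extra_tax_owed(income, threshold, rate_increase=0.046):
    """Added tax when cuts expire only above `threshold`.

    The rate bump is illustrative (roughly the 35% -> 39.6% top-rate change),
    not the full multi-bracket schedule.
    """
    return max(income - threshold, 0) * rate_increase

for income in (300_000, 2_000_000):
    at_250k = extra_tax_owed(income, 250_000)
    at_1m = extra_tax_owed(income, 1_000_000)
    print(f"income ${income:,}: saves ${at_250k - at_1m:,.0f} from the higher threshold")

# income $300,000: saves $2,300 from the higher threshold
# income $2,000,000: saves $34,500 from the higher threshold
```

Under these assumed numbers, the millionaire household gains roughly fifteen times as much per household from the higher threshold, which is consistent with the claim that about half of the additional tax cuts flow to incomes over $1 million.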
As Jared Bernstein said, we’ll let the game theorists argue over whether this helps or hurts the Democrats’ negotiating position. But even if this does give the Democrats the upper hand in negotiations, what then? The whole point is to enact a tax code that can adequately fund the social safety net and public investments that we need to create a stronger economy with equal opportunity for all. Retaining the tax cuts for most people making over $250,000, and reducing the tax increase that people making over $1 million would face, makes that job significantly more difficult, if not impossible.