Robert Samuelson says the economy isn’t allowed to have the Keynesian cures it needs because of … Keynesians (from the 1960s)
Robert Samuelson’s Washington Post column today is, to be charitable, baffling. He mostly agrees that Keynesians have it right about what the economy needs today: more stimulus, or fiscal support, or spending, or whatever you want to call it. But in his eagerness to tell us that it is actually Keynesians’ own fault that we can’t have it, he blames … John F. Kennedy, for destroying the nation’s fiscal norms so completely that we somehow can’t afford economic stimulus five decades later.
To be clear, I buy none of this argument that anything keeps us from pursuing more expansionary policy today except for today’s policymakers (and I particularly don’t buy the part of Samuelson’s argument about some magical and well-defined “threshold” of public debt above which we just can’t afford more stimulus and the economy tanks). And even if there were some reason to think that rising debt/GDP ratios hamper future policymakers’ responses to recessions, the fact that public debt rapidly shrank as a share of overall GDP during the 1960s really should give Samuelson at least some pause about his thesis.
But even if I did believe that some past president had destroyed the historic norm of fiscal probity that preceded his inauguration, I have to ask: Why Kennedy, when there are much clearer suspects in our more recent past? The figure below shows net lending by the federal government for the six quarters before the inaugurations of Kennedy, Ronald Reagan, and George W. Bush, as well as what happened during their two terms in office. (Why six quarters before? I wanted some measure of the alleged fiscal “norms” they inherited, and the Bureau of Economic Analysis data the chart is based on starts in the middle of 1959, so this simplified my choice.)
Again, I actually think concern about budget deficits per se is way overblown in policy debates, for lots of reasons. For example, the budget is affected by the business cycle, which ran differently for the three presidents compared; strikingly, though, all three had recessions early in their terms, and by the time their tenures ended the economy was either back in recession (Bush) or within one year (Kennedy) or two years (Reagan) of reentering one. Oh, and wars: wars affect budget deficits too.
But if you’re making the argument that running deficits that are larger than the historic norms you inherited is some mammoth economic sin, I ask again: Why Kennedy and not Reagan or Bush?
The point of Samuelson’s column is pretty obviously to blame Keynesians for today’s troubles even though they are exactly right about how to solve them.
Republican presidential nominee Mitt Romney is stirring controversy with his equivocation over whether or not the individual mandate in the Affordable Care Act (ACA)—and hence the mandate in his Massachusetts health care reform, the model for much of ACA—is a tax or a penalty. But Romney was unequivocal about one thing in his response to the Supreme Court’s decision to uphold the ACA—and unequivocally dishonest—when he claimed: “ObamaCare adds trillions to our deficits and to our national debt, and pushes those obligations onto coming generations.”
This is patently false, and the former Massachusetts governor should know better. ACA is the most substantial piece of deficit-reduction legislation of the past decade, if not decades. Beyond the first decade, when ACA is gradually being implemented, health reform is projected to lower annual budget deficits by roughly half a percent of GDP, according to the Congressional Budget Office (CBO). Put in perspective, half a percent of projected GDP for 2022 is $125 billion; if ACA is fully implemented, we’re looking at well over $1 trillion of net deficit reduction in the second decade. Passage of ACA was the largest force driving CBO’s dramatic recent improvements in long-term public debt projections: between 2009 and 2010 (pre- and post-ACA enactment), its extended baseline projection for public debt in 2083 was revised sharply downward from 306 percent of GDP to just 111 percent—a decrease of nearly two-thirds. Since those estimates, ACA is likely to produce even more long-term deficit reduction because the long-term care insurance program (CLASS Act) has been scrapped and some states may be sufficiently principled and foolish to refuse tens of billions of federal dollars for the Medicaid expansion. (Note: neither is a policy success in my book.)
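The arithmetic behind those figures is easy to check. A minimal sketch, where the 2022 GDP level is implied by the numbers above ($125 billion being half a percent of GDP) and the roughly 4 percent nominal growth rate is my own assumption, not a CBO figure:

```python
# Back-of-the-envelope check of the second-decade deficit reduction.
# Assumption (mine, not CBO's): ~4% annual nominal GDP growth after 2022.
gdp_2022 = 125e9 / 0.005          # implies GDP of $25 trillion in 2022
nominal_growth = 0.04             # assumed nominal GDP growth rate

# Savings of 0.5% of GDP each year for a decade, with GDP growing.
savings = sum(0.005 * gdp_2022 * (1 + nominal_growth) ** year
              for year in range(10))
print(f"Ten-year savings: ${savings / 1e12:.2f} trillion")  # roughly $1.5 trillion
```

Even with flat GDP the total would be $1.25 trillion, so “well over $1 trillion” is a conservative reading of the half-percent-of-GDP projection.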
The Obama administration announced yesterday that it has filed a complaint at the World Trade Organization (WTO) with China over its tariffs on large vehicles exported from the United States to China. This is the seventh complaint filed by the administration against China, and the White House noted that “the previous six have all been successful.”1 The Obama administration should be applauded for its continuing support of the U.S. auto industry, and for this action, which will help preserve U.S. jobs supported by about $3 billion of U.S. exports in 2011.
Much more needs to be done to stop unfair trade and industrial policies in China’s auto industry, which the Chinese government has targeted as a “pillar industry” for development. Between 2001 and 2011, according to a report by EPI Research Associate Usha Haley, “the Chinese auto parts industry has received about $27.5 billion in subsidies.” U.S. imports of auto parts (including tires) increased more than 600 percent between 2001 and 2011, and are on track to reach $14.5 billion in 2012. The rapid growth of subsidized and unfairly traded auto parts from China puts at risk every job directly and indirectly supported by the U.S. auto-parts industry, which supported 1.6 million such jobs in 2009, with jobs at risk in every state.
Adding insult to injury, China continues to manipulate its currency. This magnifies the benefits of subsidies and other unfair trade policies that benefit China’s auto-parts exports. Currency manipulation artificially reduces the costs of China’s exports and inflates the costs of exports from the United States (and other countries) in China and all other countries where they compete with China. I estimated last year that a 25-to-30 percent appreciation of China’s yuan and other manipulated Asian currencies would support the creation of up to 2.25 million U.S. jobs, stimulating up to $286 billion in GDP growth (1.9 percent) and reducing federal budget deficits by up to $71 billion per year.
This morning’s release of the June 2012 employment situation report by the Bureau of Labor Statistics marked three years since the official start of the recovery from the Great Recession in June 2009. That makes this a useful moment to assess how this recovery stacks up against earlier ones, and to identify obvious policy measures that could ameliorate glaring weaknesses in the current recovery.
The figure below shows that while jobs fell much further and faster during the Great Recession than in the previous two recessions (marked by the lines to the left of the zero point on the x-axis), job growth in the current recovery is similar to job growth by this point in the previous two recoveries, just slightly lagging job growth following the recession of 1990-91 and outpacing job growth following the recovery after the 2001 recession.1
Of course, three years into recovery from those recessions, unemployment was not stuck at levels anywhere near as high as today’s 8.2 percent. But it is important to note that it is the historic length and severity of the Great Recession that explains why the economy is so much worse three years into the current recovery than it was three years into the recoveries of the early 1990s and 2000s, and that there is not something atypically weak about the current recovery relative to those earlier ones.2
Further, the most glaring weakness in the current recovery relative to previous ones is the unprecedented public-sector job loss seen over the last three years. The figure below shows that private sector job growth in the current recovery is close to that of the recovery following the early 1990s recession and is substantially stronger than the recovery following the early 2000s recession.
Yet, as the figure below shows, the public sector has seen massive job loss in the current recovery—largely due to budget cuts at the state and local level—which represents a serious drag that was not weighing on earlier recoveries.
How many more jobs would we have if the public sector hadn’t been shedding jobs for the last three years? The simplest answer is that the public sector has shed 627,000 jobs since June 2009. However, this raw job-loss figure understates the drag of public-sector employment relative to how the economy functions normally.
Imagine going to a fast-food restaurant and unknowingly consuming food contaminated with toxic chemicals. Or buying cooking oil laden with carcinogens. Or purchasing medicine that makes you sick because it contains excessive levels of the heavy metal chromium.
Sadly, these are not hypothetical situations but real problems discovered in recent years in China. The Chinese financial newspaper Caixin Online declares that “these publicized food safety scandals represent only a fraction of [the] unsafe food production practices.” Caixin concludes that food safety in China is “governed by the law of the jungle.”
China’s food safety problems are not limited to small mom-and-pop businesses. The tainted fast food referred to above was the result of a toxic chemical added to chicken served at McDonald’s and KFC restaurants. The carcinogenic cooking oil was found at Wal-Mart. Big business is no guarantee of a safe product.
In the early 20th century, the United States faced food and drug crises similar to the ones in China today. These crises in the United States led to the creation of the Food and Drug Administration and to dramatic improvements in American health and life expectancy. While the United States still has its share of contaminated food, the rate of problems in the United States is far below that of China. As Caixin states, “the size and severity of the food safety crisis” in China “is unique.” There is less toxic food in the United States, in part, because we have a stronger regulatory and enforcement system.
These days, conservatives regularly condemn regulation, but the fact of the matter is that regulations save lives. Last month, my colleague Ross Eisenbrey illustrated how good Occupational Safety and Health Administration (OSHA) standards save lives in the workplace. Experts in China believe that achieving real food safety there will require much more action and involvement by the Chinese government.
In a column about the Supreme Court’s health care decision today, David Brooks offers up a series of recommendations about how to improve the nation’s health care system that he’s positive are not already in the Affordable Care Act (ACA). It’s worth quoting at length because it’s so revealing:
“Crucially, we haven’t addressed the structural perversities that are driving the health care system to bankruptcy. Obamacare or no Obamacare, American health care is still distorted by the fee-for-service system that rewards quantity over quality and creates a gigantic incentive for inefficiency and waste. Obamacare or no Obamacare, the system is still distorted by the tax exclusion for employer-provided plans that prevents transparency, hides the relationship between cost and value and encourages overspending. … Republicans tend to believe that the perverse incentives can only be corrected if we repeal Obamacare and move to a defined-benefit plan — if we get rid of the employer tax credit and give people subsidies to select their own plans within regulated markets.”
Let’s take these in turn:
“Obamacare or no Obamacare, American health care is still distorted by the fee-for-service system that rewards quantity over quality … inefficiency and waste”
Actually, no. The ACA introduced fairly sweeping reforms to payment and delivery systems; see the Independent Payment Advisory Board (IPAB), created precisely to engage the issues Brooks raises.
“Obamacare or no Obamacare, the system is still distorted by the tax exclusion for employer-provided plans that prevents transparency…”
Social Security is a hybrid between a pay-as-you-go and an advance-funded pension system, with most benefits paid out of current taxes but some potentially paid out of trust fund savings. Under ordinary circumstances, the trust fund serves more like a checking than a savings account, though substantial savings may be amassed in advance of bigger-than-usual outlays like the Baby Boomer retirement. This (mostly) pay-as-you-go design allowed Social Security to start paying out benefits shortly after its inception and helps insulate the system from financial market fluctuations.
Nobel Prize-winning economist Robert Solow highlighted the system’s pay-as-you-go properties in a characteristically simple and elegant model presented at a National Academy of Social Insurance gathering last week. Headlining a panel on the Baby Boomers, Solow framed a discussion in terms of basic economic constraints (math-phobes can skip the equations):
1. Labor Productivity x Hours Worked Per Worker x Active Workers = Gross National Product
2. Gross National Product = Labor Income + Capital Income
3. Labor Income = Active Worker Share + Retiree Share
Labor Income = Wage x Hours Worked Per Worker x Active Workers
Active Worker Share = Labor Income – Social Security Taxes
Retiree Share = Social Security Benefit x Retirees
With some rearranging, it follows from Equation 3 that:
4. (Social Security Benefit/Wage) = (Active Workers/Retirees) x (Social Security Taxes/Labor Income)
Solow emphasized that most of the factors in his simple model were determined outside the Social Security system, with the obvious exceptions of the first and last terms in Equation 4.1 This suggests that a decline in the worker-beneficiary ratio requires a reduction in benefits, an increase in the effective tax rate, or both.
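Equation 4 can be verified numerically. A minimal sketch with made-up round numbers (none of these values come from Solow’s presentation; they are chosen only to make the arithmetic visible):

```python
# Pay-as-you-go identity: benefits are paid out of current payroll taxes.
# All numbers below are hypothetical.
wage = 50_000              # average annual wage per active worker
active_workers = 150e6
retirees = 50e6            # worker-beneficiary ratio of 3:1
tax_rate = 0.10            # Social Security taxes as a share of labor income

labor_income = wage * active_workers
taxes = tax_rate * labor_income
benefit = taxes / retirees          # each retiree's annual benefit

# Equation 4: (benefit / wage) = (workers / retirees) x (taxes / labor income)
lhs = benefit / wage
rhs = (active_workers / retirees) * (taxes / labor_income)
print(f"replacement rate: {lhs:.2f}, identity check: {rhs:.2f}")  # both 0.30

# If the worker-beneficiary ratio falls from 3:1 to 2:1 with the tax rate
# fixed, the affordable replacement rate falls from 0.30 to 2 x 0.10 = 0.20.
```

The last comment is the punchline of Solow’s point: with everything else determined outside the system, a falling worker-beneficiary ratio forces lower benefits, higher taxes, or some of each.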
A story in Tuesday’s Wall Street Journal highlights a truth about the economy that Washington’s policy makers have chosen to ignore. The value of our currency relative to our competitor nations’ currencies is a huge driver of factory location. Despite its positive connotations, a strong dollar is bad for U.S. exports and U.S. manufacturers. For years, Japan bought U.S. treasurys as a way to cheapen its own currency and strengthen ours, just as China does. The result was that Japanese imports to the U.S. were artificially cheaper and Japanese cars built in Japan had a price advantage even overseas, when competing with U.S.-built cars. (The same would be true for refrigerators or construction equipment, or any other manufactured goods.)
But lately, Japan has been unable to prevent its currency from strengthening against the dollar, so much so that the advantage has been flipped, and it is beginning to make more sense for Japanese automakers to build their cars in the U.S. than in Japan. As a result, Nissan is closing plants in Japan and moving lines to Tennessee and Mississippi, and Honda plans to export cars from the U.S. in large numbers—150,000 a year by 2017.
What is true for Japan is true in spades for China, which for years has maintained a weak yuan relative to the dollar. Other countries in Asia have also followed China’s lead. If China let its currency strengthen, products made in China would be much more expensive here, leading many producers to move manufacturing operations back to the U.S. By the same token, products made in the U.S. would get an immediate price advantage and would once again be competitive in world markets.
The Obama administration and Congress should agree to legislation that would force China and other Asian currency manipulators to give up their tactics and give our manufacturers a fair chance to compete. As EPI’s senior trade economist Robert Scott has shown, no other single legislative action is likely to create more jobs, do more to correct our trade deficit, or do more for our budget deficit.
China Labor Watch just released a new report investigating working conditions at 10 of Apple’s suppliers in China, including the Foxconn factory in Shenzhen. The New York-based group was able to collect this information even though local authorities in China sometimes literally kicked its investigators out of town. As others have also determined, including the Fair Labor Association in a study sponsored by Apple, CLW found working conditions at the Foxconn factory to be severe, with workers employed long hours at low pay under harsh living conditions. The CLW report also breaks new ground in three areas. The report finds:
- Deplorable labor practices are not just characteristic of Foxconn factories, but exist in factories throughout Apple’s supply chain. The report documents, for instance, that employees in most of the factories typically work 11 hours a day and can only take one day off a month (low wage levels and management pressure compel them to work such hours); that employee dorms are frequently overcrowded, dirty and lacking in facilities; and that there is little ability for workers at Apple suppliers to push for reasonable working conditions on their own.
- As bad as working conditions at Foxconn are, they are even worse at some of the other factories in China that supply Apple. The report flags the three Riteng factories investigated as particularly difficult places to work. The table below includes key findings from the report. It indicates: Riteng workers typically work 12 hours per day nearly every day of the year (including weekends and holidays), compared to 10 hours per day at the Foxconn factories, with some days off. The average wage for the Riteng workers amounts to $1.28 per hour, or well below the already quite low average hourly wage of $1.65 for Foxconn workers. Health and safety conditions are much worse at the Riteng factories than at the Foxconn factory, and living conditions are worse for the Riteng workers as well.
Riteng vs. Foxconn
| | Riteng (Shanghai) | Foxconn (Shenzhen) |
| --- | --- | --- |
| Approximate number of workers | | |
| Percent of workers that are dispatched | | |
| Average number of hours worked per day | 12 | 10 |
| Average number of days worked per month | | |
| Average hourly wage (RMB) | | |
| Average hourly wage in U.S. dollars | $1.28 | $1.65 |
| Percent rating factory’s performance on work safety and health as ‘bad’ | | |
| Percent rating dorm conditions as ‘bad’ or ‘very bad’ | | |
| Percent indicating food is unsanitary | | |
Source: China Labor Watch
- Certain serious labor problems have so far been neglected in the discussion of work practices at Apple suppliers in China. In particular, the new report documents the troubling yet common practice by Apple suppliers of using dispatched labor. This practice enables factories to reduce the compensation and benefits they provide to their workers, makes it even easier to compel workers to work exceptionally long overtime hours, and creates damaging uncertainty over who is responsible for any worker injuries.
In recent months, stories about when the next iPhone will be released or whether Apple will add a television to its product line have helped push the troubling issues concerning how Apple’s products are made to the sidelines. The new CLW report is a needed reminder that those issues should not be forgotten. Apple has the responsibility to ensure that basic labor standards are met not just at Foxconn factories, but also at the factories of other suppliers that have received less media attention. And, as I summarized previously, Apple easily has the resources to advance any necessary changes.
Following the Supreme Court’s ruling in favor of the Patient Protection and Affordable Care Act (ACA) and its linchpin—the individual mandate—my colleague Josh Bivens noted all the ways conservatives have tried to keep health care from being delivered efficiently, notably by blocking government from using its monopsony power and economies of scale wisely. This, of course, is difficult to square with conservatives’ professed concerns about public debt, because rapidly rising health costs are, by far, the single biggest impediment to stabilizing long-run public debt (provided the economy operates at full potential over the long run). Political opportunism aside, reasonable policy should unequivocally aim to lower health care cost-growth; so here’s some evidence worth revisiting on the comparative efficiency of public versus private provision of health care.
The United States has a patchwork health care system: universal single-payer insurance for seniors (Medicare), publicly funded health coverage for the disabled, poor children, and poor seniors (Medicaid and SCHIP), a rapidly unraveling system of employer-sponsored health insurance, fragmented private self-insurance markets, and 49 million non-elderly Americans (under the age of 65) without any health insurance. It’s important to note that the ACA was already a preemptive compromise with those opposed to a much more expansive role of government in directly financing health care. This, of course, doesn’t stop its opponents from lambasting it as a “government takeover,” but the ACA actually preserved the basic (inelegant) structure of American health care, seeking to fill in its gaps rather than overhaul it entirely. This makes its cost-containment provisions subject to much variability: some may work very well to restrain cost growth while others might not. And it also means that a clear, evidence-based tool for restraining these costs was left on the table: direct public provision of care and financing of costs.
By using their monopsony power and the economies of scale gained from insuring tens of millions of people, public health programs have done a better job of restraining costs than private insurers. For example, since 1970, growth in inflation-adjusted Medicare spending per beneficiary has averaged 4.5 percent annually, versus 5.7 percent for private insurers.1 This underlying trend has been remarkably consistent over time: The 10-year rolling average of annual per enrollee cost growth for all benefits provided by private health insurers has exceeded that of Medicare in 28 of the past 31 years.
This divergent rate of cost growth compounds markedly over time: Since 1969, private insurance spending per beneficiary has cumulatively grown 60.8 percent more than Medicare spending per beneficiary.
And as I noted a while back, the Congressional Budget Office has estimated that Medicare is 11 percent cheaper than an actuarially equivalent private insurance plan, an efficiency premium that will similarly compound with time: Fee-for-service Medicare is projected to be at least 29 percent cheaper than an equivalent private insurance plan by 2030 (relative to CBO’s alternative fiscal scenario for the long-term budget outlook).
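A quick back-of-the-envelope check shows how a modest annual gap compounds into the cumulative divergence cited above. The growth rates are the ones quoted in this post; the 42-year span (roughly 1969 through 2011) is my approximation, not a figure from the underlying data:

```python
# How a 1.2-point gap in annual per-beneficiary cost growth compounds.
medicare_growth = 0.045   # Medicare, average annual growth (cited above)
private_growth = 0.057    # private insurers (cited above)
years = 42                # approximate span, 1969 through 2011

# Ratio of cumulative private growth to cumulative Medicare growth
gap = (1 + private_growth) ** years / (1 + medicare_growth) ** years - 1
print(f"Private spending grew about {gap:.0%} more")  # about 62% more
```

That lands in the neighborhood of the 60.8 percent figure above, which is the point: seemingly small differences in annual cost growth become enormous over a few decades, and the same logic is why an 11 percent Medicare cost advantage today compounds into a much larger one by 2030.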
The ACA is projected to expand coverage to some 30-33 million additional non-elderly Americans by the end of the decade, a critical step for risk-pooling, increasing cost-saving preventive care, and decreasing uncompensated care costs passed along to providers and policy holders. It also included ambitious reforms to control costs (particularly the Independent Payment Advisory Board, or IPAB), but too many provisions leveraging the public sector’s ability to directly contain costs—notably offering a public insurance option (e.g., Medicare buy-in) and negotiating Medicare Part D prescription drug prices with pharmaceutical companies (as is done for Medicaid)—were lobbied out of the bill. Even though stronger cost-containment measures could have been included, the Supreme Court’s ruling in favor of the ACA is a major victory for long-run fiscal sustainability, as health reform is projected to reduce annual long-run budget deficits by roughly half a percentage point of GDP.
The ACA is a momentous step toward more efficient and comprehensive health care coverage in the United States, but reform will undoubtedly remain a work in progress—particularly as the various cost-containment provisions in the ACA are evaluated and successes merit replication. Our experience over the last 40 years should guide policymakers as they inevitably go back to the drawing board on health care reform; and the evidence over this time overwhelmingly suggests that public provision of health care is more effective at containing excess cost growth and more efficient than private insurance provision.
The individual mandate lives! Excellent.
For uninsured Americans anyway. But for those of us who had comments ready in case it was struck down, it’s kind of inconvenient.
So, in the interest of recycling, I do want to keep something front-and-center about this particular conservative attack on health reform (opposition to the mandate): whatever it’s premised upon, the practical impact of opposing the mandate is simply to make health care more expensive. And since this is true of all recent conservative ideas on health care, one might be forgiven for thinking that this is a strategy, not a quirk.
And why are conservatives dedicated to making sure Americans pay too much for health insurance? Sometimes, it’s just the price of shoveling subsidies to corporations as part of any health reform. Other times, it’s making sure that Americans don’t see government doing things too efficiently and outperforming the private sector (witness the fevered desire to “reform” Medicare by privatizing it—which will predictably make it more expensive). In the end, I guess you don’t need to believe me when I say that that’s the goal of conservative health reform; but when it’s the practical impact of everything they propose, then I think my argument is looking pretty good.
Anyway, here’s my quick primer on the mandate and why opposing it was simply another exercise in making sure Americans paid too much for health insurance.
A key barrier to gaining coverage for individuals not employed by a large company (which has the clout and the legal protections to force insurance companies to cover all of its employees as a group, rather than cherry-pick the healthy ones) is insurance companies’ refusal to cover those with pre-existing conditions—or even those who may become sick (and hence expensive to insure) sometime in the future. The Affordable Care Act (ACA) dealt with this by requiring insurance companies to offer coverage to everybody who comes to their door (“guaranteed issue,” in the jargon of reform) and, to make this a real rather than merely notional offer, requiring that these companies charge each beneficiary the same premium (“community rating,” in the jargon, with some variation allowed by age and smoking status). These provisions, again, keep insurance companies from cherry-picking just the healthy to cover.
But, if I could get insured whenever I wanted and at the same rate as everybody else, shouldn’t I just choose to not pay premiums while I’m healthy and then buy coverage after I’m already sick? This would be a big problem for insurance companies, as their pool of covered beneficiaries would be a pretty unhealthy group. And since the ACA provides subsidies to help make coverage affordable, this means that the per-beneficiary level of subsidy would be pretty high, as only unhealthy people would be receiving subsidies.
The answer to this “free-rider” problem? Make sure people carry insurance even while healthy, to make for a larger, more predictable, and healthier insurance pool to keep costs down. This is what the mandate is for.
Essentially, the ACA imposes some restrictions on insurance companies (guaranteed issue and community rating) but then gives them something in return to make sure these restrictions don’t lead to them having to cover an unhealthy pool of beneficiaries (that something in return is the mandate) and rising costs.
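The free-rider arithmetic is easy to see with a toy example. Every number below is hypothetical, chosen only to make the mechanism visible:

```python
# Toy adverse-selection arithmetic; all numbers are hypothetical.
healthy = {"members": 80, "expected_cost": 1_000}   # low expected claims
sick = {"members": 20, "expected_cost": 9_000}      # high expected claims

def community_premium(groups):
    """Single community-rated premium: total expected claims / total enrollees."""
    total_cost = sum(g["members"] * g["expected_cost"] for g in groups)
    total_members = sum(g["members"] for g in groups)
    return total_cost / total_members

full_pool = community_premium([healthy, sick])  # with a mandate: everyone enrolls
sick_only = community_premium([sick])           # healthy wait until they're sick
print(full_pool, sick_only)  # 2600.0 9000.0
```

With everyone in the pool, the community-rated premium reflects average risk; if the healthy can wait until they are sick to buy in, the premium must cover a sick-only pool, and subsidies per beneficiary balloon accordingly. That spiral is exactly what the mandate heads off.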
So the mandate makes reform more efficient. This means it must be opposed by conservatives, because they have all along been determined to make any health reform as inefficient as possible. Remember the 2003 Medicare Part D legislation (implemented in 2006) that cost way too much because it barred the government from bargaining with pharmaceutical companies over drug prices? And that subsidized private HMOs to cover Medicare beneficiaries? Remember the public option, which would’ve saved the public money but was taken out of the ACA in the early stages? Remember the voucherization of Medicare called for in the Ryan budget, which would ensure that Americans spend far more to cover health costs in the future?
This was no grand constitutional issue, this was just conservatives doing what they reflexively do when it comes to health reform: trying to make sure it’s as inefficient as possible.
The Affordable Care Act (ACA) is valuable legislation for a host of reasons, but most notably because it provides coverage for millions of Americans who would not otherwise have been able to secure insurance—and therefore health care when they need it. The Supreme Court decision to uphold ACA was also important because it gives states and private industry the clarity and certainty they need to start preparing for the law’s main provisions to kick in in 2014. It resolves the uncertainty felt by key players throughout the country and provides the necessary push for implementation.
The expansion of insurance is particularly important now as a growing share of Americans are without health coverage. Historically, Americans under age 65 have received insurance through the workplace, but since 2000, that valuable source of coverage has declined every year for 11 years running, a total decline of over 10 percentage points, as shown below.
These statistics are already bleak, but without the valuable health care legislation, the situation could have gotten much worse. Because of the ACA, more than 30 million people who would not otherwise have gotten health insurance will get it in coming years, making them more likely to get needed medical care and less likely to come under severe financial distress when they do.
Specifically, the Supreme Court’s decision to uphold the individual mandate is one of the reasons so many more people will get insured, making the law more cost-effective. The effect of the decision with regard to Medicaid is unclear, but it could potentially lead to fewer of the most vulnerable Americans getting access to affordable health care.
In sum, the Supreme Court decision today reaffirms the constitutionality of the health care legislation and its valuable provisions, providing a necessary safety net for millions of Americans. It also provides the added motivation for the implementation of health reform to move full-speed ahead.
In the New York Times this past weekend, Ezekiel Emanuel laid out a proposal to allow Social Security retirees to donate a portion of their benefits to a fund that would invest in a child’s health, education, and living standards. While this is a positive idea on its own, the premise of his proposal leaves something to be desired. In fact, Emanuel’s article presents a false choice to the American people: that we must choose between a strong social insurance system and investing in children.
Emanuel’s proposal would allow Social Security recipients to voluntarily forgo their benefits—he suggests plausibly for three years after reaching the full retirement age—and divert those benefits into a “Children’s Opportunity Bequest and Fund” to help either their own grandchildren or any other child identified by their Social Security number. Over at Slate, Matt Yglesias pointed out the obvious: Wealthy grandparents don’t necessarily need a special fund to pass excess cash to their grandchildren or to charitable organizations (charitable giving is already incentivized as an itemized tax deduction). Beyond this point, I take issue with the way Emanuel presents Social Security—as a transfer of wealth to the elderly that is taking away from our kids.
Emanuel's case for the policy rests on the notion that "many Social Security recipients are quite well-to-do." Well yes, some are. But most are not, and advancing the myth that Social Security recipients are rich only serves to fuel the fire for cutting or changing the program.
Social Security recipients are not, on the whole, well-off. The average annual retirement benefit for retired workers was $14,106 in 2010, just above the federal poverty line for an individual living alone. These benefits, while modest, go a long way towards keeping elderly Americans out of poverty and ensuring that many enjoy an adequate, albeit modest, standard of living. For more than half of the over-65 population, Social Security constitutes more than 50 percent of their income. In 2010 the program lifted 14 million seniors and 6 million younger Americans out of poverty.
The figure below (from the forthcoming edition of EPI's The State of Working America) shows how Social Security has helped dramatically lower elderly poverty rates. Notably, elderly poverty did not shoot up during the Great Recession—thanks in large part to Social Security. Rather than a program that makes well-off seniors even richer, Social Security prevents seniors' standard of living from falling even farther behind that of working-age Americans. Though there are wealthy recipients who don't rely on Social Security for a significant part of their retirement income, they are relatively few in number, and reducing their benefits would provide only modest cost savings while undermining political support for this broad-based, contributory, social insurance system.
In his article, Emanuel states that “this huge transfer of wealth is harming our children.” This is patently false. The children of today and tomorrow are not harmed or threatened by a strong social insurance system that will provide the bulk of their retirement income and protect them from the hazards and vicissitudes of life. America’s children are instead harmed by politicians that chronically undervalue and underinvest in their health, nutrition, and education—particularly for lower-income households and communities. They are disadvantaged, for instance, by the cuts to nondefense discretionary (NDD) spending enacted by the Budget Control Act. And children would fare much worse under the deep cuts to NDD spending, Medicaid, the Affordable Care Act, food stamps, and other income support programs proposed by the House Republican Budget Resolution. If lawmakers were willing to invest in all children, they could take the necessary steps to do so, and those investments would generate tangible returns. EPI has illustrated a way to do so in our budget blueprint, Investing in America’s Economy, and the Congressional Progressive Caucus (CPC) has done so in the Budget for All. Both plans would finance trillions in increased public investment while achieving fiscal sustainability.
With this proposal, Emanuel pits social insurance against other priorities. As EPI and the CPC have shown, this is unnecessary and only serves to undermine programs that are already under attack. It also promotes the idea that hugely important investments should be left to the charitable resolve of the well-off. Investing in our children does not require wealthy Social Security recipients to voluntarily forgo Social Security benefits; it requires the wealthy, of all ages, to pay their fair share in taxes. Investing in our children and other national priorities will require reforming and modernizing our tax code to address the discrepancy between these priorities and the revenues needed to fund them.
In sum, this article pits the young against the old, and in doing so, steers the discussion of public investment—both what we can accomplish and who should be paying for it—way off course.
The U.S. Bureau of Economic Analysis (BEA) recently announced that the U.S. net international investment position (NIIP) was -$4 trillion at year-end 2011 (see figure below), down from -$2.5 trillion at year-end 2010. The $1.6 trillion increase in the net debt was largely driven by price changes of -$802 billion (on domestic and foreign holdings of stocks and bonds) and by net financial flows of -$556 billion. Net financial flows were largely explained by financing of the $466 billion U.S. current account deficit in 2011; the current account is the broadest measure of the U.S. trade deficit. While the costs of financing the NIIP were relatively small in 2011, they could rise rapidly if interest rates return to more normal levels in the future.
The United States has been borrowing hundreds of billions of dollars per year for more than a decade to finance its growing trade deficits. Until 2011, however, the U.S. NIIP had not declined proportionately, as shown in the figure below, primarily because of gains in the prices of foreign stocks, the decline of the dollar (which made foreign currency holdings more valuable), and frequent accounting revisions (which have found more and more U.S. investments abroad).
Last year, several of those factors moved against the United States as the NIIP declined $1.6 trillion to -$4 trillion. That's real money. Foreign investors (primarily foreign central banks) held $5.7 trillion in treasuries and other government securities at the end of 2011. The United States paid, on average, about 2.3 percent in interest on all of those securities. These low rates are caused by the still-depressed U.S. economy operating far below potential, and are unlikely to rise unless the U.S. economy begins operating much closer to full employment. But if this recovery happens and the NIIP remains roughly as large as it is today, then debt service costs could rise significantly. For example, if the average cost of government debt rises to 4.5 percent, it would add another $124 billion to the U.S. government deficit. If this rise in U.S. borrowing costs were not matched by a rise in global interest rates, it would actually cause a net decline in U.S. GDP, as income flowing out of the country to service debt increased without being matched by increased inflows paid to U.S. owners of foreign assets.1
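A quick back-of-the-envelope sketch of this debt-service arithmetic, using the figures above (the small difference from the $124 billion cited reflects rounding in the underlying data):

```python
# Added annual interest cost if the average rate on the $5.7 trillion in
# foreign-held U.S. government securities rises from ~2.3% to 4.5%.
holdings_trillion = 5.7
current_rate = 0.023
higher_rate = 0.045

added_cost_billion = holdings_trillion * 1000 * (higher_rate - current_rate)
print(f"Added annual interest cost: ~${added_cost_billion:.0f} billion")
```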
The U.S. NIIP represents a potential claim against future national income, and the size of this potential claim is growing dramatically, as shown in the figure above. Each year that we allow large trade deficits to continue is another year that adds to this claim on future incomes—yet this actual intergenerational transfer is often ignored while a non-existent intergenerational transfer (the one allegedly caused by rising federal budget deficits) attracts much attention from pundits and economic commentators.2
Board of Governors of the Federal Reserve System. 2012. “Selected Interest Rates (Daily) – H.15: Historical Data.”
U.S. Bureau of Economic Analysis (BEA). 2012. “International Economic Accounts: Balance of Payments.”
U.S. Bureau of Economic Analysis. 2012. “International Economic Accounts: International Investment Position.”
1. Average rate of return on U.S. government securities in 2011 calculated from data in the current account (BEA 2012a) and the NIIP (BEA 2012b). Return on seven-year treasury securities used for comparison. The average return on seven-year treasuries was 2.16 percent in 2011 (Board of Governors of the Federal Reserve System 2012). Their average return in the pre-recession period of 2000-2007 was 4.52 percent.
2. Interest payments on government debt owed to U.S. citizens only reallocate income from taxpayers to domestic bondholders. Foreign holdings of U.S. securities represent claims on future income, which are qualitatively different. Interest payments on foreign holdings reduce U.S. GDP, while interest paid to domestic holdings does not. Given the existence of substantial unemployment and the predominance of deficit opponents in Congress, increases in the government debt due to financial outflows could result in further spending cuts, which would cause a further decline in U.S. GDP.
Apple is rapidly becoming the symbol of what’s wrong with our economy: a highly profitable enterprise where all the gains go to those at the top and the vast majority, including those with college degrees, struggle to get by. Saturday’s New York Times article by David Segal deepens the story beyond Apple’s complicity in exploiting Chinese manufacturing workers. According to Segal, “About 30,000 of the 43,000 Apple employees in this country work in Apple Stores, as members of the service economy, and many of them earn about $25,000 a year.”
That $25,000 annual salary works out to $12.02 an hour for someone working full-time for one year (2,080 hours paid, either for work hours or paid leave). That's pretty low, only about $1 above the "poverty-level wage" (the poverty line for a family of four in 2011 was about $23,000, equivalent to an hourly wage of $11.07). Segal's article opens with a former Apple employee, Jordan Golson, who earned just $11.25 an hour. Many of these Apple store workers are young, so one wonders how Apple wages compare with those of other young college graduates. The short answer is "not so good," or even "terrible." The hourly wages of young college graduates (those ages 23-29) in 2011 were $21.68 for men and $18.80 for women. To be fair, Segal notes that "the company also offers very good benefits for a retailer, including health care, 401(k) contributions and the chance to buy company stock, as well as Apple products, at a discount," so including benefits may offset some of the discrepancy between pay at Apple and pay at other companies. The information needed to calculate this offset is unavailable, but it is not believable that these benefits fully or even significantly make up such a large shortfall in wages.
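The salary-to-wage conversion is easy to reproduce; a minimal sketch (the $23,000 poverty line is rounded, so the result differs by a cent from the $11.07 cited above, which uses the exact poverty threshold):

```python
# Convert annual pay to an hourly wage assuming a 2,080-hour work year
# (52 weeks x 40 hours, whether worked or taken as paid leave).
HOURS_PER_YEAR = 52 * 40  # 2,080

def hourly_wage(annual_pay):
    return annual_pay / HOURS_PER_YEAR

print(round(hourly_wage(25_000), 2))  # Apple store pay: 12.02
print(round(hourly_wage(23_000), 2))  # rounded poverty line: 11.06
```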
How do Apple store wages compare to those of all college graduates? As the table below shows, $12.02 is far below the 20th percentile wage of college graduates—the wage that 80 percent of college graduates earn more than and 20 percent earn less than. That's right, Apple's store employees' wages are in the bottom 20 percent of all college graduates' wages. In fact, $12.02 is $2.24, or 16 percent, less than the 20th percentile college wage in 2011. For college-educated men, $12.02 hourly is on par with the 10th percentile wage of $11.87, meaning 90 percent of male college graduates earned more than that in 2011.
Hourly wage for college graduates, selected percentiles, 2011

| Percentile* | All | Men | Women |
|---|---|---|---|
| 10 | $10.80 | $11.87 | $10.12 |

*The Xth percentile wage is the wage at which X percent of wage earners earn less and (100-X) percent earn more

Source: Author's analysis of Current Population Survey Outgoing Rotation Group files
It is already well-known that Apple benefits from the extremely low wages and harsh working conditions of the Chinese workers who manufacture its products. As EPI's Ross Eisenbrey and Isaac Shapiro recently wrote, "Apple workers in China endure extraordinarily long hours (in violation of Chinese law and Apple's code of conduct), meager pay, and coercive discipline." Together with the mediocre pay for Apple's U.S. store employees, even compared with other retailers, it is clear that Apple's success does not translate into high or rising living standards for the workers one would hope would benefit from it. Apple could readily afford to pay the Chinese Foxconn workers building iPhones more, because their labor costs are a minuscule part of the phone's cost. Raising pay is not that heavy a lift for Apple: In 2011, Apple's nine-person executive leadership team received total compensation of $441 million, equivalent to the estimated compensation of 95,000 Foxconn factory workers assembling Apple products.
The discrepancy between Apple's profits and executive pay on one hand and its workers' compensation on the other is a particularly glaring example of what is occurring in the wider economy. The gap between CEO compensation and that of a typical worker is now 231-to-one, up from 58.5-to-one in 1989. Corporate profits are now higher as a share of corporate-sector income than in any year since the early 1940s, when a War Labor Board consciously suppressed wage growth. All of this contributes to the phenomenon of productivity—the ability to produce more goods and services per hour—rising rapidly while the hourly compensation of both high school- and college-educated workers stays flat. It does not look like much will change soon absent a broad change of thinking among policymakers and a mobilized workforce. After all, current outcomes have been dictated by persistent high unemployment, low and weakly enforced labor standards (witness Apple's failure to abide by California's wage and hour law mandating two 10-minute breaks a day, reported in the Times story), the inability of unions to set high labor standards, and the dominant political and policy influence of the wealthy and the business community. Apple's labor practices and the overall failings of the economy have not been dictated by any economic laws. Rather, they are the result of eminently changeable public-sector policies and private-sector practices.
In a 5-4 decision issued this week in Christopher v. SmithKline Beecham Corp., the Supreme Court, in its eagerness to reach a result favoring the pharmaceutical industry over its employees, abandoned the legal straight and narrow for some very sketchy shortcuts. The case concerned the application of overtime protection to medical detailers, also known as pharmaceutical representatives, employees who visit physicians and promote prescription drugs. If the detailers are “outside salesmen,” they are exempt employees and are not entitled to overtime pay.
Ignoring the plain meaning of key words, the “ordinary usage” which Justice Antonin Scalia elsewhere has claimed to favor, the court declared medical detailers to be outside salesmen because—even though they never make a sale of pharmaceuticals to anyone—they come as close to selling as the law governing their industry allows. The best the court could do in terms of identifying sales that these supposed salesmen make is to find that the detailers induce “non-binding commitments” from physicians to prescribe the drugs their pharmaceutical companies are promoting or marketing. The court found that the fact the detailers almost get commitments from these physician “gatekeepers”—without whom no one could sell the prescription drugs being promoted—is enough to treat the “transaction” as a sale. Whew, talk about bootstrapping and judicial activism! A justice could get a hernia with that kind of lifting!
But who in reality buys prescription drugs? Certainly, in any normal economic sense, it's not the prescribing physician. There are, in fact, two parties that purchase them, and the detailers don't sell (or even make binding commitments) to either: the retail drug stores like CVS and Walgreens, and the patients who are the end users. The court deals with sales to the drug stores in a most unsatisfactory way: It says that the people who actually make those sales are so few (2,000 sales agents vs. 90,000 detailers), and their function is so rote, that we should ignore them.
The persons who make sales (exchanging money for a product) to patients are pharmacists, but the court argues that there would be no sales without the prescribing physicians, who deal with the medical detailers and have a completed transaction when they make a non-binding commitment—not to buy, but only to prescribe the drugs for appropriate patients. According to the court, this is "tantamount" to a sale.
An unfortunate lesson this case teaches is that no one knows what the law means until the Supreme Court decides the result it wants and then stretches the meaning of the statutory or regulatory language to (more or less) fit the result.
The other lesson from this decision is for the Labor Department, which in 60 years had never brought an enforcement action against a pharmaceutical company in a way that gave the industry notice that its widespread practice of denying overtime pay to detailers was unlawful. Medical detailers are relatively well-paid and loosely supervised employees whose employers do not closely monitor their work time—not the classic employees we think of when we talk about overtime pay. Although there is no excuse for the tortured logic of the majority opinion, if the Labor Department had given fair notice that it disapproved of the exemption of detailers, either by bringing enforcement actions over the years or by issuing consistent guidance that made its interpretation of the statute and its regulations clear, the court might have found that the exemption did not apply.
In other words, if we don’t enforce our rights, we can lose them.
The Federal Reserve’s report on family wealth released last Monday illustrates how severely the Great Recession has hurt middle-class families. Median family net worth (assets minus debt) fell to levels not experienced since 1992. While all groups but the richest 10 percent of families saw declines in wealth, there was variation in the percentage decline by race.
In the Federal Reserve's report, it is difficult to identify the specific trends for African Americans and Hispanics. While the net worth of white, non-Hispanic families is presented, all nonwhites and Hispanics are lumped together in the family net worth table. However, the report includes a sentence detailing the net worth changes specifically for African American families (p. 21). By drawing on the past few reports, we can see the recent trends for wealth in black America.
First, it is important to note the median black family only has a small fraction of the wealth of the median white family (Figure A). (The family data discussed here differs from our reported household data because families are a subset of households and the data are inflated to different years.) In 2010, the median black family only had 12 cents for every dollar of wealth the median white family had.
When one examines the percent decline in wealth from 2007 to 2010, it appears that whites have seen a greater percentage decline in wealth than blacks. White family net worth declined 27 percent over this period while black family net worth declined 13 percent (Figure B). But in the data, while white wealth peaked in 2007, black wealth peaked in 2004. As white wealth continued to grow from 2004 to 2007, black wealth had already declined significantly.
If we compare the white and black wealth declines from their most recent high points, we see white net worth down 27 percent (from 2007) and black net worth down 40 percent (from 2004). A 40 percent decline is a large drop for a population with very little wealth even at their peak.
The trend for black net worth is probably following the trend for black homeownership. For most middle-class families, their home is their primary source of wealth. African Americans have had a strong decline in homeownership since their rate peaked in 2004 (Figure C). Homeownership rates for black families are projected to drop to between 40 and 42 percent—which would erase 15 years of gains in homeownership. If this occurs, it could also mean a continued decline in black wealth.
It is not possible to determine the trends in Hispanic net worth precisely from the published Federal Reserve data. We can deduce, however, that from 2007 to 2010, Hispanic net worth probably declined about 45 percent. This decline is significantly larger than the 27 percent for whites over the same period. Even at their recent peak net worth, Hispanics, like blacks, only had a tiny fraction of the wealth that whites had. (In 2010, the median family for nonwhite and Hispanic families combined only had 16 cents for every dollar of wealth the median white family had.)
In terms of wealth, only the richest American families have come out of the Great Recession relatively unscathed. Significant declines in wealth have been broadly felt. But the losses to black and Hispanic families are particularly damaging because they are quite large, and they were experienced by groups that had very low levels of wealth even before the recession hit.
— Research assistance provided by Johnny Huynh
Not long ago, I blogged about the fact that our key labor law, the National Labor Relations Act, protects workers even if they don’t have a union or seek to have one represent them. When workers join together to protest working conditions, to petition management for raises or plead against pay cuts, or to report unsafe conditions to government agencies, the National Labor Relations Board backs them up. The NLRB can protect workers against retaliation by the employer, can order reinstatement for fired workers, and can obtain back pay.
It isn’t widely known, but since its inception, the National Labor Relations Act has given employees the right “to engage in … concerted activities for the purpose of collective bargaining or other mutual aid or protection.”
Now, for the first time, the NLRB has a nice-looking, somewhat interactive webpage devoted to this issue of "other mutual aid or protection." Visitors to the site can read some heartening stories about how employers overreacted—almost always by firing someone—to employees organizing to protest or to bring a problem to management's attention, and how the NLRB intervened to restore the workers' jobs or lost wages.
It’s great to see the government helping people understand their rights and how to enforce them.
In a recent blog post on the (negligible, if not nonexistent) long-run economic cost of deficit-financed fiscal stimulus at present, I noted in passing that the Congressional Budget Office (CBO) has downwardly revised potential economic output for 2017 by 6.6 percent since the start of the recession. This may seem trivial, but for a $15 trillion economy, this dip reflects roughly $1.3 trillion in lost future income in a single year, on top of years of cumulative forgone income (already at roughly $3 trillion and counting). The level of potential output projected for 2017 before the recession is now expected to be reached between 2019 and 2020—representing roughly two-and-a-half years of forgone potential income. This represents a failure of economic policy and merits considerably more attention than it has received, especially when weighing the benefits of near-term fiscal stimulus against those of deficit reduction.
Potential output is the estimated level of economic activity that would occur if the economy’s productive resources were fully utilized—in the case of labor, this means something like a 5 percent unemployment rate rather than today’s 8.2 percent. Potential output is not a pure ceiling for economic activity, but the level of economic activity above which resource scarcity is believed to build inflationary pressures. As of the first quarter of 2012, the U.S. economy was running $861 billion (or 5.3 percent) below potential output—the shortfall known as the “output gap.” This has a number of implications for federal fiscal policy:
- Deficit-financed fiscal stimulus will have a very high bang-per-buck while large output gaps persist. The government spending multiplier is much larger in recessions than expansions (see Figure 3 of Auerbach and Gorodnichenko 2011) and the U.S. remains mired in recessionary conditions, where economic growth is insufficient to restore full employment.
- Deficit-financed fiscal stimulus is largely self-financing because every dollar of increased output relative to potential output is associated with a cyclical $0.37 reduction in budget deficits, and this feedback effect is greatly amplified by the large government spending multiplier.
- There is so much slack in the U.S. economy—i.e., supply of resources in excess of demand—that government borrowing will not “crowd-out” productive private investment; this can be seen in the near record-low 1.6 percent yield on 10-year U.S. Treasuries.
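The "largely self-financing" point can be illustrated with simple arithmetic: each dollar of stimulus raises output by the spending multiplier, and each dollar of output closes the deficit by roughly $0.37 through the cyclical feedback cited above. The multiplier values below are assumptions chosen for illustration, not estimates from Auerbach and Gorodnichenko:

```python
# Net budget cost per dollar of stimulus is roughly 1 - 0.37 * m,
# where m is the government spending multiplier and $0.37 is the
# cyclical deficit feedback per dollar of output cited in the text.
CYCLICAL_OFFSET = 0.37

def net_cost_per_dollar(multiplier):
    return 1.0 - CYCLICAL_OFFSET * multiplier

# Illustrative (assumed) multiplier values:
for m in (1.0, 1.5, 2.0):
    print(f"multiplier {m}: net cost ~${net_cost_per_dollar(m):.2f} per dollar")
```

At a recession-sized multiplier of 1.5, for example, only about 45 cents of each stimulus dollar shows up as added deficit.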
So deficit-financed fiscal stimulus is highly cost-effective, largely self-financing, has a very low opportunity cost, and poses no risk to inflation. But there is another potential benefit: closing today’s output gap can increase potential future output (thereby also increasing the ability to repay debt incurred). The reason is simple—if long bouts of inactivity leave permanent “scars” on the potentially productive resources (and they do), then the longer the economy operates below potential, the more future potential is damaged. Concretely, factories aren’t built because firms can’t even sell what existing factories are producing. Children’s educational outcomes are damaged as economic distress forces their families to move and as they lose access to decent nutrition and health. Desirable early-career jobs for recent graduates that could impart valuable skills throughout their working lives aren’t available to them, so lifetime earnings suffer. And so on.
The CBO certainly is worried about this scarring—look at the annual revisions CBO has made to real potential GDP since the onset of the recession: Estimates have consistently been revised downward, except between Jan. 2009 and Jan. 2010, when the deficit-financed $831 billion Recovery Act arrested economic contraction and began shrinking the output gap.
The Recovery Act, however, was nowhere near large enough to restore full employment and close the output gap—the 10-year cost of the stimulus, after all, was smaller than the annual output gaps that have persisted since 2009. As fiscal support waned and the economy slowed, CBO's potential output forecasts have withered as well. So why did Congress pivot from job creation (i.e., stimulus) to deficit reduction at the start of the 112th Congress?
The whole point of long-term deficit reduction, after all, is to raise future income. But failure to restore full employment decreases potential future income. Worse, while the economy remains depressed below potential output, near-term deficit reduction—particularly spending cuts—greatly exacerbates the output gap, because the government spending multiplier is so high. (We've seen this play out across much of Europe, where government "austerity" programs have cut spending, pushed economies back into recession, and pushed up unemployment; the resulting cyclical deterioration in budget deficits has rendered the spending cuts entirely counterproductive.)
The downward revisions to potential output in CBO’s forecast reflect a failure of Congress to resuscitate the economy and restore full employment, but it’s a policy failure that can still be reversed. Fiscal stimulus can increase employment and industrial capacity utilization today and actually “crowd-in” private investment, thereby increasing today’s capital stock and future potential output. With respect to fiscal tradeoffs, cost effective deficit-financed fiscal stimulus will actually decrease the near-term debt-to-GDP ratio (the relevant metric for fiscal sustainability), whereas deficit reduction cannot raise future income until the output gap is closed and the private sector is competing with government for savings instead of plowing cash into Treasuries. The full cost of Congress’ misguided pivot from job creation to austerity is larger than even just today’s mass underemployment—trillions of dollars of potential future income will also be lost unless we pivot back to addressing the real crisis at hand.
The Federal Reserve just published findings from the 2010 Survey of Consumer Finances, a triennial survey of household finances. Though it's no surprise that household finances took a dive with the collapse of the housing and stock bubbles, the extent of the plunge is still shocking: The median family saw its net worth fall by 39 percent between 2007 and 2010.1
By 2010, the economy had begun its slow recovery. Housing prices had leveled off and stocks rebounded, recouping about half their losses by the end of the year. But this wasn’t just a temporary setback. Households—especially younger households—were in serious trouble long before the twin asset bubbles burst.
Families headed by someone age 35 to 44—the age when workers typically start getting serious about saving for retirement—had seen declines in net worth in the wake of two previous recessions (1990-91 and 2001) without fully regaining the lost ground in the intervening years (see chart below). So the financial meltdowns that precipitated the Great Recession only exacerbated an existing problem. As a result, GenXers had only accumulated $42,100 in 2010, less than half what the Baby Boomers had accumulated at the same age adjusted for inflation (in the chart, Depression and War Babies are indicated by squares, Early Boomers by triangles, Late Boomers by circles, and GenXers by an X).2
The fact that net worth declined for younger age groups even before the Great Recession is remarkable when you consider that the economy grew by a third on a per capita inflation-adjusted basis between 1989 and 2010, though this growth was not widely shared. Furthermore, families should have been saving more to make up for declines in pension coverage and Social Security benefits. As a result, the Center for Retirement Research has estimated that the average family in the broad 35-64 age range had a Retirement Income Deficit of $90,000 in 2010, a measure of how far behind they were in saving and accumulating benefits for retirement.
Even a generation that fared relatively well—the cohort born during the last years of the Great Depression and World War II—had only accumulated $227,000 as it approached retirement in 2001. This is roughly four times the median income for that age group in 2001, or enough to purchase a 20-year annuity paying roughly $15,000 a year at a 3 percent real interest rate.3 As these Depression and War Babies began tapping their retirement savings during the boom and bust years of the new millennium, their net worth fell to $206,700 in 2010, whereas the preceding generation had seen increases in net worth during their early retirement years.
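For reference, the standard level-payment annuity formula behind this kind of calculation, sketched in code (this assumes the full $227,000 is annuitized, which, as footnote 3 notes, households rarely do in practice):

```python
# Annual payment P that a present value PV buys over n years at real
# interest rate r: P = PV * r / (1 - (1 + r) ** -n)
def annuity_payment(pv, rate, years):
    return pv * rate / (1 - (1 + rate) ** -years)

# Annuitizing $227,000 over 20 years at a 3 percent real rate:
payment = annuity_payment(227_000, 0.03, 20)
print(round(payment))  # roughly $15,000 a year
```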
Baby Boomers fared much worse than the Depression and War Babies, lulled into complacency by asset bubbles that inflated during their prime earning years and popped as the leading edge of the Boomer generation approached retirement. Early Boomers born in the late 1940s and early 1950s saw their net worth increase by around $69,000 between 1989 and 2001 (a 4.6 percent annual rate), but only by a meager $14,500 between 2001 and 2010 (a 0.9 percent annual rate). Late Boomers fared no better, and, like GenXers, are now far behind where earlier generations had been at the same age.
Though it may be tempting to chastise families for not saving enough for retirement, most of the blame lies with former Federal Reserve Chairman Alan Greenspan and others in positions of responsibility who watched asset bubbles inflate without warning that these paper gains weren’t real, and promoted homeownership and 401(k)s as the path to a secure retirement without acknowledging the extent of the risks involved.
2. The published survey results don’t allow precise tracking of generational cohorts because demographic breakdowns are by 10-year age group and the survey is conducted every three years. However, the 45-54 “Depression and War Baby” cohort in 1989 approximately corresponds to the 55-64 age group in 2001 and with the 65-74 age group in 2010, etc.
3. In practice, the typical household holds most of their wealth in the form of home equity and doesn’t annuitize liquid assets.
The latest suicide of a worker at Apple Computer’s Foxconn supplier plant in Chengdu, China may be another indication that Apple has not appreciably improved conditions for its manufacturing workers. Apple and Foxconn, working with the Fair Labor Association, announced that they would make changes in grueling overtime work schedules and in working conditions, including a promise to gradually come into compliance with China’s overtime laws. Yet this suicide, in conjunction with recent worker protests and new reports, suggests that needed reforms have not been made.
There are mixed reports from SACOM and China Labor Watch about whether work schedules have been reduced in any systematic way at Foxconn. Problematically, it appears that when the schedules are reduced, the reductions are not adequately balanced with hourly pay increases. So the already-inadequate monthly pay drops, leaving workers—72 percent of whom at the Chengdu plant told the FLA they could not meet their basic needs—in a desperate situation.
Ultimately, Apple has the power and moral responsibility to improve wages and conditions for Foxconn workers in Chengdu and elsewhere. Certainly, Apple and its executives can afford to do the right thing.
The Heritage Foundation’s latest attack on the Postal Service is a convoluted collection of half-truths and untruths. The author, David John, doesn’t want the Postal Service to benefit from $11.6 billion in overpayments it made for its pension obligations even though he grudgingly admits “this surplus appears to exist.” The overpayment should be refunded to the Postal Service to help it meet its operating costs, but Heritage wants those funds locked up in the pension plan, which it claims would “follow the private-sector practice of using the current surplus—whatever it is—to defray future retirement payments.” This is baloney. When a private corporation overfunds its pension plan, it can transfer excess funds to pay retiree health obligations. In the case of USPS, it could use the funds to pay both current obligations ($2.4 billion a year) and the congressionally mandated pre-funding for future obligations ($5.6 billion a year).
When it’s inconvenient, Heritage abandons its suggestion that the Postal Service should be treated like the rest of the private sector. Private sector employers are not required to pre-fund their retiree health benefits, and most of them fund retiree health benefits on a pay-as-you-go basis. If USPS “followed the private-sector practice,” it wouldn’t contribute a nickel to the future retiree health obligations; it would pay them as they came due, yet Heritage supports a requirement that USPS “fully prefund this benefit.”
Heritage also glosses over the findings of two independent agencies that the Postal Service was treated unfairly by Congress and the Office of Personnel Management in the allocation of its pension obligations. EPI published a report in 2010 that took the same position as the Postal Service’s Office of Inspector General and the Postal Rate Commission: USPS and its ratepayers were overcharged approximately $75 billion for past service obligations, and taxpayers were undercharged the same amount. But for Congress’ misallocation of costs, the Postal Service’s short-term finances would be manageable despite the Great Recession and the growth of electronic communication and payments.
Heritage shades the truth in its claim that the Government Accountability Office “bluntly rejected” the agencies’ claims that the Postal Service had been treated unfairly. In fact, GAO admitted that the cost allocation methodology is “a policy choice” whose fairness is debatable:
“Although the USPS OIG [Office of Inspector General] and PRC [Postal Rate Commission] reports present alternative methodologies for determining the allocation of pension costs, this determination is ultimately a policy choice rather than a question of accounting or actuarial standards. Some have referred to “overpayments” that USPS has made to the CSRS fund, which can imply an error of some type—mathematical, actuarial, or accounting. We have not found evidence of error of these types. While the USPS OIG and PRC reports make judgments about fairness, the 1974 law also implicitly reflected fairness.”
GAO does not dispute that the PRC and USPS OIG methodologies for allocating the pension costs are sound; it simply prefers a different policy choice, which burdens the Postal Service:
“All three methodologies (current, PRC, and USPS OIG) fall within the range of reasonable actuarial methods for allocating cost to time periods. However, the allocation of costs between two entities is ultimately a business or policy decision.”
In its ideological zeal to see the Postal Service destroyed or dismembered, Heritage has been careless with its facts and inconsistent in its arguments.
UPDATE, June 15, 11:37 a.m.: Ah, mystery of the funky-seeming Mitt Romney jobs numbers revealed (see below for my puzzlement)—it’s a measure of full-time jobs reported in the household survey. I guess half of this is my fault—they do reference the “full-time” aspect when talking about data from the 1970s—but the rest of the chart and paragraph just talk about “job growth.”
But I will note that this is the first time I’ve ever seen full-time jobs from the household survey used to measure job market performance over business cycles. And I’m not convinced it’s a useful innovation; in fact, I think it’s pretty obvious cherry-picking.
Say five people get brand-new jobs that provide 30 hours of work per week while five more see their hours cut from 40 to 34 hours. I’d say this is 120 hours of net new work being demanded in the economy; but counting full-time jobs from the household survey would simply register it as five “jobs” lost. This just doesn’t seem useful to me.
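To make the accounting concrete, here’s that example as a quick script (a toy illustration; it assumes the household survey’s convention of counting 35-plus hours per week as “full-time”):

```python
# Toy illustration of why counting only full-time jobs can mislead.
# Assumption: "full-time" means 35+ hours per week (household-survey convention).
FULL_TIME_CUTOFF = 35

# Five new jobs at 30 hours/week; five existing jobs cut from 40 to 34 hours.
new_jobs = [30] * 5
cut_jobs_before, cut_jobs_after = [40] * 5, [34] * 5

# Net change in total weekly hours demanded in the economy:
net_hours = sum(new_jobs) + sum(cut_jobs_after) - sum(cut_jobs_before)

# Change in the count of "full-time jobs":
ft_before = sum(h >= FULL_TIME_CUTOFF for h in cut_jobs_before)
ft_after = sum(h >= FULL_TIME_CUTOFF for h in new_jobs + cut_jobs_after)
ft_change = ft_after - ft_before

print(net_hours)  # 120 hours of net new work
print(ft_change)  # -5 "full-time jobs"
```

Total work demanded rises by 120 hours, yet the full-time job count falls by five; the two measures don’t just differ in magnitude, they point in opposite directions.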
Also, since the Romney chart ends in June 2011, it might be useful to know what happened to their preferred number in the 11 months since then: 2.25 million jobs added. The industry standard for measuring recessions and recoveries—the payroll survey—shows 1.7 million jobs added over those same 11 months, so I do wonder which number the campaign would cite if asked.
Lastly, I’d note that there is an obvious sector, full of full-time jobs, that has seen a particularly hard time since the June 2009 beginning of recovery: the public sector. Since June 2009, 600,000 state and local jobs have been lost, and in 2009, about three-fourths of these jobs were full-time.
I was asked to comment on the speech Mitt Romney made in front of the Business Roundtable, so I decided to do some light background reading: Believe in America: Mitt Romney’s Plan for Jobs and Economic Growth.
I noticed something odd in the jobs section of the plan—this chart (ripped directly from the Romney PDF):
I know jobs numbers and recoveries, and these looked wrong to me. For one, the absolute peak-to-trough employment loss following 2007’s Great Recession was 8.8 million jobs (between Jan. 2008 and Feb. 2010), not the 8.9 million that the chart claims.
And given that this is the peak job loss, this means, by definition, that any job change measured from this trough couldn’t be negative, yet the chart implies it was. I also know that the U.S. economy didn’t begin adding jobs after the 2001 recession until the second half of 2003, so the 2001 numbers looked off, too.
So I decided to do the chart correctly—actually show job losses during the official recessions (i.e., not just employment peak to trough) and the 24 months following and sure enough:
Romney’s numbers are all slightly off, which is odd.
Odder is that the respective performance of the recoveries following the 2001 and 2007-2009 recessions is reversed. Look closely at the last two sets of bars in the respective figures.
The Romney chart has jobs growing in the first 24 months of recovery following the 2001 recession, but shrinking in the first 24 months following the 2007-2009 recession. That’s the opposite pattern of what actually occurred—jobs shrank for the first two years after the 2001 recession and grew modestly in the first two years after the 2007-2009 recession.
I’ll note that we also tried to match the Romney numbers with quarterly data, with household-survey employment counts, with household-survey data adjusted to payroll concepts … nothing worked.
A little curious as to what’s going on here.
And since there’s been lots of discussion about the relative health of the private and public sectors, here’s the correct graph for private-sector jobs only.
Update to yesterday’s blog post “Fiscal hawks’ double standard for Social Security cuts vs. tax cuts”
This is an update to yesterday’s blog post “Fiscal hawks’ double standard for Social Security cuts vs. tax cuts.”
The Committee for a Responsible Federal Budget (CRFB) subsequently updated the table in their blog post, adding a column with average scheduled (i.e., promised) initial Social Security benefits for 2050. This is certainly an improvement, but their revised table still only depicts the relative comparison between initial benefits under the Bowles-Simpson plan and payable benefits. Here’s what their table would show with the additional relative comparison between initial benefits under the Bowles-Simpson plan and scheduled benefits (the lightly shaded column).
Under the Bowles-Simpson plan, medium earners reaching the normal retirement age in 2050 would see an initial benefit cut of 6 percent relative to scheduled benefits. And as CRFB duly notes in their blog post, the Bowles-Simpson proposal to use a “chained” consumer price index for cost-of-living adjustments would further reduce all beneficiaries’ benefits in subsequent years relative to scheduled benefits—a benefit cut that compounds annually, as explained in this EPI Briefing Paper.
Claims about the efficacy of fiscal stimulus in a depressed economy are based on evidence as flimsy as the Laffer curve?! Seriously false equivalence
Peter Orszag calls the claim that the debt-to-GDP ratio can be lowered by providing a fiscal boost to a depressed economy the “Laffer curve of the left.” For those who have real lives and may not get the reference, the “Laffer curve” refers to the theoretical possibility that one can raise overall tax revenues by cutting tax rates. The intuition is that cutting tax rates provides incentives for working longer and saving more. In turn, this will boost economic growth sufficiently to bring in more revenue despite rates having been cut. The claim that it is relevant to the U.S. economy has been discredited empirically (and a long time ago).
In light of this, Orszag’s claim that the “Laffer curve of the left seems to have as much empirical relevance as the original Laffer curve” is not only odd but also flat wrong.
Orszag’s target is clearly a recent paper by DeLong and Summers that shows fiscal stimulus in a depressed economy has multiple salutary effects, not just on economic growth but even on long-run budget measures (like the debt-to-GDP ratio). The paper shows stimulus boosts near-term growth directly by relieving the constraint of insufficient demand; it boosts productive investments by giving firms an incentive (i.e., more customers coming in the door) to expand capacity; and it keeps chronic long-term unemployment from turning into a permanent erosion of workers’ skills (i.e., economic “scarring”). The assumptions about the strength of each of these effects that are needed to make fiscal stimulus debt-improving in a depressed economy are probably pretty close to real-life parameters.
Let’s do some simple math with widely agreed-upon parameters, even ignoring some of the supply-side measures DeLong and Summers examine. I’m going to round very aggressively here, but it doesn’t affect results much.
Today’s publicly-held debt is about 70 percent of GDP (call it $10.5 trillion on a base of GDP that is $15 trillion). Let’s say we decided to undertake fiscal stimulus in the form of $150 billion spent on high-multiplier activities like extending unemployment insurance, giving aid to states, or investing in infrastructure (we actually need more than this, but it’s a nice round 1 percent of overall GDP, so we’ll stick with it).
The “fiscal multipliers” on these activities are roughly 1.5, meaning they generate $1.50 in economic activity for every dollar spent on them (actually, it may be quite a bit higher, but we’ll take 1.5 as given).
So, (roughly) a year from now, this stimulus has increased the level of GDP by $225 billion (i.e., the $150 billion stimulus multiplied by 1.5). This extra GDP does indeed lower the budget deficit by bringing in more revenue. A reasonable estimate, based on CBO data, is that when the economy is operating below potential, each 1 percent increase in GDP yields a cyclical reduction in the budget deficit of about 0.35 percent of GDP. So, this $225 billion in additional output leads to a $79 billion improvement in the budget deficit, making the “net” fiscal cost of the stimulus just $71 billion ($150 billion minus the $79 billion offset from higher growth).
This $71 billion “net” cost of stimulus increases debt by roughly 0.7 percent ($71 billion divided by the current $10.5 trillion public debt), while GDP has increased by 1.5 percent. Given the current debt-to-GDP ratio of 70 percent, this means the ratio actually declines: debt grows proportionally less than the GDP it is measured against.
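For readers who want to check the arithmetic, here is the whole calculation as a short script (a back-of-the-envelope sketch using the rounded parameters above, not a forecasting model):

```python
# Back-of-the-envelope stimulus arithmetic with the post's rounded parameters.
# All dollar figures are in billions.
debt, gdp = 10_500, 15_000   # roughly a 70% debt-to-GDP ratio
stimulus = 150               # 1% of GDP in high-multiplier spending
multiplier = 1.5             # $1.50 of activity per stimulus dollar
clawback = 0.35              # deficit improvement per 1% of GDP gained

extra_gdp = stimulus * multiplier    # 225: boost to the level of GDP
offset = extra_gdp * clawback        # ~79: cyclical deficit improvement
net_cost = stimulus - offset         # ~71: "net" fiscal cost of stimulus

old_ratio = debt / gdp
new_ratio = (debt + net_cost) / (gdp + extra_gdp)
print(round(old_ratio, 4), round(new_ratio, 4))  # ratio falls: 0.70 -> ~0.694
```

Debt rises by about 0.7 percent while GDP rises by 1.5 percent, so the debt-to-GDP ratio ends up slightly lower than where it started.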
None of these parameters, by the way, are particularly contested.1 And let’s say they’re slightly wrong, and that instead of outright improving the debt-to-GDP ratio, providing fiscal stimulus in today’s depressed economy actually makes it slightly worse – say it’s only 80 percent self-financing in terms of its impact on debt-to-GDP ratios. Would this really justify calling claims that providing fiscal stimulus in depressed economies does not damage public finances “the Laffer Curve of the left”? Not by my read of the evidence.
1. For those who like analytical solutions, all of the preceding boils down to: So long as the initial debt/GDP ratio is higher than [(1/multiplier) – fiscal clawback ratio], then fiscal stimulus reduces the debt/GDP ratio. The “fiscal clawback ratio” is simply how much a 1% boost to economic growth reduces the budget deficit (also measured as a share of GDP). For the arithmetic above, the multiplier of 1.5 and a clawback ratio of .35 mean that fiscal stimulus would reduce debt/GDP for any initial debt ratio above 32%.
Take much more conservative assumptions – a multiplier of 1 and a clawback ratio of just 0.25. Then, stimulus is debt/GDP reducing for all initial debt ratios above 75%.
Also note that this means the calculus for whether or not stimulus reduces the debt/GDP ratio gets more favorable as the initial debt ratio rises, a perhaps counter-intuitive result.
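The footnote’s breakeven condition is easy to verify directly (a minimal sketch of the formula above, checked against both sets of parameters):

```python
# The footnote's analytical condition: stimulus lowers the debt/GDP ratio
# whenever the initial ratio exceeds (1/multiplier) - clawback.
def breakeven_debt_ratio(multiplier: float, clawback: float) -> float:
    """Initial debt/GDP ratio above which stimulus is ratio-reducing."""
    return 1 / multiplier - clawback

print(breakeven_debt_ratio(1.5, 0.35))   # ~0.317, i.e., the ~32% in the footnote
print(breakeven_debt_ratio(1.0, 0.25))   # 0.75, i.e., the 75% case
```

Note how the threshold falls as the multiplier rises: a bigger bang-per-buck makes stimulus ratio-reducing at ever-lower starting debt levels.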
The Committee for a Responsible Federal Budget (CRFB) has taken sides in a scuffle between Social Security advocates and former Senator Alan Simpson. This scuffle concerns Simpson’s colorful defense of Social Security proposals within the report he co-authored with fellow Fiscal Commission co-chair Erskine Bowles—a report CRFB has gone to great lengths to champion.
CRFB was responding to a letter signed by young budget and social insurance experts—myself and others at EPI included—disagreeing with Simpson’s claim that the Bowles-Simpson proposals would strengthen the program for our generation. The merits of these proposals aside, CRFB is shamelessly cherry-picking baselines in response to the letter. Whereas CRFB and other fiscal hawks use a current policy baseline for almost all budget projections—e.g., assuming the continuation of the Bush tax cuts past their scheduled expiration—CRFB doesn’t adopt the same convention when it comes to Social Security. This is hypocritical and reveals what can only be described as a biased policy agenda.
In order to minimize the severity of the Bowles-Simpson cuts, CRFB’s defense of the Bowles-Simpson Social Security plan revolves around a comparison of projected future benefits under Bowles-Simpson with benefits payable under current law. However, comparing benefits under Bowles-Simpson to payable benefits assumes that Congress will allow an abrupt 25 percent reduction in Social Security benefits when the trust fund is exhausted in 2033, since Social Security is prohibited from borrowing and benefits are generally funded through a dedicated payroll tax rather than general revenue.1
Social Security’s finances are routinely analyzed using scheduled rather than payable benefits—if for no other reason than the system would always appear to be in actuarial balance if projections were based on payable benefits. On a more practical level, it is inconceivable that Congress would allow draconian cuts to fall on elderly retirees. Unlike active workers, who can theoretically save more (or put off retirement) when benefits are cut, elderly retirees are usually viewed as having few other financial options. Thus, even in the unlikely event that nothing is done to shore up the system before the trust fund is exhausted, Congress would almost certainly use general revenues to pay promised benefits. Similarly, Congress routinely prevents scheduled cuts to Medicare physician reimbursements (the so-called “doc fix”). In other words, the difference between the current policy baseline and the current law baseline reflects the difference between what budget analysts assume future Congresses are likely to do versus what is currently set in legislation, including scheduled or automatic tax increases and benefit cuts.
Fiscal hawks—including CRFB—overwhelmingly use a current policy baseline to advocate staunch deficit reduction measures because these baselines show a much larger rise in public debt over the long-term, largely due to assumptions about the continuation of temporary tax cuts and the inability of Congress to contain health care cost growth. If CRFB wants to deviate from past practice and score the Bowles-Simpson plan relative to current law, they should also acknowledge that the plan proposes cutting taxes by $1.4 trillion relative to current law, all in the name of deficit reduction.2 Indeed, the plan “saved” $4.1 trillion over a decade relative to an adjusted current policy baseline, whereas continuing the Bush-era tax cuts will cost $4.4 trillion relative to current law.3 (Without the Bush tax cuts, there would not have been a fiscal commission.) Likewise, CRFB should argue in favor of leaving Social Security out of deficit discussions entirely, since by their definition Social Security is in long-run actuarial balance.
Using a current policy baseline when analyzing tax policies or clamoring for near- and long-term deficit reduction while cherry picking a current law baseline to justify Social Security benefit cuts is a gimmicky double standard that reflects a bias toward cutting social insurance programs.
1. Exceptions to this rule include the current payroll tax holiday and income taxes levied on Social Security benefits for high-income beneficiaries, which revert to Social Security.
2. Estimate based on CRFB’s Moment of Truth Project July 2011 re-estimate of the Bowles-Simpson plan relative to CBO’s March 2011 current law baseline for an apples-to-apples comparison over FY2012-21.
3. This is not an apples-to-apples comparison because the Bowles-Simpson adjusted current policy baseline assumed the Bush tax cuts would expire for households with adjusted gross income above $200,000 ($250,000), for a revenue increase of roughly $700 billion relative to full continuation, but even adjusting accordingly the two are very much in the same ballpark.
Yesterday, the Congressional Budget Office (CBO) released its annual Long Term Budget Outlook (LTBO), which projects federal spending, revenues, deficits, and debt over the next 75 years. There are many points of controversy with regard to the LTBO, not the least of which is that it’s pretty ridiculous for CBO to pretend it knows what health care costs will look like in 2087. Personally, I think that CBO’s LTBO provides a lot more heat than light, and I would be the first to applaud if CBO decided to only release ten-year budget projections (in themselves subject to a huge margin of error).
Nevertheless, there is still value in looking at the change in projections from one year to the next. The figure below clearly shows that over the past three years CBO’s extended current law budget projections—which assume no changes are made to the law—have improved drastically.
2009: CBO projected that debt held by the public would rise from around 60 percent of GDP to just over 300 percent of GDP in 75 years.
2010: CBO markedly improves its 75-year outlook, which now shows debt rising to just over 110 percent of GDP. This improvement largely reflected passage of the Affordable Care Act (ACA), which prioritized reducing long-run deficits and slowing the rate of health care cost growth (the predominant driver of long-run deficits).
2011: CBO again improves its outlook, now projecting debt rising to 87 percent of GDP in the first 30 years but then actually falling to 75 percent over the next 45 years. This improvement was largely due to three changes in CBO’s assumptions and projections: (1) lower costs for the new ACA health insurance exchange subsidies; (2) higher taxable wages due to the employer-sponsored health insurance excise tax (pushing worker compensation away from the tax-free health coverage); and (3) a slightly higher long-run economic growth rate.
The ultimate goal of budget reform is to reach “fiscal sustainability,” a point at which public debt is growing no faster than the economy (stabilizing debt relative to national income, i.e., ability to pay). According to 2011 LTBO projections, the federal government had already achieved long-run “fiscal sustainability.”
2012: For the third straight year, CBO favorably revises its long-run budget outlook: Starting in 2014, public debt is projected to fall by 0-3 percentage points each year. The public debt is shown to be fully paid down by 2070, and within 75 years the federal government is projected to have accrued reserve surpluses equal to about a third of the economy.
This improvement is primarily due to two factors. First, the Budget Control Act (the result of last summer’s debt ceiling crisis) cuts spending by over $2.1 trillion through 2021, and because of the way CBO indexes discretionary spending for inflation in its projections, it continues to reduce deficits in subsequent years. And second, CBO changed the way it projects health care cost growth. In the past, it used the average growth rate over the last 25 years, but in this report it calculated a weighted 25-year average that puts more weight on recent years. This new methodology does a better job of taking into account the fact that health care costs have been slowing recently, possibly evidence that the ACA has exceeded expectations.
Budget wonks will rightly point out that the projections in question are CBO’s extended baseline, which assumes no changes to current law. This means that the Bush-era tax cuts expire next year, the sequestration cuts also go into full effect next year, the Alternative Minimum Tax will apply to more upper middle-income households, and Medicare reimbursements to doctors will be allowed to fall dramatically. But with the exception of the sequestration trigger, all those other factors were also present when CBO made their projections in 2009, 2010, and 2011. The fact is the fiscal outlook of the federal government has improved dramatically in the last three years.
More importantly, this report clearly shows that the path toward fiscal sustainability includes allowing some—if not all—of the Bush-era tax cuts to expire and fully implementing and protecting the Affordable Care Act.
New York Times columnist David Brooks went all out in heralding the “debt is evil” stigma in his column yesterday. Regrettably, this blanket condemnation of borrowing as intemperate, immoral intergenerational theft is all too pervasive among Washington’s policymaking elite, and all too wrong: Not all debt is created equal, and suggesting otherwise impedes sound fiscal policy.
Economic actors borrow money for a wide array of activities, and both businesses and households know better than to apply a universal value judgment to debt. Borrowing money for college tuition allows for human capital accumulation, which will hopefully yield a high rate of return; borrowing money to take to the casino is widely viewed as imprudent, as the expected rate of return at any casino is negative. Businesses borrow money to build factories, buy equipment, finance research and development, and engage in other productive activities that add value to the economy. Financial firms leveraging themselves in the manner of Long-Term Capital Management (using debt to proportionally magnify both risk and potential returns), on the other hand, adds systemic financial risk and zero—more likely negative—economic value. Similarly, there are good and bad reasons alike to run federal budget deficits. What matters much more than the accumulation of nominal debt is the purpose of the borrowing and the ability to repay the amount borrowed.
Brooks laments that the “federal government has borrowed more than $6 trillion in the last four years alone, trying to counteract the effects of the [dotcom and housing] bubbles.” Yes, the implosion of the housing market and the ensuing financial crisis and recession forced Congress to borrow heavily as the cyclical portion of the budget deficit ballooned and fiscal policy was used to arrest a steep economic contraction, propping up aggregate demand and the financial sector alike. The alternative, however, was a depression that would have swollen budget deficits regardless, while greatly impeding our ability to repay debt because of lost income and economic scarring reducing future potential income. Indeed, policymakers’ failure to restore full employment—which still necessitates much more deficit-financed stimulus—is producing such scarring effects: The U.S. economy is still running $861 billion—or 5.3 percent—below potential output and the Congressional Budget Office has downwardly revised projected potential output for 2017 by 6.6 percent since the onset of the recession. That is real, welfare-reducing economic waste resulting from insufficient public borrowing—borrowing that could have put productive resources to use instead of allowing them to atrophy.
Economists Lawrence Summers and Brad DeLong compellingly argue that given present U.S. economic conditions (where the Fed cannot singlehandedly stabilize the economy), deficit-financed stimulus is actually self-financing. Essentially, if nominal interest rates are below long-run trend real GDP growth adjusted for reduced economic scarring effects and improvements in the cyclical budget deficit resulting from stimulus, a dollar of debt more than pays for itself in the long-run. CBO projects real GDP growth will average 2.4 percent over the next 25 years, whereas the yield on 10-year Treasuries is only 1.55 percent (hovering around a record low); high bang-per-buck fiscal stimulus passes any reasonable cost-benefit analysis test so long as the economy remains mired well below potential in a liquidity trap.
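To see the intuition behind the self-financing claim, here is a stylized sketch of debt-ratio dynamics. This is my own simplification, not the DeLong-Summers model: it rolls debt over at the quoted 1.55 percent borrowing cost while GDP grows at CBO’s 2.4 percent average, assumes a balanced primary budget, and ignores the multiplier, clawback, and scarring channels entirely (all of which strengthen the result):

```python
# Stylized debt-ratio dynamics: debt rolled over at interest rate r while
# GDP grows at rate g. With a balanced primary budget (a simplifying
# assumption), the debt/GDP ratio shrinks whenever r < g.
def debt_ratio_path(d0: float, r: float, g: float, years: int) -> float:
    """Debt/GDP ratio after `years` of rolling debt over at r with growth g."""
    ratio = d0
    for _ in range(years):
        ratio *= (1 + r) / (1 + g)
    return ratio

# Rates quoted above: 1.55% ten-year Treasury yield, 2.4% average growth.
print(debt_ratio_path(0.70, 0.0155, 0.024, 25))  # drifts down from 0.70 to roughly 0.57
```

When the government’s borrowing cost sits below the economy’s growth rate, the inherited debt stock erodes relative to income on its own; flip the rates (r above g) and the same dynamics push the ratio up instead.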
What Brooks misses entirely is that any value judgment regarding debt boils down to the opportunity cost of debt and the value added of the tax or spending program being deficit-financed—particularly in ways that affect the ability to repay debt.
Example 1: The Bush-era tax cuts were entirely deficit-financed, adding some $2.6 trillion to the public debt between 2001 and 2010, while failing to produce even mediocre economic performance (the 2001-2007 Bush economic expansion was the weakest since World War II). Numerous economists believe that, between their dismal efficacy and the reduction in national savings they induced, the Bush tax cuts decreased long-run potential output.
Example 2: If the rate of return on infrastructure spending exceeds the cost of financing, it makes sense to borrow money to build a bridge, or better yet repair a bridge (the cost of repair increases with time and preventative maintenance is much more cost effective than rebuilding infrastructure from scratch). As my colleague Ethan Pollack points out, the case with infrastructure is a clear cut “win-win-win” because it raises potential future output, making the incurred borrowing relatively easier to pay back, and infrastructure spending increases actual present output and employment (reducing cyclical deficits). And today, the opportunity cost of infrastructure investment is at historic lows.
There is good debt and wasteful debt alike, just as both constructive editorializing and gibberish can be found scrawled across op-ed pages. Brooks’ failure to recognize any economic context or nuance only feeds the misguided debt hysteria that has pushed most of Europe back into recession and encouraged U.S. policymakers to give up job creation in favor of premature, counterproductive austerity.
Brad DeLong links to what he calls a “DeLong-Summers ‘Simplistic Keynesians’ Smackdown Watch”—a piece by Ken Rogoff calling “dangerously facile” those who argue for the “simplistic Keynesian remedy that assumes that government deficits don’t matter when the economy is in deep recession; indeed, the bigger the better.”
Since “simplistic Keynesianism” is a pretty good description of my diagnosis and remedy for today’s U.S. economic troubles, and since I don’t want to ever be “dangerously facile,” I read both the Rogoff commentary and the Reinhart, Reinhart, and Rogoff (2012) paper that it links to.
I did learn one thing—it turns out that my earlier post about the likely provenance of a Rogoff claim about the potential damage from high public debt isn’t quite right—but the new provenance of this claim isn’t right either.
There’s not much particularly new in either piece. Instead, they recycle the finding that, looked at over several centuries, there is an odd threshold of debt-to-GDP ratios—90 percent—above which growth runs about 1 percentage point per year lower than it does below the threshold. They then do the arithmetic and argue that every year the public debt-to-GDP ratio is over 90 percent is a year of GDP growth 1 percentage point lower than it would otherwise be and voila, the damage from high debt has been documented.
Or not. We’ve already noted why we think this threshold, while it might be an interesting (if odd and deeply atheoretical) curiosity, has no relevance to current U.S. policy debates (and yet somehow the 90 percent scare-mongering won’t stop—see David Brooks’ latest invocation of it).
The main reason for this judgment is that the causality between slow growth and high public debt runs strongly in both directions. There have almost surely been times when exogenous decisions to add to public debt have hampered countries’ growth. But there have also surely been times (and many more, in my guess) when slow growth has led directly to rising debt-to-GDP ratios. And when this is the case, noting a simple negative correlation between GDP growth and a particular debt-to-GDP threshold tells us nothing about how dangerous—or, more likely, useful—a policy of further fiscal support would be.
And, there is no doubt that the increase in public debt over the past four years in the U.S. is directly the result of the Great Recession, and not a cause of it. Further, adding to this public debt going forward (so long as it was intelligently spent on job creation) would not only not harm the economy, it would reduce the debt/GDP ratio.
To be blunter, applying results gleaned from a sample in which over 80 percent of the high-debt country-years began before World War II—as well as from other clear-as-day cases where high debt was driven by slow growth (Japan in the 1990s and 2000s)—does nothing to aid policy analysis about fiscal support in the here-and-now.
The authors even miss an obvious clue regarding those episodes in their data where high debt is driven by slow growth—the failure of elevated public debt to lead to upward pressure on interest rates. High public debt-to-GDP ratios combined with no upward pressure on interest rates is a key tell that it’s likely that below-potential growth is driving the debt ratio and not vice-versa.
Further, if interest rates are not pushed up by rising debt-to-GDP ratios, there is no mechanism by which rising debt can impede growth. The authors gloss over this, noting only that “the growth-reducing effects of public debt are apparently not transmitted exclusively through high real interest rates.” More likely, the growth-reducing effects of public debt are simply nonexistent when economies are deeply depressed.
Lastly, the paper makes a mistake that I think is key to understanding why policymakers keep getting blindsided by bad news (like the last two months’ poor job growth) that just should not be that surprising: it assumes that economies naturally heal themselves from recessions, and quite quickly.
One hallmark of the first 30 years after World War II was the “countervailing power” of labor unions (not just at the bargaining table but in local, state, and national politics) and their ability to raise wages and working standards for members and non-members alike. There were stark limits to union power, which was concentrated in some sectors of the economy and in some regions of the country, but the basic logic of the postwar accord was clear: into the early 1970s, both median compensation and labor productivity roughly doubled. Labor unions both sustained prosperity and ensured that it was shared.

The impact of all of this on wage or income inequality is a complex question (shaped by skill, occupation, education, and demographics), but the bottom line is clear: there is a demonstrable wage premium for union workers. That premium is more pronounced for lesser-skilled workers, and it even spills over to benefit non-union workers. The wage effect alone understates the union contribution to shared prosperity. Unions at midcentury also exerted considerable political clout, sustaining other political and economic choices (the minimum wage, job-based health benefits, Social Security, high marginal tax rates, etc.) that dampened inequality. And unions not only raise the wage floor but can also lower the ceiling; union bargaining power has been shown to moderate the compensation of executives at unionized firms.
Over the second 30 years post-WWII, an era marked by the impasse over labor law reform in 1978, the Chrysler bailout in 1979 (which set the template for “too big to fail” corporate rescues built around deep concessions from workers), and the Reagan administration’s determination to “zap labor” into submission, labor’s bargaining power collapsed. The consequences are driven home by the two graphs below. Figure 1 simply juxtaposes the historical trajectory of union density and the income share claimed by the richest 10 percent of Americans. Early in the century, the share of the American workforce that belonged to a union was meager, barely 10 percent. At the same time, inequality was stark: the share of national income going to the richest 10 percent of Americans stood at nearly 40 percent. This gap widened in the 1920s. But in 1935, the New Deal granted workers basic collective bargaining rights; over the next decade, union membership grew dramatically, followed by an equally dramatic decline in income inequality. This yielded an era of broadly shared prosperity, running from the 1940s into the 1970s. After that, however, unions came under attack in the workplace, in the courts, and in public policy. As a result, union membership has fallen and income inequality has worsened, reaching levels not seen since the 1920s.
By most estimates, declining unionization accounted for about a third of the increase in inequality in the 1980s and 1990s. This is underscored by Figure 2, which plots income inequality (the Gini coefficient) against union coverage (the share of the workforce covered by union contracts) by state, for 1979, 1989, 1999, and 2009. The relationship between union coverage and inequality varies widely by state. In 1979, union stalwarts in the northeast and Rust Belt combined high rates of union coverage with relatively low rates of inequality, while just the opposite held true for the southern “right to work” states. A large swath of states, including the upper Midwest, the mountain west, and the less urban industrialized states of the northeast, showed lower-than-national rates of inequality at union coverage rates a bit above or below the national rate. More important, as we plot the same relationship in 1989, 1999, and 2009, those states move as a group toward the lower-coverage, higher-inequality corner of the graph. The relationship between declining union coverage and rising inequality is starkest in the earlier years (between 1979 and 1989). After 1999, union coverage had bottomed out in most states, and changes in state-level Gini coefficients are clearly driven by other factors, such as financialization and the real estate bubble.