This methodology document discusses the data and methods used in the Economic Policy Institute (EPI)/Center on Wage and Employment Dynamics (CWED) estimation of the teacher pay penalty—both the weekly wage estimates and the analysis of benefits to compute the adjustments required to estimate the compensation penalty. We highlight and assess the improvements in sample selection and regression specification in the results reported in our 2019 study, The Teacher Weekly Wage Penalty Hit 21.4 Percent in 2018, a Record High: Trends in the Teacher Wage and Compensation Penalties through 2018. This document duplicates Appendix A in that 2019 study (Allegretto and Mishel 2019) and draws on and updates the documentation found in The Teaching Penalty by Allegretto, Corcoran, and Mishel (2008).
There are two measurement issues in any discussion of teacher pay. The first is that teachers have the “summers off,” so annual earnings are an inappropriate guide for wage comparisons. The second is that teachers have good health and pension benefits that must be taken into account. We directly address these issues, the first by examining the weekly earnings of teachers compared with other college graduates and the second by adjusting our estimates of the weekly wage penalty for differences in benefits. This methodology document provides details on our choice of weekly wages as our wage metric and on our methods for measuring benefit differences and adjusting the wage penalty to reflect a total compensation penalty.
Many other issues are covered as well. We discuss all matters related to our use of the Current Population Survey, including the choice of sample; the inconsistencies created by the 1994 redesign and by 1992 changes in how education level is measured; our exclusion of observations with imputed wage data; and adjustments made to top-coded data in response to significant growth in the number of observations with top-coded weekly wages. We detail and explain the improvements in sample selection and regression specification and assess the impact of these choices on the level and trend of the estimated teacher weekly wage penalty. We also describe the regression specification and method for estimating a teacher weekly wage penalty for each state. Last, we provide the details on the measurement of benefits and our method of computing the teacher “benefits advantage” that we use to adjust the weekly wage penalty and identify the teacher compensation (wage and benefits) penalty.
Current Population Survey sample
As noted in the body of the report, we use individual microdata from the Current Population Survey (CPS) from the Bureau of Labor Statistics (BLS), specifically the “Outgoing Rotation Group” (ORG) sample, or CPS-ORG, for our wage analysis. The CPS is the monthly survey administered by the BLS to more than 60,000 households to measure and report on unemployment. The CPS-ORG data used here are based on reports from roughly 150,000 workers each year. These data are among the most widely used by economists to study an array of labor market topics including wages, employment, and unemployment. The CPS-ORG data are particularly useful due to their large sample and the inclusion of information on weekly wages. Since 1994, the CPS-ORG survey has asked respondents to report their wages on an hourly, weekly, biweekly, monthly, or annual basis (whichever a respondent finds most appropriate), from which the BLS then derives the weekly wage.
Our analysis restricts the sample to all full-time college graduates between the ages of 18 and 64 (defining “full-time” as working at least 35 hours per week). Teachers are identified using detailed census occupation codes, and the sample includes only elementary, middle, and secondary teachers (prekindergarten and kindergarten teachers, adult educators, and special education teachers are excluded). This analysis also focuses only on public school teachers (private school teachers—who, on average, earn less than public school teachers—are excluded). In earlier work, our regressions were estimated using a sample that included workers of all education levels rather than being restricted to college graduates: We explain our reasoning and assess the impact in the section on changes to sample selection and regression specification.
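The sample restrictions above can be sketched as a simple filter. This is an illustrative sketch only: the field names, education labels, and occupation labels are placeholders, not actual CPS-ORG variable names or census occupation codes.

```python
# Illustrative sketch of the sample-selection rules; field names and the
# occupation labels in TEACHER_OCCS are placeholders, not actual census codes.
TEACHER_OCCS = {"elementary", "middle", "secondary"}  # hypothetical labels

def in_sample(person):
    """Full-time (35+ hours/week) college graduates, ages 18-64."""
    return (18 <= person["age"] <= 64
            and person["weekly_hours"] >= 35
            and person["education"] in {"BA", "MA", "professional", "PhD"})

def is_public_teacher(person):
    """Public school elementary, middle, or secondary teachers only."""
    return person["occupation"] in TEACHER_OCCS and person["sector"] == "public"

workers = [
    {"age": 40, "weekly_hours": 40, "education": "MA",
     "occupation": "secondary", "sector": "public"},
    {"age": 35, "weekly_hours": 30, "education": "BA",
     "occupation": "elementary", "sector": "public"},   # part-time: dropped
    {"age": 50, "weekly_hours": 45, "education": "BA",
     "occupation": "engineer", "sector": "private"},    # comparison group
]
sample = [w for w in workers if in_sample(w)]
teachers = [w for w in sample if is_public_teacher(w)]
print(len(sample), len(teachers))  # 2 workers remain; 1 is a public teacher
```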
Addressing inconsistencies in the CPS historical series and ‘benchmarking to 1996’
There are two inconsistencies in the historical CPS data that affect our estimates. The first is that the coding scheme for education changed in 1992. Prior to 1992 the survey asked for the “highest grade of school attended,” and in 1992 the question was changed to “highest level of school completed or the highest degree received.” Thus, the data in years prior to 1992 provide a measure of the number of years in school of each worker, while the data starting in 1992 provide the level of the highest degree attained. EPI’s Methodology for Measuring Wages and Benefits (EPI 2019) describes how this inconsistency is handled by adjusting the pre-1992 data:
The challenge of making a consistent wage series by education level is to either make the new data consistent with the past or to make the old “years of schooling” data consistent with the new educational attainment measures. To that end, we assume that completing 12 years of schooling equates with a high school diploma, 16 years with a college degree, and 18 or more years with an advanced degree. Anything between and including 13 and 15 are coded as “some college.” We redistribute the “17s” to the “16 years” category (presumably a four-year degree).
The second inconsistency is due to the major CPS redesign in 1994. For the purposes of our study, the most significant change is that, starting in 1994, respondents may choose whether to report their earnings on an hourly, weekly, biweekly, monthly, or annual basis—whichever earnings interval is easiest for them to report. In pre-redesign years, respondents were specifically asked to report their weekly wages from the last week.
As reported by Allegretto, Corcoran, and Mishel (2008),
The change in the CPS survey question on earnings appears to have resulted in a significantly higher weekly wage among teachers, as teacher wages rose 10.2 percent between 1993 and 1994 (the year the redesigned survey was first used)—far faster than the 2.2 percent increase among nonteacher college graduates. The additional 8 percent wage growth among teachers appears to represent the effects of a correction for the underlying bias in the pre-1994 survey. Consequently, our estimates incorporate the pre-1994 data in a way that does not allow this bias to be built into our results. (47)
To avoid the bias in the pre-1994 data, we benchmark our historical weekly wage penalty series to 1996 levels; we use 1996 because it is the first post-redesign year for which the allocation flags needed to exclude imputed data are fully available. This takes two steps. The first is to link 1996 estimates to 1993, the last pre-redesign year. The challenge is that we do not have data for 1994 and 1995, and there is a 1993–1994 inconsistency. Our method is to use the changes in the estimated teacher weekly wage penalty series in the March CPS to link 1993 and 1996, using the estimates presented in Allegretto, Corcoran, and Mishel 2008, Table 3: The teacher weekly wage penalty changed by -1.0, -4.2, and -2.0 percentage points for all teachers, for women teachers, and for men teachers, respectively. This provides an estimate of the teacher wage penalty in 1993 benchmarked to 1996 levels. The second step is to use the changes in the estimated teacher weekly wage penalty for the years 1979 to 1993 to backcast from the 1993 estimate to each prior year. That is, we use our estimates for 1992 and 1993 to compute the percentage-point change and add that to the benchmarked 1993 level to obtain a benchmarked estimate for 1992. The same process yields benchmarked estimates for the entire 1979–1993 period. For more information on CPS coding schemes and the 1994 redesign, see Cohany, Polivka, and Rothgeb 1994.
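The two-step benchmarking can be sketched as follows. The March CPS change for all teachers (-1.0 percentage points) comes from the text above; the 1996 level and the year-to-year ORG changes below are made-up placeholders, not the study's estimates.

```python
# Step 1: benchmark the 1993 penalty to 1996 levels via the March CPS change.
march_change_1993_to_1996 = -1.0   # pp change for all teachers (from Table 3)
penalty_1996 = -4.3                # hypothetical ORG estimate, in percent

penalty_1993 = penalty_1996 - march_change_1993_to_1996  # benchmarked 1993

# Step 2: backcast earlier years by subtracting each year's estimated change
# (year t to t+1, from the unbenchmarked ORG series) from the benchmarked
# level of the following year. Changes here are fabricated for illustration.
org_change = {1992: -0.5, 1991: 0.3}  # pp change from year t to year t+1
benchmarked = {1993: penalty_1993}
for year in sorted(org_change, reverse=True):
    benchmarked[year] = benchmarked[year + 1] - org_change[year]
print(benchmarked)
```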
The CPS-ORG data used in this analysis are nonimputed data—we have used only nonimputed data since the inception of this series in 2004. When a survey respondent fails to report any earnings, the BLS imputes their earnings (imputed earnings are also referred to as allocated earnings). The imputation procedure is based on a Census Bureau “hot deck” methodology that finds a respondent or “donor” in the survey who closely matches the nonrespondent in characteristics such as location, age, race, and education. The problem is that the CPS-ORG does not match on detailed teacher occupations; it instead matches on a broad set of occupational categories. Thus, more often than not, nonresponding teachers are assigned the average earnings of college graduates in higher-paid nonteacher occupations. Imputed teacher earnings are therefore systematically overstated, which creates a systematic bias in the comparison of teacher earnings with those of other professionals—a bias that effectively attenuates the teacher disadvantage.1
In addition, the share of CPS-ORG earnings data that are imputed has grown markedly over time; hence, the bias is exacerbated. In 1979, for our original sample, which included workers with less than a bachelor’s degree, imputed earnings data in the CPS-ORG made up 17 percent of the sample; by 2000, imputations accounted for 33 percent of the sample; and in 2018, they made up 41 percent of the original sample. As mentioned, the implications for our analysis of the teacher weekly wage penalty are significant. In the early years, 1979 through 1993, the differential between the teacher weekly wage penalty found using all the data available (without regard for imputations) and the penalty found by analyzing only nonimputed observations was at most 2 percentage points—meaning that the inclusion of imputed data lessened the teacher weekly wage penalty by 2 percentage points or less. But post-1996, the differential steadily grew, and in 2015 the differential was larger than ever. For all teachers, the teacher weekly wage penalty of 19.0 percent for 2015 reported in Figure B in Allegretto and Mishel 2019 (estimated without imputed data) would be mitigated to a 10.0 percent penalty had imputations been included—thereby reducing the penalty by nearly half.
As mentioned above, our current methodology restricts the sample to workers with at least a bachelor’s degree. The shares of imputed data in this new sample of teachers and other college graduates are qualitatively similar to the shares above. Over the more consistent data period, from 1996 to 2018, allocated data increased from 18 percent to 30 percent for teachers and from 24 percent to 39 percent for the sample of other college graduates. Whether we analyze our previous, larger sample of workers or our new sample restricted to those with at least a bachelor’s degree, the problem of imputed data has substantially worsened over time.
BLS allocation flags are not available for 1994 and are available for only the last four months of 1995. Because we cannot limit our analysis to nonimputed data for 1994 and 1995, we therefore do not report results for those years. In the past we have extrapolated results for these two years by comparing estimates using all data available with estimates using nonimputed data only. We then compared the two sets of estimates for the years just prior to and just after 1994 and 1995. The results gave us a rough expectation of what the teacher weekly wage penalties would be if nonimputed data were available for 1994 and 1995. However, in this update we no longer make these two guesstimates. When the time frame of our analysis was much shorter, we felt it was important to try to provide informed guesses for these two years—but we no longer think it is important to do so. Generally, comparisons of the time periods pre-1994 and post-1995 are at best suggestive, as the 1994 CPS redesign was substantial and other variables such as imputation flags and educational attainment coding are not consistent between the two time periods.
Our analysis of the relative wage of teachers relies on comparisons of weekly earnings rather than annual earnings (the approach taken by some authors, e.g., Hanushek and Rivkin 1997; Temin 2002, 2003; Greene and Winters 2007; Podgursky and Tongrut 2006) or hourly earnings. As discussed in our prior work, we elect to use weekly wages rather than annual earnings to avoid measurement issues regarding how to handle annual weeks worked (to, for example, account for teachers’ traditional “summers off”). We elect weekly wages rather than hourly earnings to avoid controversies over the number of hours teachers work per week. We note that a Scholastic–Bill & Melinda Gates Foundation survey found that “teachers work an average of 10 hours and 40 minutes a day, three hours and 20 minutes beyond the average required work day in public schools nationwide” (Scholastic 2012).
It is often noted that the annual earnings of teachers cannot be directly compared with those of nonteachers given that teachers are typically only contracted to work a nine-month year. But differences arise over exactly how much time teachers devote to their position outside of their nine contracted months of teaching. Teachers spend some of their summer months in class preparation, professional development, or other activities expected of a professional teacher. Teachers who may wish to earn additional income during the summer months can often do so but are unlikely to be able to earn at the same wage rate as in their teaching role (so having a nine-month salary is a disadvantage in attaining an annual salary target). Similarly, attempts to compare the hourly wages of teachers and other professionals have resulted in considerable controversy by setting off an unproductive debate about the number of hours teachers work at home versus other professionals.2
As we note in Allegretto and Mishel 2019, such decisions regarding the wage interval (weekly, annual, or hourly) become mostly irrelevant when considering changes in relative wages over time since a constant bias is consistent with identifying an accurate trend. For instance, a measure can be biased in terms of levels (a thermometer, say, may be off by two degrees) but could still provide accurate information on trends (how much the temperature rose may be accurately discerned with either a precise or a consistently biased thermometer). Similarly, changes in relative wages are expected to be similar as long as the relative work time (between teachers and other college graduates) remains constant. For example, if the ratio of weekly hours worked by teachers relative to those worked by comparable workers remains constant over time, then estimates of changes in relative hourly wages will be the same as for relative weekly wages. Similarly, estimated changes in relative annual earnings will parallel those for weekly earnings as long as the annual weeks worked by teachers have not changed relative to those of other college graduates.
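The thermometer point can be made numerically: a constant proportional bias in measured teacher wages shifts the level of the relative wage but cancels out of its trend. All figures here are invented for illustration.

```python
# A constant proportional bias changes levels but not trends: the bias term
# divides out of the ratio of the two years. All figures are invented.
true_relative = {2000: 0.85, 2018: 0.79}   # teacher wage / nonteacher wage
bias = 0.95                                # measure understates teachers by 5%
measured = {yr: r * bias for yr, r in true_relative.items()}

true_trend = true_relative[2018] / true_relative[2000]
measured_trend = measured[2018] / measured[2000]
# Levels differ, but the trend is identical (up to floating-point noise).
print(abs(true_trend - measured_trend) < 1e-12)
```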
Some researchers (e.g., Podgursky and Tongrut 2006) have contended that the use of the CPS-ORG weekly wage data downwardly biases teacher earnings: They claim that teachers report a weekly wage that is actually an annual salary divided over a full year rather than the partial year they actually work, which exaggerates the teacher weekly wage penalty relative to other comparable workers. This issue is particularly relevant to CPS data prior to the 1994 CPS redesign. In Allegretto, Corcoran, and Mishel 2008, we benchmarked the CPS-ORG wage data to annual data from the Annual Social and Economic Supplement (ASEC) of the CPS, the “March CPS.” This extensive benchmarking exercise provided validation that the CPS-ORG wage data are consistent with the annual March data, which Hanushek and Rivkin (1997, 2004), Temin (2002, 2003), and Podgursky and Tongrut (2006) have used in their analyses of teacher wage trends. As expected, the annual wage penalty is just the weekly wage penalty multiplied by the ratio of teacher and nonteacher annual weeks worked. We are even more confident in the post-redesign CPS data because the redesigned questions allow respondents to provide wage data for a variety of reporting periods—hourly, weekly, biweekly, monthly, or annual. The BLS uses the reported wage data to compute weekly wages based on information on weeks worked provided by respondents. Therefore, the potential problem of teachers reporting weekly wages earned in nine months but received over the full year was lessened by the redesign.
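The accounting identity noted above—relative annual earnings equal the relative weekly wage times relative annual weeks worked—can be illustrated with invented numbers:

```python
# Illustrative only: relative annual earnings = relative weekly wage
# * relative annual weeks worked. All values are invented.
weekly_ratio = 0.80        # teacher weekly wage / comparison weekly wage
weeks_ratio = 38 / 48      # teacher annual weeks / comparison annual weeks
annual_ratio = weekly_ratio * weeks_ratio

weekly_penalty = weekly_ratio - 1   # -20% weekly wage penalty
annual_penalty = annual_ratio - 1   # deeper penalty once weeks are included
print(round(100 * weekly_penalty, 1), round(100 * annual_penalty, 1))
```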
Our 1979 through 1993 benchmarking exercise leaves little doubt that there has been deterioration in the relative earnings of teachers over time. Moreover, our use of weekly wage comparisons in all of our work on teacher pay allows us to avoid unproductive discussions of work years, summers off, and so on. Allegretto, Corcoran, and Mishel 2008 also show in chapter 2 that the long-term trends in the March CPS, decennial census, and CPS-ORG data all yield similar findings regarding the relative erosion of teacher wages.
Adjusting observations with BLS top-coded weekly wages
To protect the confidentiality of respondents, public-use CPS data have assigned top codes for each source of income that respondents report and for the wage data in the ORG files. In our prior analyses, we did not make any adjustments to weekly wages to account for top-coding. Failure to account for top-coding has generated a growing understatement of college graduate wages as more and more observations have become top-coded in the last 20 years. We therefore make adjustments to the top-coded data in the analyses reported here; below, we provide an assessment of the impact of this decision on our estimates of the level and trend of the teacher weekly wage penalty.
For the time period of our study, 1979 through 2018, the BLS top code was increased only twice: the assigned value rose in 1989 from $999 to $1,923 per week, and again in 1998 to $2,884.61.
The share of observations that hit the top code grew considerably over the long periods between updates. During the first period, from 1979 through 1988, when the top-code value was $999, the share of top-coded data for all wage and salary workers ages 18–64 increased from 0.6 percent to 4.6 percent. Over this same period, the share of public school teachers with top-coded weekly wages increased from 0.1 percent to 3.7 percent. Importantly, over the same time, the shares for other college graduates increased much more, from 2.6 percent to 16.7 percent, reflecting the growth in wages at the top of the distribution along with the infrequent updating of the BLS-assigned amount. As expected, when the assigned top-code value was increased from $999 to $1,923 in 1989, the top-code share of observations plummeted—falling to 0.5 percent, 0.0 percent, and 2.4 percent for the whole sample, for teachers, and for other college graduates, respectively.
The shares then increased to 1.7 percent, 1.0 percent, and 6.5 percent in 1997, the final year weekly wages were top-coded at $1,923. In 1998, the top-code value was increased to $2,884.61, where it stands today. Over the last 21 years, the top-code share has increased from 0.6 percent to 4.2 percent for the overall sample, from 0.3 percent to 1.3 percent for teachers, and from 2.3 percent to 12.1 percent for other college graduates.
The BLS-assigned top code essentially truncates the wage distribution at the top and creates a downward bias in measured mean weekly wages; the issue becomes more pronounced as the share of observations with top-coded weekly wages increases. To ignore the issue of top codes is to artificially attenuate the teacher disadvantage, because far fewer teacher observations are top-coded than observations among other college graduates.
For our study, we replace original BLS-assigned top-code values with a Pareto-distribution implied mean for the upper tail of the weekly earnings distribution. The method is further described in EPI’s Methodology for Measuring Wages and Benefits (EPI 2019).
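A minimal sketch of the Pareto-tail replacement is below. The shape parameter is backed out from the shares of workers above two wage cutoffs—one common estimator, not necessarily EPI's exact procedure—and all of the tail shares and the lower cutoff are illustrative assumptions; only the $2,884.61 top-code value comes from the text above.

```python
import math

def pareto_shape(p_lo, p_hi, cut_lo, cut_hi):
    """Shape alpha solving p = C * x**(-alpha) at two wage cutoffs."""
    return math.log(p_lo / p_hi) / math.log(cut_hi / cut_lo)

def pareto_mean_above(topcode, alpha):
    """Mean of a Pareto tail with lower bound at the top-code value."""
    assert alpha > 1, "tail mean is finite only for alpha > 1"
    return alpha * topcode / (alpha - 1)

# Hypothetical tail shares: 10% of weekly wages above $1,500 and 2% above
# the $2,884.61 top code in force since 1998.
alpha = pareto_shape(p_lo=0.10, p_hi=0.02, cut_lo=1500.0, cut_hi=2884.61)
replacement = pareto_mean_above(2884.61, alpha)  # value assigned to top-coded obs
print(round(alpha, 2), round(replacement, 2))
```

Each top-coded observation is then assigned this implied tail mean rather than the BLS-assigned value, which undoes the truncation of the distribution.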
Improvements in sample selection and specification
We have been tracking trends in teacher pay and estimating relative teacher wage and compensation penalties since 2004 when we released our first book on the subject, How Does Teacher Pay Compare? We made two substantive changes in our approach in our second book, The Teaching Penalty, released in 2008. For this current study, we have introduced seven significant modifications to the sample, data, and regression specifications. This section details these changes and their impact on the level and trend of the teacher weekly wage penalty.
The effects of the two methodological changes made for The Teaching Penalty are documented in Appendix A of that book. First, we excluded private-sector teachers from our estimated teacher weekly wage penalties so that the estimates were restricted to public school teachers. Teacher wages in the private sector are, on average, substantially lower than in the public sector, as documented by Allegretto and Tojerow (2014). Thus, we added a control in our regression model to separate out the weekly wage penalties of public-sector teachers from those of teachers in the private sector. This change resulted in a smaller (less negative) public-sector teacher weekly wage penalty than originally estimated in our 2004 book.
A second improvement was to include a finer set of controls on educational attainment in the regression model, adding separate controls for B.A. and M.A. degrees rather than simply controlling for having a college degree. This change was made because a much larger share of teachers hold a master’s degree or higher compared with other college graduates. This change in the specification of education increased the relative teacher weekly wage penalty (making it more negative). The two changes together decreased the original teacher disadvantage slightly (the penalty became less negative). For example, in the 2008 book, we concluded that the original 2004 teacher weekly wage penalty of 16.3 percent became 15.1 percent with the simultaneous implementation of the two changes. For all teacher pay studies we published from 2008 through 2018, we made no changes to our research design or sample construction.
For our current study, we have revised our sample strategy and regression specification to improve the estimates. We explain our reasoning for each decision and assess the impact on our estimates of the teacher weekly wage penalty. Appendix Table A1 provides a summary of the changes we have incorporated into our new approach compared with our other analyses since the publication of The Teaching Penalty in 2008.
Changes in specification and sample restrictions
| Sample restrictions and regression specification | Original sample and model used in previous work | Sample and model used in this paper |
|---|---|---|
| Dependent variable: log weekly wage | Yes | Yes |
| Self-employed workers dropped | Yes | Yes |
| Weekly hours ≥ 35 | Yes | Yes |
| Imputed CPS wage data dropped | Yes | Yes |
| Age range 18–64 | Yes | Yes |
| Top codes adjustedᵃ | No | Yes |
| Education sample restriction | None | B.A. or higher |
| Education category controlsᵇ | 6 | 4 |
| Race/ethnic controls: w, b, h, oᶜ | Yes | Yes |
| Age quartic controls | Yes | Yes |
| Gender control (pooled regressions) | Yes | Yes |
| Private-sector teacher indicator | Yes | Yes |
| Public-sector teacher indicator | Yes | Yes |
| Include CPS-ORG weight in regressions | No | Yes |

ᵃ We adjust top codes using a Pareto distribution.
ᵇ Six groups: less than high school, high school diploma, some college or associate degree, bachelor’s degree, master’s degree, professional degree/Ph.D. (combined). Four groups: bachelor’s degree, master’s degree, professional degree, Ph.D.
ᶜ White, black, Hispanic, other. Race/ethnicity categories are mutually exclusive (i.e., white non-Hispanic, black non-Hispanic, other non-Hispanic, and Hispanic any race).
In Allegretto and Mishel 2019, we again restrict the sample to all full-time wage and salary workers between the ages of 18 and 64 (defining “full-time” as working at least 35 hours per week), as we have in the past. We also limit the sample to nonimputed earnings data, as discussed above. As usual, teachers are identified using detailed census occupation codes, and the sample includes only elementary, middle, and secondary teachers (prekindergarten and kindergarten teachers, adult educators, and special education teachers are excluded). We continue to separate out public and private teacher outcomes, thereby limiting our reported estimates of the teacher weekly wage penalty to public school teachers. We continue to use log weekly wage as the dependent variable.
We further restrict the sample to include only those workers with at least a bachelor’s degree. In some of our earlier work, we estimated wage penalties back to 1960, when teachers were less likely to have a college degree. But today public school teachers are required to have at least a B.A.; thus, we limit the sample to those workers who have at least a B.A. degree. We also expand the categories of education indicator variables for highest degree attained—to include M.A., professional degree, and Ph.D.—in the regressions. Previously, the only indicator beyond the B.A. level was M.A. These changes allow for a cleaner comparison of teachers with other college graduates.
Additionally, we include finer geographical controls. Regression specifications now include geographic controls at the state (and D.C.) level instead of regional level controls (for four regions). For the first time, we use top-code-adjusted weekly wages that follow a Pareto distribution, as explained above, to avoid the increasing understatement of college graduate wages as more observations are top-coded. We now weight the observations in our regressions using the CPS-ORG weights. This ensures our results are nationally representative and correspond to the descriptive weekly wage data used in Figure A in the main text. In accordance with past work, we continue to include regression controls for race (four categories), marital status (dichotomous indicator), and age as a quartic.
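The role of the ORG weights can be illustrated with a stripped-down version of the regression. With log weekly wage as the dependent variable and only a public-teacher indicator on the right-hand side, weighted least squares reduces to a weighted difference in mean log wages; the full specification adds the education, race/ethnicity, gender, marital-status, age-quartic, and state controls described above. All data below are invented.

```python
import math

# Stripped-down weighted estimate: the coefficient on a lone teacher
# indicator equals the weighted gap in mean log weekly wages. Data invented.
obs = [  # (weekly wage, is_public_teacher, CPS-ORG weight)
    (1100.0, 1, 2.0), (1150.0, 1, 1.0),
    (1400.0, 0, 1.5), (1500.0, 0, 2.5),
]

def weighted_mean(pairs):
    return sum(v * w for v, w in pairs) / sum(w for _, w in pairs)

log_teacher = [(math.log(w), wt) for w, d, wt in obs if d == 1]
log_other = [(math.log(w), wt) for w, d, wt in obs if d == 0]
coef = weighted_mean(log_teacher) - weighted_mean(log_other)  # log-point gap
penalty = math.exp(coef) - 1   # convert log points to a percent penalty
print(round(100 * penalty, 1))
```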
To gain further insight into how these changes affect our estimates, we compare each separately and all simultaneously to our original estimated teacher weekly wage penalties as reported in our earlier work (Allegretto and Mishel 2018)—presented as model (1) in Appendix Table A2. We omit any analysis of expanding the education controls because the impact of doing so does not affect the estimated weekly wage penalty by even 0.1 percentage points.
The top panel of Appendix Table A2 reports estimates of the weekly wage penalty for 1979, the beginning of our analysis; for 1993, just before the 1994 redesign and the two-year period in which imputations are incomplete or not available; and then for selected years from 1996 onward. These data allow us to assess the impact of sample and regression specification changes on the levels of the weekly wage penalty in key years. The last four columns report percentage point change in the weekly wage penalty over various time periods to assess the impact of sample and regression specification changes on the trend of the weekly wage penalty. The bottom panel of the table reports differences between the new and original estimates for the selected years and for the trends in particular periods. Appendix Figure A1 shows the time series of the estimated teacher weekly wage penalty for each year from 1979 to 2018 for every specification choice, except for the years for which we do not have data (1994–1995).
Estimated teacher wage penalty with alternative specifications, 1979–2018
The first eight numeric columns report the level of the wage penalty; the last four report the percentage-point change in the penalty.

| | Model | 1979* | 1993* | 1996 | — | — | — | — | 2018 | 1979–2018 | 1996–2018 | — | — |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | **A. Estimated weekly wage penalty** | | | | | | | | | | | | |
| 2 | Original & adjust top codes | -3.5% | -4.7% | -5.7% | -14.6% | -14.8% | -14.4% | -22.4% | -22.9% | -19.4 | -17.2 | -8.1 | -8.5 |
| 3 | Original & replace region with state effects | -1.6% | -1.6% | -2.6% | -11.9% | -11.1% | -10.8% | -16.7% | -16.8% | -15.2 | -14.2 | -5.7 | -6.0 |
| 4 | Original but exclude noncollege workers | -6.8% | -5.0% | -6.0% | -13.8% | -13.4% | -12.4% | -19.3% | -19.5% | -12.7 | -13.5 | -6.1 | -7.1 |
| 5 | Original using ORG weights | -3.2% | -3.7% | -4.7% | -13.0% | -12.2% | -11.6% | -17.2% | -17.3% | -14.1 | -12.6 | -5.2 | -5.8 |
| 6 | All changes described in (2)–(5) | -7.3% | -5.3% | -6.3% | -13.0% | -13.0% | -13.5% | -20.7% | -21.4% | -14.1 | -15.1 | -8.4 | -7.9 |
| | **B. Difference with original (1)** | | | | | | | | | | | | |
| 2 | Original & adjust top codes | -1.0 | -1.4 | -1.4 | -1.0 | -1.8 | -2.3 | -3.7 | -4.2 | -3.2 | -2.8 | -2.4 | -1.9 |
| 3 | Original & replace region with state effects | 0.9 | 1.7 | 1.7 | 1.6 | 1.9 | 1.3 | 2.0 | 1.9 | 1.0 | 0.2 | 0.1 | 0.6 |
| 4 | Original but exclude noncollege workers | -4.3 | -1.7 | -1.7 | -0.3 | -0.4 | -0.3 | -0.6 | -0.8 | 3.5 | 0.9 | -0.4 | -0.5 |
| 5 | Original using ORG weights | -0.7 | -0.4 | -0.4 | 0.5 | 0.8 | 0.5 | 1.5 | 1.4 | 2.1 | 1.7 | 0.5 | 0.8 |
| 6 | All changes described in (2)–(5) | -4.8 | -2.0 | -2.0 | 0.6 | -0.1 | -1.4 | -2.0 | -2.7 | 2.1 | -0.8 | -2.7 | -1.3 |

\* Benchmarked estimate: 1993 equals the 1996 estimate plus the change in the wage penalty, 1993–1996, estimated with March CPS data; 1979 equals the derived 1993 value plus the changes in the wage penalty, 1979–1993, estimated with ORG data.
Source: Authors’ analysis of Current Population Survey Outgoing Rotation Group data
Appendix Table A2 and Appendix Figure A1 allow us to assess the impact of four changes: adjusting top codes; replacing region with state controls; excluding non-college-educated workers from the sample; and weighting each observation with the weights provided in the CPS-ORG data.
Appendix Figure A1 provides a visual depiction that compares and contrasts each of the six models over the time period of study. The first thing to notice in the figure is that the estimates from the six models are much less consistent with one another in the early period than after the 1994 redesign. This is due to many issues, but generally we believe that the post-1994 period provides more reliable estimates due to data improvements and consistency (see the discussion of the redesign above)—which is why we benchmark the pre-1994 estimates. Benchmarked estimates effectively reduce the teacher weekly wage disadvantage.
Estimated teacher weekly wage penalty with alternative specifications, 1979–2018
Model variations: Original model; Original & adjust top codes; Original & replace region with state effects; Original but exclude noncollege workers; Original using ORG weights; All changes
Notes: Figure shows regression-adjusted teacher weekly wage penalties of public school teachers (elementary, middle, and secondary) relative to other college graduates under alternative models. In all models, the dependent variable is (log) weekly wages. The original model includes indicator controls on public school teacher, private school teacher, gender, and married, along with indicator sets on education (M.A., professional degree, Ph.D.) and race/ethnicity (black, Hispanic, other); also included are age as a quartic and state fixed effects. Each subsequent model builds from the original sample and regression specification. Estimated relative teacher weekly wage penalties reported in this paper are from the final model that incorporates “All changes.” Estimates are omitted for 1994 and 1995, as imputation flags for these two years are incomplete or not available.
Source: Authors’ analysis of Current Population Survey Outgoing Rotation Group data
This analysis yields a few general conclusions. Two changes—the use of state rather than regional controls, and the weighting of observations in the period from 1996 to 2018—lower the teacher penalty (move it toward zero). The other changes (top-code adjustments, weighting of observations in the 1979–1993 period, and the exclusion of non-college-educated workers) increase the estimated teacher weekly wage penalty (making the estimate more negative). Regardless of the model chosen, however, the relative teacher wage penalty deteriorated considerably over time, as Appendix Figure A1 visually confirms. Thus, the worsening trend in teacher relative wages is evident no matter which model is chosen.
The two changes that have had the largest effect in the post-1994 redesign period are estimates from top-code-adjusted weekly wages and the incorporation of state fixed effects into the regression specification. Over time (1996 through 2018), using state fixed effects lowers the teacher penalty on a fairly consistent basis, but the impact of using the top-code adjustment increasingly makes the teacher wage penalty more negative. It is not surprising that the impact of making the top-code adjustment grows over time since the share of the sample with top-coded weekly wages has grown considerably.
The impact of excluding non-college-educated workers is relatively small in recent years but was relatively large in the early period (4.3 percentage points in 1979) and still sizable into the mid-1990s (1.7 percentage points).
In sum, our improved approach incorporating all of the changes (shown in row (6) of Appendix Table A2) estimates a larger relative teacher disadvantage, though the size of the increase differs across years. A clear deterioration of relative teacher wages is evident regardless of model. Using model (1) to estimate the teacher weekly wage penalty in 2018 yields an 18.7 percent estimate, while model (6) results in a 21.4 percent penalty.
Estimates of state-specific teacher weekly wage penalties
In Allegretto and Mishel 2019, we improve our estimates of the teacher weekly wage penalty in the states. In previous work, we relied on descriptive computations of the weekly wages of public school teachers relative to other college graduates, controlling for education levels. Specifically, we used data on the weekly wages of those with educational attainment of a B.A. degree or an M.A. degree for both teachers and other college graduates. We computed a weighted average wage for teachers and for other college graduates using the education composition of teachers as the weights. That is, if half the teachers had an M.A. degree and half had a B.A. degree, then we weighted the wages of other college graduates so that they also had an even split between those with B.A. degrees and those with M.A. degrees.
We report state-specific teacher weekly wage penalties using a regression-adjusted method that corresponds to how the national estimates are obtained and using the same sample: all wage and salary workers with a bachelor’s degree or higher, between the ages of 18 and 64, who report working 35 or more hours a week and who report hourly wages between $0.50 and $100 (in 1989 dollars). BLS top-coded data are adjusted using the same procedure described above, and the sample includes only observations with nonimputed wages. To increase the sample size for each state, we pooled five years of CPS-ORG data, 2014–2018.
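As an illustration of these sample restrictions, the filter below sketches the selection logic in Python. The field names (`college_grad`, `age`, `hours`, `hourly_wage_1989`, `wage_imputed`) are placeholders for this sketch, not actual CPS-ORG variable names.

```python
def in_sample(rec):
    """Apply the sample restrictions described above to one record.

    Field names are illustrative placeholders, not CPS-ORG variable names.
    """
    return (
        rec["college_grad"]                           # bachelor's degree or higher
        and 18 <= rec["age"] <= 64                    # working-age adults
        and rec["hours"] >= 35                        # full time: 35+ hours per week
        and 0.50 <= rec["hourly_wage_1989"] <= 100.0  # trim outlier wages (1989 dollars)
        and not rec["wage_imputed"]                   # drop allocated (imputed) wages
    )

records = [
    {"college_grad": True, "age": 40, "hours": 40,
     "hourly_wage_1989": 12.50, "wage_imputed": False},
    {"college_grad": True, "age": 40, "hours": 40,
     "hourly_wage_1989": 12.50, "wage_imputed": True},  # excluded: imputed wage
]
sample = [r for r in records if in_sample(r)]  # keeps only the first record
```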
One concern we had about this approach was whether there were sufficient data to obtain state-specific estimates. We initially examined this question using the pooled CPS-ORG data for 2013–2017. We found that D.C. had the smallest sample of public school teachers, 124 observations, and three states (Delaware, Hawaii, and Arizona) had between 133 and 139 observations. All of the remaining states had samples exceeding 150 observations. We assessed that sample sizes were sufficient. Nevertheless, in Appendix C we present the standard error, the 95 percent confidence interval, and the t-statistic for each state’s weekly wage penalty estimate so readers can judge the level of precision.
There are variations across the states in the character of the samples. The share of observations that have imputed wages varies greatly, with imputation rates ranging from 15.9 to 45.6 percent for teachers and from 23.9 to 54.4 percent for other college graduates. Similarly, the share of observations with BLS top-coded weekly wages varies greatly as well. The share of top-coded observations for teachers ranges from zero (in nine states) to 5.5 percent, for an average of 1.0 percent across all states and D.C. Top-code shares for other college graduates range from 3.7 percent to 17.3 percent, for an average of 8.4 percent.
To estimate state-specific teacher weekly wage penalties, we first fit a linear regression of log usual weekly earnings on a dummy variable for whether the respondent is a public school teacher and a complete set of its interactions with state fixed effects. The state-specific teacher weekly wage penalty is then the sum of the coefficient for being a public school teacher and the coefficient for the interaction with the particular state. Finally, to express the weekly wage penalty in percent terms, we exponentiate the sum and subtract one. In addition to the set of teacher and state interactions, the regression model also includes the same set of control variables employed to estimate the national weekly wage penalty: a gender dummy; a set of dummies for black, Hispanic, and other race; an age quartic; marital status; dummies for M.A., Ph.D., and professional degrees; and state fixed effects. The model also controls separately for a complete set of interactions between the respondent’s state and whether the respondent is a private school teacher.
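The final conversion step can be sketched as follows; the coefficients are hypothetical, since the actual values depend on the fitted regression:

```python
import math

def state_penalty(beta_teacher, beta_state_interaction):
    """Convert the sum of log-wage coefficients into a percent wage gap.

    With log weekly wages as the dependent variable, the state-specific
    teacher effect is exp(b_teacher + b_interaction) - 1.
    """
    return math.exp(beta_teacher + beta_state_interaction) - 1

# Hypothetical coefficients, for illustration only:
penalty = state_penalty(-0.20, -0.04)  # about -0.213, i.e., a 21.3 percent penalty
```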
Computing the benefits advantage
This section explains the computations required to assess the benefits advantage in each year. The benefits advantage reflects the edge K–12 teachers hold in benefits relative to civilian professionals, given a wage penalty estimated as detailed above. No microdata available to researchers contain both benefits and wages for teachers and other college graduates, so a compensation penalty cannot be estimated directly. Instead, we rely on a comparison of teachers and professionals in the aggregate to adjust our estimated teacher wage penalties.
The core data for these computations come from the BLS Employer Costs for Employee Compensation (ECEC) Historical Listing based on the National Compensation Survey (BLS 2019). Specifically, we pull data on employer costs per hour worked for detailed categories of compensation from the section “Primary, secondary, and special education school teachers” in BLS 2019, Table 7 (“State and local government workers”). We compare teacher benefits with those of civilian professionals from BLS 2019, Table 3. The category “civilian professionals”—which includes all private-sector workers and state and local public-sector workers but excludes federal government workers—is the broadest category available that corresponds with all college graduates. State and local government teachers closely correspond to our CPS-ORG sample of public school teachers.
“Benefits” in our analysis refers to the employer costs for health and life insurance, retirement plans, and payroll taxes (i.e., Social Security, unemployment insurance, and workers’ compensation). The remaining components of compensation are “W-2 wages,” a wage measure that corresponds to the wages captured in the CPS-ORG data used for estimating the wage penalty. W-2 wages are the wages reported to employees and to the Internal Revenue Service, and they include the following ECEC items: “direct wages,” defined by the BLS as “regular payments from the employer to the employee as compensation for straight-time hourly work, or for any salaried work performed”; “supplemental pay,” which includes premium pay for overtime, bonus pay, and profit-sharing; and “paid leave.” These data are used to compute the benefits share of compensation presented in Table 1 and the ratio of compensation to W-2 wages used in calculating the benefits advantage. It is important to note that the CPS-ORG definition of wages includes supplemental pay items and paid leave, so using the “direct wage” measure of wages from the ECEC would not provide a match to estimates of the wage penalty.
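The split between W-2 wages and benefits can be sketched as a small accounting function. The per-hour cost figures below are illustrative placeholders, not actual ECEC values:

```python
def benefits_share(costs):
    """Split ECEC per-hour employer costs into W-2 wages and benefits.

    W-2 wages = direct wages + supplemental pay + paid leave;
    benefits  = insurance + retirement + legally required payroll taxes.
    Returns (W-2 wages, benefits, benefits share of total compensation).
    """
    w2 = costs["direct_wages"] + costs["supplemental_pay"] + costs["paid_leave"]
    benefits = costs["insurance"] + costs["retirement"] + costs["payroll_taxes"]
    return w2, benefits, benefits / (w2 + benefits)

# Illustrative (not actual ECEC) hourly cost figures:
w2, benefits, share = benefits_share({
    "direct_wages": 30.00, "supplemental_pay": 1.00, "paid_leave": 3.00,
    "insurance": 4.00, "retirement": 3.00, "payroll_taxes": 2.00,
})
# Here W-2 wages are 34.00, benefits 9.00, so benefits are about 21 percent of
# compensation and the compensation-to-W-2-wage ratio is 43/34, about 1.26.
```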
Appendix Table A3 shows the calculations for identifying the benefits advantage. The basic data required, shown in Panel A, are the compensation-to-W-2-wages ratios for K–12 teachers and for professionals, plus the estimated weekly wage penalty for the particular year. The logic is simple. Start with the relative wage disadvantage, or penalty, of teachers—21.4 percent (as reported in Figure B in the main text). Based on this result, we set the wages of professionals to be equal to 100.0 and teacher wages equivalent to 78.6, as shown in Panel B of Appendix Table A3.
Appendix Table A3. Computing the benefits advantage, 2018

| |K–12 teachers|Professionals|
|A. Basic data|||
|Ratio: Compensation to W-2 wages|||
|Estimated weekly wage penalty|-21.4%||
|B. Benefits advantage analysis|||
|W-2 wages*|78.6|100.0|
|Compensation**|||
|Rescaled compensation***|86.9|100.0|
|C. Benefits advantage****|8.4%||
* 100 less estimated wage penalty
** Wage times compensation-to-wage ratio
*** Divide by professionals’ compensation, the benchmark, and multiply by 100
**** Subtract teachers’ W-2 wages from teachers’ rescaled compensation
Note: Numbers may not sum to totals due to rounding.
Sources: Benefits and wage data from Bureau of Labor Statistics Employer Costs for Employee Compensation data; wage penalty estimated with Current Population Survey Outgoing Rotation Group data
The corresponding compensation levels are computed using the compensation-to-W-2-wage ratio for each occupational group. These compensation levels are then rescaled so that professional compensation equals 100.0, showing that teacher compensation is 86.9 (a compensation disadvantage of 13.1 percent). The “benefits advantage,” the portion of the gap that is missed when relying solely on wage data rather than on compensation (wage and benefits) data, is thus 8.4 percent: the difference between the relative teacher compensation of 86.9 and the relative wage of 78.6. (The underlying, unrounded data generate an 8.4 percent benefits advantage, while the rounded-to-one-decimal figures indicate an 8.3 percent advantage.)
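The arithmetic behind Appendix Table A3 can be reproduced with a short function. The compensation-to-W-2-wage ratios passed in below are hypothetical placeholders, so the output will not match the published figures exactly:

```python
def benefits_advantage(wage_penalty, ratio_teachers, ratio_professionals):
    """Adjust a weekly wage penalty for benefit differences.

    wage_penalty: estimated relative wage gap (e.g., -0.214 for -21.4 percent).
    ratio_*: compensation-to-W-2-wage ratios from the ECEC.
    Returns (relative teacher compensation, benefits advantage), with
    professionals' wages and compensation each indexed to 100.
    """
    wage_teachers = 100.0 * (1.0 + wage_penalty)   # 78.6 when the penalty is -21.4%
    comp_teachers = wage_teachers * ratio_teachers
    comp_professionals = 100.0 * ratio_professionals
    rel_comp = comp_teachers / comp_professionals * 100.0  # rescaled compensation
    return rel_comp, rel_comp - wage_teachers

# Hypothetical ratios, for illustration only:
rel_comp, advantage = benefits_advantage(-0.214, 1.32, 1.19)
# With these inputs, relative teacher compensation is about 87.19 and the
# implied benefits advantage about 8.59 percentage points.
```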
Historical data issues
Two data discontinuities need to be addressed to develop a historical series. One is the discontinuity introduced by the CPS redesign in 1994. This discontinuity creates two problems: It makes CPS analysis of teacher wage data before and after 1994 inconsistent, and it makes it impossible to identify the observations that do not have imputed wage data in 1994 and 1995, preventing us from making estimates for those years consistent with later years. Our method for benchmarking to 1996 levels, described above, allows us to compile a series for the 1979–1993 years that is consistent before and after the 1994 redesign. However, we do not attempt to present estimates for 1994 or 1995.
The second discontinuity is that the ECEC occupational categories changed in 2004. There are consistent data on “professionals” from 2004 to 2018, but the data before 2004 are for “professional and technical workers.” Similarly, the ECEC provides data on K–12 teachers in the state and local sectors starting in 2004 (see BLS 2019, Table 7), but the data before 2004 are for a category of “elementary and secondary teachers” that covers all civilian workers and is not limited to the state and local public sector.
Because nonimputed observations cannot be identified in 1994–1995, we use 1993 as the key historical year in Table 1, along with 1979, to show the estimated wage penalty trends. These data are benchmarked to the 1996 level of the wage penalty.
There are no benefits data before 1986, so it is not possible to derive a compensation penalty for 1979. This leaves us the challenge of identifying the benefits advantage for 1993. An analysis of the elementary/secondary and professional/technical occupation benefits data from the ECEC for 1993 and 2003 indicates a decline in the benefits advantage from 3.4 percent to 3.2 percent, a decline of 0.2 percentage points. We extend the benefits advantage calculated for 2004, 2.2 percent, to 1993 by adding this 0.2 percentage point trend from the 1993–2003 period to obtain a benefits advantage of 2.4 percent in 1993. This assumes that there was no change in the benefits advantage over the 2003–2004 period and that, over the 1993–2003 period, the teacher–professional advantage experienced percentage-point growth comparable to the growth of the elementary/secondary–professional/technical advantage during this time.
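The extrapolation just described amounts to simple addition, using the figures from the text:

```python
# Benefits advantage in 2004, the first year with consistent ECEC categories:
advantage_2004 = 2.2  # percent

# The elementary/secondary vs. professional/technical advantage fell from
# 3.4 percent in 1993 to 3.2 percent in 2003, a decline of 0.2 points:
decline_1993_2003 = 3.4 - 3.2

# Walking the 2004 figure back to 1993 adds that decline back in:
advantage_1993 = advantage_2004 + decline_1993_2003  # 2.4 percent
```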
About the authors
Sylvia A. Allegretto is a labor economist and co-chair of the Center on Wage and Employment Dynamics, which is housed at the Institute for Research on Labor and Employment at the University of California, Berkeley. She is also a research associate of the Economic Policy Institute and is co-author of many EPI publications, including past editions of The State of Working America, How Does Teacher Pay Compare? and The Teaching Penalty: Teacher Pay Losing Ground. She has a Ph.D. in economics from the University of Colorado, Boulder.
Lawrence Mishel is a distinguished fellow and former president of the Economic Policy Institute. He is the co-author of all 12 editions of The State of Working America. His articles have appeared in a variety of academic and nonacademic journals. His areas of research include labor economics, wage and income distribution, industrial relations, productivity growth, and the economics of education. He has a Ph.D. in economics from the University of Wisconsin at Madison.
Endnotes

1. For more on imputations, see Allegretto, Corcoran, and Mishel 2008, 10; Hirsch and Schumacher 2004; and Bollinger and Hirsch 2006.
2. In Allegretto, Corcoran, and Mishel 2008, we estimated regression-adjusted relative hourly wages of teachers using CPS-ORG data and found no qualitative differences in our results.
References

Allegretto, Sylvia A., and Lawrence Mishel. 2019. The Teacher Weekly Wage Penalty Hit 21.4 Percent in 2018, a Record High: Trends in the Teacher Wage and Compensation Penalties Through 2018. Economic Policy Institute and the Center on Wage and Employment Dynamics at the University of California, Berkeley, April 2019.
Allegretto, Sylvia A., Sean P. Corcoran, and Lawrence Mishel. 2004. How Does Teacher Pay Compare? Methodological Challenges and Answers. Washington, D.C.: Economic Policy Institute.
Allegretto, Sylvia A., Sean P. Corcoran, and Lawrence Mishel. 2008. The Teaching Penalty: Teacher Pay Losing Ground. Washington, D.C.: Economic Policy Institute.
Allegretto, Sylvia A., and Ilan Tojerow. 2014. “Teacher Staffing and Pay Differences: Public and Private Schools.” Monthly Labor Review (U.S. Department of Labor, Bureau of Labor Statistics), September 2014.
Bollinger, Christopher R., and Barry T. Hirsch. 2006. “Match Bias from Earnings Imputation in the CPS: The Case of Imperfect Matching.” Journal of Labor Economics 24, no. 3: 483–519. https://doi.org/10.1086/504276.
Bureau of Labor Statistics (BLS). 2019. Employer Costs for Employee Compensation Historical Listing: National Compensation Survey, March 2004–December 2018.
Cohany, Sharon R., Anne E. Polivka, and Jennifer M. Rothgeb. 1994. “Revisions in the Current Population Survey Effective January 1994.” Employment and Earnings, February 1994.
Economic Policy Institute (EPI). 2019. Methodology for Measuring Wages and Benefits. Last updated February 21, 2019.
Greene, Jay P., and Marcus A. Winters. 2007. How Much Are Public School Teachers Paid? Manhattan Institute, January 2007.
Hanushek, Eric A., and Steven G. Rivkin. 1997. “Understanding the Twentieth-Century Growth in U.S. School Spending.” Journal of Human Resources 32, no. 1: 35–68. https://doi.org/10.2307/146240.
Hanushek, Eric A., and Steven G. Rivkin. 2004. “How to Improve the Supply of High-Quality Teachers.” In Brookings Papers on Education Policy: 2004, edited by D. Ravitch, 7–44. Washington, D.C.: Brookings Institution Press.
Hirsch, Barry T., and Edward J. Schumacher. 2004. “Match Bias in Wage Gap Estimates Due to Earnings Imputation.” Journal of Labor Economics 22, no. 3: 689–722. https://doi.org/10.1086/383112.
Podgursky, Michael, and Ruttaya Tongrut. 2006. “(Mis-)Measuring the Relative Pay of Public School Teachers.” Education Finance and Policy 1, no. 4: 425–440. https://doi.org/10.1162/edfp.2006.1.4.425.
Scholastic and the Bill & Melinda Gates Foundation (Scholastic). 2012. Primary Sources 2012: America’s Teachers on the Teaching Profession.
Strauss, Valerie. 2017. “Teacher Shortages Affecting Every State as 2017–18 School Year Begins.” Washington Post, August 28, 2017.
Temin, Peter. 2002. “Teacher Quality and the Future of America.” Eastern Economic Journal 28, no. 3: 285–300.
Temin, Peter. 2003. “Low Pay, Low Quality.” Education Next 3, no. 3: 8–13.
U.S. Census Bureau, Current Population Survey Outgoing Rotation Group microdata (U.S. Census Bureau CPS-ORG). Various years. Survey conducted by the Bureau of the Census for the Bureau of Labor Statistics [machine-readable microdata file]. Data for 2004–2018.