

Calculating high school graduation rates

While the annual commencement ritual celebrates the nation’s high school graduates, it should also cause us to pay attention to those students who don’t attain the milestone of receiving a diploma. Who are they, where are they, and how many of them are there?

Counting graduates has increased in importance as an issue of national policy since 2001, when the No Child Left Behind Act (NCLB) required states to report graduation rates as an indicator of high school performance. So states did just that. And then a rash of reports made headlines, arguing that graduation rates are much lower than what many states were reporting—astoundingly low, some said.

Confusion, criticism, and controversy ensued and continue—with good reason. Here are a few snapshots from reports on high school graduation rates:

  • One analyst states that only 70 percent of American students are earning standard high school diplomas (Greene, 2003).
  • Other analysts say graduation rates are probably between 80 and 83 percent (Mishel and Roy, 2006).
  • The U.S. Census Bureau reports that 75 percent of the nation’s 18–24-year-olds have high school credentials (Census, 2000).
  • Thirty-six states report graduation rates between 80 and 97 percent; another source says the reality in these same states is 58–86 percent (Hall, 2005).

As these numbers suggest, there is considerable debate among experts over how many students leave high school with a diploma. Yet, most analysts agree on one thing: The numbers are worrisome. Thirty years ago the United States led the world in the percentage of adults with high school diplomas. Now, it ranks ninth according to the Organisation for Economic Co-operation and Development (OECD, 2005). OECD, an international organization of 30 developed nations including the United States, says the lower ranking is because the percentage of young adults in the United States with a high school diploma has stayed the same, while it has risen in other countries. Other scholars disagree; their analyses show that U.S. high school graduation rates have dropped (see, for example, Haney, 2004).

A high school diploma: Don't leave school without it

Having—or not having—a high school diploma has lifelong consequences for individuals and widespread consequences for society.

Leaving high school without a diploma seems to guarantee lifelong poverty. Students who drop out are more likely to be unemployed (Rouse, 2005). Dropouts on average earn about $12,000 per year, nearly 50 percent less than those who have a high school diploma and no higher education credential.

Not only do non-grads earn less, but they are 50 percent less likely to have a job that offers a pension plan or health insurance as compared with those who earned no more than a high school diploma (Rouse, 2005). They are more likely to experience health problems (Muenning, 2005).

Lacking the minimum safety net of a living wage, health insurance, and pensions increases the likelihood that individuals without high school diplomas will need social services such as welfare or food stamps at one or more points in their lives (Waldfogel, et al., 2005).

Individuals who do not earn a high school diploma are more likely to spend time in prison. One study conducted by a University of California, Berkeley economist found that a 10 percent increase in the graduation rate would likely reduce the murder and assault arrest rates by about 20 percent (Moretti, 2005). The same study found that increasing the high school completion rate by just one percent for men ages 20–60 would save the United States up to $1.4 billion per year in reduced costs from crime.

Even the most optimistic national estimate means that about one in five students is not earning a standard diploma on time. For minorities, the odds are even worse. The range varies, but the rosiest estimate is that one in four black and Hispanic students is not graduating (Mishel, 2006). Meanwhile, having a high school diploma is more important than ever. (See sidebar.)

The bottom line is that many states and districts don’t know what their high school graduation rate is or whether rates are improving, declining, or staying the same over time. And without accurate, reliable graduation rates, it is difficult for policymakers to enact effective policies to keep kids in school and ensure they earn a standard diploma.

In this guide the Center for Public Education examines why there is such a variation in reported graduation rates, and why it has been difficult for state and local agencies to produce an accurate, reliable measure. We also describe the latest graduation rate estimates and explain recent initiatives to improve how the indicator is collected, reported, and analyzed. Last, we provide some recommendations for local leaders so they can better understand and communicate about local graduation rates.

Why are graduation rates so inconsistent?

There are multiple sources for finding out how many Americans are high school graduates. The U.S. Census, for example, provides information about the numbers of adults who have completed a high school credential. When it comes to schools, the public and policymakers are more concerned with knowing how many students leave high school with diplomas out of all who entered—basically, the ratio of students in to graduates out. However, few states and districts have had mechanisms in place to collect and report accurate, reliable data. Plus, as we will see, they use various and idiosyncratic formulas and definitions for calculating rates that present very different pictures of high school graduation.

NCLB now requires states to use graduation rates as part of measuring a high school’s Adequate Yearly Progress. The law and subsequent regulations provide some guidelines for calculating graduation rates, which could bring some uniformity.

Within basic parameters, however, NCLB gives states significant latitude in how they measure graduation rates. When Christopher Swanson, then with the Urban Institute and now the director of research at Editorial Projects in Education, studied the issue in 2003, he found that states calculated rates using a range of approaches and data, including longitudinal data, completion rates, dropout rates, and others.

Many of these methodologies, or the way they are carried out, are flawed. According to one analyst, because of problematic methodologies, more than two dozen states have overstated graduation rates by more than 10 percentage points, while another five states reported rates more than 20 percentage points higher than more reliable calculations (Hall, 2005).

To understand the problems with the methods states use to calculate rates, it helps to keep the basic calculation in mind:

grad rate = (population of students who graduate) ÷ (related, larger population of students)

All of the following questions affect which students are represented in the equation and, subsequently, the relevance of the results. States whose methodologies don’t provide sound answers to these questions wind up with questionable results.

  • Who will we count? States have to know what they mean by “graduate.” For example, a graduate might be defined broadly as “a student who completed high school” or more narrowly as “a student who earned at least a standard diploma within four years of entering 9th grade.” Students who meet the definition are part of the numerator.
  • How will we count them? Having defined the population of graduates, states must then define the population against which the numerator will be compared (i.e., the denominator). This unit of comparison plays a crucial part in the ultimate relevance of the measure. The denominator should represent in some way students who entered the pipeline to graduate, which includes students who did graduate as well as those who did not.
  • How will we represent them? Having defined who the students are, what data will be used to represent them in calculations? These are the numbers that get plugged in to the numerator and denominator.
  • How will we account for them? States must establish policies and procedures to ensure that all students are represented by the data and represented accurately.

Who will we count?

NCLB guidelines set a baseline definition by specifying that students who receive certificates of completion, GEDs, and other non-standard diplomas may not be counted as graduates. Research shows that these alternative credentials don’t have as much value in postsecondary education or the workplace as a standard diploma does (see sidebar, The four-year cohort: Who’s in, who’s out). NCLB guidelines also stipulate that students must earn their standard or advanced diplomas within four school years. Some states and localities use the NCLB definition to calculate rates for federal reporting purposes, but use a different definition to calculate a rate for state reporting. For example, some states report a state graduation rate that includes all “completers,” or recipients of standard or advanced diplomas, GEDs, and certificates of completion. This can cause confusion.

One state did not count graduates at all, but used dropout rates to arrive at a graduation rate. But to define a graduate as “a student who did not drop out” is not a sound approach since these categories are not mutually exclusive. Some students might receive a credential such as a certificate of completion; others might earn a standard diploma but take longer than four years to do so. Such students are not dropouts but neither are they graduates as defined by NCLB.

How will we count them?

Sometimes states base their calculations on a measure that is illogical if not downright misleading.

For example, during the 2002–2003 school year, North Carolina calculated what share of that year’s graduates had earned their diplomas within four years of entering high school, and reported the result as its graduation rate.

grad rate = (number of students receiving a diploma in Year Y who had entered high school no more than four years earlier) ÷ (number of students receiving a diploma in Year Y)

This comparison does not account for students who entered high school as part of the same class and who either left school or were still working toward their diplomas. According to one analysis, using this measure overstated North Carolina’s actual graduation rate by more than 30 percent (Hall, 2005). (To North Carolina’s credit, it has since adopted a more rigorous methodology as the basis for NCLB accountability in 2006–2007—the four-year cohort rate described later in this guide.)
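A quick numerical sketch shows why restricting the denominator to diploma recipients inflates the result. (The figures below are made up for illustration, not North Carolina's actual data.)

```python
# Hypothetical entering class of 100 9th graders:
#   60 graduate within four years, 10 finish late, 30 never graduate.
entered_9th = 100
on_time_grads = 60

# Flawed approach: both numerator and denominator count only diploma
# recipients, so students who left school or are still enrolled
# vanish from the calculation entirely.
grads_this_year = 62   # 60 on-time plus 2 late finishers from earlier classes
flawed_rate = 100 * on_time_grads / grads_this_year
print(round(flawed_rate))   # about 97 "percent"

# A cohort-style rate keeps the whole entering class in the denominator:
cohort_rate = 100 * on_time_grads / entered_9th
print(round(cohort_rate))   # 60 percent
```

The flawed measure reports nearly 97 percent while only 60 of the 100 entering students actually graduated on time.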

How will we represent them?

How states arrive at the numbers to plug in to their calculations is inextricably linked to decisions about definitions and units of comparison. For example, during the 2002–2003 school year, New Mexico reported a graduation rate of almost 90 percent. The formula did compare a pool of students in the same class with the group of students who actually graduated. But the state only used data about students who entered 12th grade in fall 2002; the formula did not include students who had already dropped out in the 9th, 10th, or 11th grades. By using a formula that accounted for changes in enrollment in those earlier grades (Swanson, 2004), the Education Trust reported that the state’s on-time graduation rate was closer to 61 percent (Hall, 2005).

How will we account for them?

Clear definitions, a sound formula, and precise numbers still won’t produce a useful measure if the underlying data are poor quality. States and districts must spell out how data will be collected, stored, retrieved, updated, processed, and protected. In addition, they need the technical infrastructure to support all of those tasks.

Many states and school districts contend with unreliable or missing student data, although efforts are under way to improve this situation. Many states still use paper-based systems that are not flexible enough to track individual students, so they count on schools to report the information. But most schools don’t have the resources to find out what happens when students go missing. Did they actually drop out, or did they transfer to another school? Without a student-level record system, these questions cannot be fully answered.

Often schools presume that a student who stops coming to school has transferred to another school even if the student’s original school never received a request for a transcript. When Colorado upgraded its data system last year, for example, the state found that 18,342 students that districts had labeled as transfers in 2004 were not actually transfers. Most were dropouts (Yettick, 2005).

The best methodology: The four-year cohort graduation rate

Most experts and the federal government believe that a four-year cohort graduation rate is the best way to measure high school completion. In this calculation, each class, or cohort, is longitudinally tracked as it moves through school, starting from when students enroll in the fall of 9th grade until they graduate four years later. Students who transfer into the school and join the class are added to the cohort. Students who transfer out, die, or go to prison are removed from the cohort.

The rate uses the following formula: The number of students who graduated divided by the number of students enrolling in 9th grade for the first time, plus the students who joined the cohort, minus the students who left. For example:

72 percent grad rate = (80 graduates in 2006 × 100) ÷ [(100 9th graders in 2002) + (20 transfers in) − (10 transfers out)]

(Note that this rate doesn’t include everyone—see sidebar).
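The cohort arithmetic above can be sketched in a few lines of Python. (This is an illustration of the formula as described in this guide, using the example's numbers; the function and variable names are ours.)

```python
def four_year_cohort_rate(graduates, first_time_9th, transfers_in, transfers_out):
    """Four-year cohort graduation rate, as a percentage.

    The adjusted cohort is everyone who entered 9th grade for the
    first time, plus students who later joined the class, minus
    students who transferred out (or died, or were incarcerated).
    """
    adjusted_cohort = first_time_9th + transfers_in - transfers_out
    return 100 * graduates / adjusted_cohort

# The example from the text: 80 graduates in 2006 out of an adjusted
# cohort of 100 + 20 - 10 = 110 students, or roughly 72 percent.
rate = four_year_cohort_rate(graduates=80, first_time_9th=100,
                             transfers_in=20, transfers_out=10)
```

In a real system, the four arguments would come from the student-level record system the compact calls for, not from aggregate counts.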

The four-year cohort: Who's in, who's out

Students who take more than four years to graduate are excluded from the four-year cohort calculation. Proponents of this methodology endorse a four-year time frame because that’s the traditional time for completing high school. Additionally, research shows that students who earn a diploma on time are in a much better position to succeed beyond high school. For one, students who take longer to graduate are much less likely than four-year completers to go to college or finish if they get there (Adelman, 2006). So it’s to students’ benefit to be on the four-year track.

Students who earn alternative high school graduation credentials are excluded. Credentials such as certificates of completion or the General Educational Development (GED) are typically excluded under the four-year cohort rate, too, because those credentials are not standard high school diplomas. One study, for instance, found that students who passed the GED were “statistically indistinguishable” from high-school dropouts: They were not significantly more likely to land a job or have higher wages (Cameron, 1993).

Nonetheless, it will be important for state and local agencies to track five-, six-, and seven-year graduation rates, GED rates, and alternative credentials such as certificates of completion, vocational diplomas, and special education diplomas. Young people who earn these credentials are not dropouts and should not be counted as such.

It should also be noted that the act of dropping out is not an irreversible decision. In fact, one study showed that 58 percent of the 1990 high school sophomores who dropped out had either completed or re-enrolled in school within two years of their scheduled graduation date (NCES, 2002). These other credentials and paths toward graduation should be reported as complementary rates to the four-year cohort rate in order to provide a complete picture of high school performance.

In order to improve the quality of graduation rates and graduation rate data, the National Governors Association developed the Graduation Counts Compact (NGA, 2005). The compact stipulates that states will develop an electronic record system that tracks individual students from kindergarten through college by assigning each student a unique student code. It also recommends a graduation rate formula using the cohort graduation rate described above. Because the system requires a unique code for each student, it will also be able to track students who receive alternative credentials like GEDs and students who graduate in five, six, or seven years.

While all 50 states have signed the compact, they still have a great deal of work to do (Hoff, 2005). Creating a new, electronic student record system is time-consuming and expensive. An even greater hurdle may be political: the new, more reliable formulas could make states look dishonest by revealing that their graduation rates have been exaggerated for years.

The next best methodology: Graduation rate estimates

At present, most state and local agencies don’t have the student data they need to fully implement a four-year cohort rate. Right now, fewer than half of the states have a longitudinal system in place that can follow students from pre-kindergarten through high school, and only a handful are using these systems to calculate a four-year cohort rate (Education Week, 2006). However, education researchers have developed methodologies for estimating state and local graduation rates that can help fill in gaps in the big picture and provide useful comparisons for checking the accuracy of state and local calculations. These methodologies fall into two categories: enrollment-based and survey-based graduation rate estimates.

Enrollment-based graduation rate estimates

Enrollment-based graduation estimates use calculations similar to those used in a four-year cohort approach. These estimates rely on grade-by-grade enrollment counts from the National Center for Education Statistics' Common Core of Data (CCD) to approximate how many 9th graders make it to graduation four years later. Advocates of this approach argue that even though these estimates cannot track individual students, they show how much a given class “shrinks” over time by examining how many 9th graders are enrolled one year, 10th graders the next year, 11th graders the next year, and so on.

In the simplest variation of this method, Tom Mortenson of the Pell Institute for the Study of Opportunity in Higher Education uses the “ratio of regular high school diploma recipients in a given year, divided by the number of students that were enrolled in the 9th grade four years earlier” (2002). So to estimate the graduation rate for the class of 2003, Mortenson would divide the number of standard diplomas awarded in the spring of 2003 by the number of ninth graders in the fall of 1999. For example:

70 percent grad rate ≈ 2,719,947 (diplomas awarded in spring 2003) ÷ 3,934,876 (students enrolled in 9th grade in fall 1999)

Source: U.S. Department of Education, NCES Common Core of Data
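Mortenson's ratio is simple enough to express directly. (A sketch; the function name is ours. Note the raw quotient is about 69 percent, which the example above rounds to 70.)

```python
def mortenson_rate(diplomas, ninth_grade_four_years_earlier):
    """Mortenson's estimate: regular diplomas awarded in year Y divided
    by 9th grade enrollment four years earlier, as a percentage."""
    return 100 * diplomas / ninth_grade_four_years_earlier

# The class of 2003 figures from the CCD, as quoted in the text.
rate = mortenson_rate(2_719_947, 3_934_876)
```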

The Manhattan Institute’s Jay P. Greene developed a similar approach (2003). He starts with the same basic formula—graduates divided by 9th graders four school years earlier—but makes two adjustments. First, rather than using 9th grade enrollment alone, he averages the 8th, 9th, and 10th grade enrollments in order to account for the high level of retention that typically occurs in the freshman year of high school, a phenomenon known as the 9th grade bulge. Second, Greene adjusts for overall school population growth or shrinkage by scaling the estimate by the percent change in total high school enrollment. This second adjustment matters most in areas where overall high school enrollment has significantly grown or shrunk. Greene's graduation rate formula:

grad rate = {3 × graduates in Year Y × [grade 9 + grade 10 + grade 11 + grade 12 enrollment in Year Y − 4]} ÷ {[grade 8 enrollment in Year Y − 5 + grade 9 enrollment in Year Y − 4 + grade 10 enrollment in Year Y − 3] × [grade 9 + grade 10 + grade 11 + grade 12 enrollment in Year Y − 1]}
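The two adjustments are easier to follow in code. (A sketch of the method as we read it, with hypothetical district figures; the function and variable names are ours.)

```python
def greene_rate(graduates, g8_y5, g9_y4, g10_y3, total_hs_y4, total_hs_y1):
    """A sketch of Greene's estimate, as a percentage.

    g8_y5, g9_y4, g10_y3: grade 8 enrollment five years before
    graduation, grade 9 four years before, and grade 10 three years
    before; averaging them smooths out the 9th grade bulge.
    total_hs_y4, total_hs_y1: total grades 9-12 enrollment in the
    cohort's entry year and in the year before graduation, used to
    adjust for overall population growth or shrinkage.
    """
    smoothed_cohort = (g8_y5 + g9_y4 + g10_y3) / 3
    growth = total_hs_y1 / total_hs_y4
    return 100 * graduates / (smoothed_cohort * growth)

# Hypothetical district: a smoothed entering class of 1,000, total
# high school enrollment up 5 percent, and 700 graduates.
rate = greene_rate(graduates=700, g8_y5=1000, g9_y4=1050, g10_y3=950,
                   total_hs_y4=4000, total_hs_y1=4200)
```

Without the growth adjustment this district would report 70 percent; because its enrollment grew 5 percent, the adjusted estimate drops to about 67 percent.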

NCES recently released a new graduation rate estimate, and the formula is similar to Greene’s except that it does not account for growth or shrinkage of the high school population. NCES calls the formula the “averaged freshman graduation rate” and publishes it annually by state (U.S. Department of Education, 2006).

Swanson (2004) provides another approach to estimating rates with his Cumulative Promotion Index (CPI). First, he calculates the percentage of promoted 9th graders by comparing the number of 10th graders to the number of 9th graders one year earlier. He then performs the same calculation for the other three grades and multiplies these four ratios to come up with an estimated graduation rate. Swanson’s CPI estimate has been used by various organizations including the Education Commission of the States; the Education Trust, which contrasted his estimates with the rates officially reported by the states under NCLB (Hall, 2005); and most recently, by Education Week (2006). Swanson’s Cumulative Promotion Index:

grad rate = (grade 10 enrollment in Year Y ÷ grade 9 enrollment in Year Y − 1) × (grade 11 enrollment in Year Y ÷ grade 10 enrollment in Year Y − 1) × (grade 12 enrollment in Year Y ÷ grade 11 enrollment in Year Y − 1) × (graduates in Year Y ÷ grade 12 enrollment in Year Y − 1)
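The four-ratio product can be sketched as follows. (An illustration with made-up enrollments; the function name is ours.)

```python
def cumulative_promotion_index(enroll_prev, enroll_curr, graduates):
    """A sketch of Swanson's CPI, as a percentage.

    enroll_prev maps grade -> enrollment in Year Y-1; enroll_curr maps
    grade -> enrollment in Year Y. The index multiplies three
    grade-to-grade promotion ratios and the diploma-to-12th-grade
    ratio.
    """
    cpi = (enroll_curr[10] / enroll_prev[9]
           * enroll_curr[11] / enroll_prev[10]
           * enroll_curr[12] / enroll_prev[11]
           * graduates / enroll_prev[12])
    return 100 * cpi

# Hypothetical system that loses about 10 percent of each class at
# every step:
rate = cumulative_promotion_index(
    enroll_prev={9: 1000, 10: 900, 11: 810, 12: 729},
    enroll_curr={10: 900, 11: 810, 12: 729},
    graduates=656,
)
```

Because it only needs two adjacent years of enrollment data, the CPI can be computed annually without waiting four years for a cohort to finish.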

Note that these methods cannot provide estimates at the individual school level because NCES does not collect information on the number of diplomas awarded by schools. However, using school enrollment data from the CCD, Johns Hopkins University researchers Robert Balfanz and Nettie Legters created a methodology, the Promoting Power Index, that estimates how well schools keep students until the start of the 12th grade (2004). The calculation compares the number of 12th graders in the current year against the number of 9th graders four years earlier. So if a school had 100 students at the start of the 9th grade and 80 students at the start of the 12th grade four years later, it would have a Promoting Power of 80 percent for that year. The same calculation is done for the 12th grade classes of the two previous years, and the three results are averaged to get the school’s Promoting Power Index (PPI). Balfanz and Legters's Promoting Power Index:

Promoting Power Index (PPI) = grade 12 enrollment in Year Y − 1 ÷ grade 9 enrollment in Year Y − 4

Note: The measure devised by Balfanz and Legters is not a graduation rate.

If an educator wanted to compare a school’s graduation rate against federal data, the PPI might be the best measure because it provides school-level information. But note that the PPI is not directly comparable to a graduation rate; it does not account for students who make it to the 12th grade but ultimately don’t graduate. The indicator also does not have any adjustments for schools that lose large numbers of students during the school year. 
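The averaging step can be sketched like this. (An illustration of the index as described above; the function name is ours.)

```python
def promoting_power_index(class_pairs):
    """A sketch of the Promoting Power Index, as a percentage.

    class_pairs: for each of three recent classes, a pair of
    (12th grade enrollment, 9th grade enrollment when that class
    entered high school). The PPI averages the three ratios.
    Note: promoting power is not a graduation rate; it says nothing
    about whether the remaining 12th graders actually graduate.
    """
    ratios = [g12 / g9 for g12, g9 in class_pairs]
    return 100 * sum(ratios) / len(ratios)

# The text's example class (80 12th graders from 100 entering 9th
# graders), repeated for three classes:
ppi = promoting_power_index([(80, 100), (80, 100), (80, 100)])
```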

For all their differences, the various graduation rate estimates do not produce significantly different outcomes. When Boston College researchers compared some of these estimates, they “found no evidence that the conceptually more complex methods yield more accurate or valid graduation rate estimates than the simpler methods.” For instance, the Greene rate estimates the national graduation rate for the class of 2000 to be 69.6 percent. Using the same data, the CPI produces a rate of 67.5 percent (Miao and Haney, 2004).


Comparison of Enrollment-Based Approaches to Estimating High School Graduation Rates

  • Mortenson (Pell Institute). Applies to: national, state, and district levels. How it works: compares 9th grade enrollment with the number of graduates four years later. Class of 2001 estimate: 67.3 percent.
  • Greene (Manhattan Institute). Applies to: national and state levels. How it works: calculates starting enrollment by averaging the number of 8th, 9th, and 10th graders and compares it with the number of graduates four years later; also adjusts for population changes. Class of 2001 estimate: 70 percent.
  • NCES, U.S. Department of Education (averaged freshman graduation rate). Applies to: national and state levels. How it works: calculates starting enrollment by averaging the number of 8th, 9th, and 10th graders and compares it with the number of graduates four years later. Class of 2001 estimate: 71.7 percent.
  • Swanson, formerly of the Urban Institute, now Education Week (Cumulative Promotion Index). Applies to: national, state, and district levels. How it works: calculates percentages of annual grade promotions (9th to 10th, 10th to 11th, 11th to 12th, and 12th to graduation) and multiplies all four percentages. Class of 2001 estimate: 68 percent.
  • Balfanz & Legters, Johns Hopkins University (Promoting Power Index*). Applies to: national, state, district, and school levels. How it works: compares 12th grade enrollment with 9th grade enrollment four years earlier. Class of 2001 figure: 68 percent.

*The Balfanz & Legters PPI does not measure graduation rates.

Survey-based graduation rates

Another approach to estimating graduation rates is using survey data from programs like the Census Bureau’s Current Population Survey (CPS) or the National Education Longitudinal Study (NELS), a longitudinal survey of a nationally representative group of 8th graders. The NELS study (2002), for instance, found that 83 percent of the 1988 eighth-grade cohort had earned a high school diploma by 2000. Another nine percent had earned an alternative credential. The rest—eight percent—had either dropped out of high school or failed to complete an alternative credential like the GED. About 0.3 percent earned a certificate of attendance.

Survey-based graduation rates typically show a slightly higher graduation rate than enrollment-based ones. Enrollment-based calculations by Greene and Swanson, for instance, put the four-year high school graduation rate near 70 percent overall and near 50 percent for minority students. Survey-based estimates are at about 80 percent for all students and around 75 percent for minority students (Mishel, 2006).

When looking at trend data, the different approaches find even less common ground. For example, the CPS data show that the percentage of adults age 25 and over who have completed at least a high school diploma is at an all-time high of 85 percent (U.S. Census), while Miao and Haney’s comparison of enrollment-based estimates indicates that the national four-year graduation rate is declining. To confuse matters even further, Greene et al. (2006) found that graduation rates have largely remained stagnant, with perhaps a small jump in the past few years.

Some believe that the higher survey-based estimates show that the graduation rate crisis is exaggerated (Mishel, 2006). But most experts argue that these differences are caused by a mix of definitional differences and reporting bias (Greene, Winters, Swanson, 2006). Mishel and NELS statisticians counter that the NELS self-reported data is matched against student transcripts and is thus validated.

Survey-based graduation rate estimates provide a perspective on the national graduation rate; however, schools and districts cannot use the data to understand what is happening locally. The CPS and NELS surveys do not produce district- or school-level data. Along with most experts, the U.S. Department of Education recommends that local educators use enrollment-based graduation rate estimates to vet their data. In a speech last year, Deputy Secretary of Education Ray Simon noted that averaged freshman graduation rates “track very closely with true on-time graduation rates” (2005).

Accurate and credible graduation rates

By itself, better data will not improve graduation rates. But an accurate understanding of the issue is an important first step. Dropout prevention programs will only be effective if leaders understand the scope of the problem. Who is leaving? Why are they dropping out? What will keep them in school? Also, what is happening to students who take more than four years to get their diploma? What is happening to students who earn GEDs? Certificates of attendance? Using accurate, reliable data to answer such questions is essential to creating initiatives and programs that promote student graduation.

To help policymakers effectively tackle the dropout problem, graduation rate calculations must consist of three parts:

1. Solid formulas and definitions. An on-time graduation rate should reflect the proportion of students who begin 9th grade for the first time and leave four school years later with a standard diploma. To calculate such a rate, formulas and definitions need to be transparent. Certificates of completion, for instance, should not be counted as standard diplomas, and on-time should mean four school years. Schools and districts should have strict guidelines on how they collect and report data, and only students who die, go to prison, or transfer should be removed from the denominator. States should also require that schools have full documentation when they code students as transfers or dropouts.

2. High-quality data collection. Graduation rate reporting systems need to be exact with enough flexibility to account for various ways students move through the system. While ensuring student privacy, data collection systems should be able to calculate student-level data and track each student from kindergarten through college. Audits of data collection practices should be conducted regularly and randomly.

3. Stakeholder support. To build a high-quality system to calculate graduation rates, educators, parents, and community members need to support the effort. Educators and administrators should know how to collect, report, and analyze graduation data. Community members should also understand how the rates work and how they serve as an indicator of school performance.

Recommendations for school districts

As mentioned earlier, all 50 states have signed NGA’s Graduation Counts Compact and promised to implement a standard, four-year cohort graduation rate formula along with a comprehensive data system that tracks individual students. While there are no deadlines in the compact, it’s clear that it will be years before the states implement all of its features. But local leaders should not wait until states have implemented the compact. They need to develop a plan to deal with graduation rates now by

  • Delving into their data.
  • Using complementary indicators.
  • Beginning policy reforms.
  • Developing understanding.
  • Turning better reporting into better results.

Delve into the data

Educators should conduct a deep and thorough analysis of their graduation rates. A good way to start is to compare the local graduation rate against an estimate that uses CCD data. To examine a specific school’s rate, the Promoting Power Index is the best approach because it provides school-level data. To evaluate a district rate, the Greene or Swanson estimates would work better.

If there are discrepancies, districts should be sure they are using the four-year cohort graduation rate approach correctly. Does the formula look across all four school years? Are the definitions correct? Plugging locally gathered data into the enrollment-based estimate formulas and comparing the two outcomes can also help highlight methodological issues.

If the approach holds up, then the district should audit local data collection efforts. Are 9th, 10th, and 11th grade dropouts included in the final counts? Is documentation provided for all transfers? It is only through a detailed accounting that a district can ensure an accurate and reliable graduation rate.

Use complementary indicators

While a four-year, cohort graduation rate is the most significant measure of completion, educators need to examine a variety of indicators to gain a full and accurate picture of how a system serves its students. The National Governors Association (2005) recommends that educators closely follow the indicators below alongside their four-year cohort graduation rate.

  • Five- and six-year cohort graduation rates
  • A dropout rate
  • Completion rates for students earning alternative completion credentials from the state or a GED
  • In-grade retention rates
  • Percentages of students who have not graduated but are still in school, or who have completed course requirements but failed a state exam required for graduation

Educators should also examine completion measures that account for local variance. Communities that have many 10th–12th grade high schools, for instance, will want to have a 10th–12th grade graduation rate. If a state has defined the requirements necessary to be considered college-ready, districts should create a college-ready graduation rate (NGA, 2005).

Schools and districts should publish and promote these complementary indicators as soon as possible. The additional measures can be used—and explained—at school board meetings and on school and district report cards. This will help local stakeholders gain fluency with the indicators before the state mandates kick in.

Improve documentation and tracking

Districts should implement a number of policy reforms. To start, local school systems should require documentation for all student coding. If a student is listed as a transfer, for example, the school should have a transcript request from the receiving school. Students without documentation should be coded as dropouts. These policies would give schools an incentive to improve data collection and help produce a reported graduation rate that more accurately reflects reality.

Districts should also start training educators on data collection and reporting. School personnel need to understand how graduation rate data should be collected, reported, and analyzed. To ensure that schools are following these procedures, districts should conduct regular and random audits of schools’ data collection methods.

Develop understanding

Schools and districts should communicate to parents and other community leaders that they plan to focus on making sure all students graduate, and they should enlist their support. To do so, they must explain the importance of the indicator and its role in school improvement. Some states, such as Florida, have already placed their data collection guidelines online (http://www.fldoe.org/eias/eiaspubs/default.asp).

Communities that continue to report implausibly high graduation rates leave themselves open to attack. Last year, the editorial board of the Indianapolis Star published a weeklong series on graduation rates detailing how the Indianapolis school district reported a graduation rate of 90 percent when the Balfanz and Legters PPI estimate put it at 35 percent (2005). The opening article concluded, “Inflated graduation numbers have lulled the public into believing that dropping out is rare. It’s not. And thousands of young people are suffering the consequences.”

When education agencies get ahead of the graduation rate issue, however, they are lauded, even when the first round of data shows a significant drop because of methodological changes. Washington State, for instance, recently began reporting a new indicator of completion because it discovered that its graduation rate was misleading. In 2003, the state reported a graduation rate of 79 percent but realized that the figure didn’t include students who had dropped out in 9th, 10th, and 11th grade, because most districts in the state did not keep dropouts in the data system for more than a year. Until the state develops the capacity to track individual students and can report a four-year cohort graduation rate, it is using an estimated graduation rate, which was 66 percent in the 2002–2003 school year (Blysma, 2004). The state’s director of research told the Education Trust (2005) that “the new, more honest graduation rate in Washington was a major wake-up call for educators, policymakers, and parents in the state.”

From better reporting to better results

Education leaders need to improve how graduation rates are defined, collected, and reported so they have the most accurate, reliable data possible to guide them in taking on the dropout problem. The improvement needs to start at the state level, where policymakers should establish a clear formula and strong definitions and implement a sophisticated data system that tracks individual students. At the local level, there is also a lot of work to do. Educators and administrators should delve deeply into their current graduation data to gain a better understanding of their graduation rate. They should focus on how data are collected, especially the tracking of students who have left. Improving graduation rates—and graduation rate reporting—is necessary for the future of our students and our society.


This guide was prepared for the Center for Public Education by Ulrich Boser, a freelance writer and a contributing editor at U.S. News and World Report. His work has appeared in Smithsonian, Slate, and the Washington Post.

Posted: June 23, 2006

©2006 Center for Public Education
