REPORT


December 12, 2017

Executive Summary

 

The 2015 passage of the Every Student Succeeds Act (ESSA) ushered in a new era for state accountability systems. With less prescriptive federal requirements, ESSA provided states an opportunity to rethink both how they identify schools that need to improve and how those schools might be improved. The law requires states to submit their plans to the U.S. Department of Education and then begin implementing their approved plans in the 2017-18 school year. Sixteen states plus the District of Columbia submitted their plans in the spring of 2017, and the remaining 34 states submitted their plans in September.

 

These plans are an opportunity for states to explain what actions they plan to take in exchange for billions of dollars in federal funds, and to showcase how they are tapping the new opportunities in ESSA to design accountability systems that will improve educational outcomes for all students. States explicitly asked for this opportunity, and the ESSA plans were their first chance to show what they plan to do with their newfound flexibility. But, while the potential exists for state innovation, looser federal rules also carry the risk that states may not meet the needs of historically underserved students and the communities these funds are intended to help the most.

 

To determine whether states were living up to this commitment and balancing innovation in tandem with equity, Bellwether Education Partners, in partnership with the Collaborative for Student Success, convened two rounds of objective, independent peer reviews of the state plans. The results of our analysis of the first 17 plans are available on the Bellwether website, and we compiled best practices by category at CheckStatePlans.org. This report compiles our findings after reading the remaining 34 state plans.

 

The founding principle of ESSA was that states, unbound by federal oversight, would develop stronger, more creative education systems that in the end would better serve students and increase equity. The law was intended to act as a floor upon which states would build compelling and innovative systems. As such, compliance with the federal law is necessary, but meeting the law’s minimum requirements will not be enough to improve education for all students. With that in mind, our reviewers were looking for states to go beyond simply checking the boxes. We explicitly recruited a diverse set of peers, and we asked them to be open to a range of different approaches. We were interested in identifying best practices, not judging states against our own preferences, and we created a simple rubric to evaluate state plans based on nine key elements of what we considered to be foundational aspects of high-quality accountability systems.

 

Unfortunately, we found that states were not taking full advantage of the opportunities ESSA presented. Instead, with the few exceptions noted below, we found state ESSA plans to be mostly uncreative, unambitious, unclear, or unfinished. This was especially disappointing for states that submitted their plans this fall, whose leaders did not take advantage of the additional time and resources that were made available to them as the result of a later submission date. It’s possible that states may go back and bolster their plans over time, but it does not inspire confidence that states chose not to submit plans that advanced educational opportunities in bold and innovative ways for all students.

 

Process and Methodology

 

After the states submitted their official plans to the Department of Education, we convened a bipartisan, nationally esteemed group of more than 45 education policy experts, with diverse ideological perspectives, and with a strong emphasis on individuals with state-level experience, to review each plan individually and in small teams. The full group also included specific content experts to address the unique challenges associated with students with disabilities and English learners.

 

Reviewers used their own judgment and expertise to respond to and score each of nine rubric items, and those scores have been normed across states and peers. Before the results of this process were released publicly, we offered state education agencies the opportunity to respond to our reviews, correct any inaccuracies, and/or point us to additional information beyond their formal ESSA plan, and that feedback is reflected in our final reviews. As the plans are living documents, we recognize that there is still the possibility of inaccuracies, and any remaining errors are our own. The findings in this executive summary reflect what we heard from the peers and states collectively.

 

These reviews represent a snapshot in time. That is, they reflect the quality of state plans as submitted to the federal government. As of this writing, the federal process is ongoing, and states are likely to make changes to their plans up until they earn approval. Even beyond those approvals, we hope our reviews can help the states continue to improve their plans as they turn toward implementation.

 

Bright Spots

 

Our peer review process uncovered some promising policies and practices. Of note:

 

  • While states continue to place a strong emphasis on reading and math, they are also broadening their accountability systems beyond these two subjects. Many states added science and a more accurate measure of student attendance, as well as indicators of college and career readiness, measures of whether students are on track toward high school completion, and surveys of school climate.

 

  • The majority of state plans included a measure of year-to-year student growth, which gives schools credit for how much progress their students make over time, rather than static determinations about where students are at a given point in time.

 

Although questions remain about some of the individual choices states made, and some states were already moving in these directions, states continued to make noteworthy progress in broadening the scope of what it means to be a successful school. More specifically, several individual state plan components from the September submission window are worthy of praise and should be considered models or exemplars for other states to follow:

 

Standards and Assessments:

 

  • California, Idaho, Maryland, South Dakota, and Washington all received top marks for strong commitments to college- and career-ready standards and high-quality, aligned assessments in math and English language arts.

 

Academic Progress:

 

  • Minnesota’s plan combines a clear measure of student achievement with a clear growth model that awards points based on students advancing through achievement levels on state math and English language arts assessments, and all schools in the bottom 25 percent on these indicators will receive additional supports.

 

Supporting Schools:

 

  • Indiana’s plan includes specific state- and district-level roles and responsibilities, from needs assessment to planning to selection of interventions and supports. It will first use its 7 percent set-aside dedicated for school improvement activities for planning grants to all comprehensive support schools, and in subsequent years it will award funds on a competitive basis to applications using strong, evidence-based interventions.

 

  • New York provides low-performing schools with a high-quality, on-site school quality review conducted by trained reviewers. This system helps low-performing schools focus on six key tenets of school quality and gives them a road map for improvement. If a school continues to struggle, the state can eventually convert it to a charter school, place it under the control of the State University of New York or the City University of New York, or close it down.

 

  • Rhode Island’s plan describes a robust system of school improvement, particularly for schools in comprehensive support, which seeks to balance the role between the state, district, and community. Its approach provides a state hub of information for districts to identify evidence-based activities, embeds and formalizes community feedback in the process, ensures rigorous and dramatic change occurs in schools that fail to improve over time, and provides resources in targeted, strategic ways to support district and school improvement efforts.

 

Exiting Improvement Status:

 

  • Indiana will require that comprehensive and targeted support schools reach at least a C grade for the entire school or the low-performing subgroup for which it was identified, respectively, for two consecutive years before they are eligible to exit improvement status. Additionally, schools must draft a sustainability plan to explain how they will maintain their progress.

 

Opportunities for Improvement

 

Unfortunately, across the reviews, potential issues and areas of concern persist. Those notable issues include:

 

  • Goals that are largely untethered to the state’s long-term vision, historical performance, or other objective benchmarks: States are required to present goals for both student achievement and graduation rates, but the achievement goals in particular seem to be based on subjective judgments of what looks “ambitious but achievable” rather than any grounding in evidence. Absent any historical data, it’s impossible to know whether the balance states struck is an appropriate one. Moreover, only a handful of states incorporate their goals in any meaningful way into their accountability systems.

 

  • Lack of clear, easy-to-understand school ratings: Collectively, our peers brought years of experience reading and analyzing state accountability systems to this project, and yet they still couldn’t always understand what a state was proposing. For example, some states are proposing systems that, on the surface, appear to result in a clear, final rating. But those plans are often short on details explaining how individual indicators would be scored, how the overall ratings would come together, or what would distinguish one rating from another. In order for accountability systems to drive improvement, they must be easily understood by educators, parents, and other stakeholders and point out specific areas for improvement to guide action and intervention. But these state plans often fail that test.

 

  • Failure to incorporate student subgroups: Of the 34 plans in this round, we counted only four states—Kentucky, Minnesota, Ohio, and Texas—that were planning to incorporate individual subgroup performance in some way into each school’s rating. ESSA also requires states to identify schools with low-performing subgroups, but only one of the 34 states—Minnesota—provided estimates for what its rules would mean in practice. The rest gave promises without specificity. Next summer, when states begin identifying schools based on their ESSA plans, school leaders may be surprised to find out they’re being identified as a school in need of “targeted support” due to low-performing subgroups of students.

 

  • Failure to intervene in low-performing schools: Instead of taking the opportunity to design their own school improvement strategies, states mostly produced plans that are vague and noncommittal about how they will support low-performing schools. Some states identified a list of evidence-based interventions that “may” happen in low-performing schools, but very few outlined specific timelines and interventions that would occur in those schools. Nationally, the federal government will be investing about $1 billion a year specifically to support low-performing schools, and the law gives states wide flexibility about how they use their share of this money, including whether they want to distribute it to school districts via formula or through a competition, or if the state wants to embed its own priorities in those distributions. The Department of Education did not ask states how they were planning to spend these funds—which in itself is a lost opportunity—and only 12 of the 34 states voluntarily disclosed their plans for how they were planning to use all their school improvement funds. The law also allows states to set aside an additional 3 percent of Title I funds to provide “direct student services” to students in low-performing schools, but none of the 34 states said they planned to take up this option.

 

  • Lack of attention to English learners: While there are a number of innovative proposals in state plans pertaining to English learners, there are also many remaining uncertainties about how these would work in practice, including missing targets or unclear descriptions of how raw data would be converted into ratings for schools. ESSA requires states to define “languages other than English that are present to a significant extent” (including at least the most common non-English language in the state) and then explain the steps they are taking, beyond providing accommodations on English-language assessments for those students, to develop tests in these native languages. A number of states, however, declined to name any languages commonly spoken by setting thresholds just beyond the point where they’d have to take action. A number of “English-only” states declined to make any effort at all to explore native language tests. In addition, a handful of states included English language proficiency in only minor ways, and Florida, a state with a significant English learner population, declined to incorporate English learners at all into its school ratings.

 

  • Lack of attention to students with disabilities: While the Department of Education did not ask states to discuss in their plans how they were working to ensure students with disabilities are taught grade-level content standards and how their progress is measured, states missed an opportunity to explain how their ESSA plans align with other work they’re doing to support students with special needs. All states have alternate academic achievement standards and assessments for students with the most significant cognitive disabilities, but states barely mentioned them in their plans. ESSA also codified a rule that only 1 percent of students could take these assessments, but no state took the opportunity to articulate how it would manage that process and ensure that this cap was not exceeded.

 

  • A continued shift toward normative accountability systems: The ESSA plans also codify another shift in state accountability systems. As a country, we have moved away from criterion-referenced accountability systems—where all schools are held to the same, predetermined criteria—to norm- referenced ones, where schools are compared to each other instead of to some external criteria. This change has given states the freedom to adopt more rigorous, and more honest, state standards and assessments, but it has also created a disconnect between the standards for students and the standards for schools. That is, states have done the hard work of adopting more rigorous standards and more sophisticated assessments for students, but they’re mostly holding schools accountable for their place in relative ranking systems. The federal law now requires this type of system, at least to identify the lowest-performing schools in each state, but states have taken that even further by designing entire systems that are rankings-based and which ignore the question of whether or not students are on track to succeed in college and careers.

 

These trends and this general lack of detail are worrying. Although ESSA provides some “guardrails” that every state must follow, it leaves significant discretion to individual states. We encourage the U.S. Department of Education and federal peer reviewers, like our own, to press states to fill in these gaps prior to approving the plans. As we transition to a world where every state has the opportunity to design its own unique system, states have a responsibility to parents, educators, and the broader public to clearly articulate how those systems will work.

 

Where We Go From Here

 

ESSA provides states with a number of flexibilities in how they create accountability systems that are tailored to local needs and that support an excellent, equitable education for all students, but most state plans did not take advantage of these opportunities. An optimist might view that as states taking their time and not wanting to codify plans that they want to change and improve later, based on early implementation results. A pragmatist might also note that states were responding to the questions asked of them to get their plans approved, and states were smart to avoid adding unnecessary details that didn’t affect their chances of approval. After all, the plans are both political documents for the states and a compliance exercise for the federal government. Indeed, we heard many of these arguments from states when we shared preliminary drafts of our reviews.

 

But a pessimist or a skeptic might point out that states have now had two years to complete their plans, and any remaining holes could also reflect a state’s lack of willingness or lack of capacity to commit to taking strong actions to intervene in places where students continue to struggle.

 

A skeptic might also wonder how states are going to implement some of the promises they’ve put on paper, particularly if those promises turn out to be more expansive than many anticipated. What if a state’s promised rules lead it to start identifying 40 or 50 or 60 percent of their schools as in need of improvement for students with disabilities or black, Hispanic, or low-income students? Will states follow through on their promises, or will they turn to clever statistical rules, such as confidence intervals or averaging procedures, to hide those results? It’s too soon to know if that’s a valid concern, but states have done nothing to alleviate it in the two years since ESSA passed, and it would not be the first time states have put forward processes to artificially inflate school scores, hide the performance of subgroups, or minimize the consequences faced by chronically low-performing schools.

 

All states have room to learn from their peers and continue to improve their plans. In the short term, the U.S. Department of Education has a responsibility to ensure there is sufficient information in the plans to meet the fundamental requirements of the law and to ask for more detail where necessary.

 

Even after the plans are approved, the federal government and states alike should carefully monitor implementation efforts to determine where plans have been successful and where changes are needed, and help build further evidence for the approaches and activities that promote equity and lead to better results for kids. We encourage advocates, state departments of education, and governors to use the lessons learned from this peer review process to guide stakeholder engagement conversations and plan implementation going forward. The findings from our independent reviews should be viewed as additional guidance from a bipartisan panel of experts and, within the context of each state, can provide important information on how to put in place strong statewide accountability systems that serve all students.