MDE Does It Again

By Jerel Wade | August 16th, 2017 at 11:27 am

Jerel Wade is an educator and small business owner from Jones County, MS.


“Ultimately, it’s all about economics. It’s about keeping an area of the state looking good to prospective business interests. Dropping from a level “A” to a level “B” could result in billions of dollars in lost revenue. And the power brokers in those areas don’t want to miss out on the spoils of a positive business climate, of which a high-achieving school district is a must.”

I knew it was going to happen. I’m not surprised in the least. But, the reality that, once again, public schools are getting the rug pulled out from under them doesn’t sit well with me.

Late Tuesday afternoon, the Mississippi Department of Education sent out a press release declaring their intent to ask the state Board of Education to “establish a new baseline for assigning school and district letter grades for the 2016-2017 school year.” Maybe someone forgot to tell MDE, but the ’16-’17 school year ended in May. The game has been played, the scores have been reported, and the rules are being changed after the fact.

Dr. Carey Wright, the State Superintendent, assured schools that we would know exactly how many points each student would need to show growth, yet those numbers are being arbitrarily changed months after students took the tests. Once again, educators feel lied to.


The MDE press release states that the “new baseline is needed to correct artificially high growth rates included in the 2015-2016 accountability grades.” If the problem happened two years ago, why not go back and correct those?

The 2015-2016 accountability grades were calculated by comparing the current MAAP test to that year’s PARCC test. I never understood how accountability grades could be given when comparing two different tests. It’s an apples-and-oranges comparison.


Schools knew that the PARCC test essentially didn’t count toward accountability that year, but that it would be compared to MAAP the next year. In an attempt to game the system, some schools put little effort into doing well on PARCC, intentionally deflating their scores so that the following year’s comparison would show greater growth.

The “artificially high growth rates” mentioned in the press release were due to intentionally deflated scores being compared to the first year of the MAAP assessment. However, some schools adhered to the spirit of the assessment system and worked hard to prepare students for PARCC and MAAP, showing less growth than those who worked the system to their advantage.

Now, we finally have an apples-to-apples comparison and the powers-that-be see that as a problem. Dr. Wright stated that this change will give a “true picture of their performance.”




I don’t understand how changing the scoring rules after the game is played will give a “true picture of their performance.” What does this true picture look like? Is it possible for a school that was once rated low to make the changes necessary to show improvement? Does there always have to be a set number of “A” schools and a set number of “F” schools?

Wright goes on to say, “The MDE needed two years of results from the Mississippi Academic Assessment Program (MAAP) to conduct an analysis of the data and to establish a stable baseline.”


Didn’t she lead her agency in developing a baseline that compared two different tests? Didn’t she hire organizations to review the process of moving from PARCC to MAAP? Has anyone heard her mention needing two years of MAAP data for a “stable baseline”?

This is the first time I have heard this from anyone at MDE, though many, if not most, educators knew this to be a simple fact.

The Roots of the Problem

Here is where the roots of the problem become a little more apparent. The news release from MDE states that after the 2015-2016 accountability scores were sent out, some districts began to raise concern that “their growth rates were abnormally high and could not be sustained over subsequent years.” These districts understood that their growth from 2014-2015 to 2015-2016 was based on intentionally deflated scores on PARCC. They were shining stars for a year, but knew that staying on top would be difficult.

Now, don’t get me wrong. Many of these districts deserved to be recognized as top performers. They are some of the best public schools in the state. They have higher-than-average tax bases, socioeconomic status, and parental educational attainment. They are set up for success from the start. But, once MDE sets the rules, all schools, regardless of their make-up, should play by the same rules and expect them to be constant.

But What Were the Intended Consequences?

Dr. Chris Domaleski served as the chair of the task force taking a look at the accountability data. He serves as associate director of the National Center for the Improvement of Educational Assessment and has been hired several times to help establish accountability guidelines for MDE. He states in the news release there were “unintended consequences.”

Naturally, I have to ask, what were the intended consequences? Were there specific schools who needed to be rated an “A,” yet didn’t make the grade? Were there certain schools that needed to be an “F” no matter how hard their teachers worked? If the actual consequences were “unintended” then there must have been some set of intended consequences.

Domaleski states that “calculating growth on different assessments was artificially inflated growth.”

Did someone of his stature and expertise not know that comparing two totally different tests in consecutive years would give artificial results?

Many school leaders knew this going into the 2014-2015 testing year with PARCC. They understood what was about to happen and played the game to their advantage. They knew that it wouldn’t be politically expedient for some districts to be rated lower in the second year after being seen as the shining example of education in Mississippi.

He further states that due to “instability in growth, the ability to meaningfully compare performance…is compromised.”

How do we know that growth is unstable?

MDE set the standard for growth and schools all across the state worked to show that their students were growing. Yet, MDE created “instability” by changing the cut scores, thereby moving the target for schools after the arrow has left the bow.

If comparing the same assessments gives us an accurate portrayal of school performance, and last year MDE decided that comparing two different assessments was the best way to determine accountability grades, it makes me wonder if we can trust any directive that the Mississippi Department of Education sends to us.

Do they really take the time to think through the consequences of their decisions? Do they fully understand the frustration of those involved in public schools when it seems decisions are made based on the direction of the wind?

As the MDE news release states, public schools have taken three different assessments in the last four years. We have endured several years of the left brain not knowing what the right brain is doing.

The recommendation is being made that the 2017 accountability cut scores be set using the same percentile rankings as the 2016 accountability scores. This means that the same number of schools will be labeled “A” as last year. Using the old cut scores, only 7 districts would be rated “A,” but with the changes that number will double to 14. The number of “F” schools would go from 12 to 21.

Are 7 districts being artificially inflated? Are 9 districts being unduly punished? Will these new cut scores remain for the 2018 accountability results, as the news release says, or will other “unintended consequences” show up later?

Who knows? But, you can believe many educators are wondering if their effort is going to be worth it in the long run.

Domaleski goes on to say that without the new baseline, results would reflect “unexpected and unrealistic circumstances where results declined.”

Could this be seen as an admission that the Common Core State Standards, which were renamed the College and Career Readiness Standards, are not reliable standards? Are we seeing a lack of growth over time by high-performing schools due to poor standards that don’t lead students to higher achievement?

“With the exception of growth, all components of the accountability system are performing as expected,” Domaleski said.

He anticipated an increase in proficiency and growth, but those results didn’t pan out. Lower-rated schools were showing improvements, but high-performing schools were not.

Ultimately, it’s all about economics. It’s about keeping an area of the state looking good to prospective business interests. For example, certain areas of the state cannot compete for the new Toyota-Mazda automobile plant without having a high quality school. Dropping from a level “A” to a level “B” could result in billions of dollars in lost revenue. And the power brokers in those areas don’t want to miss out on the spoils of a positive business climate, of which a high-achieving school district is a must.

I wish I knew the real reason why MDE has changed its mind after the fact. My hunch is it has very little to do with what they want the public to believe. And, I can’t find any reason in all of this that is beneficial to the students.

If MDE doesn’t do damage control inside the public schools, then they may as well disband and turn education over to the individual districts.

Wait. That doesn’t sound like that bad of an idea.