By Richard Capone, CEO/Co-Founder of Let’s Go Learn, Inc.
Florida announced on March 15, 2022, that it is dropping the FSA, its end-of-year state test, and replacing it with three shorter progress assessments given in the fall, winter, and spring of each year. It is the first state to recognize that high-stakes testing has become too laborious and that, in essence, it disrupts student learning: classroom instruction shuts down as teachers and students focus on preparing for the tests, taking them, and waiting for the results.
This overemphasis has in many cases limited standards-based instructional time and progress monitoring while increasing stress on students and teachers. After all, test developers like me know that test length and student fatigue are highly correlated. At one time it seemed logical to pursue a longer, more thorough state test, but experience has taught us that a longer test yields diminishing returns as students grow tired or frustrated.
Bucking the state-test tradition, Florida is moving away from the high-stakes end-of-year test to progress monitoring tests. Done correctly, this could be a great first step: break the tested content down by time of year and assess students on what they would be expected to know at the start, middle, and end of the school year.
However, the issue with Florida’s brave move is that it doesn’t address the core problem of low student achievement. With so many students performing below proficiency for their grade, testing needs to change even further. Our educational system needs a paradigm shift.
Consider this example of little-white-lie reporting: a school celebrates that 40% of its 6th-grade students are proficient in English Language Arts and math. That framing obscures the fact that 60% are performing below grade level, and this “below” group is not homogeneous.
Actual gaps can range from one to three years behind, spread across a combination of varied sub-tests of knowledge. Herein lies the big equity issue: to educate these students equitably, you cannot scalably support their progress in a regular classroom where the focus is on reviewing grade-level materials.
From what I can tell, the assessments Florida will adopt are grade-based progress monitoring tests, which will only tell teachers what they already know. Here is a rarely expressed secret: teachers already know which students struggle. What they don’t know is why, and they don’t have a scalable structure in their schools to address individual learning and bring non-proficient students up to grade level. The move to progress monitoring testing sounds good on paper, but without a paradigm shift, state administrators may misinterpret its true application.
True progress monitoring is tied to what a student is supposed to be taught based on their individual needs. For example, progress monitoring in special education is tied to each student’s individualized education program (IEP): the plan is set, teachers know what they will teach, and then they test to see whether each student is making progress on that personalized plan. Progress monitoring in pull-out intervention programs is likewise based on the intervention plan developed after students take a diagnostic test.
If progress assessments are not implemented with an understanding of true progress monitoring, the move we are seeing in Florida may just be a continuation of accountability testing: rather than one painful test each year, there are three smaller painful processes. Am I being a little harsh? I don’t think so. I do believe in accountability testing, but I also know that if your goal is a summative test with only a few outcome variables, the tests don’t have to be as long as they are. If the goal is to assign each student a percentile-rank score in ELA and math, you don’t need a four- to six-hour assessment to do so. Part of this dilemma is caused by the big business of testing: industry revenues and general inertia keep tests long and reporting slow.
Florida has one part right: break state testing into smaller chunks. The next step should be to acknowledge that, given the sheer number of non-proficient students, testing must also inform instruction. Wouldn’t it be better to shift the test design from accountability to diagnostic and prescriptive?
By defining progress monitoring correctly, Florida can break the cycle of non-proficiency. This is a great opportunity to make testing more logical. What about making the first test of the year a K-to-grade-level computer-adaptive diagnostic test? Adapt all the way down to kindergarten if necessary, or up to grade level, based on each student’s actual ability, much as is done in special education. Then the mid-year and end-of-year tests can be grade-level progress monitoring tests to meet the needs of stakeholders focused on grade-level standards.
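To make the idea concrete, here is a minimal sketch of how a K-to-grade-level adaptive diagnostic might step up or down. It is illustrative only: the accuracy thresholds, item banks, and function names are assumptions I’m making for this sketch, not a description of Florida’s new assessments or of any particular product, and real adaptive tests typically rely on item response theory and calibrated item banks rather than a simple accuracy rule.

```python
# Illustrative sketch of a K-to-grade-level adaptive diagnostic (not a real product's logic).
from typing import Callable, Dict, List

def adaptive_diagnostic(
    start_grade: int,                      # student's enrolled grade, e.g. 6
    item_banks: Dict[int, List[str]],      # grade -> item IDs (0 = kindergarten)
    ask: Callable[[str], bool],            # returns True if the student answers correctly
    items_per_grade: int = 5,
) -> int:
    """Return an estimated instructional grade level (0 = kindergarten)."""
    visited = set()
    grade = start_grade
    while True:
        visited.add(grade)
        items = item_banks[grade][:items_per_grade]
        accuracy = sum(ask(item) for item in items) / len(items)

        if accuracy < 0.5 and grade - 1 >= 0 and (grade - 1) not in visited:
            grade -= 1   # struggling: adapt downward, as far as kindergarten
        elif accuracy >= 0.8 and grade + 1 <= start_grade and (grade + 1) not in visited:
            grade += 1   # succeeding: adapt upward, capped at the enrolled grade
        else:
            return grade  # performance is in range, or there is nowhere new to go

# Hypothetical usage: a 6th grader whose simulated ability tops out at grade 3.
banks = {g: [f"g{g}_item{i}" for i in range(5)] for g in range(7)}
student = lambda item_id: int(item_id[1]) <= 3   # crude stand-in for a real student
print(adaptive_diagnostic(6, banks, student))    # prints 3
```

The point of the sketch is the design choice, not the code: the test starts at the enrolled grade but is allowed to travel far below it, so the result is an instructional level that can drive teaching, rather than a single pass/fail judgment against grade-level standards.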
Final thought: learning is sequential. Yes, you can leapfrog and pick up some concepts out of order, but most learning, such as upper-grade English Language Arts and mathematics, is sequential. Would you ever put a second-year French student into a fourth-year French class and expect them to learn and catch up without special support? No, you would not. And yet this is what we do to our current students in most general education classes. A few years ago, too many students in New Jersey were failing Algebra I in 9th grade, so some districts pushed Algebra I down to 8th grade. How did that make sense?
This robotic adherence to grade-level instruction, pacing guides, and testing is blinding. Over the past two years, lots of smart educators have lamented learning loss or unfinished learning. How do we solve this dire issue? The answer is clear: follow a special education model. Test each student diagnostically, find their instructional levels (or present levels), teach them individually, and finally progress monitor what they are being taught. We have the technology to do this. Do we have the will?