
Metric, Metrics Everywhere and Nothing to Make Us Think!

February 11, 2013
In a recent meeting with a major city school board president, I was told that “the school district doesn’t consider any programs that do not have demonstrable statistics as to their effectiveness.” My retort? “How has that been working out for you?”
This particular school district – like many urban school systems – is plagued with problems begging for solutions. Like most districts, large and small, it has been seduced for years by the idea that the solutions lie in the statistics of “demonstrably effective programs.” What the program salesperson doesn’t say – and perhaps does not appreciate – is that any educational product’s measured success depends on how fully the people using the program commit to implementing it. And that commitment must rest both on an understanding of the problem being addressed and on agreement that the program will solve it.
Snapshot metrics, whether gathered by a program or product provider or by a school system about its own performance, merely confirm the obvious: good schools perform well on the selected measures; struggling schools perform poorly. The results are typically correlational and only rarely causal. They confirm what we have already concluded, but they tell us little about the conditions that produced the results and offer no direction for improving them – the very information any improvement effort needs.
It has been said that not everything we measure is important and not everything important can be measured. This is particularly true of a process as complicated as teaching and learning. Metrics have a role in education, but their purpose should be formative. Testing should be a means of measuring student performance and achievement over time, and the results should be used to improve both program efficacy and individual performance.
Today’s obsession with metrics focuses on the summative without any understanding of the causes of that performance, whether excellent or poor. Metrics force schools and individual students into competitive comparisons while ignoring the true value added by the educational experience. They describe only “what is” and contribute nothing to understanding “what could be.”
Fortunately, there are exceptions to the “all about metrics” school of thought. I have visited schools that see the true value of gathering data and use it accordingly. A growing number of products and programs candidly acknowledge their limitations. And, importantly, there are educational professionals, teachers and administrators alike, who have had enough and are developing practical, qualitative measures to supplement the standard means of assessment.

The next time I meet that school board president, I will remind him that sometimes common sense makes more sense than relying solely on statistical significance. “How might that work for you?”
