A recent article in the January 2014 issue of Wired magazine, written by Felix Salmon and titled “Numbed by Numbers: Why Quants Don’t Know Everything,” raised some interesting points to consider about current trends in education and high-stakes testing. A lot has been written about the benefits and drawbacks of such tests, and many parents, educators, and students sit on both sides of the fence. I would like to present a different view on the matter: maybe high-stakes tests aren’t so bad; perhaps it’s all in how we give them.
Few would dispute that good data produces good results. Analysts have used quantification methods to extract trends and conclusions from sets of data for decades. In the article, Salmon examines what we as a society have learned from this process and why quantification often fails. Before we discuss quantification and its effect on education, it is important to understand exactly what quantification is: taking large quantities of raw data and using them to make determinations or predictions. In its truest form, quantification is incredibly reliable and incredibly accurate, which explains its rise in popularity. Why leave to chance what you can predict using concrete information? Sounds simple, right?
Data doesn’t lie; people do. So why is quantification not always a good thing?
The main point of the article is that quantification unfolds in four stages: pre-disruption, disruption, overshoot, and synthesis. Quantification often arrives touted as the savior of an organization or system, put in place for its accuracy and promise of maximum profitability (pre-disruption). Once quantification takes over, results often come quickly (disruption), and the people profiting from those results become extremely happy. Quantification in the education system has now entered the stage of disruption. In an effort to improve education, leaders have turned to quantification as a means of assigning value to a teacher, curriculum, school, or administration in order to measure progress and success. High-stakes tests provide the perfect vehicle for this quantification, turning learning into raw data. With the introduction of high-stakes tests, it became immediately clear which students were meeting expectations and which weren’t. That clarity translated into mass adoption of the practice across the country. What better way to determine which are the best schools than to assign a score to each school’s success?
What could go wrong?
To answer this question, we must understand the third stage of quantification, the stage schools are currently beginning to experience: overshoot. Overshoot is the natural tendency for people to “game” the system. Once it has been determined which practices produce positive results, people begin to focus on those results, tweaking practices to ensure improvements at the expense of everything else. In education, this has produced the trend of “teaching to the test.” While this practice means better test scores and attractive data, does it paint a true picture of student learning and school success? Many would argue that better test scores mean better performance, yet real-world data tends not to agree. Students are still graduating without the skills necessary for today’s modern workforce.
If we truly want to improve schools, let’s learn from the past mistakes of quantification and move directly to its last stage. One of the biggest lessons learned from quantification is that data works best when the human factor is not removed from the equation. Synthesis is “the practice of marrying quantitative insights with old-fashioned subjective experience.” It is through this process of bringing people back into the data that businesses and data-driven analysts have been able to improve results dramatically, by as much as 15–20%, removing the negative impacts of overshoot and correcting past mistakes.
The best example of the impact of overshoot is the banking industry. Having relied on quantification for years to determine loan qualifications, and in an effort to boost profits, banks took data to the extreme, in many cases completely eliminating the human factor. Decisions were made by algorithms and computer software rather than by bank employees and administrators. This produced extremely high profit margins, allowing banks to operate with equally lofty debt. That overconfidence and lack of oversight resulted in one of the largest collapses in the history of both the housing market and banking confidence. Let’s not let this happen to education.
There is a better way.
Rather than relying solely on data from high-stakes testing, it’s important to remember the human element in education. Countless factors contribute to a student’s development, most of which are directly observed by those closest to the child, who, ironically, are often the same people removed from the equation in quantification. The human factor is essential to learning; remove it and all that’s left is data. What about the things children need to learn that aren’t measured by the test: leadership, interpersonal skills, collaboration, team-building, creative thinking, and confidence? Many businesses are desperate for new talent in all of these areas, yet none are measured through high-stakes tests (and most could not be). Only those around the child can truly observe many of these behaviors.
It is essential, then, that we not remove the “human factor” from the equation of learning and that we recognize that the teachers, parents, administrators, service providers, and all of the other people involved in a child’s learning are just as important (if not more important) than the data itself. It’s time for education to engage in the “humanification” of data: taking the information a step further and teaching teachers and parents how to interpret and use the data effectively without setting aside their experience and wisdom. Teachers need to feel invested and included in the process. It is only through synthesis, combining the wisdom and experience of people with the information data provides, that we will truly achieve a means of assessing the “whole child,” not just the parts that are easy to measure.
If you have specific questions that you would like to have answered that are related to this topic, feel free to ask them in the comments section below. For more information about the author, please visit: J.M.Cataffo’s Author Website