Data. Analyzing data. I remember that I would literally groan and face palm when I heard an administrator mutter those words. At my previous school, the discussion of data was always paired with bad news regarding test scores. Or targeted populations. Sometimes the doom of program improvement, and teaching to the standardized test. At the time, the only data I cared about was whether there were enough students to fill four sections of band!
I underestimated the prevalence and importance of data. In fact, we use data in our lives both consciously and subconsciously. Google Maps uses data when determining how long it will take me to get to a destination. My doctor uses data when figuring out the dosage for a prescription. I use data when deciding which checkout line to get in at the grocery store. Just picture it: there are three lanes open. In lane 1, two patrons have completely full shopping carts. Lane 2 has five patrons with 3-4 items each. In lane 3, the ONLY patron is my 85-year-old mother-in-law, but she’s searching for her checkbook. Based on the anecdotal data you’ve collected over the years about how this scenario tends to play out, which lane would YOU choose?
I will fully admit that I am not an expert on using data to guide my teaching. That’s why my goal for this year is to use it more often to help me make more informed decisions. I want to be more purposeful in my approach, since I’m aware of how easy it is to use and analyze data incorrectly. If I truly want my students to be better writers, to be more literate, to appreciate the importance of reading informational texts, to be independent thinkers, and to communicate effectively, I have to examine my current practices. Otherwise, I will most likely continue with business as usual.
When I should’ve used data but didn’t
A prime example of this is my band director days. Back then, I honestly thought I was badass. Seriously. If any of my former band directors ever read this post, they’ll probably chuckle at my conceit. I was conflating my popularity as a band director (with the students) with being a good one. I packed those sections of band because I was fun, but nobody cared if my bands were any good. Wow, they’d say. Kim Lepre has 4 sections of band! She must be doing something right!
And I was! I was recruiting like a salesman and dangling carrots like field trips to attract students. Join band, go to Disneyland! was my winning slogan. I don’t know if I actually fostered a love for music, or if students just stayed for the field trips. Very few of my middle school students stayed in band through their senior year in high school, so I can’t say whether they really liked band or just liked having a fun teacher.
However, it became apparent that my skills in recruitment didn’t translate into skills as a band director. To my constant frustration, my bands consistently scored mid-range and below at band festivals, which was the only metric I had. I couldn’t figure out why my bands never improved, so I blamed the judges, blamed the lack of elementary school band programs, and blamed the system in general.
Looking back, had I used data to guide my teaching, I’m confident that I would’ve improved, along with my bands’ festival scores. I might have even remained a band director, since I would have experienced some success. I should’ve looked at the judges’ feedback and scores and used them to change how I taught those skills. If we kept getting nailed on intonation, that should’ve been a clue that we – the band and I – needed to work on it. The blame game only works for so long, and in the end, if you teach in a bubble, you can only take your students so far.
You’ll never really know how well your students, and in turn, your teaching, are doing unless you closely examine the data.
Today, I feel good about saying that I have solid classroom management and class routines and procedures. My lessons tend to be engaging. I believe I’m good at identifying which skills my students are successful in demonstrating, and which ones always need work. In my PLC, we’re looking closely at last year’s data to determine which standards we should spend more time on, and which standards we can simply touch on. It’s a bit of a conundrum though – if we focus more on the standards the students were not successful at, will the other ones suffer? Is it our teaching or the test (there’s that blame game again)? We’ll never know unless we measure with data and properly interpret it.
I do want to make it clear that when I speak of data, I’m referring to more than just standardized test scores. Those are only one way to measure success, and quite honestly, they often feel irrelevant to me. We receive them far too late to intervene, and while I do use them in some form to adjust my teaching, the far more salient data is what I collect on a regular basis: how often Jonah yelled out an answer, how many students failed to use a transitional phrase, or how many times I taught and assessed passive voice. Analyzing that kind of data is what helps teachers grow and hone their craft.
I’m curious about how other teachers use data to guide instruction. There are many great books and articles on the topic, but I do best with real-world examples. While I plan to do my own research on this practice, I’d love to hear from my readers about how they employ data-driven teaching. So please leave a comment, suggestion for further reading, or any advice.