Teacher effectiveness, in my experience, is typically measured in a few ways: classroom observations, student test scores, and student surveys. Except when gathered in the most nuanced and careful way, I don’t think these data points yield a good understanding of how effective a given teacher is. Why don’t we evaluate student work samples as indicators of how well a given teacher is teaching?
Right now I’m in the process of completing one of the four components of National Board Certification, which involves sending in samples of student work across time. I’m supposed to send in three samples of student writing from three different points in the year, and then explain how my instructional practices helped the student to achieve the writing-related goal I set out for them. Essentially, what the judges are looking for is that something that’s happening in my classroom is resulting in actual, demonstrated student growth.
One of my administrators came in to observe one of my classes this past week. I appreciated how she circulated throughout the room, asked students what they were working on, and seemed genuinely interested in what was happening in the class. But whatever the result of the observation, what she doesn’t have is evidence of where the students were before they started the unit or lesson, and where they’ll be after they complete it. She doesn’t have the necessary data to tell whether or not my teaching is actually doing what I want it to.
The “student work” (scare quotes intentional) that usually gets used for teaching evaluation tends to take the form of fill-in-the-bubble standardized tests. For public school teachers more so than for me, these tests are the bane of teachers’ collective existence. They are often asinine, they are too frequent, and they hover like ugly, dark clouds over our curriculum and classrooms. Open-ended, authentic assessments (often in the form of writing and higher-order activities) that actually ask students to think are more demonstrative of true learning. So why not measure students’ progress according to goals we want them to reach? And why not measure teacher effectiveness in terms of how well they help students to reach those worthwhile goals?
What might this look like? Well, the initial conversation between the teacher and administrator might involve the question, “What do you want students to learn by the end of this unit or period of time?” The teacher might then be observed once or twice, and then be asked to submit random student writing samples. A follow-up conversation could involve the teacher and administrator sitting down to cooperatively analyze student growth as demonstrated in those samples. What’s going well? What could be improved upon? This would be a much more intensive process than administering tests or conducting surveys, but that’s the work to be done, isn’t it?
Until embarking on the National Board journey, I’ll admit that I’d never undertaken this exercise on my own. Though as a teacher I have a sort of intuitive sense of how a student is progressing over the course of a year (it’s very hard to get an objective measure of, say, a student’s ability to assess historical causation), I hadn’t really looked at my students’ writing over time and asked whether or not what I was doing in the classroom was really helping them to get better at thinking. I think administrators and school districts ought to be asking that question, rather than relying mostly on teacher-centered observations of our supposedly student-centered classrooms.