YES YES YES!!!
I can create a pretty good rubric for any given introductory-intermediate programming assignment. Know why? Because I’ve seen literally thousands of student solutions. Can I do that for a brand new style of assignment I’ve never done before? Heck no!
Crappy rubrics are one of my many pet peeves. Far too many educators treat the term “rubric” as if it were simply another word for marking guide. <sigh>
Also, professional Educationists (…you know, professors of education, many of whom have never actually taught anything other than “Education”. That would be like teaching programming without ever bothering to learn anything about the domains of the problems you are trying to solve with those programs. There are plenty who do that too.)… Where was I? Oh yes, professional Educationists (read: Education Faculty faculty) seem to be all of one mind when it comes to the wonderfulness of rubrics. The problem is, many of them INSIST that every assignment learning task (for some reason, “assignment” is not the right word for assignments anymore) in every course taught be assessed using a rubric. First time the task is being assigned? No matter, make up a rubric. I guess we’re supposed to just know, in advance, all the different ways people will approach the task.
Hmmm… maybe THAT’s why almost ALL of the Education grad-level seminar courses at the U of Calgary include a research-style paper as the major assignment. It’s because they have the rubric!!!
There’s a saying: “Nothing is more dangerous than a professor with a PowerPoint.” (In case you’re wondering, it’s because far too often it also means that the professor will follow that ppt faithfully, regardless of the students’ needs.)
Well, here’s one to add to it: “Nothing is more dangerous than an Educator with a rubric.” (I spent a LOT of time generating this rubric, of COURSE I’m going to give out assignments that fit my rubric. Until I retire.)
Alas, as I wrote in my last post, as with other good ideas, there has been some stupidification of this tool. I have seen unwise use of rubrics and countless poorly-written ones: invalid criteria, unclear descriptors, lack of parallelism across scores, etc. But the most basic error is the use of rubrics without models. Without models to validate and ground them, rubrics are too vague and nowhere near as helpful to students as they might be.
Consider how a valid rubric is born. It summarizes what a range of concrete works looks like as reflections of a complex performance goal. Note two key words: complex and summarizes. All complex performance evaluation requires a judgment of quality in terms of one or more criteria, whether we are considering essays, diving, or wine. The rubric is a summary that generalizes from lots and lots of samples (sometimes called models, exemplars, or anchors) across the range of quality, in response to a performance demand. The rubric thus serves as a quick reminder of what all the specific samples of work look like across a range of quality.
Cast as a process, therefore, the rubric is not the first thing generated; it is one of the last things generated in the original anchoring process. Once the task has been given and the work is collected, one or more judges sort the work into piles while working from some general criteria. In an essay, we care about criteria such as valid reasoning, appropriate facts, and clarity. So the judges sort each sample into growing piles that reflect a continuum of quality: this pile has the best essays in it; that pile contains work that does not quite meet the criteria as well as the top pile, and so on.
Once all the papers have been scored, the judge(s) then ask: OK, how do we describe each pile in summary form, to explain to students and other interested parties the differences in quality across the piles and how each pile differs from the others? The answer is the rubric.
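Since my own domain is programming, here’s how I picture that sort-then-summarize step as a toy Python sketch. Everything in it (the Sample class, the criteria list, the example data) is invented purely for illustration; real anchoring is a human judgment call, not something you would automate. The point is only the order of operations: the piles exist first, the rubric is derived from them afterward.

```python
from collections import defaultdict
from dataclasses import dataclass

# Illustrative criteria a judge might keep in mind while sorting essays.
CRITERIA = ["valid reasoning", "appropriate facts", "clarity"]

@dataclass
class Sample:
    author: str
    pile: int    # judge-assigned quality pile: 4 = best ... 1 = weakest
    notes: str   # what the judge noticed while sorting this sample

def build_rubric(samples):
    """Summarize each pile only after the sorting is done -- the rubric comes last."""
    piles = defaultdict(list)
    for s in samples:
        piles[s.pile].append(s)

    rubric = {}
    for level in sorted(piles, reverse=True):
        judged = piles[level]
        # In practice the judges write each descriptor by re-reading the pile;
        # here we just gather their notes as the raw material for that summary.
        rubric[level] = {
            "criteria": CRITERIA,
            "evidence": [s.notes for s in judged],
            "anchors": [s.author for s in judged],  # samples that exemplify this level
        }
    return rubric

if __name__ == "__main__":
    work = [
        Sample("A", 4, "sound argument, well-chosen evidence, easy to follow"),
        Sample("B", 3, "mostly sound, a few unsupported claims"),
        Sample("C", 2, "reasoning hard to follow, facts thin"),
    ]
    for level, entry in build_rubric(work).items():
        print(level, entry["evidence"])
```

Notice that build_rubric never invents quality levels out of thin air; it can only describe piles that already contain real work, which is exactly the point about models coming before rubrics.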