Intelligent vs. thoughtless use of rubrics and models (Part 1) | Granted, but…

Approximate Reading Time: 3 minutes


I can create a pretty good rubric for any given introductory-intermediate programming assignment. Know why? Because I’ve seen literally thousands of student solutions. Can I do that for a brand new style of assignment I’ve never done before? Heck no!

Crappy rubrics are one of my many pet peeves. Far too many educators treat the term “rubric” as if it were simply another word for marking guide. <sigh>

Also, professional Educationists (…you know, professors of education, many of whom have never actually taught anything other than “Education”. That would be like teaching programming without ever bothering to learn anything about the domains of the problems you are trying to solve with those programs. There are plenty who do that too.)…. Where was I? Oh yes, professional Educationists (read: Education Faculty faculty) seem to be all of one mind when it comes to the wonderfulness of rubrics. The problem is, many of them INSIST that every learning task (for some reason, “assignment” is not the right word for assignments anymore) in every course be assessed using a rubric. First time this work is being done? No matter, make up a rubric. I guess we’re supposed to just know, in advance, all the different ways people will approach the task.

Hmmm… maybe THAT’s why almost ALL of the Education grad-level seminar courses at the U of Calgary include a research-style paper as the major assignment. It’s because they have the rubric!!!

There’s a saying: “Nothing is more dangerous than a professor with a PowerPoint.” (In case you’re wondering, it’s because far too often, this also means that the professor will follow that deck, faithfully, regardless of the students’ needs.)

Well, here’s one to add to it: “Nothing is more dangerous than an Educator with a rubric.” (I spent a LOT of time generating this rubric, of COURSE I’m going to give out assignments that fit my rubric. Until I retire.)

Alas, as I wrote in my last post, as with other good ideas, there has been some stupidification of this tool. I have seen unwise use of rubrics and countless poorly-written ones: invalid criteria, unclear descriptors, lack of parallelism across scores, etc. But the most basic error is the use of rubrics without models. Without models to validate and ground them, rubrics are too vague and nowhere near as helpful to students as they might be.

Consider how a valid rubric is born. It summarizes what a range of concrete works looks like as reflections of a complex performance goal. Note two key words: complex and summarizes. All complex performance evaluation requires a judgment of quality in terms of one or more criteria, whether we are considering essays, diving, or wine. The rubric is a summary that generalizes from lots and lots of samples (sometimes called models, exemplars, or anchors) across the range of quality, in response to a performance demand. The rubric thus serves as a quick reminder of what all the specific samples of work look like across a range of quality.

Cast as a process, therefore, the rubric is not the first thing generated; it is one of the last things generated in the original anchoring process. Once the task has been given and the work is collected, one or more judges sort the work into piles while working from some general criteria. In an essay, we care about such criteria as: valid reasoning, appropriate facts, clarity, etc. So, the judges sort each sample into growing piles that reflect a continuum of quality: this pile has the best essays in it; that pile contains work that does not quite meet the criteria as well as the top pile, etc.

Once all the papers have been scored, the judge(s) then ask: OK, how do we describe each pile in summary form, to explain to students and other interested parties the difference in work quality across the piles, and how each pile differs from the other piles? The answer is the rubric.

via Intelligent vs. thoughtless use of rubrics and models (Part 1) | Granted, but….

In case you’re interested, I have posted some of the rubrics I’ve used over the years here.
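For the programmers in the audience, the sort-first, summarize-last process Wiggins describes maps neatly onto a few lines of code. This is just a toy sketch — the sample essays, scores, and judges' notes below are all invented for illustration — but it makes the ordering vivid: the rubric's level descriptors are *derived* from the piles at the very end, not written up front.

```python
# Toy sketch of the anchoring process: judges sort concrete samples of
# work into quality piles FIRST, and only then summarize each pile.
# Those summaries become the rubric's level descriptors.
# All names, scores, and notes here are invented for illustration.

from collections import defaultdict

# Step 1: each sample carries a holistic quality judgment (1-4) and the
# judge's notes on why it landed where it did.
samples = [
    ("essay_a", 4, "tight reasoning, accurate facts"),
    ("essay_b", 2, "some valid points, several factual errors"),
    ("essay_c", 4, "clear thesis, well-supported"),
    ("essay_d", 1, "off-topic, unsupported claims"),
    ("essay_e", 3, "mostly sound argument, uneven clarity"),
]

# Step 2: sort the work into piles by quality level.
piles = defaultdict(list)
for name, score, notes in samples:
    piles[score].append((name, notes))

# Step 3: only now, with the piles in front of us, do we summarize each
# pile -- and that set of summaries IS the rubric.
rubric = {
    level: "; ".join(notes for _, notes in work)
    for level, work in sorted(piles.items(), reverse=True)
}

for level, descriptor in rubric.items():
    print(f"Level {level}: {descriptor}")
```

Note that deleting Step 1 and writing `rubric` directly is exactly the “first time this work is being done? No matter, make up a rubric” mistake: descriptors with no piles of actual work behind them.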



1 Comment

  1. I find it difficult to use rubrics even on assignments I’ve given 3 or 4 times before—students come up with startlingly different ways to solve problems, some of which are great and some of which are bad in surprising ways. Close and intelligent reading of their work is the only way to figure out what they’ve done and what feedback they need. I could write some generic rubric about grammar, programming style, documentation, correctness, … and it would be a total waste of time.

    I can see the need for rubrics in very large classes and exams, to standardize grading across dozens of graders. The result is generally uniform but low-quality grading (see all the problems that the SAT essay has). The solution is not to magically improve rubrics, but to eliminate large classes and exams, and have intelligent grading of courses small enough that one teacher can do the grading.

    I’ve judged science fairs for over a decade, and have seen the same mistakes made repeatedly, but no science fair rubric I’ve seen ever addresses them—they’re almost all touchy-feely “presentation” rubrics that give so few points for the scientific part of the science fair that they are worse than useless—they actively encourage judges to reward the poorly done but slickly packaged projects.
