## Saturday, June 26, 2010

### Problem Solving: Rubric

This one's a follow-up to a previous post on which I'd still appreciate some push-back (so if you haven't read that one, go there first). I'm still convinced that problem solving/synthesis/application, or anything upstream from skill duplication, needs to be handled separately from the skill itself. That's not to say that a student can't validate a score on a particular skill based on how they do on a richer assessment, but these types of problems are difficult to reassess for reasons stated in the Taxonomy post. I did some tinkering with a rubric and tried it out with my 7th graders. I liked what I saw.

I'm trying to make my students aware of four things:
• There's often more than one way to solve a problem.
• There are usually four ways to represent a problem/solution: verbally, numerically, symbolically and graphically.
• Multiple skills can be put together to create new understanding.
• Strategies can often be generalized into a rule.
I gave my students a pretty open-ended problem...

Tell me everything you can about the relationship described by the points: (2, 5) (-1, -1) and (5, 11).

...and gave them a page with two copies of this rubric.  One for them to fill out when they finished the problem and one for me to fill out after they turned it in.

I'm not completely pleased with this rubric because there will be times when a problem doesn't have multiple points of entry, or when all four ways to represent a problem aren't applicable. Still, giving them the rubric was huge. From that day on, students showing work ceased being a problem. They all realized that everything in class became an opportunity for assessment; I was clear about my expectations, and they rose to meet them.

How does this work with SBG?
Each of the four columns will have its own standard. I may weight them, as I think this is the stuff we're really after. I will also allow a student to call on skills to be validated. For example, if a student can identify finding slope as a skill needed to solve this problem and performs it correctly, the score on the slope standard becomes a 5. I do think that the student needs to identify the skill in order for it to be validated, though.
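As an aside on the slope example: the three points in the sample problem do lie on one line, and a quick numeric check (my own sketch, not part of the original post) shows the kind of "numeric" and "symbolic" representations the Rule of 4 is after:

```python
# Illustrative check that the sample points (2, 5), (-1, -1), (5, 11)
# describe a single linear relationship.
points = [(2, 5), (-1, -1), (5, 11)]

# Slope from the first two points: (y2 - y1) / (x2 - x1)
(x1, y1), (x2, y2) = points[0], points[1]
m = (y2 - y1) / (x2 - x1)   # (-1 - 5) / (-1 - 2) = 2.0
b = y1 - m * x1             # y-intercept: 5 - 2*2 = 1.0

# Symbolic form: y = 2x + 1; every point should satisfy it
assert all(y == m * x + b for x, y in points)
print(f"y = {m:g}x + {b:g}")
```

A student who names "finding slope" as the skill, computes m = 2, and generalizes to y = 2x + 1 would be doing exactly the identify-and-validate move described above.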

I know these ideas need some sharpening and I may be missing something, but I'm throwing them out to you anyway.  So in all your free time...

Matt Townsley said...

I like the idea of assessing open-ended questions, so I'm with you there, David. I'm wondering about the redundancy of doing this over time and putting it in the grade book. If you give another open-ended assessment and a student does better in the "generalization" category, will you overwrite it in the grade book, even if it's centered on quadratics instead of the problem you posted here? In other words, will you only have four columns for the entire grading period, or will these same four columns/standards be entered into the grade book over and over again? If it's over and over again (and they're the same standards), it seems to be a bit contrary to the idea of standards-based grading. Fire away, Mr. Cox.

Sue VanHattum said...

I'm not sure how to use this, but I like it. I like that it got the students taking more responsibility for their learning. I put a quote from this and a link to it at the new sbg wiki.

Jason Buell said...

I think of a rubric as a description of the quality of thinking. I think the first two columns are most problematic.

Yes, it's good when they have multiple methods of representation and problem-solving strategies. I think you run into trouble because you're valuing more/different rather than one or two really in-depth, well-thought-out strategies.

I'm not sure this is coming through, but here's an extreme example: imagine a kid came up with a COMPLETELY original method of solving a problem. She's going to win a Fields Medal. The rubric values four different surface methods that could be completely algorithmic.

In general, I think you should avoid raw counts with rubrics. Sorry, this is a quick comment before I'm off for the day so I've got to cut it short. I'll try to talk to you more about it later.

grace said...

I like that you've clearly identified what you think it is that students should be able to do with this problem rather than setting rubric points on vagaries like "critical thinking" or "logic." I also like that students had the opportunity to assess their own work and perhaps revisit and show more knowledge before turning it in.

One question I have is whether students were given an example of what each rubric point might look like-- what does it mean to "identify skills"? Do they need to write out the name of the standard, or simply use it in their work?

Jason's comment makes me wonder if it's possible to create a general rubric that works with any challenging problem, or if we'd need to tailor different rubrics to different problems. Maybe we'd have to create 10 columns, and then choose the 4 that best evaluate understanding on each individual problem we're assigning.

Jason Buell said...

Hi David, continuation of our twitter convo.

I think the problem with the first two columns is that, while multiple methods of representation probably require greater depth of knowledge, they don't guarantee it. I think the easiest fix for this is to include indicators of depth such as "Uses four methods of representation and shows the ability to apply these skills to unique problems..." or something like that based on the taxonomy post. The next step down might be being able to produce the representations but not generalize them to new problems. The next step down could be a purely algorithmic representation.

Since you want depth of thinking, you need the measure to be depth. I think by just focusing on quantity, that's what you'll most likely get. The stereotypical example would be all those ELA papers kids write where they need to give "three reasons for..." and that's what they do. Three crappy reasons. I think most of us would take one really well-defended one.

I also might agree with Grace and say that maybe some of those might be better off embedded into specific standards since they don't seem to always be necessary.

All that being said, this is ten times better than anything I've come up with so any criticism is on the margins, not on the whole.

Jennifer Borgioli Binis said...

Dave - Thanks so much for putting your thinking out there and inviting comments. As a self-professed rubric junkie, I'm always thrilled to see the new ways they are used.

One challenge that your rubric presents for me is the issue of self-assessment. In a checklist, a student can look and see if the item/task is present and check it off. If it's not, she adds it, then checks it off. A rubric is a tool to help students improve the quality of their work. How might a learner use your rubric to improve their work? What would a student need to do to demonstrate they use skills correctly versus use some skills correctly? What is an example of "some" skills? "Some" is a problematic term when it comes to quality...

Would it be possible to expand the "Rule of 4" column by clarifying? Are some of the rules more important than others? Is it possible that the lowest level student can name the 4 rules but not actually do anything with them?

Thanks again for sharing and I look forward to hearing your thoughts. My thinking on rubrics can be found here: http://qualityrubrics.pbworks.com/ and here: http://grand-rounds.blogspot.com/2010/06/in-defense-of-rubrics.html

David Cox said...

I think one thing I need to be very clear on is that this rubric is designed to assess problem solving/higher-order thinking/habits of mind...whatever you'd like to call it. So by definition, we are already past the basic skill portion of the grade. I understand your concern, Matt, but I can see doing a few things with the actual scores. Each time a student demonstrates the ability to generalize, it can be entered into, say, Shawn's SBGradebook, and one could use the mode to determine the overall score when it comes to this habit of mind. I don't see how doing this over and over again is any different than Shawn's Investigation standards.

I like your idea, Jason, about changing the language a bit to be more clear, but again, these problems will be unique. These will not be typical problems that we've done in class. So if a student can work a unique problem they've never seen before multiple ways, I still say that they show deeper understanding of this problem than someone who can just get the answer. But let's be clear: a student who can "just get the answer" to a unique problem they've never encountered is still doing very well. On your tests, you have your problems sectioned off according to level of difficulty. If a student solves your highest-level problem one way, they earn your highest score so long as they've nailed all the lower-level problems. What happens if a kid shows you that they can do that same problem more than one way? I'm simply making it a goal to have kids try to solve a difficult problem multiple ways.

Grace, I think you must've been reading my mind when I wrote the rubric. I kept asking myself if some of these categories would prove useful for all problems. You got another 6 columns in mind? I'd be glad to add 'em.

Jennifer,
I'm not sure that there is one method of representation that is more important, although there may be some that are more natural to a problem than others.

I think pitting a "checklist" against a "rubric" may be a false dichotomy. After all, don't we all have some sort of mental checklist to decide what a 5 is compared to a 4? My rubric is merely intending to break down problems that you all would consider to be the advanced level problems on your own assessments.

Alright, lemme have it.

Jennifer Borgioli Binis said...

For me, much of this goes back to the purpose of a rubric. A rubric is a tool for self-assessment, reflection and understanding quality. Sometimes they're used for grading, but that's a secondary use, IMHO.

If you are completing a rubric with a checklist in mind, to my thinking, you're violating what's at the heart of a rubric - communication with students. Ideally, with rubrics, there are no implications, no assumptions. It's all right there for them to read and learn from. This often means students are a part of the process, and rubrics are used sparingly.

At the heart of what you're talking about, I think, is helping students become better problem solvers. To me, that goes beyond breaking down the Rule of 4 and how many representations you can use. Consider this analogy: when you get really good at driving a standard, you may have lost your ability to articulate when to release the clutch and press the gas. Your good problem solvers may internalize the Rule of 4 and thus appear on your rubric to lack a skill if they don't use them all. I go back to the question of: to what degree?

I started to draft a supporting rubric based on what I think you're looking for. I'd love to know if I'm correctly interpreting your expectations...

grace said...

Maybe "multiple methods" could have a variation that involves logically checking your work, or something like that. You can verify that what you've done makes sense by solving using a different method and getting the same answer, or by proving its logic in some other way-- that way we have a double check for students who do this: http://xkcd.com/759/

I don't have 6 more columns, but I'll think about it :) Wonder if it'd be meaningful to include a strand about being able to explain your work.

David Cox said...

Jennifer: If you are completing a rubric with a checklist in mind, to my thinking, you're violating what's at the heart of a rubric - communication with students. Ideally, with rubrics, there are no implications, no assumptions. It's all right there for them to read and learn from. This often means students are a part of the process, and rubrics are used sparingly.

Can you elaborate on this a bit? On one hand it seems you're saying the rubric can be used to guide the student, yet be used sparingly. What do you mean by no implications or assumptions?

Grace:
Checking work may not be a bad idea, especially if we're talking about using alternative methods to do so. Explaining work is another one to consider. Thanks.

Jason Buell said...

Sorry for taking forever to get back to this. Life always intervenes.

After reading your comments I better appreciate what you're going for. I would probably still prefer a depth/quality indicator for the Rule of 4. I'm particularly concerned with the requirement to represent a problem verbally (or presumably in writing) when it comes to English learners. This may be a language issue rather than a math understanding issue.

I think I'm having trouble visualizing where this fits into the standard flow that you outlined before. The way I understand it, this is kind of separate from your assessment cycle. So after X number of students masters the unit on graphing, do you continue on and drop the open-ended problems on them, or is it embedded throughout?

I also wanted help on the open-ended nature of the problem. I've run into this issue a few times with "tell me everything you know" types of problems. I usually get the same BS over and over again. That is, I find that Student X knows a certain thing really well and can keep recycling that thing again and again. I don't find I'm getting as much info as I'd like from those types of problems. How do you overcome that? I don't know if that's subject-specific, but in science the laws are, well, universal, so you can apply those laws to every unique situation.

(You're going to answer, "Get better problems" right?)

David Cox said...

I'm with you on quality over quantity. I'll probably use both. I'm thinking that I'll drop some rich problems on them a couple of times per quarter. I'm still unsure if I want new to replace old in these standards, or use the mode, or (forgive me for saying this) maybe even averaging, since each problem will be independent of the others. I liked your journey example, but is it ok to average many different journeys? Still fleshing this out.

Yeah, get better problems!

But really, if expectations are clear and a kid is just bs-ing his way through the problem, that'd fall pretty low on any rubric if we keep in mind that the right answer is just a byproduct of what we're really looking for.