Friday, May 29, 2009


All the new schools come with cafegymatoriums so I figure I'll go with a 3-in-1 post.


For 12 years I assigned homework, but it always bugged me that we would spend 15+ minutes the next day going over something that many of the students didn't complete.  And if they did bring it in, how did I know it was actually their work?  Now that I have children of my own in school, I am even more bothered by the amount of busywork that imposes itself on family time.

My question isn't regarding the validity of homework.  My question revolves around the idea of how to make homework matter.  How do we make it meaningful to our students?  Can we tie it to assessment and/or differentiate it so that kids can work on what they need at any given point in time? And further, can homework become part of a meaningful dialogue between teacher and student rather than a box to be checked on the daily "to do" list? 


Quiz, Quiz, Test.  Quiz, Quiz, Test. Quiz, Quiz, Test. 

Isn't that how the pattern goes?  Followed that one too.  But again, over the past few years, my view of assessment has changed.  When do we assess?  How often?  How many times should a student have to show us he can do something?   How many different ways should he have to show it? Multiple choice or free response?  Where does writing come into play? 

For the past three years, we have been dealing with pacing guides and benchmarks because my district is in program improvement.  I am in favor of it.  Pacing guides and benchmarks have allowed us to begin with the end in mind, check for understanding along the way, and then find ways to intervene with students who are struggling to grasp the concepts/skills.  However, I have noticed that teachers have a tendency to become very procedure oriented and lose sight of all the great thinking that can be provoked in a math classroom.  I don't blame this on pacing and benchmarks any more than I blame bad lessons on the tools being used in the classroom.

It has become obvious that the textbook pacing isn't the way we want to go, so we have started to teach one standard at a time.  But I think that many of our standards need to be deconstructed even further to ensure that when we assess, we get a grip on where a student is really struggling.  For example, in California, Algebra Standard 15.0 deals with mixture, rate and work problems.  It isn't enough to say that a kid is struggling with 15.0; we need to be more specific in order to fix the problem.  I know that Dan has done a nice job of explaining the need to break the curriculum down into skills, and he has a great assessment plan.  The part we have struggled with is what to do between the initial assessment and the re-assessment(s).  Which leads us to...


Is it enough to throw some review problems up on the board for warmups and call it "intervention?"  Do we give students different assignments based on their need -- and when we give  these assignments, how do we grade them?  How much weight do they carry in relation to the final grade? Can I actually have 30 kids working on 30 different things?  If so, does that mean that I have to come up with 30 different assignments for each skill I want to remediate?  My head hurts just thinking about it. 

Until recently. 

Why can't we tie them all together? Why can't homework/classwork be prescribed based on the results of an initial assessment, becoming a prerequisite for the re-assessment: a key to unlock the assessment box?  A student can be placed on one of two paths: the road to proficiency or the road to advanced status. Once a student reaches proficiency in a certain standard/skill, he earns a B.  He then has the choice to move toward advanced status in that skill (for an A) or work toward proficiency in another skill. If he never moves onto the advanced path, the score for that skill remains a B.  I am not sure if we should go with a 1-5 grading system or attach a percentage to the rubric score (i.e., 5 = 90%, 4 = 85%, 3 = 75%, 2 = 65%, 1 = 50%).
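For what it's worth, the proposed conversion is mechanical enough to sketch in a few lines. This is just an illustration of the mapping and the two-path idea floated above; the function names and the exact B-cap rule are my own framing of the description, not an established system.

```python
# The 1-5 rubric-to-percentage mapping proposed above.
RUBRIC_TO_PERCENT = {5: 90, 4: 85, 3: 75, 2: 65, 1: 50}

def skill_percent(rubric_score):
    """Convert a 1-5 rubric score on a skill to a gradebook percentage."""
    return RUBRIC_TO_PERCENT[rubric_score]

def skill_letter(proficient, advanced):
    """One reading of the two-path idea: reaching proficiency caps a skill
    at a B, and the score only rises to an A if the student goes on to
    earn advanced status in that skill."""
    if proficient and advanced:
        return "A"
    if proficient:
        return "B"
    return "In progress"
```

The nice property of spelling it out this way is that the grade for a skill can only move up: a student who stops at proficiency keeps the B rather than being averaged down by earlier attempts.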

Over the past month, I have had some release days and have come up with a template.  The challenging part has been deciding which "tasks" a student must complete before being allowed to re-assess.  These tasks are very minimal in that they merely show what I would like a student to be able to do before he is allowed to re-assess.  Could a student take these tasks and "create" his own problems based on the template, or would the teacher need to be more hands-on in helping direct the student?  Are there skills I am missing?  Are there ways to demonstrate the skill that I am leaving out?  How can this be adapted for student interest and/or modality?  And most importantly: does this idea stand a chance? I would really appreciate any feedback I can get on this.

Note: Our math classes are in 94 minute daily blocks, so time for intervention/enrichment is built in.  We will go with a sort of 60-30 model next year where we do regular instruction for the first 60 minutes and leave the last 30 minutes for students to work on their choice of previous skills. 

The proficiency tasks for each skill will be followed by the student completing an exemplar.  My working definition of "exemplar" is a problem that exemplifies the given skill, worked by the student with a written and/or verbal explanation of the process used.  I have found these to be very good authentic assessments.  The student has the option to do this via paper and pencil or mathcast.


Matt T. said...

I've been thinking along the same lines for a while, David. I think Dan uses some sort of assess for proficiency (4 on his s-b scale), re-assess for advanced proficiency (5 on his s-b scale) system, but I can't remember the exact post he mentioned it in. If we keep our learning targets strictly procedural, it seems like we might be neglecting the problem solving/critical thinking angle of math where one is required to bring together multiple ideas and "solve" a complex problem. Does that make sense?

On a side note, assessment has really been the fuel for change in my classroom this year. I wrote a bit about it here: Seems like we're both yearning for "more" in this area of our daily interactions with students. Keep up the great blogging!

David Cox said...

I completely agree. In our classes, the regular classroom instruction will involve much of the critical thinking/problem solving. I also am working on developing different projects for kids who have demonstrated proficiency with the skill. I am not sure if you had a chance to look at the "tasks" in the template I linked to, but I want the "procedural" stuff to be the bare minimum.

I read your post re: assessment and I liked it. How do you currently use the assessment to make those changes?

Thanks for the encouragement.

Sarah said...

I must have messed up some of the HTML. This New School Year should work now.

*crosses fingers*

David Cox said...

Hi Sarah
I did link to Dan's "Comprehensive Math Resource" in the post. I know the idea of a kid maxing out on their score isn't a unique one because that is what Dan has been doing. However, the thing that I am trying to do is find a way to create a template of assignments that kids can do as classwork/homework for remediation and/or enrichment. I can't stand the way homework seems to be a worthless time sucker so I want to be able to give kids a choice on what they do on "their time." This is where I think I need the most feedback. If you check the template I linked to you will find the "task" page. That is definitely the work in progress.

Was your last question directed to me or Dan? If it was me...this year has been pretty good. I think I have fallen short in my planning. I still haven't figured out how to use all 94 minutes wisely. It has been a tough transition going from 55 minute periods in high school to the blocks I am using now. The benefit is that I only teach 3 classes (85 kids) all year long.

kevin said...


Is there any room in your department's budget? There is some online software called ALEKS which is very good and runs about $40 a student. It's set up by course, and the kids have some choice as to what skills they are working on at what time. It's almost all open answer, with review drills and frequent assessments. You only secure a skill if you continue to remember how to do it during each assessment; if you mess up, you can lose the skill. It's being used in my wife's middle school to complement classroom instruction. It automatically differentiates, tracks and reviews. It also offers explanations. It really allows each kid in a classroom to work on something different.

I used it a bit in my high school calculus class (non-AP). They had all passed a precalculus course but when they took the initial assessment on ALEKS they had only about 24-30 topics out of 263 (which brings us back to the question of what grades/assessments mean/do in the classical classroom). What I found is that on "ALEKS" days my kids would come in and sit down and work almost continuously for the 78 minute block on mathematics. That's a huge chunk of time. I got to pop around and answer different questions or give 'mini-lessons' if they were stuck on something.

I think that ultimately, true differentiation requires some amount of technology. Ideally we want 30 kids to be able to learn at their own rate and be successful, but even if we're only teaching one subject at one level, those 85 kids are a lot to handle. Software like ALEKS is a welcome helping hand. Personally, I see ALEKS as a way to free up teacher time to develop the kind of lessons that Dan Meyer does. After all, how much time do we spend creating problems, working through them, etc.? Do we do it any better than a computer? Do we bore 10 kids and lose 10 kids while talking to the middle 10?

P.S. You said you only teach three classes. What subjects/levels? Are they semester courses or year long?

Cheers: Kevin

Jason said...

I'm a science teacher, but I essentially use a grading system based on Marzano's Classroom Assessment and Grading That Work. I've been using it a while, so I can't quite remember where I've diverged from his exact system.

I've had a lot of success with it.

Nuts and bolts: a 0-4 system separated by topics (for me: Forces and Motion, Periodic Table, etc.). A 2 means they have mastered all the basics with help. A 3 means they've mastered the 2 stuff plus all of the complex stuff. A 4 means they can take it and use it in ways not directly taught in class. Students track and graph their scores on a sheet that contains a scaled rubric. For me the key is that I use a pattern of responses for their scaled score. We have twice-weekly quizzes, usually about 7 problems, separated into 3 easy / 3 hard / 1 advanced. 100% on the easy is a 2. 100% easy plus 100% hard is a 3. There are .5 scores for partials.
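Read literally, that quiz rubric is a small decision procedure. Here's a sketch assuming the tier thresholds stated above; the .5 partial-credit rules are my guess at one reasonable reading, since the comment doesn't spell them out.

```python
def quiz_score(easy_correct, hard_correct, adv_correct):
    """Scaled 0-4 score for a 7-problem quiz: 3 easy, 3 hard, 1 advanced.
    Thresholds (all easy -> 2, plus all hard -> 3, plus advanced -> 4)
    follow the comment; the .5 partial-credit rules are assumed."""
    if easy_correct < 3:
        # Basics not yet fully shown; partial credit below a 2 (assumed rule).
        return 1.0 if easy_correct > 0 else 0.0
    if hard_correct < 3:
        # All basics plus some of the complex stuff (assumed .5 rule).
        return 2.5 if hard_correct > 0 else 2.0
    if adv_correct < 1:
        return 3.0
    return 4.0
```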

But I consider everything to be assessment. They can demonstrate their knowledge when we do labs or daily stuff or even just by our conversations. Their final grade comes from the pattern of those last few scores. That is 100% of their grade. I don't include anything except their mastery of the topics. No work completion or participation or anything.

Clear expectations for the students are the key. Knowing exactly what is required for each grade makes it less of a mystery where the grade comes from. Just as important, I think it improves my teaching. Since it's all based on the topic, I have to laser-sight all the lessons and make sure I'm not including fluff and busywork. The hard part for me is getting rid of some of my favorite projects/labs/demos. I did them because they were fun or because I had always done them, not really because the kids learned anything.

Dan Meyer said...

Hi everybody. Great discussion.

Re Sarah's question, "Dan, how’s it all gone this year?"

My largest adjustment this year was that when a student retook a concept and fell short of mastery, I would assign a homework problem which she would have to complete before she could retake anything again. This, as you can imagine, was intended to slim down on the number of students who would come in as often as possible, throwing answer after answer onto paper, hoping to see one stick.

I also did better with data, especially with keeping my students apprised daily on our percent completed per concept per class. Students dug the information, I think. Occasionally classes would get competitive also, which I didn't mind.

I continue to feel very little guilt over describing exactly which skill I would like to see modeled on an exam (i.e., "Solve by Completing the Square"). I need to work on this, but I don't see any simple solutions.

David Cox said...

Money is pretty tight as you can imagine, but I have seen ALEKS. We currently have Orchard which I need to look at a bit more closely this summer.

Our classes are all 94-minute, year-long blocks. I teach the advanced classes: one 7th grade and two 8th grade classes, and the curriculum runs from 7th grade pre-algebra standards to geometry. My main concern isn't for my classes, as my kids are pretty high achievers; it is for the rest of my department.

I will have to look closer at Marzano's system. I agree that the laser sight is important. If you allow labs and conversations to enter into the grade, how do you set up your gradebook?

I am all for formative assessment driving future instruction. I can wrap my mind around it philosophically, but I sometimes have a tough time seeing what it really looks like in the classroom; especially in a way that I can help others understand.

One problem before reassessment? Is that enough? I like your idea of keeping students up to speed on their performance as a class. When you say "percent completed per class", do you mean the percent of kids who no longer have to test on that concept because they have been perfect twice? Do you consider those kids to be "advanced" in that particular concept?

I am trying to find a way to prescribe assignments that students can do that really get into the upper levels of Bloom's. I don't know if I'll ever get there, but I'm gonna keep trying.

Dan Meyer said...

"Percent completed per class" refers to the percentage of students who have passed a concept once meaning they are at proficiency.

And it's possible I'll increase the number of remediation problems next year. I have yet to convince myself this is necessary, though.

Jason said...

Sorry, really late followup. This is what happens over the summer.

My gradebook is set up into different topics: Motion, Graphing, Scientific Literacy, Forces and Newton's Laws, etc. Then it just has a 0-4 score attached, so it'll say something like 0, 0.5, 1.0, 1.0, ... and their current grade is whatever their pattern of scores shows. So this student is probably at a 1.0. If I'm not sure, they're reassessed. You don't get differing amounts of points for labs or tests because they're just being assessed on their mastery of the standards. All the students have a portfolio where they track and graph their assessment results. Although a line graph is what they should be doing, I go for a series of bar graphs because I think it's clearer when they look at it that they're improving.
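"Grade = the pattern of the most recent scores" can be made concrete in several ways. As one illustrative stand-in (not Jason's actual rule, which he leaves to teacher judgment plus reassessment), take the median of the last three assessments; on the 0, 0.5, 1.0, 1.0 example above, that lands on the 1.0 he suggests.

```python
from statistics import median

def current_grade(scores, window=3):
    """Estimate a topic grade from the trend of recent 0-4 scores.
    Uses the median of the last `window` assessments; this exact rule
    is an assumption, chosen so that recent improvement outweighs
    early low scores instead of being averaged down by them."""
    recent = scores[-window:]
    return median(recent) if recent else 0.0
```

The point of any rule in this family is the same as the bar graphs: an improving student's old zeros stop counting against them once the recent pattern shows mastery.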

For your question @Matt... that's definitely one of the strengths of standards-based grading. I keep all the scores in an Excel spreadsheet and can track on a daily basis which standards are being met and which aren't for each student, by class, and overall.

Differentiation in my class usually goes something like this: "Take out your portfolio. If you have a 1 in Motion, go to this table; a 2 in Motion, go to that table; if you have a 3, do this. If you're stuck on finding the slope on a graph, go here." Then I try to float around and help while they work things out.
