In the age of accountability, we have become increasingly skill based. That's not necessarily a bad thing. But I've noticed that with each benchmark, common formative assessment or state test we give, teachers have the tendency to focus on the product and not on the process (i.e., let's give them the skills and pass 'em down the hall). I encourage inquiry with my students. I allow them to fall down the rabbit holes we encounter. In fact, I've been known to scrap the lesson for the benefit of a good question a student has asked. Most of us do. But I've also noticed that even my grades have become increasingly skill based, and I've been wondering how to assess the problem solving stuff.

A colleague clarified some of this for me the other day at lunch. We were discussing the idea that once a student has been shown how to work a problem, it becomes an exercise and no longer carries the attributes that allow a student to be a problem solver. He shared that he'd noticed how his students would ask for an example every time he tried to get them to do some problem solving, and how that just didn't sit right with him. It doesn't sit right because the student is trying to take a problem and apply the teacher's algorithm to it. If the example is close enough, he can plug and chug, get an answer, and we'll be none the wiser.
If we can get our kids to be proficient with the skills, show the ability to put them together in a problem solving situation and occasionally surprise the pants off of us with one of those "wow-look-what-you-did" moments, I'd say we'd call it a good year. This continuum can be tricky, though. The more time a student spends wrestling with the skills, the more time we need to be there for support. As a student becomes proficient with the skills, we can let go a little more and become a facilitator. I think that many teachers get frustrated because they hear of these great things that are going on in the classrooms of others and wonder why it won't work in their room. Well, maybe it will. But maybe it's because we've put the kid on a mountain bike before he can ride the tricycle. Can't expect a kid who is buried in skill acquisition to blow the doors off you with his inquiry methods--which isn't to say that the student doesn't have the ability to inquire; it may just need some dusting off.
At the end of the year, I'd like my students to be able to duplicate skills, apply skills and create something using the skills.

Duplication
Description: Here's the tool. Here's how you use the tool. Show me you know how to use the tool. Do it again, I don't believe you. One more time, just to be sure. Alright, you are now allowed to use this tool whenever you see fit.

Essential question: Can you use the tool?
How: This may vary. Direct instruction, investigations, Socratic method...I don't care. At the end of the day, the student is acquiring a skill. The more of the lifting they can do themselves while acquiring these skills, the better, because they are preparing themselves for the next steps of application and creation. Asking questions is going to be the teacher's best friend.
Assessment: Skills tests in the form of multiple choice or free response. Students are free to reassess at any time during the year as their understanding of the skill changes. This is why I don't have a problem with posting study guides and online examples. There are no surprises on my skills tests. Each skill corresponds to a different assignment in the gradebook. I'm not really too concerned with how they arrive at the answer, as the focus is on the product.
We don't give finals in our middle school, but next year I am going to give a summative test every six weeks or so that covers all skills to date. For example, the first assessment will cover skills 1-8, the next will cover skills 1-19, and we will end with a cumulative skills test covering skills 1-39.

Application
Description: I want you now to build something. Choose which tools you will use. Make sure your final product looks like *this*.
Essential question: Can you put more than one tool together to do something you haven't done before?
How: Give students a problem that uses skills they already have and walk away--sometimes literally. In this process, I think the students should generate more of the questions--although prodding them along with a question now and then isn't a bad thing. Be careful, though: I've found that sometimes I have to take the physical posture of an audience member--actually sitting down and looking at the floor--in order to cease being the primary resource.
Assessment: This can be in the form of a teacher-created project, short assessments requiring students to demonstrate their thinking, or even an observation during class. I've thought about including higher-level problems in my skills tests and letting students who successfully solve them earn a 5, but then I run into the problem of reassessment and context. The skills test will often give a context to the problem that we may not want the student to have. Reassessment on this will be different because it isn't about the particular problem, but the process by which the student attacks the problem. I'd argue that which skills are involved is irrelevant, as long as the students being assessed possess the skills necessary to solve the problem. If this is a written test, it should contain problems the students have never seen before. So I'm thinking about a "Problem solving" weighted category, or maybe a weighted standard that is dynamic the same way the other standards are.
Creation aka: Projects

Description: Now, what would you like to build? Design it. Plan it. Build it. Reflect on what you built. Did you choose the right tools? What would you do differently next time?
Essential question: Do you understand your set of tools well enough to recognize what types of things you can and can't do?

How: Student-generated projects. My 7th graders are working on a relations project where they choose two variables to compare, gather data and investigate the relationship. The final product will include multiple representations of their data and a presentation using the medium of their choice. (I'll blog about it after we finish state testing.) I really like what Shawn Cornally is doing with his physics kids. If anyone can help me figure out how to do this with some precocious middle school math students, write a book and I'll buy it.
Assessment: The project is the assessment.
Creation > Application > Duplication
Success in any of the higher levels validates the lower levels. For example: If a student can demonstrate problem solving ability by writing an equation and solving it, then not only does this affect the problem solving score, but it validates the equation solving skill. If a student can create her own project, then I'd say problem solving looks pretty strong as do any skills that were present in the project.
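To make that rule concrete, here's a quick sketch in code. This is just my thinking out loud, not anything a real gradebook does; the skill names, the `validate_up` helper and the 0-5 scores are all invented for illustration:

```python
# Hypothetical sketch: success at a higher level validates the lower-level
# skills that were used. Scores are on a 0-5 rubric scale.

def validate_up(skill_scores, demonstrated_skills, higher_level_score):
    """Raise each demonstrated skill's score to at least the score earned
    at the higher level (problem solving or a project)."""
    updated = dict(skill_scores)
    for skill in demonstrated_skills:
        updated[skill] = max(updated.get(skill, 0), higher_level_score)
    return updated

scores = {"solving equations": 3, "graphing lines": 4}
# Student writes an equation and solves it during a problem-solving task,
# earning a 5 -- that success also validates the equation-solving skill:
scores = validate_up(scores, ["solving equations"], 5)
print(scores["solving equations"])  # 5
```

Skills that weren't part of the task (graphing, here) are left alone, which matches the idea that validation only flows down through the skills actually present in the work.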
The worst part of all this is that we gotta give it a grade. How do we arrive at a final mark? This year, I left my grades uncalculated until the marking period opened and once grades were submitted, they went back to being uncalculated. I wanted parents and students to focus on the score for each skill and not the "averages." To arrive at a final mark for the students, we look at our rubric the same way one would view a grade point average.
4.5-5.0 = A
4.0-4.4 = B
3.0-3.9 = C
2.0-2.9 = D
< 2.0 = F
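For what it's worth, those cutoffs are simple enough to express as a lookup. This is just a sketch assuming a skill-score average on the 0-5 scale; I've treated each cutoff as a floor so averages that fall in the small gaps between bands (say, 4.45) still get a grade:

```python
def letter_grade(avg):
    """Map a 0-5 skill-score average to a letter using the cutoffs above."""
    if avg >= 4.5:
        return "A"
    if avg >= 4.0:
        return "B"
    if avg >= 3.0:
        return "C"
    if avg >= 2.0:
        return "D"
    return "F"

print(letter_grade(4.2))  # B
print(letter_grade(1.9))  # F
```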
This was simple because there were no weighted categories. But now that I want to focus on problem solving and projects and include summative tests, I need to figure out how this should look in the grade book.
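I don't have the answer yet, but one possibility is a straight weighted average of category scores, each still on the 0-5 scale. The categories and weights below are placeholders I made up to see what it might look like, not a recommendation:

```python
# Hypothetical weighted mark: each category score is on the 0-5 rubric scale,
# and the weights (made up for this sketch) sum to 1.
weights = {"skills": 0.5, "problem solving": 0.25, "projects": 0.125, "summative": 0.125}

def weighted_mark(category_scores, weights):
    """Combine 0-5 category scores into one 0-5 mark via a weighted average."""
    return sum(weights[c] * category_scores[c] for c in weights)

mark = weighted_mark(
    {"skills": 4.0, "problem solving": 3.0, "projects": 5.0, "summative": 4.0},
    weights,
)
print(mark)  # 3.875
```

The resulting mark stays on the same 0-5 scale, so the same rubric cutoffs could translate it to a letter; the real question is still what the weights should be.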