This might be my most important resource release this year.
First, you can read here about all the things that frustrated me about that snazzy 2011 rubric that I used to use (and that got downloaded from this site a lot). Some of them probably frustrated those of you who used it, too. So I decided to do a total overhaul. No starting from the original document allowed. A blank page. (Well, really I started with a yellow legal pad and about 12 Chrome tabs open.)
From talking to lots and lots of teachers about it, I hope I can anticipate many of the questions you might have about the document. Several teachers helped me realize that simply posting it here isn’t enough; you need some explanation of it. I may eventually do a screencast or a PDF, but for now, here are explanations of the sections along with some screenshots.
Just looking at the front you can see some major changes. The old rubric packed in an incredible amount of information, which required very small type. The center sections were the target proficiency levels, and they were colored in, which in my opinion visually communicated that half the rubric was irrelevant. So to begin, since Novice Low is never a target by the time I’m doing performance assessments, I removed it. And since the majority of our students do not reach Intermediate Mid in our classes, I kicked off IM and IH as well. But some do, and so I have promised Bethanie Carlson-Drew that I will develop a version ranging from NH to IM.
You’ll also notice the title is Performance toward Proficiency. This is because most of us are not qualified to say, and it is not our intention on assessments to say, “You achieve X proficiency.” Rather, our message is this: “On this particular performance, you are using language characteristic of X proficiency.”
Another major change was the column on the far right: it is a place you can simply check if either the section is not applicable or there is insufficient evidence to assess the category. For example, the comprehension section is not applicable in a presentational writing assessment.
There are now four major sections on the front page and each is divided into a few subsections.
- Message Type: What language do I use?
The first section is called Message Type and communicates to students what kind of language they are using. The ingredients and how they come together, if you will.
The first sublevel here is structure. What pieces of language are the students using: just words and a few phrases? Phrases and some sentences? All sentences when appropriate? How much does the structure reflect their native language (“Yo gusta deliciouso taco”)? I have to give you a major caveat here: for some unknown reason and in a very confusing turn of phrase, ACTFL says that Intermediate Low pronunciation, structure, etc. are strongly influenced by the first language, and that those features in Novice High may be strongly influenced by the first language. I promise. Check it out. This seems completely backwards to me and so I made a judgement call to switch them.
The second sublevel is depth of vocabulary. I’ve always loved this phrase. It’s what happens when students throw out “I adore it” instead of “I like it” and “many people perished” instead of “many people died.” Is the student just using very common words he’s memorized? Can she begin to personalize words by, for example, adding -ísimo to adjectives?
The third sublevel is context. This is a positive section helping students realize what situations they can handle. Is it very common situations they have practiced? Good job. More contexts that are still familiar, everyday situations? What about throwing a bit of complication in there? Great work! More contexts for you!
QUESTION about this context section: a teacher friend asked me a very good question: if we’re dictating the context in the scenario, is it fair to judge this part? In other words, should this section be eliminated, or is it needed to tell students what kinds of contexts they’re handling and whether, in this area, they’re ahead of or behind the proficiency they demonstrate elsewhere? Let me know your thoughts.
- Message Depth: How do I support my communication?
The second section is called Message Depth and communicates to students how well they are supporting section 1; that is, how does the language they choose to use flesh out their message?
The first sublevel here is content support. This is a section I desperately needed that simply didn’t exist on the JCPS rubric, and I noticed the need for it from scoring AP essay after AP essay. I needed a place to tell students how well they were using prior knowledge to support their message. Could they include references to what they’d learned from authentic resources in the unit? This is what ACTFL calls “talking about something I have learned.” In this section I can tell students how well they provide examples from interpretive sources and elaborate on them.
The second sublevel is communication strategies. How do students sustain communication? Lower-level novices have a lot of difficulty keeping up a conversation and they often switch to English or just stay silent. Or use sí and no in ways that don’t make a lot of sense, right? But as they improve, they can ask some questions and even use minimal circumlocution to keep talking when they don’t know a word for something.
- Message Interaction: How do we understand each other?
The third section is Message Interaction, and it is probably the most straightforward of the group. Simply, can the learner interact with someone in the language? Are they comprehensible, and how much do they need things repeated in order to comprehend something themselves?
This section has to do with the ever-present question of errors. I get asked at almost every workshop: “How do you assess errors? How much do you correct them?” I have two answers, depending on the student’s goals. For the College Board, patterns of error are what you’re looking for and trying to help students eradicate. For example, I had students who consistently wrote verbs with no attempt to change the endings at all. That’s a pattern. On the other hand, ACTFL’s guidelines are more about comprehensibility: when an error causes a breakdown in comprehension, meaning I can’t understand what the student intended, that’s a problem.
Also, the proficiency level sometimes has to do with who can’t understand the learner. I can understand many things that someone who doesn’t speak English, or isn’t used to dealing with language learners, wouldn’t understand. As students improve their proficiency, they begin to be more and more comprehensible to native speakers who are not “sympathetic” – that is, they don’t know how or aren’t willing to work harder to understand someone who is a language learner.
This aspect was on the back page of the previous rubric under “Minor focus.” In scoring assessments, it never felt like a minor focus to me when an error made a learner incomprehensible. So it’s on the front now, on equal footing.
- Cultural awareness: How do I show what I know about other cultures?
This was a glaring omission on the previous rubric, and really it was the AP exam that made me want to add it, followed by the new ACTFL performance indicators, which include a section for cultural awareness (the wording here is adapted from that document). I didn’t have a place to tell students how well they were incorporating cultural knowledge into their production, something absolutely essential for the AP. So if a student can do that, I want to let them know.
To see a much deeper explanation of this aspect including some production examples, please, please read this post.
On to the back page!
The back page is a back-and-forth between me and the learner. They fill out some of this at the beginning of the unit, and some of it after they get my feedback.
- Proficiency Goals
The student, not I, fills in the proficiency level I’m expecting to see demonstrated on this performance.
- The Staircase
On my old rubric, I simply put a smiley face in the box for “approaching expectations” or “meeting expectations,” etc. But what if a student showed Novice High proficiency in two areas and Novice Mid in two? What then? Well, I put the smiley face at “approaching expectations” for Novice Mid, but farther up in the box, toward “meeting expectations” (Novice High). Yes, really. Like any student ever noticed that.
I think it was Greg Duncan and Megan Johnson-Smith who first got me mulling over sublevels to the sublevels. What if there were a way I could tell a student, “Great job! You’re performing Novice Mid in several areas, but look at these two! Novice High!” What is that? Well, it’s Novice Mid +.
I fill out this section. The plus signs and minus signs are my way to communicate how many of the categories they’re holding at a certain proficiency. I love, love this part.
- The grade
Yes. I’ve given in, and there’s a grading scale.
For more information on how I’ve always assigned grades to a proficiency rubric (there’s no change here), see this post.
So, I’m trying to put ownership of the learning in the student’s hands, and my thinking here is that the student looks at the proficiency I’ve marked and circles the expectation box herself. Then you can put the grade in if you want to (I think I still won’t). And you have a nice feedback box here on the proficiency part.
My students aren’t allowed to score below “approaching expectations.” If this happens they must set a date to re-try. Depending on my class size, I also allow students who score “approaching” to re-try if they want to (and I have had several take me up on this offer to improve).
- My language tasks
The one part of the old rubric that absolutely had to go was the “task completion” section. As it turns out, in three years of scoring assessments I had misunderstood this part of the JCPS rubric: what I considered “task completion” was actually covered on the front under language use. The back “task completion” section simply said “I completed (part, almost, all of) what I was asked to do.” And I was scoring AP assessments a lot. On the AP, students are told to incorporate all three of the authentic sources they’ve seen into their presentational essay. If they didn’t, their score would suffer significantly. So it wasn’t a “minor focus” for us, as the rubric labeled it. It was a big deal. And my old rubric didn’t give me a place to say that.
For an analysis of task completion and this issue being the one that inspired me to overhaul the rubric this year, read this post.
In this section, the student writes (at the beginning of the unit) what language tasks they will be asked to perform in the assessment. Will they need to show they can disagree? Incorporate information from an infographic discussed in class? Mention the opinions of another person in class? Here’s where they record that. Then, you check whether they’ve shown strong, weak, or no evidence of each skill.
This section is not a place for students to write “I can use 7 verbs in the preterite tense.” If you have them write that sort of thing here, you might as well tear up the rubric because what you are doing is not a proficiency-based performance assessment, it’s a grammar test masquerading as a performance assessment. If you’ve determined that’s what you’re looking for, stop reading now and close this tab. Please.
- Teacher feedback
This is pretty straightforward.
- Student reflection
This is perhaps one of the most important sections in the rubric, and you owe it to Colleen Lee-Hayes and Natalia Delaat (thank you, colleagues). We’re putting the ownership in the students’ hands!
Whew. If you care about using this kind of rubric, I hope you put up with all that explanation!
Two more issues to go.
Where’s interpretive mode?
Good question. Please know that if you do stand-alone interpretive tasks as part of integrated performance assessments and use an interpretive rubric or some other scoring system to grade them, you are in the majority and I am not. Honestly, I do not know another teacher who handles this the way I do. So don’t feel like I’m telling you that you need to do this.
Eliminating stand-alone interpretive assessments was something the College Board inspired me to do. In the AP essay, students are given sources on which to base their essay, but there are no comprehension questions on the source. Rather, the writer must use the information they understand from all three sources to inform their response.
To me, this is what we do with interpretive tasks in real life, and this question is always in my head: how can my class better reflect the way this plays out in real life? We watch a movie and we don’t fill out worksheets on it. We don’t draw pictures of it or answer multiple-choice questions about the plot line, which we may not accurately remember even though we understood it at the time. No, we use what we saw to tell our good friends what parts we loved, what we hated, why the actress was terrible, and how it compares to the first installment in the series.
That is what I ask students to do with integrated performance assessments. So the answer to your question (“Where’s interpretive mode?”) is that it’s in “Content support” and perhaps also in “My Language Tasks.”
If you’d like to see an example of how I do this, here’s one for novice.
I don’t think this will work.
Please tell me all your issues. I’ve been developing this rubric for six weeks and it’s been reviewed by dozens of teachers, but it can’t be a game-changer unless a whole lot more teachers “get their hands dirty” with it, using it on real production assessments and letting me know how it’s working. I’ll keep updating this post as feedback comes in.
So where is it?
Ready to get the file? If you put up with the rest of this post, you deserve it!
Download the PDF here. Contact me or comment below if you’d like access to a .docx file to edit. Please respect intellectual property rights. If you modify the document for your purposes, please leave the footnote at the bottom directing users to Musicuentos.com/handouts for credits. You may modify the footnote to include a reference such as “Based on the Musicuentos performance assessment rubric. For more information visit Musicuentos.com/handouts.”
I didn’t write this rubric. I simply stole a lot of stuff and put it in one place. I can’t begin to effectively acknowledge how much the work of some very smart people helped inform this rubric in all its drafts. Thanks to Amy Lenord, Colleen Lee-Hayes, Bethanie Carlson-Drew, Martina Bex, and the Ohio language gurus, whose fingerprints can be seen in various sections here. The majority of the wording is taken from the ACTFL performance descriptors, Can-Do statements, and proficiency guidelines; the old Jefferson County (KY) performance assessment rubric; and the Ohio rubrics. Thanks to Natalia Delaat, Thomas Sauer, Sarah Bolaños, and Jacob Shively, who took the time to give me honest, in-depth, extensive feedback that greatly improved the validity and user-friendliness of this document. I know your time is super valuable, and we’re all indebted to you for your generosity with it. And definitely, thanks to Melanie Stilson, who gave me the push I needed to get working on this project that had been on a back burner for a while.
Thanks to all the teachers at Camp Musicuentos who gave me some rocking suggestions for improvements. For one thing, you owe the staircase to them, and that might be the best part of the document.