The Blog

Top post of the year: The 2015 updated rubric

The top post of 2015 – published in August and still accessed twice as much as the next most popular post – was the lengthy discussion and release of my updated performance-toward-proficiency rubric. My old rubric served me well for four years, but it was time for a change. A clean slate, a lot of websites, a lot of feedback, and a lot of collaborative brainstorming later, I finally had something I was willing to put out and test. Check out the post below – the links go to the latest version, revised based on feedback from teachers who have been using the rubric, and I may make some more changes after contemplating my recent end-of-semester assessments.


(And happy 2016, everyone!)


December 31, 2015 0 Comments

Best of 2015 #3: How important is task completion?

The third and fourth most popular posts of 2015 were very close, but with the benefit of a few extra months the post on task completion in rubrics barely edged out the #4 post to take the bronze medal. I'm so glad I wondered publicly about task completion and its importance in life and in rubrics, because the feedback I got on this post was so informative for my rubric overhaul.



December 19, 2015 0 Comments

Semester 1 assessment: Elementary edition

December found us doing our first formal assessment of the semester. That is my reality this year, and I love it. We go at our own pace and make our own rules, and I don't see my students enough to warrant spending our precious class time on assessment instead of engaging with the language. On the other hand, stopping at the end of the semester to do an assessment gives kids and parents some tangible evidence of what we're achieving, and it gives me a ton of helpful information on what we need to work on.

But in contemplating how to do this, my mind screeched to a halt. I had never done a formal assessment for young early language learners. I thought and tweaked for a long time to put together an assessment I liked, and here I want to share it with you, along with what I learned from it.

The Assessment

To view and download the assessment, click here.

Scenario

All the tasks are framed around a situation that is quite plausible in our city: the child meets a Spanish-speaking child at the zoo’s playground.

Interpersonal: Finding out about someone

I played “Marta” and asked children to find out my name, age, origin, likes, etc.  I waited to see if they could initiate this inquiry, and if it was clear they couldn’t, I initiated finding out these things about them, and they used “¿Y tú?” to toss the questions back to me.

Interpretive listening

My friend Pilar recorded herself describing the girl’s mother (see document for audio link).  Children were to listen to the audio 3 times and circle the woman they identified as the mom (I couldn’t afford to copy the assessment in color but I projected the photo in color).  Then they needed to justify their answer to me (in English or Spanish).

color words in context = communication!

Presentational speaking

We’ve been working on my adaptation of Bears in Chairs as our novice-low story this year.  I asked students to tell me whatever they could about one of the final pictures in the book.

(Side anecdote: my students “performed” this story, with one of them narrating, at our end-of-semester celebration. The narrator, when the third osito came out, said “cuatro ositos,” and my 3-year-old, who was interestedly wandering around their performance, gave him a “doh” look and said, “TRES ositos.”)

Interpretive reading

A while back I found an amazing authentic resource in which a bunch of children comment on a blog and introduce themselves, often with interesting novice-level details.  I copied three comments into the assessment and asked children to chart some of this information that they understood.  Then I asked them to write a similar comment based on their own likes and interests.


no measurable proficiency 15 once-weekly hours ago…
I am proud of her!

Interpretive cultural awareness

I asked students to perform our novice level cultural awareness goal.  I gave them a series of images (also projected in color for the flag help) and asked them to identify which were from Spain and which from Mexico.  I included two images of U.S. products.  (I was amused that one kid said Chichén Itzá was in Spain; I would have thought that to be the one they would all get right – it was also a focus in their Maya lesson in history class this semester.)


bit of cultural mixup here… dum dums and Ford, not from Spain
(tacos don’t hail from Spain either…)

The lessons learned

Age factor

My 9-12 year-olds knocked this out of the park. Not that they performed way above their expected level, but they understood what was going on, worked at it diligently, clearly felt comfortable with it (with one notable exception), and gave me answers on most sections.

My 6-8 year-olds, including my own bilingual (reluctant speaker) daughter, were overwhelmed.  The format wasn’t friendly enough, the directions weren’t clear enough, the content was too long.  I’m not sure I’ll even ask them to do the assessment in the spring, but if I do, I’ll pare it down.  I need to cut the scenario altogether and just ask them questions to see if they can answer.  Asking them to circle a described picture: fine.  Asking them to write a justification: no way.  Asking them to identify information in three blog comments: too much.  One well-selected text would have been plenty.  Especially with some cute kid-friendly formatting.


Design tweaks

I knew going into this that my students had not had enough practice finding out where someone lives or is from. I ended up asking this question myself; I think only one of them was able to initiate it.

¿Qué te gusta? was a fine question. Adding hacer, though – that we had not had enough interaction with.

I originally gave them all the names in the chart for the interpretive reading. Why? I don't know. Of all the information in the comments, the names would have been the easiest for them to find out successfully on their own.

It was helpful to find out a little more about why a child circled the wrong mom in the photo; my fastest-processing high-aptitude 9-year-old told me he heard baja at least twice but also heard negro twice so he circled the woman who was wearing two black articles of clothing.  Okay, I get that.

Emphasizing the journey

Some of my kids were only able to say a few words about the Bears picture. Ositos. Cuatro las sillas. That was fine. “Are you done?” I asked. Nod. “Okay, great! Thanks!” In our learning community we are all about the journey – what can you do now, and where are we headed next. It's an adventure much more inspiring than “how are you doing compared to X” or “can you get a high enough grade to Y”. And the younger kids, leaving sections blank, that was fine. We just talked about it. Sometimes I was able to help them discover they could do things they didn't know they could do. I had one 9-year-old absolutely shut down on the interpersonal task with me. She claimed she didn't know how to find out any of these things. She wasn't upset or crying but rather was almost antagonistic. “I can't do any of this.” I said, “Okay, I'll ask the questions first and see what you can do then.” She was able to record my answers in the appropriate place (listening skill!) and use y tú to give me back the question. I said, “See? You could do a lot with this!”


Hmmm… need some more input on baja and negros here…
but great job with bonita!

Have you tried a summative assessment with young learners?  How are they showing you what they can do?

As for scoring – I do not give grades and none of my parents keep them either.  I’m providing these assessments with my new elementary rubric for them to put in the children’s school progress portfolios.


December 17, 2015 0 Comments

Correcting all those errors? Step away from the red pen. (BlackBox)

Disclaimer: No red pens were harmed in the making of this episode.

Here we confront a continual dilemma in language teaching.  As language teachers who are good at the languages we teach, every error grates on our ears and eyes.  We want to correct.  We want to cross out the masculine ending and write the feminine one.  We want to insert the missing article.  We want to shout, “WHEN HAVE YOU EVER HEARD ME SAY ME LLAMO ES AND ACCURATE INPUT IS ALL IT IS SUPPOSED TO TAKE AND YOU ARE STILL SAYING ME LLAMO ES!?!”


Watch Karen’s informative review of this article on one researcher’s study.  Exactly how effective is written corrective feedback? Don’t expect a hard and fast answer.  It’s a muddy issue!

For more information about the Musicuentos Black Box collection of resources, including how to help keep this resource available for teachers everywhere, visit the Black Box page.


October 15, 2015 1 Comment

ANNOUNCING: The 2015 updated performance assessment rubric

This might be my most important resource release this year.

First, you can read here about all the things that frustrated me about that snazzy 2011 rubric that I used to use (and that got downloaded from this site a lot). Some of them probably frustrated those of you who used it, too. So I decided to do a total overhaul. No starting from the original document allowed. A blank page. (Well, really I started with a yellow legal pad and about 12 Chrome tabs open.)

Unpacking it

From talking with lots and lots of teachers about it, I hope I can anticipate many of the questions you might have about the document. Several teachers helped me realize that simply posting it here isn't enough; you need some explanation to go with it. I may do a screencast, and I may do a PDF, but for now, here are explanations of the sections along with some screenshots.

2015 rubric page 1

Just looking at the front you can see some major changes. In the old rubric, there was an incredible amount of information that required very small type. The center sections were the target proficiency levels and they were colored in, which visually communicated, in my opinion, that half the rubric was irrelevant. So to begin, since Novice Low is never a target by the time I'm doing performance assessments, I removed it. And since the majority of our students do not achieve Intermediate Mid in our classes, I kicked off IM and IH as well. But some do, and so I have promised Bethanie Carlson-Drew that I will develop a version ranging from NH to IM.

You’ll also notice the title is Performance toward Proficiency. This is because most of us are not qualified to say, and it is not our intention on assessments to say, “You achieve X proficiency.” Rather, our message is this: “On this particular performance, you are using language characteristic of X proficiency.”

Another major change was the column on the far right: it is a place you can simply check if either the section is not applicable or there is insufficient evidence to assess the category.  For example, the comprehension section is not applicable in a presentational writing assessment.

Page 1:

There are now four major sections on the front page and each is divided into a few subsections.

  • Message Type: What language do I use?
    The first section is called Message Type and communicates to students what kind of language they are using. The ingredients and how they come together, if you will.
    The first sublevel here is structure. What pieces of language are the students using: just words and a few phrases? Phrases and some sentences? All sentences when appropriate? How much does the structure reflect their native language (“Yo gusta deliciouso taco”)? I have to give you a major caveat here: for some unknown reason, and in a very confusing turn of phrase, ACTFL says that Intermediate Low pronunciation, structure, etc. are strongly influenced by the first language, and that those features in Novice High may be strongly influenced by the first language. I promise. Check it out. This seems completely backwards to me, and so I made a judgment call to switch them.
    The second sublevel is depth of vocabulary. I’ve always loved this phrase. It’s what happens when students throw out “I adore it” instead of “I like it” and “many people perished” instead of “many people died.” Is the student just using very common words he’s memorized? Can she begin to personalize words by, for example, adding -ísimo to adjectives?
    The third sublevel is context. This is a positive section helping students realize what situations they can handle. Is it very common situations they have practiced? Good job. More contexts that are still familiar, everyday situations? What about throwing a bit of complication in there? Great work! More contexts for you!
    QUESTION about this context section: a teacher friend asked me a very good question: if we're dictating the context in the scenario, is it fair to judge this part? In other words, should this section be eliminated, or is it needed to tell students what kinds of contexts they're handling and let them know whether they're going beyond or falling behind their demonstrated proficiency here, compared to other areas? Let me know your thoughts.
  • Message Depth: How do I support my communication?
    The second section is called Message Depth and communicates to students how well they are supporting section 1; that is, how does the language they choose to use flesh out their message?
    The first sublevel here is content support. This is a place I desperately needed on a rubric and that simply didn't exist on the JCPS rubric; I noticed the need for it while scoring AP essay after AP essay. I needed a place to tell students how well they were using prior knowledge to support their message. Could they include references to what they'd learned from authentic resources in the unit? This is what ACTFL calls “talking about something I have learned.” In this section I can tell students how well they provide examples from interpretive sources and elaborate on them.
    The second sublevel is communication strategies. How do students sustain communication? Lower-level novices have a lot of difficulty keeping up a conversation, and they often switch to English or just stay silent. Or they use sí and no in ways that don't make a lot of sense, right? But as they improve, they can ask some questions and even use minimal circumlocution to keep talking when they don't know a word for something.
  • Message Interaction: How do we understand each other?
    The third section is Message Interaction and is probably the most straightforward of the group. Simply, can the learner interact with someone in the language? Are they comprehensible, and how much do they need things repeated in order to comprehend something themselves?
    This section has to do with the ever-present question of errors. I get asked at almost every workshop: “How do you assess errors? How much do you correct them?” I have two answers, depending on the student's goals. For the College Board, patterns of error are what you're looking for and trying to help students eradicate. For example, I had students who consistently wrote verbs with no attempt to change the endings at all. That's a pattern. On the other hand, ACTFL's guidelines are more about comprehensibility. When an error causes a breakdown in comprehension – when I can't understand the student's intention – that's a problem.
    Also, the proficiency level sometimes has to do with who can’t understand the learner. I can understand many things that someone who doesn’t speak English, or isn’t used to dealing with language learners, wouldn’t understand. As students improve their proficiency, they begin to be more and more comprehensible to native speakers who are not “sympathetic” – that is, they don’t know how or aren’t willing to work harder to understand someone who is a language learner.
    This aspect was on the back page of the previous rubric under “Minor focus.” In scoring assessments, it never felt like a minor focus to me when an error made a learner incomprehensible. So it’s on the front now, on equal footing.
  • Cultural awareness: How do I show what I know about other cultures?
    This was a glaring omission on the previous rubric. Really it was the AP exam that made me want to add it, followed by the new ACTFL performance indicators, which include a section for cultural awareness (the language here borrows heavily from that document). I didn't have a place to tell students how well they were incorporating cultural knowledge into their production, something absolutely essential for the AP. So if a student can do that, I want to let them know.
    To see a much deeper explanation of this aspect including some production examples, please, please read this post.

On to the back page!

2015 rubric page 2

Page 2

The back page is a back-and-forth between me and the learner. They fill out some of this at the beginning of the unit, and some of it after they get my feedback.

  • Proficiency Goals
    The student fills in the proficiency level I expect to see demonstrated on this performance.
  • The Staircase
    On my old rubric, I simply put a smiley face in the box for “approaching expectations” or “meeting expectations,” etc. But what if a student showed novice high proficiency in two areas and novice mid in two? What then? Well, I put the smiley face at “approaching expectations” for novice mid, but farther up in the box, toward “meeting expectations” (novice high). Yes, really. Like any student ever noticed that.
    I think it was Greg Duncan and Megan Johnson-Smith who first got me mulling over sublevels to the sublevels. What if there were a way I could tell a student, “Great job! You’re performing novice mid in several areas, but look at these two! Novice high!” What was that? Well, it’s Novice Mid +.
    I fill out this section. The plus signs and minus signs are my way to communicate how many of the categories they’re holding at a certain proficiency. I love, love this part.
  • The grade
    Yes. I’ve given in, and there’s a grading scale.
    For more information on how I’ve always assigned grades to a proficiency rubric (there’s no change here), see this post.
    So, I’m trying to put ownership of the learning in the student’s hands, and my thinking here is that the student looks at the proficiency I’ve marked and circles the expectation box herself. Then you can put the grade in if you want to (I think I still won’t). And you have a nice feedback box here on the proficiency part.
    My students aren’t allowed to score below “approaching expectations.” If this happens they must set a date to re-try. Depending on my class size, I also allow students who score “approaching” to re-try if they want to (and I have had several take me up on this offer to improve).
  • My language tasks
    The one part of the old rubric that absolutely had to go was the “task completion” section. As it turns out, in three years of scoring assessments I had misunderstood this part of the JCPS rubric: what I considered “task completion” was actually on the front, under language use. But the back “task completion” section said “I completed (part, almost, all of) what I was asked to do.” And I was scoring AP assessments a lot. On the AP, students are told to incorporate all three of the authentic sources they've seen into their presentational essay. If they didn't, their score would suffer significantly. So it wasn't a “minor focus” for us like it said on the rubric. It was a big deal. And my old rubric didn't give me a place to say that.
    For an analysis of task completion – the issue that inspired me to overhaul the rubric this year – read this post.
    In this section, the student writes (at the beginning of the unit) what language tasks they will be asked to perform in the assessment. Will they need to show they can disagree? Incorporate information from an infographic discussed in class? Mention another classmate's opinions? Here's where they record that. Then you check whether they've shown strong, weak, or no evidence of each skill.
    This section is not a place for students to write “I can use 7 verbs in the preterite tense.” If you have them write that sort of thing here, you might as well tear up the rubric because what you are doing is not a proficiency-based performance assessment, it’s a grammar test masquerading as a performance assessment. If you’ve determined that’s what you’re looking for, stop reading now and close this tab. Please.
  • Teacher feedback
    This is pretty straightforward.
  • Student reflection
    This is perhaps one of the most important sections in the rubric, and you owe it to Colleen Lee-Hayes and Natalia Delaat (thank you, colleagues!). We're putting the ownership in the students' hands!

Whew. If you care about using this kind of rubric, I hope you put up with all that explanation!

Two more issues to go.

Where’s interpretive mode?

Good question. Please know that if you do stand-alone interpretive tasks on integrated performance assessments and use an interpretive rubric or some sort of scoring system to grade them, you are in the majority and I am not. Honestly, I do not know another teacher who handles this the way I do. So don’t feel like I’m telling you that you need to do this.

Eliminating stand-alone interpretive assessments was something the College Board inspired me to do. In the AP essay, students are given sources on which to base their essay, but there are no comprehension questions on the source. Rather, the writer must use the information they understand from all three sources to inform their response.

To me, this is what we do with interpretive tasks in real life, and this question is always in my head: how can my class better reflect the way this plays out in real life? We watch a movie and we don’t fill out worksheets on it. We don’t draw pictures of it or answer multiple-choice questions about the plot line, which we may not accurately remember even though we understood it at the time. No, we use what we saw to tell our good friends what parts we loved, what we hated, why the actress was terrible, and how it compares to the first installment in the series.

That is what I ask students to do with integrated performance assessments. So the answer to your question (“Where’s interpretive mode?”) is that it’s in “Content support” and perhaps also in “My Language Tasks.”

If you’d like to see an example of how I do this, here’s one for novice.

I don’t think this will work.

Please tell me all your issues. I’ve been developing this rubric for six weeks and it’s been reviewed by dozens of teachers, but it can’t be a game-changer unless a whole lot more teachers “get their hands dirty” with it, using it on real production assessments and contacting me about how it’s working for you. I’ll continually update this post as the rubric changes.

So where is it?

Ready to get the file? If you put up with the rest of this post, you deserve it!

Download the PDF here (last updated December 2015). Contact me or comment below if you’d like access to a .docx file to edit. Please respect intellectual property rights. If you modify the document for your purposes, please leave the footnote at the bottom directing users to Musicuentos.com/handouts for credits. You may modify the footnote to include a reference such as “Based on the Musicuentos performance assessment rubric. For more information visit Musicuentos.com/handouts.”

Credits

I didn’t write this rubric. I simply stole a lot of stuff and put it in one place. I can’t begin to effectively acknowledge how much the work of some very smart people helped inform this rubric in all its drafts. Thanks to Amy Lenord, Colleen Lee-Hayes, Bethanie Carlson-Drew, Martina Bex, and the Ohio language gurus, whose fingerprints can be seen in various sections here. The majority of the wording is taken from the ACTFL performance descriptors, Can-Do statements, and proficiency guidelines; the old Jefferson County (KY) performance assessment rubric; and the Ohio rubrics. Thanks to Natalia Delaat, Thomas Sauer, Sarah Bolaños, and Jacob Shively, who took the time to give me honest, in-depth, extensive feedback that greatly improved the validity and user-friendliness of this document. I know your time is super valuable, and we’re all indebted to you for your generosity with it. And definitely, thanks to Melanie Stilson, who gave me the push I needed to get working on this project that had been on a back burner for a while.

Thanks to all the teachers at Camp Musicuentos who gave me some rocking suggestions for improvements. For one thing, you owe the staircase to them, and that might be the best part of the document.


August 24, 2015 13 Comments

Homework choice for elementary students (and my syllabus)

What would homework choice look like for elementary students?

I can’t believe it didn’t occur to me to ask this question earlier. I knew this year I was going to have a group of students ages 6 to 10, but I thought I’d just give them the same options sheet as my older group. Ha! These kids don’t have Facebook. They’re not engaged by Audio Lingua clips. They don’t know what the U-Scan is, much less how to use it. They want to meet Noah and hear stories. Obviously, the choice list needed an overhaul, an even greater one than was required for the early-novice list I released several days ago.

My little guys need to fulfill one point per week, and if they do a two-point activity, they can skip a week. To see the options, check out this document. You’ll be able to tell that I had to slaughter my old list, and frankly, I’m not able to come up with as many effective, motivating options suitable for young children with no measurable proficiency. Once they get some skills I can think of all kinds of authentic websites I can add, but for now, site after site proved incomprehensible. Please, if you have any ideas, post them in the comments!

This choice list is part of my elementary syllabus for the fall.  Keep in mind as you look at this document that I teach in a faith-based homeschool co-op where I have 60 minutes, one day a week with my students.


August 13, 2015 5 Comments

Rubrics: How important is task completion?

Forgive me while I brainstorm in public a moment.

Almost four years ago I created this rubric, based on the ACTFL guidelines and the Jefferson County (KY) Public Schools’ world language rubric.  I loved it.  It’s one of my most requested resources.  I used it for years.  But as I wrap up my first year out of the classroom and prepare to embark on a new journey (teaching my own Spanish classes for homeschooled students and adult learners), I’ve been reflecting on the good and bad of my rubric and how to redesign it for my newest journey.

It’s not all the same

One of the things I like most about the rubric is also one of the things I like least: it separates a major focus from a minor focus.  That resonates with me.  Not all language use factors are created equal.  Pronunciation in the sense of sounding like a native isn’t a goal, or even possible, for most learners.  Pronunciation for comprehensibility – that is important.  As language-nerd teachers we love to nitpick about verb endings and adjective agreement, but the fact is that the vast majority of the time, those mistakes do not impede communication.  For my fourth-year students striving for Intermediate High, eliminating those patterns is a goal, but for my first-year novices, it’s just not.  They just want to talk.

So what is it I don’t like?  I always wondered why task completion was listed as a minor focus, almost in such a way that it would not affect the overall grade at all.  I think I started wondering this as I used the rubric more and more to grade AP assessments, and finally exclusively teaching AP.  In that class, task completion was a major focus for sure.  Students couldn’t be very successful on a task if they responded in a way that did not address what they were asked to do.  And as I intend to make my interpretive tasks look more and more like incorporating authentic resources into production tasks (e.g. tell whether or not you agree with the opinion in this meme), regardless of level, yeah, it matters to me.  If you produce a whole bunch of pretty language on the AP but don’t cite a single one of the three sources they asked you to, you’re sunk.  My rubric didn’t give me a good place to say that.

But I liked my rubric.  Other people liked my rubric.  Surely there wasn’t anything wrong with it.  But I knew there was.  And then Melanie reminded me there was.

Is task completion part of life?

As I evaluate what to do with task completion on my rubric, I’m not sure I know what it’s going to look like.  I can tell you it won’t be labeled “minor focus.”  I can tell you what questions I’m asking myself.

  • When someone asks me to do something that requires language, how important is it that I actually answer the question?
  • If I ask a student a question, and they use great language to address something entirely different, how can I give credit for the language effort without letting them get away with avoiding the task?
  • How much will task completion be a part of the life I’m supposed to be preparing my students for?
  • How does the importance of task completion compare to the importance of the language used to complete it?

Rumblings of change

I do know that there are several things I want to keep and things I want to change about my rubric.

What I love:

  • I must have my large feedback box to write anything I can think of to help the student reach his goal.
  • I will still have everything I want on one rubric so I use the same one for every task I assess.
  • Students will still know exactly where they are in regard to the expectation: approaching, meeting, or exceeding.
  • The descriptions will still be full of proficiency-based terminology focused on successful communication.

What I’ll probably change:

  • I don’t like the word “Unsatisfactory.”  Looking for a new way to say, “You’ve gotta try this again before we move on.”
  • Task completion needs a different spot not labeled “minor focus.”  I will probably remove the term “minor focus” altogether.  What other way can I indicate that not all language aspects are created equal?
  • I don’t expect to teach students hitting Advanced Low language and most other teachers don’t either. So I’m kicking that one off to give me more space.
  • I want to make the “language control” descriptions communicate more to the student (those last two on the right – confusing, anyone?).
  • I’d like to figure out how to make the rubric more interpersonal-friendly, since this is the mode my students actually want most.

As always, turning to the PLN

Isn’t our online community of language teachers fantastic?  I can tell you to whom I’ll be turning for input on my new rubric.

Maybe I’ll even have my new rubric developed in time to share with the teachers at the Camp Musicuentos workshops.  I’ve always worked better with deadlines!


May 14, 2015 8 Comments

What a design-based WL program looks like

cassettes

If you know me you know I love a good research book, particularly one that tells us in lay language what it’s going to take to help kids succeed in a world we can’t even imagine, one that’s vastly different from the one we grew up in.  The other day, Zoe asked me,

Mami, what’s a cassette?

Ah, the pain in my soul.  And I thought people who liked records were old.

The most eye-opening book I’ve read recently on this topic is Tony Wagner’s Creating Innovators.  (If you haven’t read it, click and read my review.  Then come back.  You’ll thank me.)

Of course, as usually happens, since I read the book I’ve also come across articles (like this one on Edutopia) that are finally converting me to the inquiry-based approaches collectively referred to as project-based learning (or problem-based learning, or inquiry-based learning, or problem-based inquiry – you get the idea).  The research is compelling: the 21st century will reward innovators, and innovators come from a background of “deep understanding derived from collaborative methods.”

One of the ways the book and article really got me thinking was to emphasize that this type of learning is best approached and referred to as design-based learning.  So of course, I’ve been mulling over the big question ever since:

What does a design-based world language program look like?

According to the article, design-based learning asks students to “create products that require understanding and application of knowledge.”  That’s really the only answer I have for you.  Other than that, I can simply offer you the questions I’m asking myself, that I think would help me develop a design-based world language program.  In no particular order, they are, from the student’s perspective:

  • What is a problem related to this topic?
  • What is a cultural product related to this topic?
  • How do the relevant products, practices, and perspectives compare to my culture?
  • What can I do to help solve a problem?
  • Can I use what I’m learning to provide a service to the TL community?
  • Can I design something using the TL, in a way that involves enough TL use to help me develop real communication skills?

And so, it seems to me, those of us interested in design-based learning in the world language classroom want to inspire our students to ask one overarching question:

[image: the overarching design-based question]

How’s that for a curriculum development project for the summer?  A group of like-minded teachers would love to help you work through this and other curriculum planning issues at this summer’s two Camp Musicuentos sites: Louisville, Kentucky, and Warwick, Rhode Island.  There’s still some very limited space left.

This is a tough question, especially for teachers in novice classrooms.  If you want to know how this could really work, as I do, let me put you in touch with some friends of mine.  Get a discussion going with Don Doehla or Laura Sexton, or ask the global mindset folks over at VIF International what they’re doing about it.

What are you doing to create innovators?


May 5, 2015 2 Comments

Why interpersonal isn’t interpretive

Plaza Treinta y Tres Orientales, Montevideo

Recently on #langchat we were discussing interpretive and interpersonal tasks, and someone asked whether interpersonal also functioned as interpretive, since the listener is interpreting auditory information.  I thought it was Lisa Shepard – a lesson to me to note my sources right away – but I can’t find the conversation.  So while I can’t credit my interlocutor, I can still tell you what we talked about and hope the distinction helps you in some way.

Let me spell out the two differences, and then what I think they mean for our class practice in general.

Two differences between interpersonal and interpretive

As we talked through our thoughts on this topic, we identified two reasons we think interpretive listening isn’t the same as interpersonal listening.

  • In interpersonal communication, the speaker is sympathetic, at least often and maybe usually so.  Sympathetic is a term assessors use to mean that the partner in conversation wants to and is willing to work to achieve communication.
    An authentic audio resource is a static thing; it cannot inherently try to help you understand it.
  • A learner listening to an audio resource cannot negotiate meaning.  This is related to the first point because negotiation of meaning is one way a sympathetic conversation partner helps learners achieve communication.  Negotiation of meaning is a term linguists use to talk about the strategies we use to try to be understood and try to understand, from something as simple as asking “Can you repeat that?” to using circumlocution.
    An authentic audio source cannot clarify itself for you.  It cannot respond to requests for repetition or slowing down, and it cannot stop to explain words simply because you do not have them in your vocabulary.

What this means for teachers

I can think of several implications of this distinction for teachers.

  • Realistic, different expectations for interpretive vs. interpersonal
    I’ve seen immersion programs have incredibly high expectations for interpretive listening skills, much higher than their output expectations.  I think this may be a mistake, unless the teachers are habitually using authentic audio sources, because their teacher is not an authentic audio source; she is a sympathetic partner who is committed to helping them achieve communication and comprehension.  Interpretive listening skills don’t refer to the ability to understand a sympathetic partner in communication.
  • Commitment to use authentic audio
    I’ve written about this a lot.  You can do this even with novices!  Check out why it’s a myth that novices can’t understand authentic material and some sample activities like using El perdón and Voy a vivir and Shrek.  Also, please, please read my letter from an AP teacher to teachers of novices.
  • Teaching students negotiation of meaning skills
    Like how to use circumlocution to both get their meaning across and figure out what their partner is saying.

I love conversations like this and how they make me think through my practices – let’s keep learning by talking together!


April 17, 2015 2 Comments

It’s not about the I in IPA, or the vocab list


Sometimes it’s not that black-and-white.
Chris Devers

Do you sometimes feel like we’re working in an all-or-nothing profession?

I’m not sure if it’s an artifact of social media, of tweets and blog posts designed to be punchy and petite at the same time.  I’m not sure if it’s a desire to be the next big thing, the acronym everyone’s talking about.  I raise my hand, I’m guilty here; I sign on to bandwagons and think –

Yes! I must be doing this! I must sell out to it, heart and soul, right now!

And after a while, I realize I got dazzled by the names behind it and forgot to ask,

Why?

Take the IPA, for example.  It stars in an ACTFL publication, for heaven’s sake, courtesy of a former ACTFL president.  And so I jumped in (without much research into them, because who has time for that?), thinking, I’ve gotta do 100% performance assessments!  I’ve gotta put them all in a scenario!  I need every assessment to solicit performance in every mode!

It didn’t take me long to realize I actually wasn’t willing to do that.  There were all kinds of assessments my students and I liked, and they worked for us.  There were other factors that were equally or more important to me.  So I’ve designed an all-encompassing IPA or two (you’ll even see some come out as resources on the blog) but before long I was watching teachers try to come up with some scenario under which they could get all the students to perform in all the modes and the result was a frustrated teacher and the most contrived language scenario with mediocre, unrealistic production tasks.

Really, the red flag came up right away for me, when I emailed someone and asked,

Can you help me figure this IPA thing out?  What’s it all about?

And she sent me an article from The Language Educator by the founding mother of IPAs herself, and though I saw the point and better understood the concept, I couldn’t help thinking that asking fourth-graders to tackle the topic of their future profession was a bit of a stretch.

I feel this way about vocabulary, too.  I’m totally with you on the frustration with textbook vocabulary lists that are way too long and can’t possibly be acquired in the time allotted to the chapter.  But it’s just a tool.  It’s just a list.  Let me propose that we stop dying on this hill of

you cannot use a vocab list in a communicative classroom

and focus more properly on the deeper questions here.

I’ll confess, there are some things I’ll still sound all-or-nothing about.  I’ll always avoid asking multiple-choice questions if I can.  It may snow in Acapulco before I give out a word search.  But that doesn’t mean you haven’t found a way to do it communicatively.  If you don’t use a list, great.  If you use a list, great – let’s look at the list of words as a field of possibilities: some will stick and some won’t.  Whether the list is one I put together or not, whether I do quizzes or not, what they need for communicative tasks should be going in the eyes and ears, staying in the brain, and coming out the mouth and hands.



March 24, 2015 3 Comments