The Blog

Correcting all those errors? Step away from the red pen. (BlackBox)

Disclaimer: No red pens were harmed in the making of this episode.

Here we confront a continual dilemma in language teaching.  As language teachers who are good at the languages we teach, every error grates on our ears and eyes.  We want to correct.  We want to cross out the masculine ending and write the feminine one.  We want to insert the missing article.  We want to shout, “WHEN HAVE YOU EVER HEARD ME SAY ME LLAMO ES AND ACCURATE INPUT IS ALL IT IS SUPPOSED TO TAKE AND YOU ARE STILL SAYING ME LLAMO ES!?!”


Watch Karen’s informative review of this article on one researcher’s study.  Exactly how effective is written corrective feedback? Don’t expect a hard and fast answer.  It’s a muddy issue!

For more information about the Musicuentos Black Box collection of resources, including how to help keep this resource available for teachers everywhere, visit the Black Box page.


October 15, 2015 1 Comment

ANNOUNCING: The 2015 updated performance assessment rubric

This might be my most important resource release this year.

First, you can read here about all the things that frustrated me about that snazzy 2011 rubric that I used to use (and that got downloaded from this site a lot). Some of them probably frustrated those of you who used it, too. So I decided to do a total overhaul. No starting from the original document allowed. A blank page. (Well, really I started with a yellow legal pad and about 12 Chrome tabs open.)

Unpacking it

From talking to lots and lots of teachers about it, I hope I can anticipate a lot of questions you might have about the document. Several teachers helped me realize that simply posting it out here isn’t enough. You need some explanation on it. I may do a screencast and I may do a PDF, but for now, here are explanations of the sections along with some screenshots.

2015 rubric page 1

Just looking at the front you can see some major changes. In the old rubric, there was an incredible amount of information that required very small type. The center sections were the target proficiency levels and they were colored in, which visually communicated, in my opinion, that half the rubric was irrelevant. So to begin, since by the time I’m doing performance assessments Novice Low is not a target at any point, I removed it. And since the majority of our students do not achieve intermediate mid in our classes, I kicked off IM and IH as well. But some do. And so I have promised Bethanie Carlson-Drew that I will develop a version ranging from NH to IM.

You’ll also notice the title is Performance toward Proficiency. This is because most of us are not qualified to say, and it is not our intention on assessments to say, “You achieve X proficiency.” Rather, our message is this: “On this particular performance, you are using language characteristic of X proficiency.”

Another major change was the column on the far right: it is a place you can simply check if either the section is not applicable or there is insufficient evidence to assess the category.  For example, the comprehension section is not applicable in a presentational writing assessment.

Page 1:

There are now four major sections on the front page and each is divided into a few subsections.

  • Message Type: What language do I use?
    The first section is called Message Type and communicates to students what kind of language they are using. The ingredients and how they come together, if you will.
    The first sublevel here is structure. What pieces of language are the students using: just words and a few phrases? Phrases and some sentences? All sentences when appropriate? How much does the structure reflect their native language (“Yo gusta deliciouso taco”)? I have to give you a major caveat here: for some unknown reason and in a very confusing turn of phrase, ACTFL says that Intermediate Low pronunciation, structure, etc. are strongly influenced by the first language, and that those features in Novice High may be strongly influenced by the first language. I promise. Check it out. This seems completely backwards to me and so I made a judgement call to switch them.
    The second sublevel is depth of vocabulary. I’ve always loved this phrase. It’s what happens when students throw out “I adore it” instead of “I like it” and “many people perished” instead of “many people died.” Is the student just using very common words he’s memorized? Can she begin to personalize words by, for example, adding -ísimo to adjectives?
    The third sublevel is context. This is a positive section helping students realize what situations they can handle. Is it very common situations they have practiced? Good job. More contexts that are still familiar, everyday situations? What about throwing a bit of complication in there? Great work! More contexts for you!
    QUESTION about this context section: a teacher friend asked me a very good question: if we’re dictating the context in the scenario, is it fair to judge this part? In other words, should this section be eliminated, or is it needed to tell students what kind of contexts they’re handling and let them know if they’re going beyond or behind their demonstrated proficiency here, compared to other areas? Let me know your thoughts.
  • Message Depth: How do I support my communication?
    The second section is called Message Depth and communicates to students how well they are supporting section 1; that is, how does the language they choose to use flesh out their message?
    The first sublevel here is content support. This is one place I desperately needed on a rubric that simply didn’t exist on the JCPS rubric, and I noticed the need for it from scoring AP essay after AP essay. I needed a place to tell students how well they were using prior knowledge to support their message. Could they include references to what they’d learned from authentic resources in the unit? This is what ACTFL calls “talking about something I have learned.” In this section I can tell students how well they provide examples from interpretive sources and elaborate on them.
    The second sublevel is communication strategies. How do students sustain communication? Lower-level novices have a lot of difficulty keeping up a conversation, and they often switch to English or just stay silent. Or use sí and no in ways that don’t make a lot of sense, right? But as they improve, they can ask some questions and even use minimal circumlocution to keep talking when they don’t know a word for something.
  • Message Interaction: How do we understand each other?
    The third section is Message Interaction and is probably the most straightforward of the group. Simply, can the learner interact with someone in the language? Are they comprehensible, and how much do they need things repeated in order to comprehend something themselves?
    This section has to do with the ever-present question of errors. I get asked at almost every workshop: “How do you assess errors? How much do you correct them?” I have two answers, depending on the student’s goals. For the College Board, patterns of error are what you’re looking for and trying to help students eradicate. For example, I had students who consistently wrote verbs with no attempt to change the endings at all. That’s a pattern. On the other hand, ACTFL’s guidelines are more about comprehensibility: when an error causes a breakdown in comprehension, meaning I can’t understand the student’s intention, that’s a problem.
    Also, the proficiency level sometimes has to do with who can’t understand the learner. I can understand many things that someone who doesn’t speak English, or isn’t used to dealing with language learners, wouldn’t understand. As students improve their proficiency, they begin to be more and more comprehensible to native speakers who are not “sympathetic” – that is, they don’t know how or aren’t willing to work harder to understand someone who is a language learner.
    This aspect was on the back page of the previous rubric under “Minor focus.” In scoring assessments, it never felt like a minor focus to me when an error made a learner incomprehensible. So it’s on the front now, on equal footing.
  • Cultural awareness: How do I show what I know about other cultures?
    This was a glaring omission on the previous rubric, and really it was the AP exam that made me want to add it, followed by the new ACTFL performance indicators, which include a section for cultural awareness (the wording here mixes language from that document). I didn’t have a place to tell students how well they were incorporating cultural knowledge into their production, something absolutely essential for the AP. So if a student can do that, I want to let them know.
    To see a much deeper explanation of this aspect including some production examples, please, please read this post.

On to the back page!

2015 rubric page 2

Page 2

The back page is a back-and-forth between me and the learner. They fill out some of this at the beginning of the unit, and some of it after they get my feedback.

  • Proficiency Goals
    The student actually fills out what proficiency I’m expecting to be shown on this performance.
  • The Staircase
    On my old rubric, I simply put a smiley face in the box for “approaching expectations” or “meeting expectations,” etc. But what if a student showed novice high proficiency in two areas and novice mid in two? What then? Well, I put the smiley face at “approaching expectations” for novice mid, but farther up in the box, toward “meeting expectations” (novice high). Yes, really. Like any student ever noticed that.
    I think it was Greg Duncan and Megan Johnson-Smith who first got me mulling over sublevels to the sublevels. What if there were a way I could tell a student, “Great job! You’re performing novice mid in several areas, but look at these two! Novice high!” What was that? Well, it’s Novice Mid +.
    I fill out this section. The plus signs and minus signs are my way to communicate how many of the categories they’re holding at a certain proficiency. I love, love this part.
  • The grade
    Yes. I’ve given in, and there’s a grading scale.
    For more information on how I’ve always assigned grades to a proficiency rubric (there’s no change here), see this post.
    So, I’m trying to put ownership of the learning in the student’s hands, and my thinking here is that the student looks at the proficiency I’ve marked and circles the expectation box herself. Then you can put the grade in if you want to (I think I still won’t). And you have a nice feedback box here on the proficiency part.
    My students aren’t allowed to score below “approaching expectations.” If this happens they must set a date to re-try. Depending on my class size, I also allow students who score “approaching” to re-try if they want to (and I have had several take me up on this offer to improve).
  • My language tasks
    The one part of the old rubric that absolutely had to go was the “task completion” section. As it turns out, in three years of scoring assessments I misunderstood this from the JCPS rubric and what I considered “task completion” was on the front in language use. But the back “task completion” section said “I completed (part, almost, all of) what I was asked to do.” And I was scoring AP assessments a lot. On the AP, students are told to incorporate all three of the authentic sources they’ve seen into their presentational essay. If they didn’t, their score would suffer significantly. So it wasn’t a “minor focus” for us like it said on the rubric. It was a big deal. And my old rubric didn’t give me a place to say that.
    For an analysis of task completion and this issue being the one that inspired me to overhaul the rubric this year, read this post.
    In this section, the student writes (at the beginning of the unit) what language tasks they will be asked to perform in the assessment. Will they need to show they can disagree? Incorporate information from an infographic you discussed in class? Mention some opinions of another person in class? Here’s where they record that. Then, you check whether they’ve shown strong, weak, or no evidence of this skill.
    This section is not a place for students to write “I can use 7 verbs in the preterite tense.” If you have them write that sort of thing here, you might as well tear up the rubric because what you are doing is not a proficiency-based performance assessment, it’s a grammar test masquerading as a performance assessment. If you’ve determined that’s what you’re looking for, stop reading now and close this tab. Please.
  • Teacher feedback
    This is pretty straightforward.
  • Student reflection
    This is perhaps one of the most important sections in the rubric and you owe it to Colleen Lee-Hayes and Natalia Delaat (ありがとう y спасибо colegas).  We’re putting the ownership in their hands!

Whew. If you care about using this kind of rubric, I hope you put up with all that explanation!

Two more issues to go.

Where’s interpretive mode?

Good question. Please know that if you do stand-alone interpretive tasks on integrated performance assessments and use an interpretive rubric or some sort of scoring system to grade them, you are in the majority and I am not. Honestly, I do not know another teacher who handles this the way I do. So don’t feel like I’m telling you that you need to do this.

Eliminating stand-alone interpretive assessments was something the College Board inspired me to do. In the AP essay, students are given sources on which to base their essay, but there are no comprehension questions on the source. Rather, the writer must use the information they understand from all three sources to inform their response.

To me, this is what we do with interpretive tasks in real life, and this question is always in my head: how can my class better reflect the way this plays out in real life? We watch a movie and we don’t fill out worksheets on it. We don’t draw pictures of it or answer multiple-choice questions about the plot line, which we may not accurately remember even though we understood it at the time. No, we use what we saw to tell our good friends what parts we loved, what we hated, why the actress was terrible, and how it compares to the first installment in the series.

That is what I ask students to do with integrated performance assessments. So the answer to your question (“Where’s interpretive mode?”) is that it’s in “Content support” and perhaps also in “My Language Tasks.”

If you’d like to see an example of how I do this, here’s one for novice.

I don’t think this heading will work.

Please tell me all your issues. I’ve been developing this rubric for six weeks and it’s been reviewed by dozens of teachers, but it can’t be a game-changer unless a whole lot more teachers “get their hands dirty” with it, using it on real production assessments and contacting me about how it’s working for you. I’ll continually change this post with updates.

So where is it?

Ready to get the file? If you put up with the rest of this post, you deserve it!

Download the PDF here. Contact me or comment below if you’d like access to a .docx file to edit. Please respect intellectual property rights. If you modify the document for your purposes, please leave the footnote at the bottom directing users to for credits. You may modify the footnote to include a reference such as “Based on the Musicuentos performance assessment rubric. For more information visit”


I didn’t write this rubric. I simply stole a lot of stuff and put it in one place. I can’t begin to effectively acknowledge how much the work of some very smart people helped inform this rubric in all its drafts. Thanks to Amy Lenord, Colleen Lee-Hayes, Bethanie Carlson-Drew, Martina Bex, and the Ohio language gurus, whose fingerprints can be seen in various sections here. The majority of the wording is taken from the ACTFL performance descriptors, Can Do statements, and proficiency guidelines; the old Jefferson County (KY) performance assessment rubric; and the Ohio rubrics. Thanks to Natalia Delaat, Thomas Sauer, Sarah Bolaños, and Jacob Shively, who took the time to give me honest, in-depth, extensive feedback that greatly improved the validity and user-friendliness of this document. I know your time is super valuable, and we’re all indebted to you for your generosity with it. And definitely, thanks to Melanie Stilson, who gave me the push I needed to get working on this project that had been on a back burner for a while.

Thanks to all the teachers at Camp Musicuentos who gave me some rocking suggestions for improvements. For one thing, you owe the staircase to them, and that might be the best part of the document.


August 24, 2015 9 Comments

Homework choice for elementary students (and my syllabus)

What would homework choice look like for elementary students?

I can’t believe it didn’t occur to me to ask this question earlier.  I knew this year I was going to have a group of students ages 6 to 10 but I thought I’d just give them the same options sheet as my older group.  Ha! These kids don’t have Facebook. They’re not engaged by Audio Lingua clips. They don’t know what the U-Scan is, much less how to use it.  They want to meet Noah and hear stories.  Obviously, the choice list needed an overhaul, an even greater one than was required for the early-novice list I released several days ago.

My little guys need to fulfill one point per week and if they do a two-point activity, they will be able to skip a week.  To see the options, check out this document.  You’ll be able to tell that I had to slaughter my old list and frankly, I’m not able to come up with as many effective, motivating options suitable for young children with no measurable proficiency. Once they get some skills I can think of all kinds of authentic websites I can add, but for now, I encountered incomprehensibility in site after site. Please, if you have any ideas, post them in the comments!

This choice list is part of my elementary syllabus for the fall.  Keep in mind as you look at this document that I teach in a faith-based homeschool co-op where I have 60 minutes, one day a week with my students.


August 13, 2015 5 Comments

Rubrics: How important is task completion?

Forgive me while I brainstorm in public a moment.

Almost four years ago I created this rubric, based on the ACTFL guidelines and the Jefferson County (KY) Public Schools’ world language rubric.  I loved it.  It’s one of my most requested resources.  I used it for years.  But as I wrap up my first year out of the classroom and prepare to embark on a new journey (teaching my own Spanish classes for homeschooled students and adult learners), I’ve been reflecting on the good and bad of my rubric and how to redesign it for my newest journey.

It’s not all the same

One of the things I like most about the rubric is also one of the things I like least: it separates a major focus from a minor focus.  That resonates with me.  Not all language use factors are created equal.  Pronunciation in the sense of sounding like a native isn’t a goal or even possible for most learners.  Pronunciation for comprehensibility - that is important.  As language nerds, we teachers love to nitpick about the verb endings and adjective agreement, but the fact is that the vast majority of the time, those mistakes do not impede communication.  For my fourth-year students striving for Intermediate High, eliminating those patterns is a goal, but for my first-year novices, it’s just not.  They just want to talk.

So what is it I don’t like?  I always wondered why task completion was listed as a minor focus, almost in such a way that it would not affect the overall grade at all.  I think I started wondering this as I used the rubric more and more to grade AP assessments, and finally exclusively teaching AP.  In that class, task completion was a major focus for sure.  Students couldn’t be very successful on a task if they responded in a way that did not address what they were asked to do.  And as I intend to make my interpretive tasks look more and more like incorporating authentic resources into production tasks (e.g. tell whether or not you agree with the opinion in this meme), regardless of level, yeah, it matters to me.  If you produce a whole bunch of pretty language on the AP but don’t cite a single one of the three sources they asked you to, you’re sunk.  My rubric didn’t give me a good place to say that.

But I liked my rubric.  Other people liked my rubric.  Surely there wasn’t anything wrong with it.  But I knew there was.  And then Melanie reminded me there was.

Is task completion part of life?

As I evaluate what to do with task completion on my rubric, I’m not sure I know what it’s going to look like.  I can tell you it won’t be labeled “minor focus.”  I can tell you what questions I’m asking myself.

  • When someone asks me to do something that requires language, how important is it that I actually answer the question?
  • If I ask a student a question, and they use great language to address something entirely different, how can I give credit for the language effort without letting them get away with avoiding the task?
  • How much will task completion be a part of the life I’m supposed to be preparing my students for?
  • How does the importance of task completion compare to the importance of the language used to complete it?

Rumblings of change

I do know that there are several things I want to keep and things I want to change about my rubric.

What I love:

  • I must have my large feedback box to write anything I can think of to help the student reach his goal.
  • I will still have everything I want on one rubric so I use the same one for every task I assess.
  • Students will still know exactly where they are in regard to the expectation: approaching, meeting, or exceeding.
  • The descriptions will still be full of proficiency-based terminology focused on successful communication.

What I’ll probably change:

  • I don’t like the word “Unsatisfactory.”  Looking for a new way to say, “You’ve gotta try this again before we move on.”
  • Task completion needs a different spot not labeled “minor focus.”  I will probably remove the term “minor focus” altogether.  What other way can I indicate that not all language aspects are created equal?
  • I don’t expect to teach students hitting Advanced Low language and most other teachers don’t either. So I’m kicking that one off to give me more space.
  • I want to make the “language control” descriptions communicate more to the student (those last two on the right confusing, anyone?).
  • I’d like to figure out how to make the rubric more interpersonal-friendly, since this is the mode most of my students actually want most.

As always, turning to the PLN

Isn’t our online community of language teachers fantastic?  I can tell you to whom I’ll be turning for input on my new rubric:

Maybe I’ll even have my new rubric developed in time to share with the teachers at the Camp Musicuentos workshops.  I’ve always worked better with deadlines!


May 14, 2015 7 Comments

What a design-based WL program looks like



If you know me you know I love a good research book, particularly one that tells us in lay language what it’s going to take to help kids succeed in a world we can’t even imagine, one that’s vastly different from the one we grew up in.  The other day, Zoe asked me,

Mami, what’s a cassette?

Ah, the pain in my soul.  And I thought people who liked records were old.

The most eye-opening book I’ve read recently on this topic is Tony Wagner’s Creating Innovators.  (If you haven’t read it, click and read my review.  Then come back.  You’ll thank me.)

Of course, as usually happens, since I read the book I’ve also come across articles (like this one on Edutopia) that are finally converting me to the inquiry-based approaches collectively referred to as project-based learning (or problem-based learning, or inquiry-based learning, or problem-based inquiry – you get the idea).  The research is compelling: the 21st century will reward innovators, and innovators come from a background of “deep understanding derived from collaborative methods.”

One of the ways the book and article really got me thinking was to emphasize that this type of learning is best approached and referred to as design-based learning.  So of course, I’ve been mulling over the big question ever since:

What does a design-based world language program look like?

According to the article, design-based learning asks students to “create products that require understanding and application of knowledge.”  That’s really the only answer I have for you.  Other than that, I can simply offer you the questions I’m asking myself, that I think would help me develop a design-based world language program.  In no particular order, they are, from the student’s perspective:

  • What is a problem related to this topic?
  • What is a cultural product related to this topic?
  • How do the relevant products, practices, and perspectives compare to my culture?
  • What can I do to help solve a problem?
  • Can I use what I’m learning to provide a service to the TL community?
  • Can I design something while using the TL and that involves enough TL use to help me develop real communication skills?

And so, it seems to me, those of us interested in design-based learning in the world language classroom want to inspire our students to ask one overarching question:


How’s that for a curriculum development project for the summer?  A group of like-minded teachers would love to help you work through this and other curriculum planning issues at this summer’s two Camp Musicuentos sites, Louisville, Kentucky and Warwick, Rhode Island.  There’s still some very limited space left.

This is a tough question, especially for teachers in novice classrooms.  If you want to know how this could really work, as I do, let me put you in touch with some friends of mine.  Get a discussion going with Don Doehla or Laura Sexton, or ask the global mindset folks over at VIF International what they’re doing about it.

What are you doing to create innovators?


May 5, 2015 2 Comments

Why interpersonal isn’t interpretive

Plaza Treinta y Tres Orientales, Montevideo

Recently on #langchat we were discussing interpretive and interpersonal tasks and someone asked whether interpersonal also functioned as interpretive, since the listener is interpreting auditory information.  I thought it was Lisa Shepard, a lesson to me to note my sources right away, but I can’t find the conversation.  So while I can’t credit my interlocutor, I can still tell you what we talked about and hope the distinction helps you in some way.

Let me spell out the two differences, and then what I think they mean for our class practice in general.

Two differences between interpersonal and interpretive

As we talked through our thoughts on this topic, we identified two reasons we think interpretive listening isn’t the same as interpersonal listening.

  • In interpersonal communication, the speaker is sympathetic, at least often and maybe usually so.  Sympathetic is a term assessors use to mean that the partner in conversation wants to and is willing to work to achieve communication.
    An authentic audio resource is a static thing; it cannot inherently try to help you understand it.
  • A learner listening to an audio resource cannot negotiate meaning.  This is related to the first point because negotiation of meaning is one way a sympathetic conversation partner helps learners achieve communication.  Negotiation of meaning is a term linguists use to talk about the strategies we use to try to be understood and try to understand, from something as simple as asking “Can you repeat that?” to using circumlocution.
    An authentic audio source cannot clarify itself for you.  It cannot respond to requests for repetition or slowing down, and it cannot stop to explain words simply because you do not have them in your vocabulary.

What this means for teachers

I can think of several implications of this distinction for teachers.

  • Realistic, different expectations for interpretive vs. interpersonal
    I’ve seen immersion programs have incredibly high expectations for interpretive listening skills, much higher than their output expectations.  I think this may be a mistake, unless the teachers are habitually using authentic audio sources, because their teacher is not an authentic audio source; she is a sympathetic partner who is committed to helping them achieve communication and comprehension.  The interpretive listening skills aren’t referring to the ability to understand sympathetic partners in communication.
  • Commitment to use authentic audio
    I’ve written about this a lot.  You can do this even with novices!  Check out why it’s a myth that novices can’t understand authentic material and some sample activities like using El perdón and Voy a vivir and Shrek.  Also, please, please read my letter from an AP teacher to teachers of novices.
  • Teaching students negotiation of meaning skills
    Like how to use circumlocution to both get their meaning across and figure out what their partner is saying.

I love conversations like this and how they make me think through my practices – let’s keep learning by talking together!


April 17, 2015 2 Comments

It’s not about the I in IPA, or the vocab list


Sometimes it’s not that black-and-white.
Chris Devers

Do you sometimes feel like we’re working in an all-or-nothing profession?

I’m not sure if it’s an artifact of social media, of tweets and blog posts designed to be punchy and petite at the same time.  I’m not sure if it’s a desire to be the next big thing, the acronym everyone’s talking about.  I raise my hand, I’m guilty here, I sign on to bandwagons and think-

Yes! I must be doing this! I must sell out to it, heart and soul, right now!

And after a while, I realize I got dazzled by the names behind it and forgot to ask,


Take the IPA, for example.  It stars in an ACTFL publication, for heaven’s sake, courtesy of a former ACTFL president.  And so I jumped in (without much research into them, because who has time for that?), thinking, I’ve gotta do 100% performance assessments!  I’ve gotta put them all in a scenario!  I need every assessment to solicit performance in every mode!

It didn’t take me long to realize I actually wasn’t willing to do that.  There were all kinds of assessments my students and I liked, and they worked for us.  There were other factors that were equally or more important to me.  So I’ve designed an all-encompassing IPA or two (you’ll even see some come out as resources on the blog) but before long I was watching teachers try to come up with some scenario under which they could get all the students to perform in all the modes and the result was a frustrated teacher and the most contrived language scenario with mediocre, unrealistic production tasks.

Really, the red flag came up right away for me, when I emailed someone and asked,

Can you help me figure this IPA thing out?  What’s it all about?

And she sent me an article from The Language Educator from the founding mother of IPAs herself and though I saw the point and better understood the concept, I couldn’t help thinking that asking fourth-graders to tackle the topic of their future profession was a bit of a stretch.

I feel this way about vocabulary, too.  I’m totally with you on the frustration with textbook vocabulary lists that are way too long and can’t possibly be acquired in the time allotted to the chapter.  But it’s just a tool.  It’s just a list.  Let me propose that we stop dying on this hill of

you cannot use a vocab list in a communicative classroom

and focus more properly on the deeper questions here:

I’ll confess, there are some things I’ll still sound all-or-nothing about.  I’ll always avoid asking multiple choice questions if I can.  It may snow in Acapulco before I give out a word search.  But that doesn’t mean you haven’t found a way to do it communicatively.  If you don’t use a list, great.  If you use a list, great – let’s look at the list of words as a field of possibilities, recognizing that some will stick and some won’t.  Whether the list is one I put together or not, whether I do quizzes or not, what students need for communicative tasks should be going in the eyes and ears, staying in the brain, and coming out the mouth and hands.



March 24, 2015 3 Comments

Speaking of motivation: Guest interview on Paulino Brener’s EPC Show


I’m looking forward to participating in a special interview with Paulino Brener on his EPC Show in about a week.  Join us online to talk about motivational aspects of our curricula.

Cross-posted from Paulino Brener; you’ll find out more about where to find the video here:

Join me on Saturday February 28 at 1pm CST for an interview and presentation with Sara-Elizabeth Cottrell, World Language teacher and blogger at Musicuentos.

Sara-Elizabeth will be talking about motivation and how it affects various parts of our process – resources we choose, vocabulary, assessments. She will also give us a preview of her presentation at Central States Conference 2015 (#CSCTFL15).

You can send your question for Sara-Elizabeth Cottrell in advance, or ask your questions DURING the show by leaving a comment on the YouTube video or sending a tweet using the hashtag #epcshow.



February 20, 2015 0 Comments

It’s a myth, #11: Assessing communication without communication

For the original myths post, click here.  You can also view all of the myths posts.

This, my eleventh post on myths I believe make us ineffective in the world language classroom, is about saying we’re assessing something without actually asking students to do it.

11. A multiple-choice question counts as a valid assessment of proficiency (or, “I can actually assess communication without asking students to communicate”).

Where's the communication? Josué Goge


I don’t want to pretend that good assessment is easy.  Exploring these questions-

  • what is valid assessment?
  • how can I make all my assessment valid?
  • how can I do this without spending my life grading?

has been a long, difficult, worthwhile, amazing journey for me.  From the days in my tests and measurements classes when I was required to write the very best Scantron test I could generate – whatever was easiest to grade – to now, when my philosophy is that students don’t answer a multiple choice question unless they’re doing AP prep, I have been on a mission to figure out what was wrong in the way I was treating assessment and to fix it.  I’m not there yet, but I’m a lot farther than I was when I started, and as always, the journey itself is a lesson.

What’s wrong with non-communicative assessment

The answer to this comes down to two issues: goals and certainty.

If you’re going to use assessment that does not ask students to communicate, that may be fine, if communication is not your goal.  That is, if you’re trying to motivate or ‘hook’ students using something like PollEverywhere at the beginning of class, or you want students to reflect on how they feel about what they learned in class in a type of reflective exit ticket, there can be a lot of value in that.  The value evaporates when we try to say that we’re doing such an assessment to, say, assess whether students have learned to tell their name by choosing among
a) yo llamo
b) se llama
c) me llamo

The other issue is with certainty, and this is my primary issue with the multiple choice question.  When a student selects C in the above question, the answer is correct, but that does not tell you anything about why the student chose it.

So you cannot be certain that the student actually knows the answer.  You can only be certain that the student wrote C.  And what does that tell you?

What communicative assessment looks like

Communicative assessment doesn’t have to be hard or extraordinarily time-consuming.  It doesn’t have to look like a detailed IPA every other week.  It simply has to ask students to communicate something.  So, in the above example, instead of asking a multiple choice question, you’re asking students the question, “What’s your name?”  If they can answer, you’ve assessed whether they can communicate that information… today, anyway.


Interpretive tasks are the ones most likely to lack communication.  And yes, I call it communication, because receiving a message is communication; it’s not a one-way street.  There are so many muddy questions here.  If I ask interpretive questions in English, is that appropriate assessment?  I used to say no.  I’ve changed my mind.  Because on the other hand, if I ask the question in the TL, I’ve lost my certainty again.  If the student gets the question wrong, is it that he misunderstood the message, or that he misunderstood the question and answer choices?  I can’t tell.  I watched this frustrate my AP students time and time again.  They knew that the article was talking about people cooking a dish with pork, but because the comprehension question offered extraordinarily low-frequency alternative words for goat, pig, and calf as choices, they couldn’t select the right answer.  So we assumed that the College Board cared more about whether they could comprehend these random alternative terms than about whether they could actually comprehend the authentic text.

All that to say, my go-to way to incorporate interpretive tasks in a communicative program is to ask students to incorporate them into a production task.  On the lower levels, I ask students to simply retell me what’s going on, or perhaps recreate with their own content (look at a ‘lost dog poster’ and change the information to their own pet, for example).  For higher levels, they need to use the content to make a comparison or defend an opinion.


There’s an easy aspect and a hard aspect to interpersonal tasks.  Easy:  Ask students to have a conversation (in writing, maybe a Twitter exchange).  If I’m assessing it, the conversation is with me.  If it’s simply practice, the conversation can be with each other.  Hard: don’t do skits and call it interpersonal.  If students have a chance to draft and/or practice a conversation before performing it, this is not interpersonal.  It can be valid, if you call it presentational, but it’s not interpersonal.


Presentational tasks are my primary method of acquiring test grades.  I usually alternate or allow students to choose (but they must alternate choices): one presentational speaking or one presentational writing assessment per unit (that I grade).  They may do lots of other presentational communication, even in every class period, as the definition is simply communication they have time to plan and edit.  Their weekly blogs are a form of presentational writing.  Bottom line, I’m asking them to communicate something in writing or speaking that we’ve been working on.
Novice example: Write a short review of your favorite restaurant for someone who is coming to visit our city.
Intermediate example: Compare the McDonald’s menu in Argentina with the McDonald’s menu here and tell what you like best and why.  What would you eat at McDonald’s in Buenos Aires?  Post your video presentation on YouTube (if allowed) and tweet it at McDonald’s Argentina.

More reading

Here are some previous Musicuentos posts that I think may help further with this issue:

Consider this: what current practices are making our assessments invalid, and how can we change them (and maintain our sanity)?



January 30, 2015 5 Comments

Best of 2014 #10: The new JCPS curriculum documents


Welcome to the 2014 “Best of Musicuentos” series.  In the month of December I do not post much new material as I enjoy the season with my family, but rather I re-post the top ten posts of the year, in case you want to re-read, or in case you’ve joined us this year and didn’t see these popular posts.  We’ll start with the tenth most popular post, which offers you links (and they should finally all work, yay!) to resources many of us have been working on and many more of us have been waiting for, for a long time: the new drafts of the Jefferson County (KY) Public Schools’ world language documents, for secondary, and for the first time, for elementary as well.

Another resource: The new JCPS curriculum documents

Brittany Randolph



It’s a busy season for Musicuentos, can you tell?

I feel like I just said that.

I’m breathing a huge sigh of relief as an excellent cohort of teachers and I have wrapped up a year-long project to lay the groundwork for something that has not existed in its entirety before: an elementary curriculum map for the Jefferson County (KY) Public Schools.

If you’ve been looking at resources online for any length of time you know that JCPS has developed and is developing one of the most proficiency-focused, communicative, research-based curricula out there.  But the elementary program has been a different story.  The project to develop a district-wide map has started and stopped and fizzled several times over the years, but it’s finally happened and will continue happening.

Where? Where?!

If you’ve been interested in JCPS’s projects for any length of time you know it’s been a bear to get access to them.  Password protected.  Blocked.  Unblocked – for a matter of hours.  Someone in some workshop has them on a USB drive.  The district was protective even as the district specialist wanted them public.  You may find all the new documents online here.  If you can’t – best news ever for you – they have moved to a public Google Drive folder here.

Let me say that again.

It’s a public Google Drive folder here.

Watch for updates as the great teachers at JCPS continue working on powerful assessments, resources, and lesson plans.

A few notes about the elementary curriculum:

  • JCPS categorizes elementary grades beginning with P1 as kindergarten, P2 as 1st grade, and so on.  At 4th grade the teachers stop using the P# reference.
  • We tried to address the problems that plague elementary programs – kids transferring in and out, the program getting hijacked by pull-outs and testing prep, too many students per teacher, not enough time per week.  So we divided the program into two levels, with the levels layered.  Then we developed five six-week units, leaving the last six-week period to be used for review and assessment as the state testing schedule allows.  So the first level has the same five units every year for kindergarten, first grade, and second grade, but every year the vocabulary and functions in each theme get deeper.  There’s a lot of recycling and then moving deeper.  Same with third, fourth, and fifth grades: the same theme for each unit every year, with a lot of recycling and moving deeper.
  • We developed the program as if every teacher had the recommended minimum 90 minutes per week with students, which no one in the JCPS system does yet, so we actually recommend that teachers with less time throw out an entire unit instead of doing less per unit.  If it were me I would skip unit 1 in Level 1 on the assumption that kids will develop the school vocabulary as the year goes on, and in Level 2 I would combine the All About Us and Hanging Out with my Friends units.
  • A lot of core content and many connections are also built in.
  • As teachers develop units and find resources, those will be updated too, with the goal of soon having a really good IPA for at least each semester of 3rd-5th.
  • The intercultural goals are something cool and innovative, but they will need some improvement, so you can watch for that as well.

We hope you find it useful.

I’m going to take a nap now.


December 2, 2014 0 Comments