Monthly Archives: May 2013

Inverse Functions, Illustrated

I’m cleaning out my email inbox and found this in an old, unread blog post:

(This is the blog post, by Dan Meyer. This is the original source of the image, by Rachel Kernodle.)

My efforts at teaching inverse functions this year were fun, but, alas, not very effective. I started with the “function box,” then added the “inverse function box” with the full range of appropriate sound effects, but the two “boxes” were just too similar. There wasn’t a strong enough visual signal of the opposite-ness of the two kinds of functions. Ms. Kernodle’s stapler/remover analogy could be the key to finally getting the message across.

Testing to Aid Retention

This (by Tania Lombrozo on NPR’s Cosmos & Culture blog) interests me because it aligns with what I’ve seen for myself and my students.

And that’s not all: there’s also evidence that test-taking itself can improve retention for the material being tested. In a 2006 demonstration of a phenomenon known as the “testing effect,” for example, Roediger and Karpicke had students read passages of text and then either repeatedly study them or repeatedly test their ability to recall them, without any feedback on how well they did on the tests. The students who repeatedly studied the passage were more confident about their ability to remember the content than those who were repeatedly tested. But the latter group considerably outperformed the former when it came to actual memory for the passage one week later.

So testing can be an excellent tool in an educator’s toolbox, but it’s one that needs to be used wisely. The American Psychological Association warns of the dangers of “high-stakes” testing in our nation’s schools, and a report from the National Academies suggests few benefits to our current test-based accountability system.

Snapshot of a Mid-September Crisis

On September 19, 2012, I hit a wall. Inspired by Dan Meyer, I had set out to teach magnificently, using ideas and tools found on math-teacher blogs, and I resented that those supposed-to-be-brilliant ideas and tools did not always work as expected. I wrote an email to Mr. Meyer that I tried to make measured and professional, but which was really bursting with distress and anger.

I remember reading in one of your posts that you wondered whether it’s possible to shorten the amount of time it takes for new teachers to start doing things better. My two cents: shortening the improvement period makes it tumultuous and unreliable. I work at a small school with a weak math program and a student body made up entirely of students with IEPs. I blog, read blogs, and talk to teachers outside my discipline to enhance the instruction I give. My pie in the sky was to reduce the amount of time it takes me to get better to zero. That was obviously a long shot.

I crave the opportunity to work closely with an expert teacher, to observe and be mentored by him/her, but instead I engage in a constant trial-and-error method of implementing the ideas I find in blogs while crossing my fingers that I understood what the blogger intended and that the ideas are transferable to my unique student population. I guess the trial and error aspect would stick around even with an expert mentor teacher, but at least I’d have a leg up.

You spent two years (or so) establishing yourself as a teacher before you branched away from familiar methods. My situation tells me that those two years were invaluable to your success upon branching away. I don’t want to wait two years but I don’t especially enjoy the alternative.

Unguided trial and error succeeds in making me (a newbie) a better teacher on some days and advances me toward being a consistently better teacher in the future, but the overall immediate effect is that my approach is inconsistent and unsure. Oh, I don’t like that feeling. I’m not sure it benefits my students either. So my working conclusion is that reducing the time it takes to get better can only be done by increasing the badness of the intervening time.

Have you found something different to be true among the teachers you know?

Mr. Meyer responded considerately, and I sought support from colleagues at my school. This isn’t the end of the story. Just a snapshot.

Class Starters

Early last fall the speech-language pathologist I collaborate with suggested occasionally starting class with a short pop quiz for extra credit based on the material from the previous night’s homework. I don’t recall what issue it was meant to address, but I think it had something to do with students needing additional incentives/reinforcement to practice solving the problems accurately.

Here’s how they worked. The quizzes were usually 4-6 questions, with each question worth half a point added onto the student’s homework score. Since they were worth extra credit, I didn’t guarantee plentiful time to complete them; when I needed to move on, it was time to pass the quizzes in. (Cue whining from the mathematically anxious crowd. And the chronically late crowd.) We didn’t discuss them together, but I passed them back, marked, the next day.

From a class management standpoint I liked that the quizzes helped get class started and reminded the students what kind of information they would be held accountable for. It also succeeded at giving students who completed their homework an extra chance to show what they had learned and boost their grades.

Because I didn’t want to offer extra credit all the time but still wanted something to help get class started and give prepared students an extra chance to show what they had learned, I started doing “problems on the board” on off days. For these I simply spread problems of varying difficulty levels across the board and told the students to find one they felt comfortable solving, grab a dry-erase marker, and solve it on the board. Unlike the pop quizzes, these we did go over together after everyone was done. A couple of additional benefits of this technique: it started class with a bit of self-assessment, as each student determined which problem to volunteer for, and with a little full-body motion as they went up to the board and solved it.

Grading Accuracy and Completion

One of my pre-planning ideas that actually succeeded enough to stick with me throughout the year was grading assignments for both completion and accuracy. In previous math classes my students had taken, their homework assignments had been graded only on completion, which meant all they needed to do was make some kind of effort on each problem and they would get full credit. I think this arrangement was meant to communicate to them that trying is important, and that as long as they try they shouldn’t feel bad for getting things wrong at first. But the message that many of them received instead was that there is no point in getting any homework problems right, since there will be no reward for it. And if there is no point in getting the problems right, there is no point in giving a genuinely full effort. So the policy that was meant to reinforce effort undercut it severely, not to mention all the learning that was supposed to be taking place.

I figured I would build a bridge for them away from that flawed policy by giving a grade for both completion and accuracy. For example, if a homework assignment consisted of 12 problems, I would write something like this:

Completion: 11/12

Accuracy: 5/12

Total: 16/24

Into my grade book would go the 16/24. Still, with fully half their score coming from effort, some students stressed out BIG TIME about losing points for getting homework problems wrong. Cue the whining. But you know what? My students became more accurate. They began paying more attention to detail. They came to appreciate that some methods led to getting the right answer and others did not. What’s more, they liked getting answers right. I’m pretty sure they liked getting some answers right even more than they liked getting zero answers marked wrong. Just the other day a student (not one of the top scorers) described to me in private how the trying-is-enough policy had changed her as a math student and how leaving it behind had changed her again. She appreciated that I didn’t “take any of [the students’] bull.”

Grading each assignment this way was much more time-consuming than just marking down completion and then reading the answers, but it was also invaluable to my students’ success and growth.

Students Lead the Review

For the first time in their high school careers, my students (juniors and seniors) are taking a math final that is cumulative over the entire year. To support them in preparing for this, I took a page from my own senior-year-math-teacher’s playbook and organized a student-led review. Each student signed up for one or two carefully defined topics from the study guide for which they would lead a 10-15 minute review in the final days of class. Each was required to meet with me in advance of their presentation to discuss the details of their plan. So far, those one-on-one meetings have yielded great teaching moments. The presentations themselves have mostly been rough. (I think it’s a healthy eye-opener to the travails faced on the teacher’s end of things.)

Next year I’d like to do this kind of review at the end of first semester as well as at the end of the year. That way they’ll have a chance to learn from their first attempt at teaching and hopefully do better and feel more at ease the second time.

*On a sort of related note, I broke the study guide for the final exam into two parts. There is a list of specific skills that will be tested, but there is also a list of concepts and definitions students will need to know/understand/be able to describe. The student presentations focus on specific skills; I field questions about the concepts/definitions.

Good Intentions

Last June I wrote oh so enthusiastically about the homework policy I planned to implement in trig this year, which I had borrowed from one of my college math professors. It was going to basically change the world. Until I abandoned it only a few weeks into the school year. Wisely.

This year was funny that way. So many of my theoretically solid plans turned out to be worth bupkis in actual practice and had to be tossed. But, also in practice, I picked up ideas, plans, strategies worth far more than anything I thought I had before.

I haven’t posted here since September, partly because of time, priorities, stress, etc., and partly because it was then that I bore the heaviest load of inadequacy awareness. I had learned just enough to know that my “insights” into teaching could well turn out to be so much rubbish, and it seemed that my options for publishing posts were either to pretend to greater wisdom than I had or to fill them with nauseating angst.

But here I am in May! With a list of things I’d like to reflect on in writing! And enough objectivity, despite my still-new-ness, to give it all, I think, a fair shake! Expect to hear more.