Practice Matters: “The effect of curriculum on Force Concept Inventory performance: A five thousand student study”

A few years ago, we switched to the Matter & Interactions curriculum for our introductory classes. This has not been without its hiccups, among them the fact that there has been a small decline in the conceptual learning gains measured by the Force Concept Inventory, the oldest and most widely used of the conceptual tests favored by the Physics Education Research community. We’ve spent some time discussing whether this is a temporary glitch, due to the transition, or something inherent in the curriculum. (Our numbers are small enough that these results remain at the level of plural anecdotes.)
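(The "conceptual learning gains" here are conventionally reported as Hake's normalized gain: the points a class gains as a fraction of the points it had available to gain. A minimal sketch, assuming the standard 30-item FCI scale:)

```python
def normalized_gain(pre, post, max_score=30.0):
    """Hake's normalized gain: points gained as a fraction of points available.

    pre and post are raw FCI scores; max_score=30 assumes the
    standard 30-item inventory.
    """
    return (post - pre) / (max_score - pre)

# A class averaging 15/30 before instruction and 21/30 after
# has realized 6 of a possible 15 points:
print(normalized_gain(15, 21))  # -> 0.4
```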

Yesterday, via ZapperZ, I ran across an arXiv paper showing that we’re not the only ones to see this:

The performance of over 5000 students in introductory calculus-based mechanics courses at the Georgia Institute of Technology was assessed using the Force Concept Inventory (FCI). Results from two different curricula were compared: a traditional mechanics curriculum and the Matter & Interactions (M&I) curriculum. Post-instruction FCI averages were significantly higher for the traditional curriculum than for the M&I curriculum; the differences between curricula persist after accounting for factors such as pre-instruction FCI scores, grade point averages, and SAT scores. FCI performance on categories of items organized by concepts was also compared; traditional averages were significantly higher in each concept. We examined differences in student preparation between the curricula and found that the relative fraction of homework and lecture topics devoted to FCI force and motion concepts correlated with the observed performance differences.
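(The "persist after accounting for" comparison in that abstract is the kind of covariate adjustment you can sketch as an ordinary least-squares fit: regress post-instruction FCI score on a curriculum indicator plus the covariates, and read off the indicator's coefficient. A toy illustration on synthetic data — every number below is invented for the sketch, not taken from the paper:)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic, made-up student data -- illustrative only.
pre_fci = rng.normal(15, 5, n)     # pre-instruction FCI score (out of 30)
gpa     = rng.normal(3.0, 0.4, n)
sat     = rng.normal(1200, 150, n)
mi      = rng.integers(0, 2, n)    # 1 = Matter & Interactions, 0 = traditional

# Simulate post-scores with a small built-in curriculum effect (-1.5).
post_fci = (5 + 0.8 * pre_fci + 1.0 * gpa + 0.002 * sat
            - 1.5 * mi + rng.normal(0, 3, n))

# Design matrix: intercept, covariates, curriculum indicator.
X = np.column_stack([np.ones(n), pre_fci, gpa, sat, mi])
beta, *_ = np.linalg.lstsq(X, post_fci, rcond=None)

# beta[-1] estimates the curriculum difference after adjusting for
# pre-FCI, GPA, and SAT -- the comparison the abstract describes.
print(f"adjusted curriculum effect: {beta[-1]:.2f}")
```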

So, this is a clear case where a reform curriculum is outperformed by a traditional one (though, it should be noted, this is not a case of traditional large-lecture instruction outperforming more modern teaching methods– both classes use active engagement tools like “clickers” and discussion questions). Why is this? The last sentence of that abstract tells you: Matter & Interactions spends less time on concepts covered on the FCI than a traditional curriculum does. This is summarized in the following table from the paper:

[Table from the paper: fraction of homework assignments covering FCI concepts, by curriculum and concept area.]

This shows the fraction of the homework assignments devoted to covering topics on the FCI, with the questions classified by hand by one of the researchers. Over half of the traditional curriculum assignments cover FCI concepts, while only about a quarter of the M&I assignments do. In the subject-by-subject breakdown, you can see that the traditional curriculum spends more homework time on every area except Newton’s 1st Law, which both curricula give short shrift.
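(The classification itself is just tagging and counting. As a toy sketch — the concept tags and assignments here are made up, not the paper's rubric:)

```python
# Hypothetical concept tags for homework items -- illustrative only;
# the paper's actual classification was done by hand by a researcher.
fci_concepts = {"kinematics", "newton1", "newton2", "newton3", "forces"}

def fci_fraction(assignments):
    """Fraction of homework items whose tagged concept appears on the FCI."""
    items = [tag for hw in assignments for tag in hw]
    return sum(tag in fci_concepts for tag in items) / len(items)

traditional = [["newton2", "forces", "energy"], ["kinematics", "newton3"]]
mi          = [["momentum", "energy", "fields"], ["newton2", "computation"]]

print(fci_fraction(traditional))  # -> 0.8
print(fci_fraction(mi))           # -> 0.2
```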

This matches pretty well with my experience teaching under M&I. In our previous curriculum, I used a lot of FCI-type questions in class and on exams, but it’s always felt like there’s less room for that in M&I. And even where they do fit, the language and style of the M&I presentation makes it harder to directly carry questions over.

So, is this a major problem? It’s hard to say. On the one hand, this is unquestionably a matter of the design of the course, not an accident of implementation– students using M&I get lower scores on the FCI because the course as designed spends less time on those topics. If asked, I suspect that the M&I authors would say that this is a feature, not a bug. The curriculum is designed to teach other sorts of physical reasoning, so the FCI is not a good match for the course, and some other sort of conceptual assessment would be more appropriate.

(There are a few other methodological quibbles that might be raised, chief among them the fact that the M&I courses had a single weekly class meeting for three hours(!), compared to three weekly meetings for the traditional class, which also probably reduces the time available for engaging with FCI concepts, in a way that might not be what the designers of the curriculum intended.)

On the other hand, though, a lot of the impetus for physics education research based reform has come from the fact that students in traditional lecture classes do a miserable job on conceptual tests like the FCI. If the new curriculum doesn’t stack up in those conceptual areas, then we probably have to think more carefully about what, exactly, we want to accomplish with the introductory physics sequence.

So, as noted yesterday, physics education remains a complicated and messy business.

6 thoughts on “Practice Matters: “The effect of curriculum on Force Concept Inventory performance: A five thousand student study””

  1. “If the new curriculum doesn’t stack up in those conceptual areas, then we probably have to think more carefully about what, exactly, we want to accomplish with the introductory physics sequence.”

    The second part’s the key. I’m a chemist, and we don’t, as far as I’m aware, have an equivalent evaluation. We do have something called the DUCK exam (Diagnostic of Undergraduate Chemistry Knowledge) aimed at majors near the end of their coursework. While it has pointed out some areas that we agree we need to improve, we’re not sure that it’s a good assessment for us. Do our students do worse than we’d like on the exam because they don’t know chemistry as well as we would like, or because the test is asking the wrong questions? Chemistry education is also a complicated and messy business.

  2. Don’t fall into the trap of assuming that the FCI is everything that there is to conceptual learning. The FCI says something about how well students understand certain concepts. It says nothing (at least directly) about how well they understand other concepts. If Sherwood and Chabay deliberately emphasize certain other topics at the expense of FCI topics, then it is appropriate to ask two questions:

    1) How well do students master those other topics (as measured in a conceptual sense, or mathematical sense, or whatever it is that you value)?

    2) Is the trade-off worth it, i.e. are those other topics important enough that a decreased emphasis on the FCI topics is worthwhile?

    The first question can only be answered with data. The second question is to some extent a value judgment, though no doubt the answer could be informed by data. Lower FCI scores are only a valid criticism of M&I to the extent that we have a corresponding answer to the second question.

  3. Is there some new PER research I am blissfully unaware of, or is doing a single 3 hour session per week pedagogically insane?

    Our PERs worry about student concentration lasting 40 minutes, which I think underestimates the students, but I don’t know anybody teaching university level concepts who would advocate 3 hour sessions on introductory topics.

    That is the sort of thing we inflict on experts doing research grade workshop education, and there we try to at least provide infinite free coffee.

  4. I will advocate for 3 hour sessions on introductory topics… I guess I kind of have to, since that’s how my university (http://www.questu.ca) does everything. We’re on the block system, like Colorado College. Students take one class at a time, and class meets for three hours a day, every day, for 18 days. (That’s 18 M-F days, so it happens over the course of 24 days if you include weekends. Then there’s a 4-day block break.)

    The advantage of the block system is focus. Students are only taking one class at a time, so they aren’t distracted and switching to other classes (and spending a week or two ignoring your class because big assignments are due in another). You’re deeply immersed in what you’re doing for the short time that you’re doing it.

    Of course, you do not simply lecture for three hours straight! Nobody would survive that. I try to avoid lecturing for more than 20 minutes at a run, and usually try to avoid going more than 10 minutes. The goal is to have engaged students, with them actively participating. Some of this is “clicker”-style stuff, although since no classes are larger than 20, you don’t really need things like clickers. But, more than that, many of us who teach on this system send students off to breakout rooms to work in small groups on problems and similar sorts of things.

  5. OK, I can see how that makes sense for a block system with small classes.
    But how is retention? It seems you never get continuity of topics; students are always switching and having to pick up a new topic after a presumably long break.

  6. I firmly believe that your question
    So, is this a major problem?
    is the correct question, and one that needs careful study. If anyone from Ga Tech reads this, I’d recommend doing an A/B comparison of how the engineering majors within this very large population fared in their initial engineering mechanics class, and you might do the same at Union. (It is probably pretty easy for you to do this: just ask the engineering faculty whether entering students are better or worse than before you made the change, then seek data via the registrar’s office and your “institutional research” staff. Tell the latter that this is a warm-up exercise for future Outcomes Assessment. They will understand that!)

    Ever since I learned that the FCI was not aced by incoming physics graduate students, I’ve wondered if there is any relationship between performance on it and the ability to solve real-life engineering or physics problems at the next level. I gather that the M&I people think you can get the physics right in applications even if you still get tricked into using pre-physics Aristotelian methods by carefully constructed FCI questions. Has this been studied?
