So let’s say you gave a certain test to your students. The test had 30 multiple-choice questions, each with 5 choices. The test is ungraded, but students are required to take it; it is used solely for evaluating the effectiveness of instruction.

Let’s say you plotted, for each student, “Test score” vs. “Time spent taking the test”. Let’s say the plot looked like this:

What do you make of it?

Equation of Best Fit:  Score = (0.35 Points / Minute) * Time + 6.5

Correlation Coefficient:  R = 0.71

1. July 5, 2012 9:59 pm

I’ve never seen a positive correlation between time spent on an exam and score. Was there free ice cream as soon as people finished or some other artificial incentive for ending early? Or were the students so demoralized that they gave up and stared out the window?

• July 5, 2012 10:40 pm

The test isn’t graded… but it is required. You can probably guess what test it is.

2. July 5, 2012 10:32 pm

I would say that it shows there are a lot of different types of test takers. I think the positive correlation you are seeing is heavily influenced by the one really deliberate (slow) perfectionist student you had, and the two or so students who appeared to rush through things, really had no idea, and guessed. If you take those out (you’ll probably have students like that on just about every quiz or test you give — I certainly do), I think you’ll find the correlation weakens significantly. That just shows you have some quick students who know it, some quick students who make careless mistakes, some slower students who know it and are careful, some slow students who don’t know it but are trying hard, and everything in between.

I wish it were as simple as “take your time, do better,” but it doesn’t work that way. In fact, I think if students know it well (or think they do, at least), they’ll probably do it fast — I see that in my math and physics tests.

• July 5, 2012 10:43 pm

Taking out the outliers, we get a correlation of 0.50 — not nearly as strong, but perhaps something. I’m trying to decide what data to remove for obvious “didn’t try” reasons.

3. July 6, 2012 1:13 am

If the test is the FCI, I would say that many answers are written to play strongly into a student’s gut instinct, and if they are the type to quickly go with their gut and not try to really reason through each of the answers, they’re likely to do quite poorly. I’d be curious to know what would happen if you explicitly instructed students to think carefully about each problem and the reasonability of each answer, rather than simply searching for the one they feel is right.

• July 6, 2012 1:51 pm

It just makes me wonder how much the score has to do with whether a student’s approach to the test is a patient one, where you don’t just answer with the first thing that comes to mind. Andrew Heckler, at OSU, did a study where they forced students to wait a few seconds before answering questions involving interpreting graphs, and student success shot way up. I’m curious to what extent FCI scores reflect the mindset of the test taker — how they take the test. In other words, are there some students who have the knowledge but don’t access it, because they are either hurried or don’t care to think it through?

4. July 6, 2012 4:25 am

The cluster of 3 below the line at 40 minutes and the cluster of 3 above the line at 20 minutes, plus the outliers, might make interesting subjects for interviews.

• July 6, 2012 1:52 pm

Yeah, I agree. I mean, we can throw away any students who took < 10 min; in reality, you probably need 30+ min minimum to read each question. But the students who do well quickly — are they just fast readers? Or would they do better if they took their time?