I stumbled across a decent simulation while I was reading up on ISLE. The simulation can be found here:

http://media.pearsoncmg.com/bc/aw_young_physics_11/pt1a/Media/DescribingMotion/AnalyMotUsingDiag/Main.html

I think these simple kinematics simulations are pretty cool, especially the four problems at the end, where you have to adjust the initial position, initial velocity, and initial acceleration to match the motion map. Nothing fancy, but pretty engaging.

Content Learning: It’s a nice bridge between qualitative and quantitative representation of kinematics, supporting mathematical sense-making rather than plug-and-chug approaches. It would likely support students distinguishing between position, velocity, and acceleration. It would also provide students with opportunities to wrestle with the meaning of algebraic sign for each of those quantities.

Pedagogical Affordances: The sequence begins with observations and moves toward application. It’s game-like in a productive way: fun, challenging, easy to jump into and try, and it provides immediate feedback. You’d probably just have to keep students from mindlessly manipulating values to match the motion.

The full range of simulations, which I haven’t looked at closely, is linked here: http://wps.aw.com/aw_young_physics_11/

This is not a comprehensive treatment of some complex ideas, but here are some thoughts from today.

I bought myself a copy of Greg Jacobs’s 5 Steps to a 5 to add to our library for pre-service physics teachers. In reading it, I’ve come across a statement that is representative of ontological differences in how physicists think about a few concepts in introductory physics, which I think stem from differences in how one can interpret the equal sign. I don’t have the exact quote, but I think Greg implies in the text that impulse is both the change in momentum and the product of the net force and its corresponding time interval.

Impulse = Fnet·Δt = Δp

From this perspective, the equal sign allows one to say that all three things are equal, both quantitatively and ontologically. Impulse is a word for both things.

Impulse ≡ F·Δt. We’ll define the impulse for a single force to be this product (or, more generally, the integral).

Then we can add up individual impulses to get the net impulse: Net Impulse ≡ Σ Impulses = Σ F·Δt = Fnet·Δt

By applying Newton’s 2nd law, you get that Fnet·Δt = Δp.

Thus Net Impulse = Δp
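As a quick numerical sanity check of the chain of steps above, here’s a minimal Python sketch (the force values, mass, and time interval are invented for illustration):

```python
# Sketch: the net impulse from summing individual impulses equals the
# change in momentum (constant forces over one interval; numbers invented).

forces = [3.0, -1.0, 0.5]   # individual forces on the object (N)
dt = 2.0                    # time interval (s)
m = 4.0                     # mass of the object (kg)

# Impulse of each individual force: J_i = F_i * dt
impulses = [F * dt for F in forces]

# Net impulse = sum of individual impulses = F_net * dt
net_impulse = sum(impulses)
f_net = sum(forces)
assert abs(net_impulse - f_net * dt) < 1e-12

# Newton's 2nd law: F_net = m*a, so F_net*dt = m*(a*dt) = m*dv = delta_p
dv = (f_net / m) * dt
delta_p = m * dv
print(net_impulse, delta_p)  # both 5.0
```

The same numbers also illustrate the second perspective: applying Newton’s 2nd law to each force first gives individual momentum flows `m * (F/m) * dt`, which sum to the same `delta_p`.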

To me, impulses are causal influences that together cause a change in momentum, which is the effect. So to me, impulse is not change in momentum, not ontologically, because one is the cause and the other is the effect. So, I guess I see two differences, and they may or may not be related. First, I think we can define impulses for individual forces (and I’m not sure what Greg would think), and second, I think that impulses are events whereas change in momentum is a change in state. Since I think they are ontologically different, I would never want to say that impulse is a change in momentum.

Of course, you can take such a momentum perspective even further, such that even static situations involve momentum flow. In this case, individual impulses each actually flow momentum, such that the net momentum flow is zero. That is, in this perspective, each impulse (cause) has an effect (a momentum flow), and the momentum flows combine to create a net momentum flow. In other words, the mathematical steps above are different, because Newton’s 2nd law is applied first and then the sum is taken.

And of course, similar differences in conceptualizations exist when we think about work, net work, change in kinetic energy, and the product of Force and displacement.

I’m not necessarily convinced that any way of thinking about this is “correct”, but I do think it’s useful to be able to acknowledge and attempt to reconcile the different ways of thinking about it.

People who I suspect will have an opinion on this: Leslie Atkins, Andy Rundquist, Benedikt Harrer, and many others.

Now that I’ve had some time away from it, I want to try to reflect on what was a truly wonderful class and teaching experience that occurred in my inquiry / physical science course for elementary education majors this past spring. It was a class where we learned a whole lot together while laughing almost every day (sometimes very loudly).

The bulk of the course was organized around two very different parts.

Part One: 7 weeks of Guided Inquiry using the Physical Science and Everyday Thinking Curriculum (Focused around Energy)

Part Two: 5 weeks of Responsive Inquiry informed by facilitation from Student Generated Scientific Inquiry (Focused around the Moon)

My gut feeling about the class has been that a large part of what made it so great had nothing to do with anything I was doing differently. The story in my head goes: “I just happened to have been lucky with the group of students I had. In terms of individual students I was lucky, but I was also just lucky in terms of the group as a whole. Things just happen to fall into place with the right people.” I think there is a lot of truth to that. My inquiry class can be difficult to navigate for many students, especially those who are not used to taking responsibility for their own learning, or who have never had to grapple with uncertainty and the unknown for extended periods of time, or who are not used to really talking and listening as a way of learning. In the past, I’ve had mixed success, often with one or two disgruntled students and a varying number of students who embrace the class strongly.

This past semester, the story would merely go that I just happened to have a group of students who, for whatever reason, really found ways and reasons to embrace these experiences. That’s not to say that students were never uncomfortable or frustrated, but their discomfort and frustrations occurred within an overall supportive environment rather than being a defining, pervasive aspect of the course. But still, I’d like to be able to walk away from that experience with more than just, “It was luck. You just have to get the right combination of students.” So I hope here to reflect on things that I may have done differently.

Guided Inquiry before Open Inquiry: Students had 7 weeks of guided inquiry in which there would be short periods of uncertainty with strong content scaffolding, importantly, before having extended periods of uncertainty with less scaffolding on content and more scaffolding on inquiry. This gave students positive experiences with learning science content and let them dabble in inquiry waters before jumping in. Because I can’t possibly follow the structured curriculum closely, students also got to experience moments of intense unscripted inquiry and responsive whole-class discussion. With the class I had explicit discussions about the differences between some of the worksheet science we were doing and the real science we were doing when it occurred more spontaneously. Our class spent a lot of time during our guided inquiry into energy talking about Amy’s pee theory and investigating phenomena that, according to the curriculum, should have been homework practice but instead became rich contexts for extended inquiry. When students didn’t believe a simulation they were investigating, we improvised to do our own experiments to help settle the issue. I think this also meant that, in the first part of the course, I could focus on being a good teacher rather than being a curriculum designer/developer.

Change in Day/Time Structure: The class changed from meeting 2 days a week (3 hours each meeting) to 3 days a week (2 hours each meeting). I don’t think this is insignificant, both for students and me. For students, three hours twice a week is rough. For me, planning for 2 hours is much easier than planning for 3. Plus, in a responsive inquiry setting, in which improvisation is often necessary mid-instruction, many more things can go wrong in 3 hours than in 2. And with three meetings you get more chances to reflect on what’s happened and plan.

No Attendance Grade (Except for a Participation Self-Assessment): Previously, because being continuously present and participating is so critical to coherence in the classroom (both for individual students and the class), I had an attendance policy. This semester, I just asked students to self-assess their participation along a rubric several times throughout the semester. For the most part, students gave themselves honest assessments. As part of those assessments, they had to set themselves goals for next time and self-evaluate against them, with evidence, the next time. I can say that participation was about the same as before: pretty good. Before, students felt like I was punishing them for not showing up. This semester, students usually felt like they had punished themselves. Students also self-assessed and peer-assessed on their moon journals.

Summary:

I guess it boils down to (1) scaffolding early experiences for success by using a structured curriculum, (2) improving clarity about expectations (especially writing), (3) use of self-assessment and peer-assessment, (4) more thorough preparation for classroom discussions, and (5) more workable timetable / schedule.

I think those are tangible things I can point to that were different. I’m sure there are lots of less tangible things I may have done in terms of how I interacted with students, but I can’t say for certain. I know my interactions with students were very positive, but the nature of interactions is complicated and can’t be solely attributed to things I did.

Was it all in my head? No, I don’t think so.

So, it wasn’t just me that felt the class was so wonderful. For the most part, evidence suggests that students tremendously valued the time they spent in class. In other classes, I typically get notes from students saying things like, “I admire your professionalism and your passion for your chosen field,” but in my inquiry class this past spring, students wrote things like, “You really are a great friend,” “We love you,” and “Love your guts”.

Student evaluations also suggest that students felt this classroom experience was more worthwhile and effective than previous classes of mine. Two categories that are particularly significant on our evaluations are, “How worthwhile was this course in comparison with other courses you have taken at this university?” and “How would you rate the overall teaching effectiveness of your instructors?” Every student answered both of those questions as highly as possible. Here are graphs showing trends in this class over the last 3 years.

The sad ending to this post is that I am likely to not be teaching this class in the near future. The elementary education program here has been declining in enrollments, which has meant that our offerings of the course are now half of what they used to be. I am not slated to teach the class next year. I suppose it’s nice to end on a high note, so there’s that.

My sense has been that the PER community still implements subpar standards of research reporting, which minimizes our ability to carry out meaningful meta-analyses. I’m not an expert, but I’m assuming that scores with standard deviations / standard errors would be necessary for a meta-analysis, right? So I’m curious. I’m going to quickly take a look at some recent papers that report FCI scores as a major part of their study, and see what kind of information is provided by the authors. Here’s how I’ll break it down.

Raw-(ish) Data:

N = number of students

Pre = FCI pre-score either as raw score out of 30 or a percentage (with or without standard deviation / standard error of mean)

Post = FCI post-score either as a raw score out of 30 or a percentage (with or without standard deviation / standard error of mean)

Calculated Data:

g = normalized gain, with or without error bars / confidence intervals

<G> = average normalized gain, with or without error bars / confidence intervals

Gain = Post minus Pre (with or without standard deviation / standard error of mean)

APost = ANCOVA-adjusted post score (with or without standard error of the mean)

d = Cohen’s d, a measure of effect size (with or without confidence intervals)

I’m leaving out measures of statistical transparency such as t-statistics, p-values, or other measures from ANOVA, and I’m sure there are others, such as accompanying data about gender, under-represented minorities, ACT scores, declared major, etc.
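For concreteness, here’s a minimal Python sketch (with invented pre/post percentages) of how the calculated quantities above are typically computed. Note that “average normalized gain” can mean either the gain of the class averages (Hake’s definition) or the average of individual students’ gains, and the two generally differ:

```python
# Sketch of the calculated quantities, using invented FCI percentages.
from statistics import mean, stdev

pre  = [40.0, 50.0, 30.0, 60.0]   # pre-test scores (%), invented
post = [70.0, 80.0, 55.0, 85.0]   # post-test scores (%), invented

# Class-average normalized gain (Hake's <g>): gain of the class averages
avg_gain = (mean(post) - mean(pre)) / (100.0 - mean(pre))

# Average of single-student normalized gains (a different quantity)
g_each = [(po - pr) / (100.0 - pr) for pr, po in zip(pre, post)]
avg_of_g = mean(g_each)

# Cohen's d using a pooled standard deviation
s_pool = ((stdev(pre) ** 2 + stdev(post) ** 2) / 2) ** 0.5
d = (mean(post) - mean(pre)) / s_pool

print(round(avg_gain, 3), round(avg_of_g, 3), round(d, 2))
```

This is exactly why the raw N, pre, and post scores (with spread) matter: without them a reader can’t recompute any of these derived quantities or their uncertainties.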

Anyway, here we go:

1. Thacker, Dulli, Pattillo, and West (2014), “Lessons from large-scale assessment: Results from conceptual inventories”

Raw Data: N

Accompanying Data: None

Calculated Data:  g with standard error of the mean (mostly must be read from graphs)

2. Lasry, Charles, and Whittaker, “When teacher-centered instructors are assigned to student-centered classrooms”

Raw Data: N, Pre with standard deviation

Accompanying Data: None

Calculated Data: g with standard error of mean (must be read from graphs), APost with standard error

Raw Data: N

Accompanying Data: Gender, major, ACT

Calculated Data: g with standard error of mean (must be read from graphs)

Raw Data: N, Pre (with standard deviation), Post (with standard deviation)

Accompanying Data: Other data related to the study; CLASS scores, for example

Calculated Data: g with standard error of mean

5. Crouch and Mazur, “Peer Instruction: Ten years of experience and results”

Raw Data: N, Pre (without standard deviation), Post (without standard deviation)

Calculated Data: g (without standard deviation), d (without confidence intervals)

Raw Data: N, Pre (with SD), Post (with SD),

Accompanying Data: Gender, race, etc.

Calculated Data: Gain (with SD), d (with CI)

Raw Data: N, pre (with SE), Post (with SE)

Accompanying Data: Gender, majority/minority

Calculated Data: Gain (with SE), d (with CI)

So, what do I see?

Of my quick grab of 7 recent papers, only 3 papers meet the criteria for reporting the minimum raw data that I would think are necessary to perform meta-analyses. Not coincidentally, two of these three papers are from the same research group. Also probably not coincidentally, all three papers include data in both graphs and tables and include error bars or confidence intervals. They also consistently reported measures related to any statistical analyses performed.

Four of the papers did not fully report raw data. One of the four almost gave all the raw information needed, reporting ANCOVA-adjusted post scores rather than raw post scores. Even here the pre-score data is buried, and the APost and g scores can mostly only be gleaned from graphs. Two of the papers did not give raw data about pre or post. They reported normalized gain information with error bars shown, but these could only be read from a graph. These two papers did some statistical analyses, but didn’t report them fully. The last of the four reported pre and post scores but didn’t include standard errors or deviations. They carried out some statistical analysis as well, but did not report it meaningfully or include confidence intervals.

I don’t intend this post to be pointing the finger at anyone, but rather to point out how inconsistent we are. Responsibility is community-wide: authors, reviewers, and editors. My sense looking at these papers, even the ones that didn’t fully report data, is that this is much better than what was historically done in our field. Statistical tests were largely performed, but not necessarily reported fully. Standard errors were often reported, but often had to be read from small graphs.

There’s probably a lot some person could dig into with this, but it’s probably not going to be me.

An undergraduate student working with me this past year focused his thesis research on investigating student difficulties with projectile motion. The research consisted mostly of analyzing student responses to written problems, multiple-choice questions, and some clinical interviews. He focused mostly on student difficulties amid problem-solving, but also examined some questions targeting students’ reasoning about vectors outside the context of problem-solving.

In this post, I’m mostly just going to focus on the common difficulties that were observed in students’ problem-solving. Here are the five most common mistakes that showed up in our sample. All of these were somewhat familiar to me as an instructor, but the prevalence of some was surprising. All in all, 50% of students made at least one of these errors.

(a) Identifying the final velocity of the projectile as zero (and to a lesser extent initial velocity)

This was definitely the most common difficulty with upwards of 20% of students making this mistake. Students who made this error were very likely to make at least one of the other errors below.

Although it’s tempting to think that this error can merely be addressed by focusing students’ attention on the fact that we are talking about the speed before impact, some of our conversations with students suggest that it goes deeper. For some students, it seems to connect with difficulties with instantaneous velocity. For example, I talked with a student who suggested the velocity just before impact must be zero because velocity is distance over time and there is no more distance to travel. Beyond difficulties with velocity, my guess would be that this difficulty cannot be fully resolved without Newton’s Laws, whereby students are given explicit practice drawing free-body diagrams at various snapshots during the initiating launch, at various points during free-fall, and during the impact stage. Our students do projectile motion before Newton’s laws, and I think that’s a mistake.

(b) Identifying a non-zero acceleration in the x-direction (or identifying it as unknown)

The most common way this was instantiated was for students to identify the acceleration in both the x and y directions as 9.8 m/s/s. Some students, however, would indicate that the horizontal acceleration was an unknown to be solved for by placing a question mark next to it. Making this mistake suggests students do not understand the basic idea behind projectile motion.

(c) Difficulties translating written description of initial and final positions into x-y coordinates.

Much of this involved switching what would be correct for x and y. For example, the problem might say that a golfer hits the ball from 10 meters above the green. Students would indicate that the 10 m was associated with the x-variable rather than the y-variable. There was a decent variety in exactly how this mistake was made.

A different student, working over this summer and fall, is doing some research to investigate the extent to which this difficulty stems from reading comprehension difficulties vs. coordinate system difficulties. One of the things we are asking students to do is to indicate all the places where x=0 on both axes that represent x-y coordinates and axes that represent x vs. t graphs.

(d) Finding launching or impact angles using triangles with distance information (rather than velocity information)

A correct way (and the way students are taught) to find the launch angle is by using trigonometry with a triangle composed from the initial velocity components. We observed lots of students solving for an angle using the distances. Basically, students end up solving for the angle describing the line that connects the initial and final positions rather than the launch angle.
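To make the difference concrete, here’s a minimal Python sketch (with invented numbers) contrasting the launch angle, built from the velocity components, with the angle students were actually computing, built from the displacement:

```python
# Sketch contrasting the launch angle (from the velocity triangle) with the
# angle of the line connecting launch and landing points (from distances).
# Numbers are invented; launch and landing are at the same height.
import math

vx, vy = 20.0, 15.0        # initial velocity components (m/s)
g = 9.8                    # free-fall acceleration (m/s^2)

# Correct: launch angle from the velocity components
launch_angle = math.degrees(math.atan2(vy, vx))

# The common mistake: an angle from the displacement triangle instead
t_flight = 2 * vy / g      # time of flight for level ground
dx = vx * t_flight         # horizontal range
dy = 0.0                   # lands at launch height
position_angle = math.degrees(math.atan2(dy, dx))

print(launch_angle, position_angle)  # ~36.9 degrees vs 0.0 degrees
```

For a projectile landing at launch height the displacement triangle gives an angle of zero, which makes it especially clear that the line connecting initial and final positions says nothing direct about the launch angle.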

We are looking into student understanding of the difference between these two angles in non-computational settings. It’s possible that students are actually confused about the two, or that during problem-solving they are just following an algorithm they don’t understand and aren’t really thinking about it all that much. My guess is this really stems from our lack of any instructional focus on kinematic vector concepts and their relationship to trajectory.

(e) Not clearly discriminating between velocity and velocity components.

Most commonly this would be observed when students would solve for a component of velocity, and then later be asked about the initial speed. Many students would identify one or the other of the components as the speed, rather than combining them. A second way we observed this was when students might use the x-component of velocity in a y-component kinematic equation or vice versa. A final way this can arise is from students never finding components at all and using the initial speed in both x- and y-component equations. While some of this could certainly stem from carelessness, I’d bet most is related to vector issues.
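A minimal sketch of the correct relationship (numbers invented): the speed is the magnitude of the velocity vector, not either component alone.

```python
# Sketch of error (e): treating one velocity component as the "speed".
# The speed is the magnitude of the velocity vector. Numbers invented.
import math

vx, vy = 12.0, 5.0           # velocity components (m/s)

speed = math.hypot(vx, vy)   # sqrt(vx**2 + vy**2) = 13.0 for a 5-12-13 triangle
assert speed == 13.0

# The mistakes described above amount to reporting 12.0 or 5.0 as the speed,
# or plugging the full 13.0 m/s into both the x- and y-component equations.
```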

A lot of these difficulties seem to relate to (i) difficulties with coordinate systems, (ii) not having a developed understanding of the vector nature of motion in 2D, and (iii) not having sufficient understanding of the fundamental idea (and even phenomenology) concerning the horizontal motion.

A year ago in a used bookstore, I happened to pick up “Force + Motion: An Illustrated Guide to Newton’s Laws” by Jason Zimba. Although I wouldn’t necessarily recommend it as an introductory textbook, it has some really nice gems that could certainly be put to good use. Here are a few things that make it a worthwhile addition to your collection as a teacher:

Discussions of Ontology

In Chapter 10, “The Concept of Force”, one of the first sections in the chapter is called, “Force is not Havable”. Here and elsewhere, the author discusses the ontology of force partly by examining examples from the English language. In this section he analyzes examples that emphasize how forces always involve a pusher/pushee (e.g., “I push the wall.”). In other sections, his worked examples (instead of being problems to solve) are lyrics and quotes that use the word force. The problem as presented to the reader is to explain how the use of the word differs from the physics usage and to rewrite the line to make it more consistent with the physicist’s conception. One example is, “I helped her out of a jam, I guess, / But I used a little too much force.” from Bob Dylan’s “Tangled Up in Blue”. His solution to the problem begins, “The problem with Dylan’s use of the word force is that he makes it sound as though force is a substance that can be doled out–you can use too much, too little–like garlic…”

It’s the kind of thing that might be the right kind of task for the future physics teachers in our program.

Attention to Learner Difficulties

In discussing force diagrams, he is very careful to spell out things about Forces and diagrams that students struggle with. For example, he has sections titled:

“A Force Diagram Focuses on a Single Target”–in many texts this goes unsaid or is said only in passing. Here, a whole section is devoted to this idea.

“Forces can Turn On and Off”, in which the author writes, “Forces are evanescent things. They are not like material objects. They appear and disappear all the time… When you and I are shaking hands… once we let go of each other’s hands, both of these forces simply vanish…”

“A Force Diagram Illustrates a Single Instant in Time…” This idea has become a big emphasis in my own teaching of force.

In general, what I appreciate about the text is that it’s not just, “Here’s the correct physics understanding of these concepts.” Instead, the text seems to be focused on, “Here are ways of thinking about these physics concepts.” Many of those “ways of thinking” seem informed by the ways that learners especially need.

Attention to Intuition and Argument

In some of the worked problems about force, the author actually introduces incorrect force diagrams (e.g., a force in the direction of motion), accompanied by student dialogues about them (e.g., “the force keeps the bullet going across the field”), arguments against them (e.g., “The rifle is no longer touching the bullet”), and rebuttals (“But if there’s no force, what keeps the bullet moving?”). He ends, not with disdain for misconceptions, but with a tacit love that recognizes how confusion about the right issues is at the heart of learning: “Now we reach the heart of the matter. Bob’s instinct is that something must keep the bullet moving across the field. I’d say that’s a perfectly reasonable instinct. Bob’s mistake is to seize on force as the sort of thing that keeps the bullet going…” He goes on to introduce, but not settle, the struggles and thoughts of Newton in his attempt to address this issue with the concept of inertia.

In making this progression, he is keeping our attention on definitions, argument, intuition, and the joy of recognizing (even if not resolving) contradiction.

Refining Learners’ Intuitions to Find the “Seed of Truth”

In Chapter 12, the author introduces a section called, “The Weaker Link between Force and Velocity”. So often we can focus on “what’s correct” or “what’s wrong”, but I think this author does a nice job of returning to arguments and refining them. The idea the author returns to here is the common idea that force and velocity are linked. I’d never thought about it this way, but here’s what the author has to say about the misconception regarding the connection between force and velocity.

“What about the link between force and velocity? Is there a link at all? The answer is that over time there’s a link. If you apply a steady force Fnet to an object for a long enough time, the velocity vector v will eventually turn itself around more and more to point along Fnet… However, at any fixed instant, there is no obvious relationship between the Fnet and v vectors… Students often want their force diagrams to show them something about how their target is moving at the moment of time in question. But force diagrams can’t do that. Indeed, because Fnet points along the acceleration vector rather than the velocity vector, it would be better to say that the force diagram shows you something about how the target is about to move.”

He goes on to discuss the power that this subtle idea gives: The power to make predictions, not just descriptions. What’s happening now, actually tells you about the (near) future.
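The claim that the velocity vector gradually turns to point along a steady net force is easy to illustrate numerically. Here’s a minimal sketch (simple Euler stepping; the force, initial velocity, and unit mass are invented for illustration):

```python
# Sketch: under a steady net force, the velocity vector gradually turns
# to point along the force. Simple Euler integration; numbers invented.
import math

fx, fy = 1.0, 0.0        # steady net force along +x (N), with mass m = 1 kg
vx, vy = 0.0, 3.0        # initial velocity perpendicular to the force (m/s)
dt = 0.01                # time step (s)

angles = []              # angle between v and the force direction (+x)
for i in range(1, 10001):
    vx += fx * dt        # a = F/m with m = 1, so dv = F*dt
    vy += fy * dt
    if i in (100, 1000, 10000):
        angles.append(math.degrees(math.atan2(vy, vx)))

print(angles)            # the angle shrinks toward zero over time
```

At any single instant, though, nothing stops v from pointing well away from Fnet (here it starts out perpendicular), which is exactly the book’s point about force diagrams describing how the target is about to move.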

Awareness that Mathematics is a Language–the Case of Rearranging Algebra

He has a section called, “Rearranging Newton’s Second Law,” in which he introduces Fnet = ma as a rearrangement of a = 1/m Fnet. In doing so, the author talks about how Newton was in the business of observing accelerations (of planets) and trying to figure out the forces causing them; and so for him Fnet = ma made sense, because acceleration was the input while force was the output of his investigations. He does a nice job elsewhere of discussing these as the two major kinds of problems in physics: what motion can tell you about the forces underlying some system vs., given some known forces, what we can say about what the motion will be.

Other notable things about this book are its strong focus on vectors, graphs, reasoning, and the history of science. In general, the text has some nice insights into student thinking; and when he discusses difficulties and mistakes students tend to make, he is not disparaging. Instead, he tries to understand why students would say, think, or do those things, and it makes for a pleasurable read.

I’m sure there’s things “not to like about the text,” but that’s the game I’m playing with this book review.

If you’ve read this (or get around to it), let me know what you think in the comments.

Late Jan, February, and Early March:

~2 months of observations, light discussion, and moon journaling during inquiry units on other topics. The 1st month was done without much structure, except bi-weekly self and peer evaluations assessing mostly whether it was getting done. The 2nd month had a more explicit focus on measuring the angles of the moon and sun (after instruction on some methods of doing this), and still bi-weekly peer evaluations, but evaluating more explicit expectations about what should be in an entry. During both months, we made in-class observations whenever weather and phase were amenable. We also discussed our moon observations and thoughts on the moon about 15-20 minutes per week, sometimes spilling over more. There was one day where I think we spent 1.5 hours talking about the moon. Early discussions often revolved around, “When did we last see the moon? When do we think we will see it again?”, “How come we couldn’t see the moon last night?”, “Why (in the world) could we see the moon out during the day?”, “Where do we think we’ll see the moon and sun if we come out at the same time tomorrow?”, “When was the last time in our journals where the moon looked like this?”, “Does everybody see the same moon the same way around the globe?”, “Why does the moon sometimes appear orange?”

When I do this again, I will more deliberately introduce and give practice with compasses and coordinating that with looking at maps of our school and their homes.

Week One of Focused Moon Inquiry

Day One: Initial Moon Ideas with Crumpled Paper Toss

Everyone was asked to spend 5-10 minutes writing about why they thought the moon went through its phases. Everyone had to use words and diagrams. Everyone knew they were going to crumple up their writing and toss it in the middle of the room, and then go get one that was not their own.

With elbow partners, students read and discussed the ideas and diagrams in the paper they found, and had to prepare a whiteboard to share what they had found about others’ ideas. Here is a smattering of the ideas:

• The phases of the moon depend upon a little bit of everything: weather, temperature, the seasons, etc.
• The phases of the moon are the result of the earth’s shadow passing over the moon as the moon goes around the earth
• The phases of the moon are caused by the moon spinning (around its own axis),
• The phases of the moon are the result of a “blow out effect”, whereby when the moon is close to the sun, the light from the sun overwhelms our view of parts of the moon (like how too strong a flash can “obscure” details in a photograph).
• The phases of the moon are related to the various orbits and rotations of moon, earth, and sun,

This served the purpose of getting ideas on the table, focused us on the role of trying to understand other people’s ideas, and introduced us to the real challenge of writing about your ideas so that others can understand them. This whole unit builds toward writing a moon paper, using structures from “They Say / I Say…” by Graff & Birkenstein, so students have also had reading assignments from the book.

Day Two: Creating a Community Moon Calendar

We spent most of the day taking data from our individual journals and putting them onto large calendars I had created on the whiteboards around the room. I’ve tried in the past to “structure this” in various ways, but I found it went well this time to let students just go up and put something on the board free-for-all style. Not one at a time, just mob style. The only structure I gave was, beforehand, suggesting that we all share a convention for whether shading in a diagram shows “what’s lit” or “what’s not lit” on the moon; otherwise it gets confusing. I also encouraged students to check if someone else in class had a similar observation before putting anything up, and I encouraged students who hadn’t gotten up to do so. I also let them chat off-task if they wanted.

It was a low-pressure situation with a relaxed tempo and vibe. Many groups were up and adding things to the calendar, discussing. Others were comparing notes from their journals at their desks. Some were hanging back. A previous me would have thought ill of the lackadaisical flow and pace, which included a real lack of structure and even permitted off-task talk. Someone watching my class could easily have been confused as to why I was just letting students wander around, some on task, some off task. It could have seemed that it took us a lot of time to do this, and someone could have been wondering if it could be done more efficiently. I think the feel and pace was near perfect. [Side Note: In general, my feelings, thoughts, and even responses concerning off-task talk have changed dramatically over the last 3 years.]

Afterwards, we started looking for patterns. Patterns we discussed included ideas related to waxing and waning, and how the cycle seems to take 28-29 days to repeat. Other patterns focused on which side of the moon was lit at what time of day. Next time I want to scaffold this a little bit toward “proposing possibilities” language, such as “I’m noticing that…, I’m wondering if…” One reason is that some patterns students suggest will be “correct” (from a scientist’s perspective) and others will not, and I don’t want to be put in the position (now) of having to be the arbiter of that. Second, I want students to feel OK proposing possibilities. Third, I want every student to be in the position of deciding whether or not they understand the pattern being proposed, and whether or not they agree that the pattern might be there.

Day Three: Analyzing Writing Moves (and Modeling a Specific Moon Day)

Students had just submitted their first writing assignment the previous night. I took an example from one student who had decided to write about the blowout theory–what the idea was and why they had come to believe that the blowout theory could not be an explanation for the moon phases. The assignment had been to write about one idea from class and then respond to it–either agree and give reasons or disagree and give reasons.

In class, students were given the excerpt and then prompted to:

• (A) Highlight phrases within the text that signal to the reader whether the author is discussing what “Others Say” about the moon or what “They say” about the moon.
• (B) Highlight phrases within the text that signal to the reader that an idea is about to be clarified, elaborated, or compare/contrasted.
• (C) Highlight any other important phrases or words within the text that you think stand out. Be ready to explain why you chose a phrase, and what purpose you think that phrase serves in helping the reader understand the text.

Here is the (snippet of) student writing that was analyzed and discussed:

“One observer believes that the Moon’s phases are in direct correlation to the specific distance of the Moon from the Sun as it travels in an orbital path around the Earth. They say that when the Moon reaches its closest possible orbital position next to the Sun, the bright sunlight overpowers the lesser light of the Moon, thereby making the Moon virtually invisible to our eyes. Then, as the Moon continues to travel in its orbital path around the Earth – it is simultaneously changing in its distance from the Sun – and, that is what causes us to observe incremental changes in the phases of the Moon. This theory seems to imply that the further away the Moon moves from the Sun, we are then able to see a larger area of the Moon’s surface.

Personally, I disagree with the first observer if by their explanation they are implying that the Moon is a lesser source of light than the Sun. Based upon that premise, we wouldn’t be able to observe different phases of the Moon’s surface. Instead – depending upon how far the Moon is away from the Sun along its orbital path around Earth – we would only see varying intensities in brightness of the light upon the entire surface of the Moon itself. Thus, I say the Moon merely reflects the light which originates from the Sun.”

After students worked in groups to read and discuss, we put the text under the document camera, and students suggested lines to highlight and gave reasons.

– One group had decided that the moon could not orbit around the equator, because it would seem that there would never be a full moon. They were excitedly toying with the possibility that the moon went over the poles instead.

– One group was becoming increasingly confident with the shadow theory.

– One group was becoming increasingly disenchanted with the shadow theory.

It’s not quite true that the groups were so homogeneous; perhaps it’s better to say that these three things were happening, with a loose correlation to groups. I know there was one anti-shadow person in a group that was swaying toward the shadow theory, and I know that two students in particular were driving the disenchantment with the shadow theory.

I’ll write later about what happened the following week. I actually had to miss class the following Monday and so students had to run class without me.

Here’s a quick outline for me to remember:

Monday: Student-led Class–explaining the shadow theory in depth; and introducing objections. [Bubble Poppers vs. Brick Builders]

Wednesday: Building Foothold Ideas: “What We agree on, What we don’t agree on, What questions we still have”; Shadow Theory Revisited

Friday: Olaf’s Cousin who lives in a Rocket Ship; and Spinny Chair Modeling