I was motivated during my flight today to come up with physics problems that have multiple right answers, a low barrier to entry, and a high ceiling. Here's my go at it, along with some thoughts.
The idea behind these is that students are supposed to come up with as many ways as possible.
1. Draw as many velocity vs time graphs as you can that show an object moving +45m from where it started.
Extend 1: Describe each in words.
Extend 2: Pick one and draw its corresponding position vs time graph.
2. Draw pictures depicting situations where the normal force exerted on an object is different from the object's weight.
Extend 1: Pick one and draw a free-body diagram that helps you explain your reasoning.
Extend 2: Categorize them by Fn > mg and Fn < mg.
3. Draw a picture of a situation where the initial and final states consist entirely of potential energy.
Extend 1: Draw energy pie charts for the initial and final state and at least two in between.
4. Identify the masses and initial velocities for two objects that, when they collide, stick together and remain motionless.
5. Draw free body diagrams for an object that will accelerate at 1 m/s/s.
6. Draw velocity vs time graphs and categorize them into those that involve an object turning around and those that do not.
Extend 1: Come up with a rule.
Extend 2: Do the same for position vs time.
7. Draw a force that acts on an extended object such that the torque due to that force is CCW.
Extensions: multiple forces where net torque is…
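As a quick sanity check on problem 1, here's a sketch (with three made-up motions) verifying by numerical integration that very different velocity vs time graphs can all produce the same +45 m displacement:

```python
def displacement(v, t_end, n=200_000):
    """Midpoint-rule estimate of the integral of v(t) from t=0 to t_end."""
    dt = t_end / n
    return sum(v((i + 0.5) * dt) for i in range(n)) * dt

# Three different motions, each displacing the object +45 m:
motions = {
    "constant 9 m/s for 5 s":               (lambda t: 9.0, 5.0),
    "uniform speed-up, 0 to 18 m/s in 5 s": (lambda t: 3.6 * t, 5.0),
    "backward 2 s, then forward 5 s":       (lambda t: -5.0 if t < 2.0 else 11.0, 7.0),
}

for label, (v, t_end) in motions.items():
    print(f"{label}: {displacement(v, t_end):+.2f} m")
```

The third motion is the interesting one for class discussion: the object spends time with negative velocity (moving away from +45 m) and still ends up there.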
Brian’s Development Rules of Thumb:
– Situations should involve relationships with wiggle room. For example, consider a = Fnet / m. Not only can Fnet and m vary, but the same Fnet can be produced in different ways. Torque similarly has wiggle room: qualitatively in location, angle, and choice of pivot; quantitatively in force, distance, and angle.
– Design around tasks that get close to known difficulties, but don't over-constrain things to make it narrowly about the difficulty. For example, don't do "negative acceleration and speeding up". Just do speeding-up velocity graphs and see what happens. Or if you are going to go right at difficulties, don't make it a trick or about you being clever. My normal force situation, I think, tackles a difficulty in a straightforward manner, and it may work because there are so many ways to do it.
– I like processes where initial and end states are constrained but not the process in between. (Energy example above). This provides a large variety.
– I think you want to choose representations very deliberately. Perhaps ask students to start with or move to representations that support semi-quantification, or ask them to extend to multiple representations. I think it's OK to start with a picture, but it's important to bridge to a representation (the normal force and energy tasks are examples).
– When using in class, I would want to think carefully about the sequence of individual work leading to group work leading to whole class sharing and discussion.
– If I designed the task with a particular issue to come up and it didn’t spontaneously, I would just introduce it and ask students to consider it.
– I think these tasks are very amenable to the Five Practices for Orchestrating Productive Discussions framework. (Link to come on an edit)
Anyway, what do you think? I’m interested in what others would come up with.
I stumbled across a decent simulation while I was reading up on ISLE. The simulation can be found here:
I think these simple kinematics simulations are pretty cool, especially the four problems at the end, where you have to adjust the initial position, initial velocity, and initial acceleration to match the motion map. Nothing fancy, but pretty engaging.
Content Learning: It’s a nice bridge between qualitative and quantitative representation of kinematics, supporting mathematical sense-making rather than plug-and-chug approaches. It would likely support students distinguishing between position, velocity, and acceleration. It would also provide students with opportunities to wrestle with the meaning of algebraic sign for each of those quantities.
Pedagogical Affordances: The sequence begins with observations and moves toward application. It's game-like in a productive way: fun, challenging, easy to jump into and try, with immediate feedback. You'd probably just have to keep students from mindlessly manipulating values to match the motion.
The full range of simulations, which I haven't looked at closely, is linked here: http://wps.aw.com/aw_young_physics_11/
This is not a comprehensive treatment of some complex ideas, but here are some thoughts from today.
I bought myself a copy of Greg Jacobs' 5 Steps to a 5 to add to our library for pre-service physics teachers. In reading it, I've come across a statement that is representative of ontological differences in how physicists think about a few concepts in introductory physics, which I think stems from differences in how one can interpret the equal sign. I don't have the exact quote, but I think Greg implies in the text that impulse is both the change in momentum and the product of net force and its corresponding time interval.
Impulse = Fnet·Δt = Δp
From this perspective, the equal sign allows one to say that all three things are equal, both quantitatively and ontologically. Impulse is a word for both things.
My thinking about this mathematically is more like
Impulse ≡ F·Δt. We'll define the impulse of a single force to be this product (or, for a varying force, the integral).
Then we can add up the individual impulses to get the net impulse: Net Impulse ≡ Σ Impulses = Σ F·Δt = Fnet·Δt.
By applying Newton's 2nd law, you get that Fnet·Δt = Δp.
Thus Net Impulse = Δp.
To me, impulses are causal influences that together cause a change in momentum, which is the effect. So to me, impulse is not change in momentum, not ontologically, because one is the cause and the other is the effect. So I guess I see two differences, and they may or may not be related. First, I think we can define impulses for individual forces (and I'm not sure what Greg would think); second, I think that impulses are events, whereas change in momentum is a change in state. Since I think they are ontologically different, I would never want to say that impulse is a change in momentum.
Of course, you can take such a momentum perspective even further, such that even static situations involve momentum flow. In this case, each individual impulse actually flows momentum, such that the net momentum flow is zero. That is, in this perspective, each impulse (cause) has an effect (a momentum flow), and the momentum flows combine to create a net momentum flow. In other words, the mathematical steps above come in a different order, because Newton's 2nd law is applied first and then the sum is taken.
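Whichever ontology you prefer, the two bookkeeping orders agree numerically. Here's a minimal sketch with made-up numbers: sum the individual impulses first, and compare against the change in momentum obtained from Newton's 2nd law.

```python
# Made-up scenario: two constant forces act on a 2 kg object for 3 s.
m, dt = 2.0, 3.0
forces = [10.0, -4.0]                    # N, along a single axis

impulses = [F * dt for F in forces]      # impulse of each individual force
net_impulse = sum(impulses)              # sum of impulses = Fnet * dt

a = sum(forces) / m                      # Newton's 2nd law: a = Fnet / m
dp = m * a * dt                          # change in momentum over the interval

print(net_impulse, dp)                   # both 18.0 kg·m/s
```

The numbers are identical; the disagreement in the post is purely about which quantity is cause and which is effect.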
And of course, similar differences in conceptualizations exist when we think about work, net work, change in kinetic energy, and the product of Force and displacement.
I’m not necessarily convinced that any way of thinking about this is “correct”, but I do think it’s useful to be able to acknowledge and attempt to reconcile the different ways of thinking about it.
People who I suspect will have an opinion on this: Leslie Atkins, Andy Rundquist, Benedikt Harrer, and many others.
Now that I've had some time away from it, I want to try to reflect on what was a truly wonderful class and teaching experience that occurred in my inquiry / physical science course for elementary education majors this past spring. It was a class where we learned a whole lot together while laughing almost every day (sometimes very loudly).
The bulk of the course consisted of two very different parts.
Part One: 7 weeks of Guided Inquiry using the Physical Science and Everyday Thinking Curriculum (Focused around Energy)
Part Two: 5 weeks of Responsive Inquiry informed by facilitation from Student Generated Scientific Inquiry (Focused around the Moon)
My gut feeling about the class has been that a large part of what made it so great had nothing to do with anything I was doing differently. The story in my head goes: "I just happened to have been lucky with the group of students I had. In terms of individual students I was lucky, but I was also just lucky in terms of the group as a whole. Things just happened to fall into place with the right people." I think there is a lot of truth to that. My inquiry class can be difficult to navigate for many students, especially those who are not used to taking responsibility for their own learning, or who have never had to grapple with uncertainty and the unknown for extended periods of time, or who are not used to really talking and listening as a way of learning. In the past, I've had mixed success, often with one or two disgruntled students and a varying number of students who strongly embrace the class.
This past semester, the story would merely go that I just happened to have a group of students who, for whatever reason, really found ways and reasons to embrace these experiences. That's not to say that students were never uncomfortable or frustrated, but their discomfort and frustrations occurred within an overall supportive environment rather than being a defining, pervasive aspect of the course. But still, I'd like to be able to walk away from that experience with more than just, "It was luck. You just have to get the right combination of students." So I hope here to reflect on things that I may have done differently.
Guided Inquiry before Open Inquiry: Students had 7 weeks of guided inquiry, in which there were short periods of uncertainty with strong content scaffolding, importantly, before having extended periods of uncertainty with less scaffolding on content and more scaffolding on inquiry. This gave students positive experiences with learning science content, letting them dabble in inquiry waters before jumping in. Because I can't possibly follow the structured curriculum closely, students also got to experience moments of intense unscripted inquiry and responsive whole-class discussion. I had explicit discussions with the class about the differences between the worksheet science we were doing and the real science we were doing when it occurred more spontaneously. Our class spent a lot of time during our guided inquiry into energy talking about Amy's pee theory and investigating phenomena that, according to the curriculum, should have been homework practice but instead became rich contexts for extended inquiry. When students didn't believe a simulation they were investigating, we improvised to do our own experiments to help settle the issue. I think this also meant that, in the first part of the course, I could focus on being a good teacher rather than being a curriculum designer/developer.
Structuring the Media that Structured Classroom Discourse: I spent a lot of time this past semester working to craft environments for whole-class discussion. In previous classes, I mostly thought about the seating arrangements (e.g., tables, circles, etc.) and methods for sharing and collaborating on students' written work (whiteboards, document cameras, etc.). This semester my environments for discourse were much richer and required a lot more prep work. For example, when discussing a particular energy representation for a phenomenon we couldn't get consensus on, I cut out big colored arrows, boxes, and circles with labels. Previously, I would have had students do whiteboards and share out, or have a whole-class discussion while making a consensus diagram at the board. Instead, we had these magnetized manipulatives to move around the board. One at a time, students had to come up and add, change, or take away something at the board and give reasons. I did a similar thing with Venn diagrams when comparing related but different ideas students were struggling with: big Venn diagrams on the board and words students could put in different places. Groups worked through all the choices together, but then each group was given a select portion to put up on the common Venn diagram. We only talked about the ones there was disagreement about. When we got to the moon, I spent a whole weekend cutting out 2D and 3D manipulatives, including many of the student-generated representational supports that had been invented in previous semesters. All in all, I spent a lot of time thinking about how to give students just the right balance of constraints and freedom to have meaningful discussion.
Structuring Students' Writing: Students have always had to do a lot of writing for class, but this time I did a lot more to structure students' writing: to give them explicit expectations and feedback. The PSET curriculum already has a strong structured writing component, in which students learn about, practice, and both give and receive feedback on three criteria: completeness, clarity, and consistency. In the responsive, more open inquiry unit, students had to read, practice, and give/get feedback related to readings from "They Say / I Say". For their large, original piece of writing about the moon, students had to write about and respond to ideas from class, which really helped students care about and be motivated to keep good records of their peers' thinking without me having to grade notebooks on such matters. Previously, I had tried to structure students' writing, but I never structured it well enough for students to really understand, or for me to stick to giving feedback closely tied to those structures.
Change in Day/Time Structure: The class changed from meeting 2 days a week (3 hours each meeting) to 3 days a week (2 hours each meeting). I don't think this is insignificant, both for students and for me. For students, three hours twice a week is rough. And for me, planning for 2 hours is much easier than planning for 3. Plus, in a responsive inquiry setting, in which improvisation is often necessary mid-instruction, many more things can go wrong in 3 hours than in 2. You also get more chances, with three meetings, to reflect on what's happened and plan.
No Attendance Grade (Except for a Participation Self-Assessment): Previously, because being continuously present and participating is so critical to coherence in the classroom (both for individual students and the class), I had an attendance policy. This semester, I just asked students to self-assess their participation along a rubric several times throughout the semester. For the most part, students gave themselves honest assessments. As part of those assessments, they had to give themselves goals for next time and self-evaluate next time with evidence. I can say that participation was about the same as before–pretty good. Before, students felt like I was punishing them for not showing up. Now, students usually felt like they had punished themselves. Students also self-assessed and peer-assessed on their moon journals.
I guess it boils down to (1) scaffolding early experiences for success by using a structured curriculum, (2) improving clarity about expectations (especially writing), (3) use of self-assessment and peer-assessment, (4) more thorough preparation for classroom discussions, and (5) more workable timetable / schedule.
I think those things are tangible things I can think about that were different. I’m sure there are lots of less tangible things I may have done in terms of how I interacted with students, but I can’t say for certain. I know my interactions with students were very positive, but the nature of interactions are complicated and can’t be solely attributed to things I did.
Was it all in my head? No, I don’t think so.
So, it wasn’t just me that felt the class was so wonderful. For the most part, evidence suggests that students tremendously valued the time they spent in class. In other classes, I typically get notes from students saying things like, “I admire your professionalism and your passion for your chosen field,” but in my inquiry class this past spring, students wrote things like, “You really are a great friend,” “We love you,” and “Love your guts”.
Student evaluations also suggest that students felt this classroom experience was more worthwhile and effective than my previous classes. Two categories that are particularly significant on our evaluations are, "How worthwhile was this course in comparison with other courses you have taken at this university?" and "How would you rate the overall teaching effectiveness of your instructors?" Every student answered those two questions as highly as possible. Here are graphs showing trends in this class over the last 3 years.
The sad ending to this post is that I am not likely to be teaching this class in the near future. The elementary education program here has been declining in enrollment, which has meant that our offerings of the course are now half what they used to be. I am not slated to teach the class next year. I suppose it's nice to end on a high note, so there's that.
My sense has been that the PER community still implements subpar standards of research reporting, which minimizes our ability to carry out meaningful meta-analyses. I'm not an expert, but I'm assuming that scores with standard deviations / standard errors would be necessary for a meta-analysis, right? So I'm curious. I'm going to quickly take a look at some recent papers that report FCI scores as a major part of their study, and see what kind of information is provided by the authors. Here's how I'll break it down.
N = number of students
Pre = FCI pre-score either as raw score out of 30 or a percentage (with or without standard deviation / standard error of mean)
Post = FCI post-score either as a raw score out of 30 or a percentage (with or without standard deviation / standard error of mean)
g = normalized gain with or without error bars / confidence intervals
<g> = average normalized gain with or without error bars / confidence intervals
Gain = Post minus Pre (with or without standard deviation / standard error of mean)
APost = ANCOVA-adjusted post score (with or without standard error of mean)
d = Cohen’s d is a measure of effect size (with or without confidence intervals)
I'm leaving out statistical transparency such as t-statistics or p-values, or other measures from ANOVA, and I'm sure there are others, such as accompanying data about gender, under-represented minorities, ACT scores, declared major, etc.
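For concreteness, here's a quick sketch of how the two most common calculated quantities above are typically computed (using Hake's definition of normalized gain and a pooled-standard-deviation form of Cohen's d; both are standard, but individual papers' conventions can vary):

```python
import statistics as stats

def normalized_gain(pre_pct, post_pct):
    """Hake's normalized gain for percent scores (one student or a class average)."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)

def cohens_d(group_a, group_b):
    """Effect size between two lists of scores, using the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    va, vb = stats.variance(group_a), stats.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (stats.mean(group_a) - stats.mean(group_b)) / pooled_sd

print(normalized_gain(40.0, 70.0))   # 0.5
```

Note that computing <g> requires individual pre/post pairs (or at least class averages), and computing d requires standard deviations, which is exactly why unreported raw data blocks meta-analysis.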
Anyway, here we go:
1. Thacker, Dulli, Pattillo, and West (2014), "Lessons from large-scale assessment: Results from conceptual inventories"
Raw Data: N
Accompanying Data: None
Calculated Data: g with standard error of the mean (mostly must be read from graphs)
2. Lasry, Charles and Whittaker, “When teacher-centered instructors are assigned to student-centered classrooms”
Raw Data: N, Pre with standard deviation
Accompanying Data: None
Calculated Data: g with standard error of mean (must be read from graphs), Apost with standard error,
Raw Data: N
Accompanying Data: Gender, major, ACT
Calculated Data: g with standard error of mean (must be read from graphs)
Raw Data: N, Pre (with standard deviation), Post (with standard deviation),
Accompanying Data: Others related to study, CLASS, for example
Calculate Data: g with standard error of mean
5. Crouch and Mazur, "Peer Instruction: Ten years of experience and results"
Raw Data: N, Pre (without standard deviation), Post (without standard deviation)
Calculated Data: g (without standard deviation), d (without confidence intervals)
Raw Data: N, Pre (with SD), Post (with SD),
Accompanying Data: Gender, race, etc.
Calculated Data: Gain (with SD), d (with CI)
Raw Data: N, Pre (with SE), Post (with SE)
Accompanying Data: Gender, majority/minority
Calculated Data: Gain (with SE), d (with CI)
So, what do I see?
Of my quick grab of 7 recent papers, only 3 meet the criteria for reporting the minimum raw data that I would think are necessary to perform meta-analyses. Not coincidentally, two of these three papers are from the same research group. Also probably not coincidentally, all three papers include data in both graphs and tables, with error bars or confidence intervals. They also consistently reported measures related to any statistical analyses performed.
Four of the papers did not fully report raw data. One of the four gave almost all the raw information needed, reporting ANCOVA-adjusted post scores rather than raw post scores; even there, the pre-score data is buried, and the APost and g scores can only be gleaned from graphs. Two of the papers did not give raw data about pre or post scores. They reported normalized gain information with error bars shown, but these could only be read from a graph. These two papers did some statistical analyses but didn't report them fully. The last of the four reported pre and post scores but didn't include standard errors or deviations. They carried out some statistical analysis as well, but did not report it meaningfully or include confidence intervals.
I don't intend this post to be pointing the finger at anyone, but rather to point out how inconsistent we are. Responsibility is community-wide: authors, reviewers, and editors. My sense looking at these papers, even the ones that didn't fully report data, is that this is much better than what was historically done in our field. Statistical tests were largely performed, but not necessarily reported fully. Standard errors were often reported, but often needed to be read from small graphs.
There's probably a lot someone could dig into with this, but it's probably not going to be me.
An undergraduate student working with me this past year focused his thesis research on investigating student difficulties with projectile motion. The research consisted mostly of analyzing student responses to written problems, multiple-choice questions, and some clinical interviews. He focused mostly on student difficulties amid problem-solving, but also some questions targeting their reasoning about vectors not in the context of problem-solving.
In this post, I'm mostly just going to focus on the common difficulties that were observed in students' problem-solving. Here are the five most common mistakes that showed up in our sample. All of these were somewhat familiar to me as an instructor, but the prevalence of some was surprising. All in all, 50% of students made at least one of these errors.
(a) Identifying the final velocity of the projectile as zero (and to a lesser extent initial velocity)
This was definitely the most common difficulty with upwards of 20% of students making this mistake. Students who made this error were very likely to make at least one of the other errors below.
Although it's tempting to think that this error can merely be addressed by focusing students' attention on the fact that we are talking about the speed just before impact, some of our conversations with students suggest that it goes deeper. For some students, it seems to connect with difficulties with instantaneous velocity. For example, I talked with a student who suggested the velocity just before impact must be zero because velocity is distance over time and there is no more distance to travel. Beyond difficulties with velocity, my guess would be that this difficulty cannot be fully resolved without Newton's laws, whereby students get explicit practice drawing free-body diagrams at various snapshots: during the initiating launch, at various points during free-fall, and during the impact stage. Our students do projectile motion before Newton's laws, and I think that's a mistake.
(b) Identifying a non-zero acceleration in the x-direction (or identifying it as unknown)
The most common way this was instantiated was for students to identify the acceleration in both the x and y directions as 9.8 m/s/s. Some students, however, would indicate that the horizontal acceleration was an unknown to be solved for, by placing a question mark next to it. Making this mistake suggests students do not understand the basic idea behind projectile motion.
(c) Difficulties translating written description of initial and final positions into x-y coordinates.
Much of this involved switching what would be correct for x and y. For example, the problem might say that a golfer hits the ball from 10 meters above the green. Students would indicate that the 10 m was associated with the x-variable rather than the y-variable. There was a decent variety in exactly how this mistake was made.
A different student, working over this summer and fall, is doing some research to investigate the extent to which this difficulty stems from reading comprehension difficulties vs. coordinate system difficulties. One of the things we are asking students to do is to indicate all the places where x=0 on both axes that represent x-y coordinates and axes that represent x vs. t graphs.
(d) Finding launching or impact angles using triangles with distance information (rather than velocity information)
A correct way (and the way students are taught) to find the launch angle is by using trigonometry with a triangle composed from the initial velocity components. We observed lots of students solving for an angle using the distances. Basically, students end up solving for the angle describing the line that connects the initial and final positions rather than the launch angle.
We are looking into student understanding of the difference between these two angles in non-computational settings. It's possible that students are actually confused about the two, or that during problem-solving they are just following an algorithm they don't understand and aren't really thinking about it all that much. My guess is this really stems from our lack of any instructional focus on kinematic vector concepts and their relationship to trajectory.
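A quick numerical sketch (with made-up launch values) shows how different the two angles are: the launch angle comes from the velocity-component triangle, while the students' distance triangle gives the angle of a line connecting two positions (here, the launch point and the apex of the trajectory).

```python
import math

g = 9.8                                    # m/s^2
v0, theta = 20.0, math.radians(60.0)       # made-up launch speed and angle
vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)

# Correct: launch angle from the velocity components.
launch_angle = math.degrees(math.atan2(vy, vx))            # 60.0 degrees

# The error: a triangle built from distances, e.g. launch point to apex.
t_peak = vy / g
x_peak, y_peak = vx * t_peak, vy**2 / (2 * g)
distance_angle = math.degrees(math.atan2(y_peak, x_peak))  # about 40.9 degrees
```

For this launch the distance-triangle angle is almost 20 degrees too shallow, so the two procedures are far from interchangeable, which may be part of why students don't notice the error.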
(e) Not clearly discriminating between velocity and velocity components.
Most commonly this would be observed where students would solve for the components of velocity and then later be asked about the initial speed. Many students would identify one or the other of the components as the speed, rather than combining them. A second way we observed this was when students would use the x-component of velocity in a y-component kinematic equation, or vice versa. A final way this can arise is from students never finding components at all and using the initial speed in both x- and y-component equations. While some of this could certainly stem from carelessness, I'd bet most of it is related to vector issues.
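The correct recombination is just the Pythagorean one; a one-line sketch with made-up components:

```python
import math

vx, vy = 30.0, 40.0            # made-up velocity components, m/s
speed = math.hypot(vx, vy)     # sqrt(vx^2 + vy^2): 50.0 m/s, not 30 or 40
```

Each of the three errors above amounts to substituting vx, vy, or the full speed where one of the other two belongs.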
A lot of these difficulties seem to relate to (i) difficulties with coordinate systems, (ii) not having a developed understanding of the vector nature of motion in 2D, and (iii) not having sufficient understanding of the fundamental idea (and even phenomenology) concerning the horizontal motion.
I happened to pick up a book a year ago while in a used book store: "Force + Motion: An Illustrated Guide to Newton's Laws" by Jason Zimba. Although I wouldn't necessarily recommend it as an introductory textbook, it has some real gems which could certainly be put to good use. Here are a few things that make it a worthwhile addition to your collection as a teacher:
Discussions of Ontology
In Chapter 10, "The Concept of Force," one of the first sections is called "Force Is Not Havable." Here and elsewhere, the author discusses the ontology of force partly by examining examples from the English language. In this section he analyzes examples that emphasize how forces always involve a pusher/pushee (e.g., "I push the wall."). In other sections, his worked examples (instead of being problems to solve) are lyrics and quotes that use the word force. The problem as presented to the reader is to explain how the use of the word differs from the physics usage and to rewrite the line to make it more consistent with the physicist's conception. One example is "I helped her out of a jam, I guess, / But I used a little too much force," from Bob Dylan's "Tangled Up in Blue." His solution to the problem begins, "The problem with Dylan's use of the word force is that he makes it sound as though force is a substance that can be doled out–you can use too much, too little–like garlic…"
It's the kind of thing that might be the right kind of task for the future physics teachers in our program.
Attention to Learner Difficulties
In discussing force diagrams, he is very careful to spell out things about Forces and diagrams that students struggle with. For example, he has sections titled:
"A Force Diagram Focuses on a Single Target"–in many texts this goes unsaid or is said only in passing. Here, there's a whole section devoted to this idea.
"Forces Can Turn On and Off," in which the author writes, "Forces are evanescent things. They are not like material objects. They appear and disappear all the time… When you and I are shaking hands… once we let go of each other's hands, both of these forces simply vanish…"
“A Force Diagram Illustrates a Single Instant in Time…” This idea has become a big emphasis in my own teaching of force.
In general, what I appreciate about the text is that it's not just, "Here's the correct physics understanding of these concepts." Instead, the text seems to be focused on, "Here are ways of thinking about these physics concepts." Many of those "ways of thinking" seem informed by the ways that learners especially need.
Attention to Intuition and Argument
In some of the worked problems about force, the author actually introduces incorrect force diagrams (e.g., a force in the direction of motion), accompanied by student dialogues about them (e.g., "the force keeps the bullet going across the field"), arguments against them (e.g., "The rifle is no longer touching the bullet"), and rebuttals ("But if there's no force, what keeps the bullet moving?"). He ends, not with disdain for misconceptions, but with a tacit love that recognizes how confusion about the right issues is at the heart of learning: "Now we reach the heart of the matter. Bob's instinct is that something must keep the bullet moving across the field. I'd say that's a perfectly reasonable instinct. Bob's mistake is to seize on force as the sort of thing that keeps the bullet going…" He goes on to introduce, but not settle, the struggles and thoughts of Newton in his attempt to address this issue with the concept of inertia.
In making this progression, he keeps our attention on definitions, argument, intuition, and the joy of recognizing (even if not resolving) contradiction.
Refining Learners' Intuition to Find the "Seed of Truth"
In Chapter 12, the author introduces a section called "The Weaker Link Between Force and Velocity." So often we can focus on "what's correct" or "what's wrong," but I think this author does a nice job of returning to arguments and refining them. The idea the author returns to here is the common idea that force and velocity are linked. I'd never thought about it this way, but here's what he has to say about the misconception regarding the connection between force and velocity.
"What about the link between force and velocity? Is there a link at all? The answer is that over time there's a link. If you apply a steady force Fnet to an object for a long enough time, the velocity vector v will eventually turn itself around more and more to point along Fnet… However, at any fixed instant, there is no obvious relationship between the Fnet and v vectors… Students often want their force diagrams to show them something about how their target is moving at the moment of time in question. But force diagrams can't do that. Indeed, because Fnet points along the acceleration vector rather than the velocity vector, it would be better to say that the force diagram shows you something about how the target is about to move."
He goes on to discuss the power that this subtle idea gives: The power to make predictions, not just descriptions. What’s happening now, actually tells you about the (near) future.
Awareness that Mathematics is a Language–the Case of Rearranging Algebra
He has a section called "Rearranging Newton's Second Law," in which he introduces Fnet = ma as a rearrangement of a = (1/m) Fnet. In doing so, the author talks about how Newton was in the business of observing accelerations (of planets) and trying to figure out the forces causing them; for him, Fnet = ma made sense because acceleration was the input and force was the output of his investigations. The author does a nice job elsewhere of discussing these as the two major kinds of problems in physics: what can motion tell you about the forces underlying some system, vs. given some known forces, what can we say about what the motion will be?
Other notable things about this book are its strong focus on vectors, graphs, reasoning, and the history of science. In general, the text has some nice insights into student thinking; and when the author discusses difficulties and mistakes students tend to make, he is not disparaging. Instead, he tries to understand why students would say, think, or do those things, and that makes for a pleasurable read.
I'm sure there are things "not to like about the text," but that's the game I'm playing with this book review.
If you've read this (or get around to it), let me know what you think in the comments.