School kids outshine adult commenters in thinking critically about evidence. And so what?

“Science educators, here’s what you’re up against. A debate in the comments on this story over whether the movie “Mission to Mars” proves that ancient Martian life was used to seed life on Earth.”

There’s no way I could pass over a Facebook status like this one. My friend K.O. recently made the comment in reference to a Popular Science article called “A Significant Portion of Mars Could Be Friendly to Life, New Models Suggest.” The article itself is a short summary of a paper published in the journal Astrobiology, which uses models to predict how deep below the surface a Martian microbial biosphere might extend. And while I might quibble with the author’s use of the phrase “slam-dunk” to describe the evidence for water on Mars, the interesting story isn’t in the article. The real story, for me and for K.O., is in the comments.



Mourning science on December 6 (Repost)

Originally posted December 10, 2010

“For 45 minutes on Dec. 6, 1989 an enraged gunman roamed the corridors of Montreal’s École Polytechnique and killed 14 women. Marc Lepine, 25, separated the men from the women and before opening fire on the classroom of female engineering students he screamed, “I hate feminists.” Almost immediately, the Montreal Massacre became a galvanizing moment in which mourning turned into outrage about all violence against women.”

This summary from the CBC news archives describes well the horrifying incident of that day and the impact it has had across Canada. At most Canadian universities the day is marked with candlelight gatherings and vigils for victims of violence against women. To this day, though, I’ve never been to one.


Who is the traditional ‘right type’ of person for science?

Traditions in science education? At first that might seem like a strange way to think about science in schools. The word ‘tradition’ often conjures images of formal traditions: holiday dinners, Christmas carols, Festivus poles, and wedding ceremonies. But that’s not the only kind. As Greg Laden wrote recently, traditions are also those things that we take for granted, those practices and ways of thinking that we explain by saying “it’s just always been that way”.  Science education doesn’t really have formal traditions (there’s no commemorative long weekend as far as I know) but it definitely has this kind of more embedded tradition.


WEPAN Webinar: Identity and persistence in STEM (with link to recording)

In September, I had the pleasure of presenting an online seminar for the Women in Engineering ProActive Network (WEPAN). In the session, I spoke about the concept of science identity and how it can help researchers like me bring together studies on interest, encouragement, confidence, and competence in science. Thinking about science identity, rather than all of those other outcomes separately, is really helpful for finding focused strategies that can help bring non-traditional students into science and help them stay. After describing science identity, I presented results from two studies that I’ve been involved in. The first, which I’ve blogged about before, looked for high school teaching strategies and classroom practices that were related to strong science identities in first-year physics students. The second is a study that asked students about the expectations they experience in their science classes and how those expectations affect their identification with science and their desire to study it in the future. You can listen to the recording and follow the slides on Vimeo. Please excuse how nervous I must sound. It was a new (but fun) experience to present a seminar for an audience that I couldn’t see!

Identity and Persistence in STEM: WEPAN Professional Development Webinar 09 22 11 from Women in Engr ProActive Network on Vimeo.

Learning about science writing from kids

What do kids think about the science that they read? What lessons can science writers learn from them? Tonight, I got 5 minutes to try to answer those questions at the National Association of Science Writers annual meeting. This is my first year attending the conference, so I thought it would be fun to jump right in and give an Ignite talk. These are 5-minute presentations that must use exactly 20 slides, advancing automatically every 15 seconds. It’s a fast-paced and fun format. In my 5 minutes I shared six lessons that I’ve learned from what kids say about the science reading they do. Ben Young Landis (@younglandis) snapped this photo of my presentation.
[Photo via Twitpic: “Welcome @mcshanahan to #sciwri11 and NASW! Already at it w an…”]


Why do I do social research in science education? Hint: it’s not because I don’t care about learning

On Thursday, the usually provocative Globe and Mail columnist Margaret Wente wrote this: “Too many teachers can’t do math, let alone teach it.” She begins by describing plans at the University of Saskatchewan to reduce the required math education courses in their elementary teacher education program.  I don’t have specific information on the USask proposal but in general I would agree with Wente that math (and science and social studies and health…) education courses are very important for elementary teachers.


Students don’t lose their ability to think scientifically

On Tuesday night, just as I was settling in to read before falling asleep, I took one last look at Twitter to see if anything interesting had been posted. Overseas friends were up for the morning and, since I wasn’t feeling entirely sleepy yet, a nice meaty article or blog post was just the thing I was looking for. One headline from Scientific American caught my eye:

“More Than Child’s Play: Ability to Think Scientifically Declines as Kids Grow Up. Young children think like researchers but lose the feel for the scientific method as they age”

A statement like that would have serious implications for science education and for my teacher education students. It was a must-read.

And a slightly frustrating one.

The headline link led to a short research summary by Sharon Begley (note that the free excerpt actually includes the full piece, so you’re not missing anything even without a subscription). It describes a study published in the September 2011 issue of the journal Cognition by Claire Cook (MIT), Noah Goodman (Stanford), and Laura Schulz (MIT). The study is a clever investigation of preschoolers’ stronger-than-expected ability to find ways of understanding causation when they are presented with an ambiguous situation. The children were shown play blocks that, when pressed against a toy box, sometimes made the box light up. They were encouraged to play with the blocks, and some children found unexpected ways to isolate them so they could test each one separately to see if it made the box light up. The first four paragraphs do a nice job of explaining the research (aside from a misuse of the word variable, but more on that in a moment). The fourth paragraph ends with the sentence “That suggests basic scientific principles help very young children learn about the world.” Cool. And nothing wrong with that statement. It captures the study and the researchers’ conclusions well.

It’s the final paragraph that inspired both the headline and my frustration.

“The growing evidence that children think scientifically presents a conundrum: If even the youngest kids have an intuitive grasp of the scientific method, why does that understanding seem to vanish within a few years? Studies suggest that K–12 students struggle to set up a controlled study and cannot figure out what kind of evidence would support or refute a hypothesis. One reason for our failure to capitalize on this scientific intuition we display as toddlers may be that we are pretty good, as children and adults, at reasoning out puzzles that have something to do with real life but flounder when the puzzle is abstract, Goodman suggests—and it is abstract puzzles that educators tend to use when testing the ability to think scientifically. In addition, as we learn more about the world, our knowledge and beliefs trump our powers of scientific reasoning. The message for educators would seem to be to build on the intuition that children bring to science while doing a better job of making the connection between abstract concepts and real-world puzzles.”

Suggesting that scientific abilities vanish ignores the differences between two types of thinking: finding concrete causal factors (such as which block will make a toy work) and abstract scientific thinking (such as variable manipulation). At first I was thinking that it’s like comparing apples and oranges, but really it’s like comparing apples and something that on the surface seems kind of like an apple but is vastly more complicated (a quantum apple?).

In the conclusion of the study the authors note that in schools and among researchers there has been a tendency to use overly abstract tests of scientific reasoning. These tests underestimate the intuitive skills that young children have for isolating concrete causal factors (which the authors unfortunately call “variables”). The researchers themselves were taken by surprise by one of the strategies the children used, which helped them notice the novel solutions the children found. That general conclusion makes sense.

What doesn’t make sense is extending that argument to say that students lose some sort of reasoning or scientific thinking ability as they get older because they struggle with abstract skills such as real variable manipulation. There is no evidence for that. Scientific thinking is abstract by definition because it is about underlying and generalizable knowledge. It is not the same thing as the concrete and situational problem solving reasoning that the children engaged in. It’s like comparing apples to the much more difficult and challenging quantum apples.

The authors of the original study do concede this in their conclusion, writing that “the ability to bring common principles of experimental design to bear on any task, regardless of the number of variables involved and the status of those variables with respect to their prior beliefs, requires an explicit awareness of the principles of experimental design that is, we presume, the exclusive purview of formal science” (p. 348). So while the foundations for this kind of thinking are found among children, there is a second level of complexity that moves this intuitive causal thinking towards becoming scientific thinking. Children do not have scientific thinking and then somehow lose it as adolescents.

Indeed, school children and teenagers continue to understand the basics of experimentation very well. There are several resources for teaching the concept of fair testing in science. They usually begin with intuitive ideas related to general fairness, like using the analogy of a race where everyone must start at the same place and take the same route. Even the idea of a fair test experiment, though, gives a very simplified introduction to scientific investigations. What is much more difficult is, for example, the idea of a variable.

And here’s where I disagree not just with Sharon Begley but with the authors of the paper. By trying to isolate which blocks will make the toy work, the children are not isolating variables. There is only one variable – the blocks – and the children have found an innovative way to try to test one block at a time. A variable is an abstract placeholder for a quality that is attributable or applicable to objects or systems of a particular type. Learning to use variables fluently is hard, really hard. It takes explicit instruction and practice. Even simple variables like length are more challenging than they seem. It is one thing to measure the length of a particular piece of string, quite another to conceive of length as a general property that can be measured or manipulated in any object. This is especially true because it is also somewhat arbitrary, requiring the person doing the experiment to choose an operational definition (e.g., by defining length as the measurement of the longest side). There is no concrete thing called length. It is an abstract word that describes a type of measurement. Understanding that is much harder than trying to find a way to measure it in specific objects, which is analogous to what the children are doing in trying to find a way to test each block individually.
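To make the distinction concrete, here is a minimal sketch (my own illustration, not anything from the study or from Begley’s piece; the names and the “longest side” rule are hypothetical) of the gap between measuring one particular object and treating length as an abstract, operationally defined variable:

```python
# A toy contrast between a concrete measurement and an abstract variable.
# Everything here is a made-up illustration.

# Concrete: the length of this one particular piece of string, in centimetres.
this_string_cm = 23.5

# Abstract: "length" as a variable needs an operational definition before it
# can be applied to any object at all. Here it is arbitrarily defined as the
# longest side of an object's bounding box; another experimenter could choose
# a different, equally defensible rule.
def length(dimensions):
    """Operational definition (chosen, not given): length = the longest side."""
    return max(dimensions)

# The same rule now applies to any object, not just the string in hand.
print(length((2.0, 23.5, 0.3)))   # 23.5
print(length((10.0, 4.5, 7.2)))   # 10.0
```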

This might seem like a subtle distinction but when it comes to taking steps to improve science education, it matters. The implication that students lose some ability to think scientifically because of school experiences or growing up is a misleading one. The headline especially reinforces it. The end of the paragraph gets closer to a real suggestion, which is that teachers need to better recognize the strength of young children’s reasoning and also recognize that learning the abstraction necessary for full scientific thinking is difficult. It requires better efforts to bridge concrete causal reasoning and abstract reasoning about variables and other scientific processes.

But just because that’s hard doesn’t mean that there is anything that students are losing. They just need more support to take the next steps.

Update (October 3): Shortly after I posted this, Matthew Francis added some great insight from his perspective as a scientist on his blog Galileo’s Pendulum.

Edit (September 22): I apologize for not including a link and reference for the original study in Cognition. A link has been added above and the reference information is as follows.
Cook, C., Goodman, N.D., & Schulz, L.E. (2011). Where science starts: Spontaneous experiments in preschoolers’ exploratory play. Cognition, 120(3), 341-349. PMID: 21561605

Further reading:

Bao, L. et al. (2009). Learning and scientific reasoning. Science, 323, 586-587.

Jones, M.G., Gardner, G., Taylor, A.R., Wiebe, E., & Forrester, J. (2011). Conceptualizing magnification and scale: The roles of spatial visualization and logical thinking. Research in Science Education, 41, 357-368.

Markovits, H., & Lortie-Forgues, H. (2011). Conditional reasoning with false premises facilitates the transition between familiar and abstract reasoning. Child Development, 82, 646-660.

Mercer, N., Dawes, L., Wegerif, R.,& Sams, C. (2004). Reasoning as a scientist: Ways of helping children to use language to learn science. British Educational Research Journal, 30, 359-377.

Watson, R., Goldsworthy, A., & Wood-Robinson, V. (2002). What is not fair with investigations? In S. Amos & R. Boohan (Eds.), Aspects of teaching secondary science: Perspectives on practice (pp. 60-71). London: Routledge Falmer.

Women, romantic goals and science: The evidence just isn’t there

One of the first things that came across my Twitter feed yesterday morning was a press release announcing that “Women’s Quest for Romance Conflicts with Scientific Pursuits, Study Finds”. I’m usually pretty sceptical of press releases, especially ones that include the words “study finds.” This one, though, instead of eliciting mild annoyance, made me feel nauseous. Women’s quest for romance? Really? At first glance, it sounded more like something out of Mad Men than a real study.

I’m going to put my bias right out there. I am passionate about engaging people in science and, as a result, also intrigued by what keeps them from participating. I began my academic career with the explicit purpose of exploring gender issues in science education (and I have the embarrassingly naive grad school entrance essays to prove it). A year into my master’s research, I became increasingly frustrated with research that sought to categorize women and girls and definitively assign them characteristics that interfere with their interest in science. Through my own classroom teaching experiences I was well aware of the diversity of strengths, weaknesses, desires and goals that both female and male students bring to the science classroom. Essentialist gender research not only covers up this diversity, it also misses male students, many of whom are also discouraged and excluded in science. This is why I study inclusion and exclusion in science through a lens of identity – looking at patterns in the ways that individuals define themselves (including in relation to masculine and feminine gender norms) and how these definitions come together to influence the decisions that male and female students make about studying science. All that is to say that I am generally not inclined towards approaches that homogenize women and men, but at the same time I am open to the important role of stereotypes and societal expectations, which can have particular effects on science participation.

So while my first thought was “ugh”, I was willing to look openly at the data to see what they’d found. It only seems fair to tell you all of that up front. I tracked down the full study to investigate further.

The paper announced in the press release (Park, Young, Troisi & Pinkus, 2011) is based on three consecutive small studies done to explore the connections between romantic goals and science interest. The authors explain the prevalence of gender stereotypes in Western society and how girls are typically socialized to act in gender-typical ways in (presumably heterosexual) romantic situations. They also note that women who act in ways that break gender norms tend to be viewed negatively by others. The assumption is that having an interest in science is viewed as a masculine characteristic and therefore something that women will downplay in romantic situations and when they are pursuing romantic goals.

There are many (many!) unsupported assumptions built into this logic. The most obvious issue is that scientific disciplines are not nearly equal in their association with masculinity. The authors do not make a note of this, but in their questionnaire they only list those areas with significant under-representation of women (“Computer Science, Technology, Engineering, Math, Chemistry, Physics, etc.” p. 3). So already, this isn’t a study about science but about particularly masculine-identified areas in science. Good to know. (And actually it’s more complicated than that, because different areas of engineering vary widely in gender representation and in association with masculinity and femininity.)

Study 1

The first study examined two possible goal types: wanting to be intelligent and wanting to be romantically desirable. They primed the participants (all university psychology students) to think about one or the other goal type by showing them images. They prompted intelligence goals by showing participants images of books, libraries and eyeglasses (eyeglasses? That just makes me think about going to the optometrist, but okay) and romantic goals by showing them pictures of sunsets, candles and restaurants. The authors checked these images with a follow-up study to be sure that they primed the right goals (and not, say, goals of going to the optometrist), and the men and women that they asked responded that the images made them want to be intelligent or romantic in the expected ways. The main study then brought 119 students (approximately balanced between women and men and with approximately equal interest in science) into the lab. The participants were asked to rate their interest in science and in pursuing a science-related major. They then looked at one set of images (either romantic or intelligent) and answered the two science interest questions again. When they compared the two groups for women and men, there were no differences for those who had looked at the intelligence images. The women and the men in this group had about equal interest in science and in pursuing a science degree. Women who had seen the romantic images (and presumably were feeling like they wanted to be romantically desirable) reported lower interest in science than men and than the women who had seen the intelligence images.

The researchers conclude “These results support our hypothesis that women, but not men, show less interest in STEM when exposed to cues related to romantic goals versus intelligence goals. We think that women may have distanced themselves from STEM because they experience conflict between the goals to be romantically desirable and intelligent in the male-stereotypes domains of STEM” (p. 5). Sounds convincing.

Or maybe not. There are two things that I find problematic. The first is that their model showed no significant effect for prior science interest. That means that students walked into the lab, answered two questions about their interest in science, saw some pictures, then answered the same questions again, and the answers they gave the first time didn’t predict the answers they gave the second time. Wow, that’s one heck of an intervention! That, or the measures are problematic and don’t have great test-retest reliability (meaning that even without the pictures, people would answer the questions slightly differently from one moment to the next because they are not well-designed questions). This was actually one of the first problems I noticed with the study. The researchers have taken a complex idea – interest in science (which includes interest in math, logical thinking, the natural world, abstract and explanatory thinking and more) – and reduced it to a single question: “How interested are you in Math and Science?” A single question like this is very prone to shifting answers. One minute I might rate my interest at a 5/7, the next maybe 6/7, because what do those ratings really mean? If we’re talking about nature maybe I’m really interested, but mathematical formulas for motion, not so much. To be honest my personal answers would be the opposite – I’m all about the physics but kind of bored by taxonomy. A much more meaningful measure would have used several items, developed and tested for both validity and reliability for measuring interest in science. Krapp and Prenzel’s recent review might be a great place to start for anyone who is interested. Without a measure that is reliable and really captures what it means to be interested in science, it’s difficult to interpret the results meaningfully.
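To see why a noisy single-item measure makes for poor test-retest reliability, here is a rough toy simulation (my own sketch with made-up numbers, nothing to do with the study’s actual data): the same stable underlying interest, asked about twice with a little momentary wobble, produces two sets of answers that only weakly predict each other.

```python
# Toy simulation of weak test-retest reliability for a single noisy survey item.
import numpy as np

rng = np.random.default_rng(0)
true_interest = rng.normal(5.0, 1.0, size=200)   # stable underlying interest (1-7 scale)
wobble = 1.5                                      # momentary noise in how people answer

# The same people answer the same single question on two occasions.
time1 = np.clip(np.round(true_interest + rng.normal(0, wobble, 200)), 1, 7)
time2 = np.clip(np.round(true_interest + rng.normal(0, wobble, 200)), 1, 7)

# The test-retest correlation comes out well below 1 even though nothing
# "intervened" between the two administrations.
print(np.corrcoef(time1, time2)[0, 1])
```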

The second issue is what is really being measured. The authors support their assertion that the effect is due to goal priming by showing that both men and women responded with the appropriate goals when they looked at the romantic and intelligence pictures. Can we know for sure that it was the goals that caused the difference in interest though? No, we can’t. That uncertainty is a normal part of all social research and for that reason alternative explanations should always be explored. My prime contender in this case is stereotype threat.

Stereotype threat is the anxiety that people can feel when they are in a situation in which they might confirm a negative stereotype about their gender, racial or cultural group. For example, when reminded of the stereotype that girls aren’t good at math, girls tend to do worse on math assessments than they would otherwise. Stereotype threat doesn’t just impact achievement, though. It affects motivation and interest too. Jessi Smith and her colleagues investigated interest and motivation in a computer science task under conditions where participants felt gender stereotype threats. When they suggested that women aren’t good at math and related tasks, the women in their study showed less interest in completing the tasks. Those women who were normally achievement and goal oriented also changed their motivation from striving to achieve goals to being nervous about failing, a motivational orientation that is associated with lower interest. So, when stereotypes about women are primed in science and math, it seems that interest and motivation go down.

The central premise of this study is itself a stereotype about women: that they constantly seek romantic validation. Participating in the study might itself have primed stereotypes. I am speculating here, but I would find those images to be stereotypically feminine, and the whole idea of being romantically motivated to be a negatively associated female stereotype within an academic environment (think of the derision associated with stories of women who go to university to find husbands – no such parallel exists for men). It seems at least possible to me that priming women with stereotypical romantic images in an academic setting would elicit stereotype threat, a threat that could possibly be related to decreased motivation and interest in science, another stereotypical characteristic. I’m not actually trying to make this claim, just to show that there are other possible explanations than the one the authors of the study suggest.

Study 2

This study was very similar to Study 1, except that instead of looking at pictures the participants overheard conversations that were meant to prime either romantic goals or intelligence goals. The researchers and their assistants stood in the hallway and had a short conversation that was either about doing well on a test or about a recent successful date. The study participants inside the study room could hear the conversations. The authors also did a second version that used similar conversations but this time primed romantic goals and non-romantic friendship goals. The authors report that both versions replicated Study 1. Except that it isn’t quite true. In Study 1, men and women who saw the intelligence pictures showed equal interest, as did men who saw the romantic pictures. The basic result was that women who saw romantic pictures expressed less interest than would be expected, all other things being equal. Study 2 shows something entirely different. In both versions, women and men who overheard romantic conversations expressed equal interest in science, and so did the men who overheard intelligence or friendship conversations. The only difference was that women who overheard intelligence or friendship conversations showed HIGHER interest in science than would be expected. Let me say that again – women and men responded no differently to having romantic goals primed, and the non-romantic goals (intelligence and friendship) led women to express stronger interest in science. When they overheard the intelligence-related conversations, women also expressed a stronger desire to pursue science-related degrees. I don’t see how that replicates Study 1. It seems in my view to directly contradict it.

Study 3

The third study asked women (and only women) to keep track of their daily goals and their math activities (e.g., paying attention in class, doing math homework) and desirability activities (e.g., emailing/texting someone you are interested in or spending time with them). From the students’ diary entries, they found the following relationships:

  • On days that women reported pursuing romantic goals, they engaged in more romantic activities and fewer math activities. They also engaged in fewer math activities (but not more romantic activities) the day after.
  • On days that women reported pursuing academic goals (and the day after), they engaged in more math activities. These days had no impact on their romantic activities.

So, the participants in their study engaged in activities that met their goals for the day, whether academic or romantic. This isn’t a surprise at all, and there is no evidence that this is gender specific because no men were included in this part of the study. There is also no evidence that it is exclusive to romantic goals because no other goals were studied. It is just as conceivable that people who set interpersonal goals for the day will also engage in less math and more interpersonal activities. That’s kind of the point of having a goal for the day, isn’t it?

But wait, there’s more:

  • On days when the women pursued romantic goals (and the day after) they also felt more romantically desirable.
  • On days when women pursued academic goals, there was no impact on their desirability (it neither increased nor decreased their feelings of desirability). Wait, what? So the whole study is built on the premise that wanting to be romantically desirable interferes with interest and participation in science but there is no evidence that feelings of desirability are in any way negatively affected by pursuing intelligence goals.

This means that the only finding from Study 3 is that activities can be predicted by daily goal setting (either romantic or academic).

Some final thoughts

So, what’s my overall assessment? I’m really troubled by the study, not because it disagrees with the way that I approach gender and science but because the evidence is extremely weak and seems to have been interpreted with the researchers’ own expectations heavily in play. The back-and-forth nature of the findings (in one study romantic goals had a negative effect and in another they didn’t) is not acknowledged and suggests to me that there is probably something else that is actually creating the effect. Not only that, it suggests that the measures (the questions themselves) are probably not stable. There is nothing in this study that convinces me that romantic goal pursuits are in any particular way responsible for women’s underrepresentation in science – not because it’s something I don’t want to believe but because the evidence just isn’t there.

(As a side note, I’d like to add that the title of the article suggests that the authors did not spend much time with the literature related to science education persistence and participation. The title indicates that the study is about attitudes towards science, which is something entirely different from interest and not addressed at all in the study. I have also not addressed the secondary findings related to interest in English/languages because the primary claims are related to science and math.)

References and other reading

Krapp, A., & Prenzel, M. (2011). Research on interest in science: Theories, methods, and findings. International Journal of Science Education, 33, 27-50.

Park, L.E., Young, A.F., Troisi, J.D., & Pinkus, R.T. (2011). Effects of everyday romantic goal pursuit on women’s attitudes toward math and science. Personality and Social Psychology Bulletin, 37(9), 1259-1273. PMID: 21617021

Smith, J.L., Sansone, C., & White, P.H. (2007). The stereotyped task engagement process: The role of interest and achievement motivation. Journal of Educational Psychology, 99, 99-114.

Steele, C.M. (1997). A threat in the air: How stereotypes shape intellectual identity and performance. American Psychologist, 52, 613-629.

Science Education and Changing People’s Minds Part 2: Writing to convince

This summer project was inspired by a panel that I sat on at LogiCon this spring. The moderator, Desiree Schell, asked us whether we would describe ourselves as evangelists for science and for scientific thinking. I answered that in my everyday dealings with friends and family I try to be a stealth evangelist, sharing my own enthusiasm as a gentle approach to encourage others to do the same. After the panel, though, I felt that I’d cheated a little bit with my answer and not thought about my experience as a science education researcher. That’s what these posts are meant to do: take a look at the research literature in science education and ask what it might have to offer to communicating about science and scientific ideas more generally.

I find online science communication fascinating. I am enthusiastic about its possibilities and intrigued by its challenges. With an interest in online communication comes an interest in text. While videos, animations and images are powerful too, the written word is often the simplest and the default mode of online communication – think blog posts, tweets, status updates, and comment sections, almost all written or at least including written elements. In the world of online science communication, these are all texts, but what makes a text good for communicating about science and, in particular, what makes a text good for helping readers understand and accept scientific ideas about the world?

Science education has been kind of text (especially textbook) obsessed for a long time. In the late 19th century, textbooks acted as de facto curricula for schools that aimed for some cohesion as they spread out across a North American landscape that was still being settled. And we’ve never quite been able to let that go. Questions about what makes texts good and what makes them convincing have been a recurring theme.

Last week, Christie Wilcox began a series on her blog Science Sushi, part of the Scientific American blog network. She started with this introduction:

“People believe a lot of things that we have little to no evidence for, like that vikings wore horned helmets or that you can see the Great Wall of China from space. One of the things I like to do on my blogs is bust commonly held myths that I think matter. For example, I get really annoyed when I hear someone say sharks don’t get cancer (I’ll save that rant for another day). From now onward, posts that attack conventionally believed untruths will fall under a series I’m going to call ‘Mythbusting 101.’”

I read it and thought, “A-Ha, Inspiration!” (not like the A-Ha! tuna I was once offered at a restaurant, that’s another story)[i]. What guidance can the science education literature offer for doing this kind of blogging well? Are there ways to more effectively change readers’ minds about common misconceptions, myths and everyday notions that are less than scientific?

As I wrote in Part 1 of this series, changing people’s conceptions is hard, very hard. The way we understand the world is shaped by all of our interactions with it and with all of the people in our lives. We don’t just have a set of ideas that sit on a shelf like books and can easily be replaced one for another. Ideas about the world are more like tangled webs of connected information, experiences, and beliefs. A complex ecosystem is a better analogy than a bookshelf. This means that writing to bust myths, convince people about scientific evidence or change their minds takes more than just communicating clearly. If that were all it took, science teaching would be easy and there would be few public controversies about accepted scientific ideas.

Explainers, like Chris Rowan’s post about the Japanese earthquake, are excellent when the issue is missing information. For example, I think I have reasonably scientific views about earthquakes (I did fairly well in undergrad geology and have taught some very rudimentary earth science in schools) but my views are patchy in places. It’s not so much that I have serious misconceptions but instead holes in my understanding. Good explainers fill in these gaps with clear descriptions and new information. They aren’t usually narratives and they aren’t usually arguments. They are typically purely expository, and they are excellent for filling in patchy places in a reader’s understanding.  This process is sometimes described as assimilation – the new ideas are like a new species introduced into the ecosystem. If there’s a niche for them and they fit into the existing structure, they are assimilated with little conflict and change. (This analogy kind of breaks if you try to take it as far as invasive species). When the problem is misunderstanding though, explainers aren’t as helpful.

There have been two major reviews of research done on the ways that written texts can support conceptual change, the kind of conceptual change that causes the whole conceptual ecosystem to be altered. In 1993, Barbara Guzzetti and her colleagues published a statistical meta-analysis of studies up to that point, comparing the different approaches that had been used and the effect that they had on students from elementary school up to undergraduate classes.[ii] Christine Tippett updated their review last year with an overview and a thematic analysis.[iii] Both reviews show consistent evidence that explainers are not the best type of writing for conceptual change. The most effective texts were those that directly addressed and refuted common misconceptions.

Refutation texts always include at least two parts: a) a statement illustrating a common or likely misconception and b) direct statements that contradict the misconception and emphasize more scientific views. Usually there is some sort of refutation cue as well, such as labelling something as a myth or saying directly “but this is not true.” Tippett gives this example written for young children (the first two sentences state the misconception, “But this is not true” is the cue, and the rest is the refutation):

“Some people believe that a camel stores water in its hump. They think that the hump gets smaller as the camel uses up water. But this is not true. The hump stores fat and grows smaller only if the camel has not eaten for a long time. A camel can also live for days without water because water is produced as the fat in its hump is used up.” (p. 952)

In her Mythbusting 101 post, Wilcox does something very similar. She lays out four myths and common beliefs and then carefully explains why each is not true or at least isn’t as simple as it first sounds. Her post has the structure of a refutation text, pointing out to the reader something that many believe to be true and then explicitly saying that it isn’t (the parts map onto the same structure as the example from Tippett).

“Myth #1: Organic Farms Don’t Use Pesticides

When the Soil Association, a major organic accreditation body in the UK, asked consumers why they buy organic food, 95% of them said their top reason was to avoid pesticides. They, like many people, believe that organic farming involves little to no pesticide use. I hate to burst the bubble, but that’s simply not true. Organic farming, just like other forms of agriculture, still uses pesticides and fungicides to prevent critters from destroying their crops.”

Wilcox’s text also illustrates another element of effective conceptual change writing – straight and direct expository refutation. Sometimes education authors will try to explain science concepts through stories. The misconception is brought up as part of the narrative on the assumption that narratives are more comfortable, more interesting, and easier to understand. In Guzzetti’s analysis, though, only young children benefited from having narrative included as part of the refutation. High school and undergrad students responded better to the straight expository texts. Tippett also points out that older students seem to prefer to read in this style.

OK, so that’s two tips so far – direct refutation is important, and it’s most effective when it’s straight expository refutation (except when it’s for young children). What about the context in which texts are read and the thinking processes of the reader?

Both Tippett and Guzzetti were able to look at several comparisons in how refutation texts were used: texts on their own, texts used with classroom discussions, texts read before and after classroom demonstrations, and texts used with writing activities. Given how powerful direct experiences can be, I was surprised that both of the reviews showed that the most effective strategies were always combinations that included text and that text on its own was more powerful than any of the other methods on their own (e.g., discussions and demos). This says a lot about the power of what we read.

Of course there are several possible explanations for this, not the least of which is that you can return to a text and read it several times to remind yourself of its content, something you can’t do with a discussion. The strength of text alone shouldn’t be taken as absolute, as neither Tippett nor Guzzetti was able to make comparisons to videos and interactive animations, which would presumably have some of those same benefits.

Given that texts are important, what made particular texts more effective than others? Across all of those combinations, the texts worked better when students had a chance to think about their own conceptions first (sometimes called activating or priming their prior conceptions) and then had their own ideas directly challenged. This makes sense from a conceptual change perspective, where the difficult task of rearranging and changing conceptions is thought to happen as a result of cognitive conflict or disequilibrium – creating an internal discrepancy. The discussion around cognitive dissonance in relation to climate change and evolution, for example, also views this conflict as potentially negative, where placing ideas side by side leads people to want to resolve the conflict, often by relying on their prior views and warping the new information to suit. At the same time, real conceptual change is unlikely to happen unless this same conflict occurs.

Just asking people to think about or priming their prior knowledge without explicitly challenging it was not enough. The most effective texts (and text-activity combinations) asked students to think about and apply their own conceptions and then challenged them directly. In writing and blogging, then, activating or priming misconceptions would mean more than just stating common misconceptions. Sometimes people don’t think they hold a particular misconception until you ask them to make a prediction, explain a particular situation or make a hypothetical decision. And it’s easier to gloss over or ignore mythbusting when you don’t think you hold the myth or think it doesn’t apply to you. Good activation asks the reader to recognize how they view the world, so that the writer can then go on to refute it. The chance for a meaningful discrepancy between ideas (the myth and the scientific conception) is higher when the conflicting ideas are recognized as your own.

In Part 1 I wrote about one of my favourite teaching techniques (the POE: Predict, Observe, Explain).[iv] It serves the same basic function. When presented with a situation, asking students to predict what will happen activates their prior knowledge and brings it forward to be challenged. It’s even better when you have them explain the reasoning behind their predictions. With a POE demo, the refutation comes when the result doesn’t happen the way they expected it would. In text, it comes from the refutation cue (“I hate to burst the bubble, but that’s simply not true”) and the scientific conception presented by the author. The effect of activating the reader to think about their own prior conceptions can add to the chance that these refutations will work.

So let’s go back to Wilcox’s post for a moment. After the brief description of her mythbusting series that I copied above, there are two opening paragraphs that discuss organic foods generally and introduce the idea that there are a lot of myths out there about them. The one thing that might be missing, though, is a challenge to the reader to actually think about their own views, in other words a chance to activate their prior conceptions.

I’ll admit it here: I was once (in what now seems like a past life) a vegan and committed to only eating natural foods. It’s taken a long time (and a lot of bacon) for me to sort through my conceptions of food and agriculture and to make sense of which ideas are supported by evidence and which are everyday notions that I still cling to. Wilcox’s mythbusting is directed exactly at someone like me and might be even more effective if those readers had an opportunity to bring their own ideas to the front of their minds to be recognized. From my own perspective, on the surface I don’t think that I subscribe to these myths anymore, but I know deep down that there are pieces of them still there in the ways that I think. Good conceptual change activation would start by digging into these deeper patterns and challenging me to recognize where I too subscribe to some elements of these myths. One way might be to present a hypothetical decision-making problem, for example asking the reader to examine fictitious statements from farmers at a farmers’ market on the topic of organic foods and decide which they would choose to buy from. This would ask the reader to commit, at least to a hypothetical degree, to their conceptions, making them more likely to be challenged. When students have these opportunities in classrooms, they are more likely to change their minds towards more scientific conceptions.

So what hints are there in the conceptual change literature about writing to change people’s mind?

  1. When challenging difficult myths and misconceptions, direct refutation seems to work best.
  2. Refutations that are written in expository rather than narrative language seem to be both preferred and most effective.
  3. Refutations are especially useful when they not only state common misconceptions but activate the reader to think about and commit to their own views before having them challenged.

Of course people and their ideas are very complex. None of these strategies will guarantee that any reader will change their mind. There are many other factors involved, including motivation, interests, and social relationships that are built on shared beliefs and ideological commitments. One of the studies in Tippett’s review that surprised me the most, though, asked if the students who were more committed to their conceptions experienced less conceptual change. To my surprise, the researchers didn’t find any relationship. Students who were strongly and weakly committed to their ideas were just as likely to change their minds. Much more important was students’ understanding of science processes and scientific evidence. Those with sophisticated views of science were, not surprisingly, more convinced by scientific evidence[v] – adding weight to ongoing efforts to emphasize the processes of science both in schools and in public science outreach. This relationship is important to remember as a public communicator. No matter how well written and clear your explanation, no matter how direct your refutation, readers who struggle to understand scientific evidence will likely also struggle to be convinced by it. Communicating about scientific ideas is difficult, and doing it with the intent of changing people’s minds is even harder, but I hope that some of the lessons learned in science education might offer a few strategies for making that road a little bit easier to travel.

This is also cross-posted at the Scientific American Guest Blog.

[i] Thanks to Emily Willingham for reminding me of that inspiration later on Twitter.
[ii] Guzzetti, B.J., Snyder, T.E., Glass, G.V., & Gamas, W.S. (1993). Promoting conceptual change in science: A comparative meta-analysis of instructional interventions from reading education and science education. Reading Research Quarterly, 28, 117–155.

[iii] Tippett, C.D. (2010). Refutation text in science education: A review of two decades of research. International Journal of Science and Mathematics Education, 8, 951-970.

[iv] Interested in POE? My friend and colleague Mike Bowen’s great book Predict, Observe, Explain: Activities Enhancing Scientific Understanding just won a Book Design & Effectiveness Award from Washington Book Publishers.

[v] See also: Mason, L., & Gava, M. (2007). Effects of epistemological beliefs and learning text structure on conceptual change. In S. Vosniadou, A. Baltas, & X. Vamvakoussi (Eds.), Reframing the conceptual change approach in learning and instruction (pp. 165–197). Oxford, UK: Elsevier.

Science Education and Changing People’s Minds Part 1: Introduction to conceptual change

This is the first post in a series that looks at how science education research can contribute to discussions about the best way to change someone’s mind about scientific issues such as evolution, climate change, natural medicine and paranormal activities.

On Monday I introduced the idea for this series, saying that it had recently occurred to me that, when I’ve been asked what I do to convince my friends and family to think scientifically, I haven’t made the connection to studies done in my own research area. It’s amazing how we can compartmentalize our lives that way. As a result, my plan is to take a summer tour through the science education literature to see how what we’ve learned about teaching children and youth scientific concepts can contribute to engaging with larger issues in the public sphere.

Conceptual change theory is one of the keystones of science education research and I think one of the areas that has the largest contribution to make to this discussion. To get going, I’d like to start with a short introduction to conceptual change, explaining some key ideas and their origins. If you’re familiar with learning literature from cognitive or developmental psychology, many of these ideas will be similar but maybe with some different names and subtle differences in definition and interpretation.

Conceptual change isn’t what I’d call a hot topic in science ed research at the moment (it was in the 80s and 90s) but it’s one of the constants, something that most science education researchers would probably identify as a foundation of their understanding of teaching and learning. It is one of the core topics of the science teacher education courses that I teach.

The main idea behind conceptual change research is that students are not blank slates when they enter our science classrooms (or empty buckets, if you prefer that analogy)[i]. They have complex, interconnected and established understandings of the way the physical world works that they’ve developed through their interactions with it. It’s just that, often, that understanding either conflicts with a more scientific understanding or is missing significant elements. When students have developed these ideas mostly through their own informal observations and experiences, we tend to call their non-scientific ideas alternative conceptions or everyday conceptions. Some really common ones are that electricity only needs a single wire to flow or that balls carry with them some type of force that keeps them going after they’ve been thrown or kicked. There are also misunderstandings that students learn either directly from other sources or by misunderstanding something they hear or read (often specifically in previous science classes). We usually call these misconceptions, although this term is sometimes used to refer to all misunderstandings too.

In line with most cognitive psychology research, science educators take the perspective that contradictory evidence or experience is necessary for changing conceptions. We sometimes call these challenges discrepant events, but the idea is closely related to Piaget’s idea of cognitive disequilibrium and more generally to cognitive dissonance. One of my favourite strategies for challenging misconceptions through discrepant events is the Predict, Observe, Explain cycle. The idea is to present a scenario that students will either try themselves or see as a demonstration, for example showing a vacuum tube with a feather and a penny inside it. Students are asked to make a prediction about what will happen and to support their prediction with an explanation. This brings students’ alternative conceptions explicitly out into the open (called activation). Students then get the chance to see the demo or try the activity and (if all goes according to plan) it should contradict what they predicted, leading to gasps of “Oh wow!” and “Wait, what? Why did that happen?” That’s the discrepant event – the result is the opposite of what the students expected (e.g., the feather and the penny fall at exactly the same rate). After the exclamations of surprise have settled down, teacher and students work together to construct new explanations to account for this experience. The cycle is best done repeatedly, always pushing the students a little bit further.
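For anyone who wants the physics behind that particular discrepant event, here is the standard Newtonian argument (my addition, not part of the POE description above) for why the mass drops out once the air is removed:

```latex
% In the evacuated tube the only force on each object is gravity, F = mg.
% Newton's second law, F = ma, then gives
\[
  m a = m g \quad\Longrightarrow\quad a = \frac{mg}{m} = g \approx 9.8\ \mathrm{m/s^2}.
\]
% The mass m cancels, so the feather and the penny accelerate identically
% and, released together, reach the bottom of the tube at the same time.
```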

For this cycle to be effective, it is not only the discrepant event or information that matters. The content and form of the new scientific ideas are also important. George Posner and his colleagues at Cornell[ii] initially codified the idea of conceptual change as a way of thinking about science learning. They emphasized how difficult and complex the process is and based their analogy on the way that theory change happens in a scientific field, drawing on literature from the sociology and history of science. They likened students’ new conceptions and the way students accept ideas from their teachers and peers to the process of creating new scientific ideas, to paradigm shifts and research programme change (to use the words of Kuhn and Lakatos). They proposed that to be accepted and integrated into students’ views of the world, new ideas (whether presented by their teacher or created by the students themselves) need to be:

  • Intelligible: It has to be graspable by the students. This can be as simple as using words that make sense, but it can also mean that the causal reasoning has to be accessible to them. Young students often see the world from the perspective of one-way cause and effect – one thing always directly causes another. Teaching students about electric circuits, for example, can be difficult because charge and flow need to be thought of as simultaneously both cause and effect. If this isn’t clear, circuit concepts can be unintelligible.[iii]
  • Plausible: It has to fit with students’ previous experiences at least as well as their own ideas did. In a great vignette of conceptual change, Bruce Watson and Richard Konicek describe a teacher who challenges her students to understand that sweaters and mittens are insulators and not heat sources. They try several different scenarios, putting thermometers inside their warm layers to see what happens. The new idea (that they’re insulators) becomes plausible as the teacher helps them see that this explanation would also explain why sweaters feel warm when you wear them.
  • Fruitful: If students are going to make the difficult effort to change their ideas, there needs to be a reason: the new idea needs to be fruitful. In particular, it should help them understand things that they couldn’t explain or understand with their own prior conceptions. For example, the idea that sweaters are insulators is fruitful because in addition to explaining why sweaters feel warm, it explains why the thermometers don’t show any increase in temperature when no one is wearing the sweater. Ideas can also be fruitful when they open up new questions and new avenues of investigation. The idea that sweaters are insulators opens up the intriguing possibility that they can also keep things cool.

It doesn’t take a big stretch to think about how this cycle might apply beyond the classroom. When engaging in outreach and public education about climate change, for example, there is more to the process than providing contradictory evidence or information. How effective might public communication strategies be with more attention to this last phase, providing resources to build new conceptions of climate that are intelligible, plausible and fruitful?

Of course, nothing as difficult as changing someone’s ideas of the natural world is as simple as following a few steps. Conceptual change is more like ecological change than retail exchange. There is no such thing as simply swapping one conception for another like a sweater that doesn’t fit. Conceptual change is very difficult. Even when excellent and powerful contradictory evidence is provided and new ideas are intelligible, plausible and fruitful, change is messy and often temporary.

In the opening minutes of the documentary A Private Universe we are introduced to a science student who is described by her teacher as an excellent student and someone likely to answer the questions correctly. The researchers then ask her to explain what she knows about the seasons. She starts with some simple facts that she’s learned (e.g., the Earth takes 365 days to go around the sun) and then adds some explanatory details that she has developed (e.g., the position of the Earth in relation to the Sun helps explain the seasons), but when she is probed further, the explanation gets more and more convoluted. The Earth ends up with a strange and complicated looped orbit as she tries to connect what she has learned in school with the other ideas that she has. We see her struggling to make sense of what seems more like a bramble bush of ideas than a coherent understanding.

She is not unusual: our understandings (sometimes called mental models) usually contain mixes of thoughts, beliefs, learned ideas, and physical experiences. The term conceptual ecology is often used because it captures the complex way we think about things.[iv] Challenging one idea doesn’t always have the intended effect, just like adding an organism to an ecosystem to solve one problem often causes many others. Developing real and deep understanding takes many, many experiences and opportunities to make sense of a new way of understanding the world. When we wonder about helping people accept evolution, we’re not just asking them to learn a new idea but instead to challenge a deep and interconnected web of understandings about the natural world.

And this is what I mean by changing people’s minds. Teaching science isn’t about transmitting ideas that students write down and then know. It is a difficult process of helping them see the world in a new way – connecting it to all other efforts to encourage critical and scientific thinking in the public sphere.

Now that you’ve got a sense of what I mean by conceptual change and I’ve introduced some of the key terms that I’ll use, the remaining posts in this series will look at conceptual change research. For example, what writing strategies are most effective in supporting conceptual change? With a significant amount of science communication being text-based (from newspapers to blogs to Twitter), what writing styles and text structures are most effective in changing conceptions? The acceptance and rejection of some scientific ideas is also tied into ideological movements and community solidarity. What, then, is the role of social influences on conceptual change? What makes people motivated to change their minds or hang onto conceptions even in the face of strong contradictory evidence? These are just a few of the questions that I’d like to explore.

Do you have any suggestions for a question to explore or a particular paper to discuss? Drop me a note in the comments!


[i] This is a view descended from Piaget’s work with young children’s explanations of the world:

Piaget, J. The child’s conception of the world. New York: Harcourt Brace, 1929.

Piaget, J. The child’s conception of physical causality. London: Kegan Paul, 1930.

[ii] This paper was the first and seminal paper to explicitly think of science learning through the analogy of conceptual change.

Posner, G.J., Strike, K.A., Hewson, P.W., & Gertzog, W.A. (1982). Accommodation of a scientific conception: Toward a theory of conceptual change. Science Education, 66, 211-227.

[iii] If you’re interested in reading more about electric circuits and causal reasoning, Tina Grotzer and Margot Sudbury (Harvard University) conducted a great study exploring and categorizing the reasoning Grade 4 students use to explain circuits.

[iv] Posner et al. say that they borrowed the term from Stephen Toulmin but Piaget also made the analogy that when babies encounter new objects it is like a new species being introduced into an ecosystem.