How do students figure out whom to trust in a scientific controversy?

Scientific literacy is a difficult idea to pin down.[i] To some people it means having a basic level of scientific understanding, though nobody fully agrees on how much understanding is needed or even which specific ideas should be understood. To others, it is more important to understand the core processes of science, which can be applied to any area of science. Again, though, the problem arises of figuring out exactly which processes are most important (and which are distinctly scientific).[ii]

Even when people disagree about what it means, there is almost always this common thread: scientific literacy somehow involves preparing students and adults for the science they will encounter outside of school, very often in media reports. George DeBoer highlighted this in his history of scientific literacy:

Science education should develop citizens who are able to critically follow reports and discussions about science that appear in the media and who can take part in conversations about science and science-related issues that are part of their daily experience. Individuals should be able to read and understand accounts of scientific discoveries, follow discussions having to do with the ethics of science, and communicate with each other about what has been read or heard. (DeBoer, 2000, pp. 592–593)[iii]

Robert Hazen and James Trefil[iv] put it bluntly in their 1991 book:

“If you can understand the news of the day as it relates to science, if you can take articles with headlines about stem cell research and the greenhouse effect and put them in a meaningful context—in short, if you can treat news about science in the same way that you treat everything else that comes over your horizon, then as far as we are concerned you are scientifically literate.” (p. xii)

There is wide agreement, then, that engaging with science media is an essential element of scientific literacy. But where do people actually develop these abilities? Do these skills receive enough attention in science education? Do people really have the chance to develop them in school before the end of their mandatory science education, usually around age 16? While studies in journals like Public Understanding of Science have often asked about adults’ relationships with science media, only a few have stepped back to look at the relationship between those adult skills and the science media skills and knowledge that students develop during their final encounters with formal science education.

One that I often come back to is Stein Dankert Kolsto’s (2001) ‘To trust or not to trust …’: Pupils’ ways of judging information encountered in a socio-scientific issue. In it, Kolsto works with a group of 22 Norwegian Grade 10 students. They were drawn from four different classes and picked for their expressiveness in describing their reasoning and for representing a variety of views. It’s a selective sample, but it seems reasonably appropriate for exploring a wide range of views among students. The study isn’t meant to compare different types of students or test any interventions, just to get a sense of where 16-year-olds might stand in their engagement with science media. Notably, the students were all taking a course for those not planning to study science any further, so this was very likely their last experience of formal science education.

One thing that sets this study apart from others is Kolsto’s decision to focus on how students deal with a real controversy that they have likely already encountered in the media, rather than presenting them with a new and unknown controversy or one created for classroom purposes. The issue involved a Norwegian company that wanted to upgrade an existing 150 kV electrical transmission line to 300 kV and later build a second, new 300 kV line that partially crossed residential areas. The plan had sparked fears of health effects, such as a possible rise in childhood leukemia rates. At the time, early epidemiological studies of high-voltage lines had mixed results and there was not yet a consensus on the effects. Media coverage often reported contradictory findings from different researchers. By interviewing the students after they had read and discussed several media articles on the proposed high-voltage lines in class, Kolsto wanted to explore how the students judged the information they read. How did they make their decisions about supporting or opposing the power lines, and how did they decide which sources of information and which specific claims were trustworthy?

One of the main things that Kolsto noticed about their responses was that very few of the students attempted to assess the content of the claims being made by the various parties (power companies, citizen groups, epidemiologists, etc.). They rarely used their own scientific knowledge to judge whether the claims made sense or were congruent with their understanding of electricity and the human body. They spent most of their time evaluating the sources: were the people or organizations trustworthy? This isn’t necessarily a bad thing. Prior work by my former colleague Stephen Norris[v] even suggested that students should be encouraged to make judgements this way, because it would be impossible for them to have the specialist knowledge required to truly assess many scientific claims. But it is interesting that these students don’t even seem to attempt it, even when they have covered relevant material in their course. To me, it also calls into question why scientific literacy is so often thought of as a body of knowledge that everyone should learn. If people aren’t inclined to use that knowledge when they encounter controversies, maybe that’s not the most useful way to think about preparing students for science outside of school. But that’s my conjecture, not something that Kolsto argues.

Kolsto points out that this reliance on evaluating sources rather than claims can also be a problem. Once a source was accepted as trustworthy, the students left all further judgements up to that source. They effectively treated all trustworthy sources as authorities, even when that may not be appropriate. For example, a researcher may be a very trustworthy source but may only be able to speak authoritatively to some elements of the controversy. On the positive side, though, almost all of the students were hesitant to grant trusted-source status to many of the parties involved, especially those they felt had a vested interest (e.g., the power company and property owners’ groups), and they were most likely to describe scientists and researchers as trustworthy.

Unfortunately, this status also led to the biggest challenge that the students faced: what does it mean when researchers (who are trusted sources) disagree? How do you decide which claims to trust then? About half of the students said explicitly that when researchers disagree, it is very difficult to know whom to trust.

So what did the students do? Kolsto found that when they tried to sort out disagreements among scientists, the students’ views were clouded by the way that science appears in schools. In school science, there is almost always a right answer. Even when a teacher lets students debate a solution or an explanation, at some point there is almost always a true answer that the teacher eventually shares or endorses, the one that students must then understand for tests and exams. In school science, laboratory activities are also supposed to be definitive. There is most often a correct result, one that illustrates or supports the right explanation that the teacher wants everyone to understand. And while differing results can spark interesting discussions about experimental error, that’s usually where the discussion stops. When everyone is following the same procedure, if you get a different answer from everyone else, the only possible explanation is that something went wrong. School science doesn’t always look like this but, especially in high-stakes assessment contexts, it very often does.[vi] And that’s not necessarily a bad thing: there are many settled and well-understood ideas in science that can be taught well with strategies like this. The problem is that it gives students a very poor foundation for understanding science that isn’t settled yet.

The effects of “right answer” science teaching were clear in the way the students responded to disagreements among researchers. Their only resources for making sense of those disagreements were their school science experiences and their experiences with disagreements in everyday life. As a result, the students tended to see the disagreements as illustrating either incompetence or bias: either a) one or the other of the researchers had done their investigations incorrectly (or perhaps no one had done the “right” experiment because they didn’t know how), or b) one or the other of the researchers was personally biased and letting that cloud their results. These are certainly both possible explanations, but they ignore the fact that sometimes valid and well-conducted studies disagree, especially when the questions are about health effects that have to be studied observationally. You can’t randomly assign people to live or not live near high-voltage lines and experimentally control the voltages they are exposed to. Researchers’ only choice is to observe the health of people who live near and far from these lines. It takes a long time for a balance of evidence to emerge from numerous studies of health effects like this, and there is no definitive experiment that the researchers could or should have conducted to settle the matter and find the right answer.

But the students wanted the teacher and Kolsto to tell them who was right. They wanted to know what the truth really was, and they became suspicious of the various scientists for not knowing how to study the issue properly or for going in with biased preconceptions. One student said, “It is probably because they have made their own opinions. They might have different backgrounds and have come across different information. Maybe they have made up their mind in advance, and then found that their opinion is right and taken that as a starting point” (p. 884).

What made it especially difficult is that the students felt they had no way of knowing which researchers were highly biased and which were not. They wanted the researchers to be mostly neutral and objective, but they had few tools for figuring out which ones were. They did look for information about the researchers’ backgrounds (such as their areas of specialty), which is a very good beginning strategy. As one student said, “I have more confidence in those who have put more work into the subject, researchers and people who have worked on it” (p. 895). They also, however, tended to be swayed by the claims that included the most numbers. It’s good that they were looking for supporting evidence, but this is also a strategy that can be manipulated if the audience isn’t careful about assessing the meaning of the numbers. And as Kolsto found, the students tended not to apply their understanding to assess the evidence provided by any of the parties. They didn’t evaluate the evidence and the numbers, but they were still swayed when more were given.

Possibly more serious was their tendency to believe the more dire warnings. Researchers who claimed more serious effects were more often believed. If that tendency is common, it’s easy to see how health scares (e.g., vaccines and autism) can quickly gather steam. One student said, “In my opinion, they [the politicians] should listen to those [researchers] who say it’s dangerous. Because if you do something about it, and it is not dangerous, then there is no problem. But if it is dangerous, and they don’t do anything, then it will have harmful consequences” (p. 892). And while this might make some sense, it’s easy to miss the costs of taking action when there is in fact no risk. The costs of doing something about vaccines (e.g., encouraging people not to vaccinate their kids) have been severe, such as outbreaks of vaccine-preventable diseases. Paying too much attention to dire interpretations (of flawed research, in that case) has had severe consequences.
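One way to see what the student’s reasoning leaves out is to write the choice as a rough expected-cost comparison. This is my own back-of-the-envelope sketch, not anything from Kolsto’s study; the symbols are placeholders rather than real estimates, and it assumes for simplicity that acting eliminates the risk entirely. Let $p$ be the probability that the lines really are dangerous, $C_{\text{harm}}$ the cost of the harm if they are and nothing is done, and $C_{\text{action}}$ the cost of acting (e.g., rerouting or burying the lines). Acting is then worthwhile only when

$$p \cdot C_{\text{harm}} > C_{\text{action}}$$

The student’s argument implicitly sets $C_{\text{action}} = 0$ (“if you do something about it, and it is not dangerous, then there is no problem”), which makes acting look like the right choice no matter how small $p$ is. Once $C_{\text{action}}$ is non-zero, as it clearly was with vaccine avoidance, a dire warning with a small probability of being right no longer automatically justifies action.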

Overall, Kolsto’s exploration showed some promising signs, such as students wanting to identify trustworthy sources with expertise in the relevant field. These were overwhelmed, though, by the lack of resources the students had for following through on those good intentions. Because they lacked both an understanding of the role of legitimate disagreement in science and the ability to dig into the content of the claims themselves, they had to fall back on superficial judgements. Students were swayed by the presentation of numbers and by those who made more worrisome claims. They thought that disagreeing scientists must be either personally biased or incompetent. And they tended to categorize expertise dichotomously: someone was either an expert to be believed or not, without noting that most experts have very small areas of deep expertise and varying degrees of expertise in other areas. Kolsto noticed, though, that the students felt they were being very careful and critical in making up their minds. About half of the students made direct statements about the importance of autonomy in decision making: that one had to listen to both sides and then think for oneself.

And that was the main problem that Kolsto was left with: students seemed to be leaving their compulsory science education with good and valuable ideas about what they should do when they encountered science in the media, but with few deeper skills to actually follow through.

“They wanted to listen to the disinterested and neutral researchers, but few of them expressed any ideas as to who that might be. They wanted to trust those risk estimates that several researchers agreed upon, but they did not indicate how they were to judge the level of agreement….The pupils’ basic problem, disagreement among the researchers, was not resolved by their analyses.” (p. 897)

And further, it was something that seemed to frustrate the students.

Kolsto acknowledges, and I agree with him, that it’s very hard to draw firm recommendations from a small exploratory study like this. But he says that if one idea should come out of it, it’s that students need much more exposure to real, inconclusive, and controversial science, not just contrived examples where the teacher has a right answer in mind. These students have learned that scientists can be biased, that they should be careful with information from sources that have vested interests (e.g., the power company), and that they should look for agreement among scientists, but they are at a loss for what to do when there legitimately isn’t agreement yet or, importantly, when science news is presented in a way that suggests there isn’t agreement. Kolsto argues, and here I agree too, that there needs to be more emphasis on the social processes of science in school: not just that scientists work together, but exactly what that means. Before leaving compulsory science education, students need a much better understanding of how scientific consensus happens, how ideas go from contested and tentative to sometimes firm and widely supported, and how arguments and disagreement can be an important part of getting to that place. They also need better ideas of where to look or whom to ask when media reports make it difficult to see where general agreement lies. Kolsto’s study illustrates some very promising steps toward helping students (and the adults they will become) engage thoughtfully and critically with science media, but it also illustrates where more work needs to be done. There is wide agreement that skillfully navigating scientific news and controversies is important, but I think it’s pretty clear that this still needs much more attention in school science and beyond if those visions of scientific literacy are ever to be realized.

Kolsto, S.D. (2001). ‘To trust or not to trust …’: Pupils’ ways of judging information encountered in a socio-scientific issue. International Journal of Science Education, 23(9), 877–901. DOI: 10.1080/09500690010016102


[i] These papers all offer historical overviews of the development of the term and the disagreements that have always surrounded it:

Hurd, P.D. (1998). Scientific literacy: New minds for a changing world. Science Education, 82(3), 407–416.

Roberts, D.A. (2007). Scientific literacy/Science literacy. In S.K. Abell & N.G. Lederman (Eds.), Handbook of research on science education (pp. 729–780). Mahwah, NJ: Lawrence Erlbaum.

Roberts, D.A. (2010). Competing visions of scientific literacy. In C. Linder, L. Ostman, D. A. Roberts, P. Wickman, G. Erickson, & A. MacKinnon (Eds.), Exploring the landscape of scientific literacy (pp. 11–27). London: Routledge.

[ii] van Dijk, E.M. (2011). Portraying real science in science communication. Science Education, 95(6), 1086–1100.

[iii] DeBoer, G.E. (2000). Scientific literacy: Another look at its historical and contemporary meanings and its relationship to science education reform. Journal of Research in Science Teaching, 37(6), 582–601.

[iv] Hazen, R.M., & Trefil, J.S. (1991). Science matters: Achieving scientific literacy. New York: Doubleday.

[v] Norris, S.P. (1995). Learning to live with scientific expertise: Toward a theory of intellectual communalism for guiding science teaching. Science Education, 79(2), 201–217.

[vi] Abrahams and Millar give a very thorough overview of what laboratory work typically looks like in high schools: Abrahams, I., & Millar, R. (2008). Does practical work really work? A study of the effectiveness of practical work as a teaching and learning method in school science. International Journal of Science Education, 30(14), 1945–1969.
