Objectivity and ambivalence: The case of the Apollo scientists

Last week, a friend sent me this quote from a climate change opinion letter. 

“While we may or may not be correct on any scientific topic, the only pertinent arguments are real scientific arguments involving honest evidence. The only satisfactory outcome is a completely objective analysis.”

She said she was happy to see such a strong statement about the differences between science and politics, but I’m not sure I agree with her. [i]

When discussing controversial issues like climate change and MMR, the boundary between scientific and non-scientific claims is always an important part of the conversation. Most often that boundary is defined by objectivity. The work of scientists is usually portrayed as completely objective, in contrast to those who would rely on emotional, religious, or political reasoning. When science is portrayed this way, though, any perceived lack of objectivity can become a weakness – a sign that science should not be believed. And while this black-and-white distinction is mostly a rhetorical choice, it presents a problem. The Climategate emails, for example, made people uncomfortable because the scientists were acting in ways that didn’t seem completely objective. And if they aren’t totally objective, is there something wrong with their science?

Not necessarily. Consider this comment from Calvin Johnson[ii], a scientist involved in one of the most celebrated and iconic American scientific endeavours, the Apollo missions: “The emotionally disinterested scientist is a myth. Even if there were such a being, he probably wouldn’t be worth much as a scientist.”

Dr. Johnson’s comments were made during a time that is often seen as a golden age for science. Following World War II, science had gained an important place in public dialogue. In North America, and especially in the United States, scientists were no longer thought of as solitary geniuses working on esoteric projects, but instead as elite citizens making contributions to industry and defence. This was the time of the Sputnik moment, inspiring passion for science and science education. This enthusiasm for science and scientists also created curiosity and sometimes even distrust: who were scientists, what exactly did they do, and what made them different from others?

Robert K. Merton, a sociologist interested in social roles and structures, was among the first to try to answer these questions. As a result, he is often credited as the first sociologist of science. In The normative structure of science, originally published in 1942,[iii] Merton set out a list of norms and ideals that characterized scientists and their work – a list that established objectivity as the defining characteristic of science.

Merton depicted these objective scientists as emotionally disinterested in their work. They were expected to gain satisfaction from serving the scientific community, not from seeing their hypotheses supported. They were to be impersonal in making decisions, thinking only about the strength of the evidence and not about the personalities or affiliations of the people involved. They were also expected to share their results openly, with a sense of communalism, and never to work in secrecy. These norms were taken up enthusiastically in science education and in public writing about science. To this day, when people describe scientists – scientists and non-scientists alike – this is what they usually say.

What is often forgotten, though, is that Merton went on to question whether these norms were able to fully capture researchers’ work and values. He wondered if, in the real world of scientists, these ideals were balanced by equally important counter-ideals. In other words, he wondered if sometimes the exact opposite of objectivity was necessary. Dr. Johnson seems to be saying the same thing.

Taking inspiration from Merton, Ian Mitroff from the University of Pittsburgh decided to explore the idea of counter-norms with one of the most recognizable groups of mid-century scientists – the physicists, geologists and chemists of the Apollo moon missions. Based on interviews he conducted between Apollo missions 11, 12, 14, 15 and 16, including one with Dr. Johnson, Mitroff wrote a fascinating description of the ambivalence that scientists held towards Merton’s ideals. They were ambivalent in the strict sense of having two competing views. Like Merton, they acknowledged the importance of working from objective evidence and contributing to the scientific community. But Mitroff also illustrated, through the scientists’ own words, the passion, bitterness, competitiveness and intensity of doing science, and the value that the researchers placed on these characteristics.

At the beginning of the Apollo program the structure and geology of the moon remained largely unknown. None of the scientists were sure exactly what would be found in the samples that astronauts would bring back with them. Based on Merton’s initial norms then, we might imagine a patient team of scientists waiting for samples before committing to any particular view of the moon’s geology. The scientists, however, described themselves and their colleagues much differently.

“Xavier is so committed to the idea that the moon is [X][iv]” said one researcher, “that you could literally take the moon apart piece by piece, ship it back to Earth, reassemble it in Xavier’s backyard…and he would still continue to believe that the moon is [X]. His belief in [X] is unshakable. He refuses to listen to reason or to evidence. He’s so hopped up on the idea of [X] that I think he’s unbalanced.”

This might seem like a description of an outsider – a scientist who has broken the rules of science and is being rejected by his peers for his lack of objectivity (and possible imbalance!). The surprising thing, though, is that the three scientists perceived by their peers as the most committed to their hypotheses (Xavier included) were also judged by their peers to be among the most outstanding scientists participating in the program. They were the ones who drove the field forward and created genuinely new and exciting ideas.

This surprising judgment shouldn’t be taken to mean that scientists have no use for data or evidence. On the contrary, what Mitroff saw in the scientists was that they recognized the importance of a constant back-and-forth negotiation between objectivity and subjectivity. Merton’s norms weren’t false; they were just one side of the coin. Scientists ought to be objective and convinced by evidence, but they must also be driven by personal commitment and a willingness to argue for a possibly unsupported position. In his interview with Mitroff, Dr. Albert Masters contended that without personal commitment, science could not be done.

“Commitment,” said Masters, “even extreme commitment such as bias, has a role to play in science and it can serve science well. Part of the business [of science] is to sift the evidence and to come to the right conclusions, and to do this you must have people who argue for both sides of the evidence. This is the only way in which we can straighten the situation out. I wouldn’t like scientists to be without bias since a lot of the sides of the argument would never be presented. We must be emotionally committed to the things we do energetically. No one is able to do anything with liberal energy if there is no emotion connected with it.”

Masters was not alone in his assessment. Dr. Gordon Hereford, another Apollo scientist, went further, saying that only simplistic views of science can leave out passion and commitment to ideas. “You can’t understand science in terms of the simple-minded articles that appear in the journals” Hereford argued, “Science is an intensely personal enterprise. Every scientific idea needs a personal representative who will defend and nourish that idea so that it doesn’t suffer premature death. Most people don’t think of science in this way but that’s because the image they have of science only applies to the simplest, and for that reason almost non-existent, ideal cases where the evidence is clear-cut and it’s not a matter of scientists with different shades of opinion.”

Based on these interviews, Mitroff concluded that science is fundamentally ambivalent about objectivity – not ambivalent as in undecided, but meaning that the community holds simultaneously conflicting views about it. Objectivity and reliance on evidence are essential, but so are personal commitment and bias, sometimes in the face of insufficient or contradictory evidence.

This ambivalence makes it challenging to defend science by saying things like “The only satisfactory outcome is a completely objective analysis.” When absolute objectivity is the only way that science is described, not only does that represent an incomplete picture, it also sets up an inevitable crisis when researchers are shown to be passionate, driven and subjective. This is important to keep in mind when considering how science and scientists are described to students and when science is defended and challenged in the public sphere. Instead of focusing only on objectivity, it is crucial to acknowledge the important role that personal commitment, bias and passion play in science. These characteristics are essential to science, not a perversion of it. This is a challenging task, but I would propose that our understanding of science, scientists and contentious scientific issues will be the better for it.

So, what might this look like in science communication? How might this ambivalence be explained as a strength rather than a weakness?

[i] The letter happened to be from someone who is vocal in disputing global climate change. The substance of the letter isn’t the part that I’m interested in, though – it’s the claim to objectivity.

[ii] Mitroff, I.I. (1974). Norms and counter-norms in a select group of Apollo Moon scientists: A case study in the ambivalence of scientists. American Sociological Review, 39, 579-595. In Mitroff’s original paper, as was customary at the time, the individuals are noted only by letters (e.g., Scientist C). For readability, I have chosen to update this to the more contemporary practice (in my field) of naming individuals with pseudonyms. The names begin with the same letters that Mitroff used to identify them. For example, Calvin Johnson is Mitroff’s Scientist C.

[iii] Merton, R.K. (1942). The normative structure of science. In: Merton, R.K. (1973). The sociology of science: Theoretical and empirical investigations. Chicago: University of Chicago Press.

[iv] For confidentiality, the exact hypothesis that Xavier was committed to was removed by Mitroff. X was used as a placeholder to represent a particular belief about the moon.

Arsenic, cold fusion and the legitimacy of online critique

In a recent post on the Scientific American guest blog, I looked into some of the communications issues that arose as a result of critical comments about the NASA arsenic research that were posted online. This post is a bit of a companion piece, revisiting some of the questions that I asked there but looking at them from a slightly different perspective.

On January 6 at BoingBoing, Andrea James wrote about similarities that she saw between recent science news (such as NASA’s arsenic-DNA story) and Pons and Fleischmann’s announcement of cold fusion. She argued that the culture of ‘firsting’ – in this case, trying to be the first to get a news story out to the public – can lead journalists to cast a less critical eye on scientific announcements. This effect is amplified when science communication is carried out by press conference. The link between the cold fusion story and NASA’s announcement caught my eye because it is a comparison that has been rattling around in my head for a little while as well.

To be clear, I don’t think that James is comparing the quality of the science. The general retrospective view of Pons and Fleischmann’s work is that it can best be described as pathological science – exhibiting not necessarily fraud but delusion on a grand scale. The term comes from a talk given by Nobel laureate Irving Langmuir at the Knolls Research Laboratory in 1953 (and appropriately republished in Physics Today in 1989). Langmuir defined pathological science as “cases where there is no dishonesty involved but where people are tricked into false results by a lack of understanding about what human beings can do to themselves in the way of being led astray by subjective effects, wishful thinking or threshold interactions.”[i] That is a serious and damning view of the science of cold fusion. In making comparisons to the communication culture surrounding cold fusion, neither James nor I are attempting to make a similar claim about the NASA arsenic-DNA paper.

While the science may not be similar, the science communication issues that have emerged do seem to intersect.

Much of the early criticism of the arsenic-based DNA paper was communicated rapidly and disseminated widely through blogs, Twitter and other online forums. Some critiques were written by scientists in the field responding directly on their research blogs (such as Rosie Redfield from the University of British Columbia) and others were written by journalists who sought out the comments of experts and reported their concerns. The authors of the paper and NASA spokespeople, however, argued that blogs and other online spaces were not appropriate for science criticism. They argued that any concerns should be addressed only through peer-reviewed letters to Science, the journal that had originally published the research. Hearing this gave me a twinge of recognition. I thought, “I’ve heard part of this story before…”

On March 23, 1989 the world’s press gathered for an excitement-filled press conference held by the University of Utah. Two scientists, Martin Fleischmann and Stanley Pons, claimed to have discovered a nuclear fusion process that could be produced simply and reliably in a regular laboratory – cold fusion. Taking this finding to a press conference before it was accepted for publication in a scientific journal was considered a breach of the usual practices of science, but the researchers and their press officers argued that it was necessary because of the potentially lucrative patents involved and because another nearby research lab was working on the same problem. While Pons and Fleischmann, along with researchers from Brigham Young University, submitted their results to Nature for review, news reports continued to highlight positive results flowing in from around the world.

This success did not last long. By the time the American Physical Society met in May, several labs had announced that they were unable to reproduce the results and, even worse, that any positive results they had seen earlier seemed to have been caused by improperly calibrated instruments and incorrectly interpreted data. Pons and Fleischmann also withdrew their paper from Nature’s review process. The consensus against cold fusion grew from there, and the controversy was mostly dead by November 1989 when the Energy Research Advisory Board released its final report recommending no further funding for the University of Utah’s cold fusion program[ii]. How did the star fall so quickly?

Well, 1989 might seem like the dark ages of online communication – no Twitter, no Facebook, no blogs – but it turns out that an informal network of electronic communications played a pivotal role in the quick and negative consensus. While the media frenzy continued and government committees were convened, scientists were using electronic back channels to communicate with each other. CERN physicist Douglas Morrison set up one of the many electronic newsletters read by enthusiasts and sceptics alike. His was called “Cold Fusion Newsletter” and it was sent by email to a network of colleagues and posted to the sci.physics.fusion newsgroup.

Similarly, announcements of supporting and contradictory findings were quickly distributed to relevant labs by fax. The Princeton Plasma Physics laboratory, for example, heard about Japanese findings matching those of Pons and Fleischmann when they were sent a faxed copy of a Japanese newspaper report with a short handwritten English translation: “4/1/89. Yomiuri Shimbun (Japanese biggest news paper). Tokyo Agriculture & Engineering University announced ‘The [Koganoi-shi] re-produced the Utah experiments’. They measured heat, gamma and tritium. Prof. Koyama.”[iii]

These informal networks were initially created to share information about the experimental details, providing tips and ideas for those trying to replicate the work. Later they became the main distribution network for conflicting results and emerging questions about the plausibility of cold fusion. It was Morrison who first used the Cold Fusion Newsletter to turn his colleagues’ attention to Langmuir’s infamous lecture, suggesting that cold fusion was a case of pathological science.

So how was this informal source of criticism received by the scientific community? While Pons and Fleischmann mostly avoided responding to these critiques (they also didn’t attend the American Physical Society meeting), the research community depended on this informal network to come to a consensus about cold fusion. The only serious communication criticisms were directed at Pons and Fleischmann for attempting to duck the peer review process and communicate their results through mass media instead. Tom Instrator, a nuclear engineer at the University of Wisconsin, was unequivocal, “that’s not the way you do science… If they want [other scientists] to not give them a hard time, they should give us enough to copy it. Or maybe they have given us enough, and it doesn’t work.”[iv]

This is something that I find interesting. It’s easy to understand why Pons and Fleischmann’s communication strategies were heavily criticised. They provided information almost exclusively through mass media and baulked when pressed to publish in peer-reviewed journals. This resistance probably gave rise to the electronic underground network. With such a startling lack of technical data, it’s no surprise that scientists turned to other sources, but looking back I wonder why there weren’t also questions about whether this informal network was reliable. Electronic communications were relatively new, and unverified claims (such as those of Japanese labs replicating the results) were passing quickly from person to person. The deluge of information from every angle (faxes, emails, newsletters) was described by many scientists as overwhelming – but no one suggested that it was illegitimate in the way that NASA suggested the blogged and tweeted comments were. Why not?

I think the key factor is one of boundaries. The blogged comments were public, blurring boundaries that mark the usual sources and usual audiences for scientific critique. The cold fusion network, on the other hand, was insider only. To get access to the newsletters, faxes and emails, you had to be on the inside already. At the time, Morrison stressed that “they are an informal network, they are meant to be academically confidential…It’s only academic people. I don’t give it to the press.”[v]

NASA complained that the critiques of its arsenic-DNA study were inappropriate and that anyone with a serious critique would formally publish it through the peer review process. Morrison’s comment, though, suggests that it’s not the formal peer review process that matters so much as a clear boundary between those who have access to contributing and reading the critiques – something that blogs don’t provide. This makes me wonder whether audience and author boundaries were also the issue that NASA was addressing. Can we read between the lines and ask whether the problem was not so much that the critiques weren’t peer reviewed, but that they were public?

Apoorva Mandavilli makes a similar point in her Nature essay “Peer review: Trial by Twitter”. She points out that these informal conversations have always happened, but they’ve usually been private – at conferences, in the lab and in correspondence between friends and colleagues. Putting them out in public, and in large quantities, can make them overwhelming, intimidating and difficult to filter. As public forums like Twitter and blogs continue to grow as spaces for informal critique, there will continue to be cries of resistance and demands that the peer review process be respected. The case of cold fusion, I think, provides an important reminder of what this resistance is more likely about: moving critique outside the boundaries of the private community. I’m not arguing against the importance of peer review or of formal journal publications, just wondering how we will make sense of the growing public media. To find ways to work with and understand the impact and possibilities of online communications, do we have to dig a bit deeper to make sure that we know what the issues really are?

[i] Langmuir, I. (transcribed and ed., Robert N. Hall) (1989). Pathological science, Physics Today, 42(10), 36-48.

[ii] Taylor, C.A. (1994). Science as a cultural practice: A rhetorical perspective. Technical Communication Quarterly, 3, 67-81.

[iii] Lewenstein, B.V. (1995). From fax to facts: Communication in the cold fusion saga. Social Studies of Science, 25, 403-436.

[iv] Taylor, op. cit. note ii.

[v] Gieryn, T.F. (1999). Cultural boundaries of science: Credibility on the line. Chicago: University of Chicago Press.

Looking forward to panels at Science Online 2011

Thursday morning I’ll be flying down to North Carolina to attend Science Online 2011 – the fifth annual international meeting on Science and the Web. Not only will this provide some delightful respite from the cold and snow of Edmonton, but I am honoured and excited to be participating in two panels.

On Saturday morning, I will join Stacy Baker (an outstanding science teacher from Staten Island), eight of her students, and Sophia Collins (director of the online outreach program I’m a Scientist, Get me out of here!) for a panel to discuss the value of the online science community to science education. Specifically, I’ll be talking about the projects that my science education students completed last term using science blogs as inspiration for science lessons but mostly I’ll be listening and learning from Stacy, Sophia and all of the students.

Saturday, January 15th: 11:30am-12:30pm, Room D

Still Waiting for a Superhero – Science Education Needs YOU! – Stacy Baker, Marie-Claire Shanahan, Sophia Collins and 8 high school students:

Stacy Baker is bringing her students again to discuss online science and education. Her eight students, ranging in age from 14 to 17, will join a panel of educators and scientists to discuss the problems and possible solutions to the science illiteracy crisis in schools. For example, what does the importance and prominence of blogging mean for students and teachers/professors? Are the processes and people of science more visible because of blogging? Does that matter? What would bloggers, journalists, and scientists want students to learn in order to read and engage in online science and online science communication? One approach is to realise that a real barrier in science education is students feeling science is ‘for boffins’ and ‘nothing to do with them’ – if you can change students’ feelings it makes all the difference. Showing students that scientists are real people (which you can all do, by showing your real selves in whatever medium), and giving them a say over something (as, for example, in I’m a Scientist, Get me out of Here!) can make all the difference.

Then on Saturday afternoon I get to indulge my other academic love – science communication – in a session with science writers, journalists and communications scholars. In the panel session Blogs, Bloggers and Boundaries I will be discussing boundary work and blog audience boundaries with Alice Bell (senior teaching fellow in science communication at Imperial College, London), Ed Yong (science writer and blogger at Discover Magazine blogs), Martin Robbins (science writer and blogger at The Guardian) and Vivienne Raper (science editor at BioNews). I’m very excited for the diversity of perspectives on this panel – being a part of it will be as much a learning experience as anything else.

Saturday, January 15th: 2:00pm-3:00pm, Room D

Blogs, Bloggers and Boundaries? – Marie-Claire Shanahan, Alice Bell, Ed Yong, Martin Robbins and Viv Raper

Science blogging is often seen as an opportunity for science and science communication to be made more open and in doing so, help connect people. Blogging thus might be seen as a chance to break down cultural boundaries between science, science journalists, and various people formerly known as audiences. But do these traditional roles still affect blogs, bloggers and their readers? Are blogs still producing a rather traditional form of popular science, one that largely disseminates knowledge, maintaining a boundary between those who are knowledgeable and those who are not? Or do they provide new opportunities for these boundaries to be blurred? Similarly, do blogs help foster cross-disciplinary communication or simply allow bloggers to keep talking to ever more niche audiences? They allow science writers to connect with more people, but do they end up as an echo chamber where writers only talk to more of the same people? And how can bloggers tell if their writing is actually making a difference? This discussion will explore the boundaries that are maintained and blurred through science blogging, including the value of some of these boundaries and the importance of being aware of them.

I’m really looking forward to being a part of both of these sessions and to meeting everyone on the panels, most of whom I’ve only met online so far. Science Online is a very rich conference in terms of idea sharing and generation and I anticipate coming back with inspired new directions for both research and teaching (and hopefully not a cold like last year…)

Research Forum Series Talk: Today!

Research Forum Series: Expertise and interactions in online science commenting with Dr Marie-Claire Shanahan

The ever-growing accessibility of public read/write internet spaces raises questions about the types of interactions that people, both inside and outside science, have with each other and with science texts. One aspect of those interactions is the claiming and attribution of expertise. Conventional boundaries mark those with scientific credentials as experts and those without as non-experts. Studies of public engagement and advocacy suggest, however, that those outside of science have much to contribute to scientific knowledge and some should rightly be recognized as experts. This includes individuals (e.g., patients and technicians) who have developed in-depth personal expertise that allows them to contribute to building new knowledge. The talk will explore how these attributions play out in the open commenting spaces of the science section of a national newspaper, asking specifically: What types of expertise are claimed by commenters and how are those claims made? Further, how do these claims of expertise impact the types of interactions that commenters have with each other?

Date and Time

Thursday, November 25, 2010

3:30 PM – 4:30 PM


Education Centre
106 Education South

Is IRE the best way to respond to blog comments?

[Scene: Middle school science classroom in an urban setting. Students sitting in their desks with their science notebooks open. The teacher is standing at the front of the room and has written the word WEATHER in the middle of the whiteboard.]

Mr. McNab: When we’ve been talking about weather, what are some of the different things that we can measure?  I know we looked at ‘Weather Underground’ to do some observing from their website and some different pieces of the weather.  What are some things that we can measure, what are things that we can measure in weather?

Camryn: Precipitation?

Mr. McNab: Precipitation, yah.  What’s a way to measure precipitation?

Camryn: Um, I don’t know, um like centimetres of precipitation on the ground?

Mr. McNab: Yah? How do you think I can measure that?  Vivian, do you have some ideas?

Vivian: Uh, you can, so you can put a bucket outside and check how much is in it.

Mr. McNab: You put a bucket outside and….

Vivian: …check how much is in it.

Mr. McNab: Check how much is in it.  What am I gonna check for?

Vivian: Rain, hail, snow.

Mr. McNab: Rain, hail, snow.  Okay.

This is an excerpt from a middle school science class that I visited last year. It’s a classic example of classroom science talk. The teacher initiates a discussion topic, in this case measurements related to weather. He then asks students to respond to his guiding questions. “What are some things that we can measure?” Camryn responds, “Precipitation” and the teacher says, “Precipitation, yah” – acknowledging the student’s response and repeating it for emphasis. He probes for further detail. This time, in response to Camryn’s attempt to elaborate, the teacher says “Yah?”, suggesting that this isn’t quite the answer he wanted. He moves on to another student. Vivian suggests putting a bucket outside. The teacher responds by repeating her suggestion – both confirming and validating it for the class. His repetition says, basically, “Yes, this is what I was looking for and you should pay attention to it.”

This form of classroom talk is often called IRE (initiation, response, evaluation) or IRF (initiation, response, follow-up) and in observational studies of classrooms it is one of the most common ways that teachers interact with their classes. Teachers initiate a discussion topic (usually by posing a question), they solicit responses from students and they evaluate them, often acknowledging students’ efforts and participation and letting them know if their answer was correct or not. In science teacher education, it is sometimes introduced to beginning teachers as a way to help them begin to acknowledge students’ contributions to discussions and incorporate them usefully.

Outside initial teacher education, though, it is usually seen as a strategy that teachers rely on too much – a pattern that becomes a default mode of communication, keeping the teacher in control of the discussion and of what counts as an acceptable answer. While that might be desirable in some contexts (such as when discussions are used for informal assessment of students’ understanding), IRE also tends to stifle students’ attempts to ask questions and can discourage students from actually listening to each other’s responses. They just need to wait for the teacher to tell them which contributions are important. It is not a pattern that encourages or supports real two-way communication. Despite the two-way exchange, it is really a transmission from the teacher to the students.

Because of these limitations, I was really surprised by a case study (Davies, 2009) I read recently that illustrated IRE as the dominant pattern in a context where I wouldn’t have expected it – public engagement dialogues. In her article, Davies looks at what she calls science dialogue events – in this case, expert panel sessions hosted by a science outreach centre. These were meant to encourage engagement and two-way dialogue between experts and audience members.

One of the things she noticed was that during these dialogues the panel moderator played the teacher’s role and the event ended up taking on the characteristics of formal education. Audience members listened to the panelists and raised their hands to ask questions. The moderator responded to each question, acknowledged the contribution and commended the audience member on their participation. Davies noted that the moderator evaluated audience members’ participation more than the correctness of their answers, but the link to the classroom was still there. The moderator acknowledged contributions that were especially valuable and commented when audience members were being brave for speaking up. Basically, audience members’ contributions were evaluated much as a student’s might be.

Seeing the IRE form reproduced in this unexpected venue made me start thinking about science related blogs. Are they sometimes IRE too? Should they be?

This summer I wrote a guest post for Bora at A Blog Around the Clock discussing the ethical controversy surrounding a plan at UC Berkeley to invite incoming students to contribute their genetic information as part of a lecture and seminar series. It was my first contribution to a blog, and I was excited to hear what people said in the comments. Looking back, though, I can see that I defaulted to an IRE position. When comments came in, I read them carefully and then responded to each one, thanking the commenter for their interest (i.e., acknowledging and encouraging participation) and then evaluating their comment. For example, one commenter wrote:

I disagree that the ethical considerations facing educators and human subjects researchers are different. Consider a teacher who violated any of the principles of beneficence, justice, or non-maleficence, perhaps by teaching only what advanced the teacher’s own agenda, or choosing only to teach white students, or intentionally teaching falsehoods. Most people would be justifiably outraged. These are principles which seem to me like good guides for almost any relationship (though not necessarily the only ones), and especially those which involve power differentials. Human subjects research obsesses over them because of a history of abuses which violate them, not because of any special difference between research and other activities. I think that Berkeley’s use of informed consent language was an acknowledgment that this particular educational exercise would be unusual enough that our assumption that teachers are following the 3 principles I mentioned might be questioned, as well as an attempt to address those concerns. I think the problem here is the incorrect idea that research is fundamentally unlike activities we all engage in nearly every day.

Taking on the teacher role, I felt that this commenter had not quite understood what I was saying, so I responded by explaining where I agreed and then essentially correcting what I felt was a misunderstanding of my point. I wrote:

Thanks for reading! If you’ll indulge me for a second, I think my response might run along the lines of ‘we agree more than you may think’. I very much agree with you that core ethical principles are no different in almost any site of interaction between people (especially those where power is involved, as is the case in both research science and science education). The examples that you give would outrage me too. My contention though, is not directed at ethical principles. Research science and science education are different activities – they have different key actors, different objectives, different rules and norms. Because these are different activities, the actions that we take or expect others to take to ensure beneficence, justice or non-maleficence can be different. Choosing to include only selected students in a research project is, in many cases, ethically appropriate. It may even serve the cause of justice for those or other students. Choosing to only teach certain students in a classroom isn’t. The ethical principles are the same, but the actions appropriate to meet those principles aren’t necessarily.
I also agree with you that the informed consent procedures that Berkeley used were appropriate given the potential sensitivity of the information. What I was trying to explore though is why this situation stills leaves some people feeling uneasy. Because there are mixed messages about which activity is really going on here (research science or science education) my hope was to explore the idea that it might feel a bit uncomfortable because we can’t fall back on our assumptions and make easy judgements about the appropriate actions that would meet our ethical principles.

Looking back, I was clearly working in an IRE framework. To be honest, as I read it again now I can almost hear myself saying this as part of a class discussion. I know that IRE is my default as a science teacher and university instructor, and I try to think carefully before I respond to students’ contributions in class. I think about whether evaluating what they’ve said is the most productive thing for me to do at that moment. Sometimes it is, sometimes it isn’t. What hadn’t crossed my mind until I read Davies’s case study was to ask myself the same questions about blogging.

My experience probably isn’t universal – I was a complete beginner in the medium of blogs (I still am…). I know that blogs have many different purposes and there are lots of different views on the purpose of comments. But I don’t think my IRE experience is unique. I’ve received similar responses when I’ve commented on other blogs, and overall I didn’t really like it. In all cases the exchanges were very polite, but being corrected, or told that the topic was more complicated than my comment suggested, made me feel like a student again – one who hadn’t made the right contribution to the discussion. It’s probably just because I’m not used to it any more – I’m usually the one who gets to do the evaluating these days. But beyond feeling mildly slighted, the responses (mine above and the ones I’ve received from other bloggers) tellingly never led to further discussion. The evaluation comment was always the last word. And that’s the same problem we find in classrooms when teachers overemphasize IRE patterns.

So is there a better way for blog writers to respond to comments? Science related blogs are often trying to explain ideas and concepts and therefore have a relationship to the type of communication that goes on in science classrooms, but is evaluation-centred communication still the best approach? Taking it further, is a very conventional face-to-face default pattern of communication the best way to use the interactions that happen when people comment online?

Further reading:

Davies, S. (2009). Doing dialogue: Genre and flexibility in public engagement with science. Science as Culture, 18, 397-416.

Lemke, J. (1990). Talking science: Language, learning, and values. Westport, CT: Ablex.

van Zee, E.H., Iwasyk, M., Kurose, A., Simpson, D., & Wild, J. (2001). Student and teacher questioning during conversations about science. Journal of Research in Science Teaching, 38, 159-190.

Science scholars, science blogging and boundary work

The October 12 editorial in the journal Analytical Chemistry, written by Dr. Royce Murray, strongly argues that “the current phenomenon of ‘bloggers’ should be of serious concern to scientists” because bloggers do not have a stable employer who monitors qualifications and facilitates fact checking. Several others have commented thoughtfully on the misconceptions in the editorial, also noting that its final conclusion, caveat emptor, is in fact a sound policy when reading, listening to or watching any source.

What struck me most (and Egon Willighagen too), however, was that Murray seemed to engage in exactly what I must assume was his criticism of bloggers – using rhetorical strategy rather than evidence to strengthen factual claims. The first line set the tone for what the strategy would be: boundary work. “If you are a science scholar, you hope that all scientific articles that you read are grounded in fact.”

I suspect that anyone (not reading fiction, of course) hopes that what they are reading is true. Some people are more diligent than others in checking whether that is the case, but to say that only science scholars hope that what they read is grounded in fact is a far-fetched claim. So why make it? Because it defines ingroups and outgroups. The reader is directly addressed (“you”) and the author attempts to form a bond with them, essentially saying “You and I are science scholars and we share the value of truth”. The statement then casts “the other” – everyone who isn’t a science scholar – as not sharing this value. The important thing is not that the readers actually are science scholars (they probably are – this is a specialist publication and Murray is a prominent science scholar in every regard); it’s that he has used this characterisation to frame his argument within a context of “us” and “them” – those who share his value of truth and those who do not.

So who are they, this “them”? They are the “lay public—those with some to no science education”. There is a clear boundary set here between those who are scientists and those who are not. By defining them through their formal science education (using the nonspecific “some to no”), the boundary is set between those who know and those who are to be educated and presumably protected from misinformation. Aside from continuing the emotional appeal to the reader (defining the other in a way that is far from how readers would see themselves), this statement makes the predicament seem more dire. There is no room in this argument for anyone in between, which in reality is a lot of people: those who might be called the interested public, those actively engaged in science education, science journalists, and most people who work in science-related fields but are not scholars (e.g., engineers, many industrial scientists, technicians, and nurses). With a public that is already active and interested in science and science policy, the argument would have to be much more nuanced. It also ignores that scientists themselves are arguably lay in areas far outside their own specialities, often relying on simplifications of others’ research. But instead of a nuanced argument, the editorial offers the simplistic statement that “We science scholars should care a great deal about how well the general public is served with reliable science news.” In other words: we should protect them.

The second piece of boundary work is the clear dividing line that Murray proposes between “bloggers” and “journalists”. Bloggers have no qualifications (or at least no one checks their qualifications) and use the internet like a megaphone; journalists work for reputable companies that fact-check their work. As others have pointed out, this isn’t a line that really exists, especially in science blogging, where many prominent blogs are written by established and emerging science journalists (e.g., David Dobbs, Maryn McKenna, Carl Zimmer, Ed Yong).

More importantly, although it’s never explicitly said, the real boundary I see being created here is between those who do the science and those who communicate it, as evidenced by the following statement: “The traditional pathways have been popular science monthly magazines, a few television and radio programs, and science columns in public newspapers. The quality of this flow of information, I believe, has been mostly high—as judged by its producers’ attention to factual reliability and impact.” Science scholars are not science communicators; that is the job of science journalists.

The question I’m left with is: why? Boundary work is typically seen in situations where science and scientists are under some threat and would benefit from a clear distinction. The term is attributed to Thomas Gieryn, whose seminal 1983 paper challenged his contemporaries in sociology and philosophy – then searching for objective ways to demarcate science from non-science – by arguing that demarcation wasn’t their task: scientists do it in practice all the time. He explained boundary work as a rhetorical strategy whereby scientists attribute “selected characteristics to the institution of science (i.e., to its practitioners, methods, stock of knowledge, values and work organization) for purposes of constructing a social boundary” (p. 782). The selected characteristics change depending on what is most advantageous in the context. He describes John Tyndall’s efforts in Victorian England to distinguish science from religion and from engineering (seen at the time as a non-theoretical pursuit). Tyndall chose his words and arguments carefully to create the impression of maximum difference between science and either religion or engineering; both were threats to his efforts to win science the public recognition and support he (and others, of course) felt it deserved.

In another example, Sarah Parry illustrates how stem cell scientists engage in boundary work in public forums: in some instances they “scientify” their work to reduce sentimental reactions from the public (e.g., carefully defining blastocysts as masses of cells to defuse the perception of embryos as possessing fully human characteristics), and in other instances they express solidarity with the audience’s ethical concerns regarding adult cloning. They work hard in their speech to maintain cloning as a political and ethical issue – and one that does not involve them: they are not cloners. They set a clear ingroup of respectable stem cell scientists and a clear outgroup of renegade, unethical scientists.

In both of these examples, boundary work helped the scientists involved protect their work from an external threat. (To be clear, I’m not proposing that these strategies were negative or deceitful on the part of the scientists; they may not even have been conscious, but they had the desired effect.)

What’s the threat here? Why must the public be cast as so unknowledgeable? This is not to say that public scientific literacy isn’t an issue to be addressed, but casting everyone who is not a science scholar as an uneducated public misrepresents that issue. And why must scientists be separated from those who communicate science? To me, one of the most promising aspects of science blogging is exactly that: scientists and graduate students communicating directly about their work. Why engage in boundary work that separates the two?

I might be naive, but I can’t imagine that the findings in Analytical Chemistry are being flagrantly misrepresented by unscrupulous bloggers bent on spreading misinformation. I would imagine that this is not the threat – so what is? Hank Campbell at Science 2.0 offers some possible explanations in the opening of his post, suggesting that blogs are a threat to the traditional monetized publishing industry. But in the world of academic journals, I don’t think it’s the editors who make the money, so I’m not entirely convinced that that’s it.

Is the threat just the possibility of scientists beginning to communicate their work directly to the public?


Gieryn, T.F. (1983). Boundary-work and the demarcation of science from non-science: Strains and interests in professional ideologies of scientists. American Sociological Review, 48, 781-795.

Parry, S. (2009). Stem cell scientists’ discursive strategies for cognitive authority. Science as Culture, 18, 89-114.