Naomi Oreskes: Feminist science is better science

Naomi Oreskes. Photograph by Kayana Szymczak.

Interview

By Andrew Needham

American public life is rife with questions of scientific judgment. Does red meat really cause cancers and heart disease, or are such fears overblown? How can scientists tell that climate change is occurring and what the effects of global warming might be? And, perhaps most poignantly, why should lay people trust scientists, given the histories of scientific support for eugenics; of medical doctors doubting the experiences of female patients; and of conflict among scientists over the safety of atomic power, the effects of cigarette smoke, and numerous other matters?

Perhaps no writer is better suited to address these questions than Naomi Oreskes, a trained geologist and historian of science at Harvard University. Her 2010 book Merchants of Doubt, cowritten with Erik M. Conway, introduced readers to the means by which a small group of scientists, allied with industry and the political right, generated doubt about “inconvenient truths” from acid rain to global warming. That book, and the 2014 documentary film it inspired, revealed public conflicts over scientific “truth” to be the result of finely crafted public-relations campaigns. “Doubt is our product,” a tobacco industry memo infamously declared in 1969. Merchants of Doubt showed just how that product continues to be manufactured.

In her newest book, Why Trust Science?, Oreskes takes on an even thornier problem: the manufacture and maintenance of trust. Based on her Tanner Lectures on Human Values, delivered at Princeton University, the book explores the pursuit of scientific knowledge and consensus across the 20th and 21st centuries, the changing conception of science from an individual to a social pursuit, and the reasons for and responses to science going awry. It convincingly demonstrates that “we are not powerless to judge contemporary scientific claims” and offers a ringing defense of the social and intellectual diversity of scientific communities as a key measure of trustworthiness.

Oreskes recently sat down with Andrew Needham, a professor of history at New York University, to discuss Why Trust Science?


Andrew Needham (AN): Let’s start off by talking about how you came to write this new book.

Naomi Oreskes (NO): In 2010, Erik Conway and I published the book Merchants of Doubt, which looks at what I call serial contrarians: people who systematically sowed doubt about a set of environmental issues, including acid rain, the ozone hole, and climate change.

Erik and I were trying to understand why someone would do that, particularly because the people we were looking at were scientists. They didn’t appear to be shills for ExxonMobil. It seemed that there was some other, more complicated story going on.

One of our early discoveries was that these people had been affiliated with the tobacco industry, and that they had challenged the scientific evidence linking tobacco to cancer, emphysema, cardiovascular disease, et cetera, et cetera. In our book, we demonstrated the role of political ideology. We showed how the “merchants of doubt” were committed to what we would now call neoliberalism (that word wasn’t really being bandied about back in 2005, when we started the book) but what we then called free market fundamentalism. It’s the idea that government intervention in the marketplace is bad, free markets are good, and therefore any science that would imply the need for government regulation of the marketplace should be heavily scrutinized, if not rejected altogether.

AN: Right.

NO: When we wrote that book, one of the things that we took for granted—one of the things we didn’t think we needed to explain—was why we should trust the science behind these issues. We more or less took it as a presumption that if there was a body of well-documented, peer-reviewed scientific work; reports from the National Research Council; reports from eminent scientific organizations like the Royal Society—that if there was a robust body of knowledge of that sort, then we, as the authors of the book, didn’t really have to question or doubt or defend that science. We could take it for granted that if the National Academy of Sciences had reviewed the literature and done a consensus report, then that science was probably robust. So, the question for us was: Why would anyone doubt that?

After the book came out and the politics of the situation evolved, it became increasingly clear that we really couldn’t take that for granted. It began to be clear that there were a lot of people in the United States, and in some other parts of the world as well, who were not shills for ExxonMobil or the meat industry, people who did not necessarily own stock in Chevron or Texaco or Saudi Aramco or Peabody Coal, and yet who were, for whatever reasons, somewhat skeptical or suspicious of science.

This came home to me in a very forceful way when I went on the lecture circuit after Merchants of Doubt came out. The implicit message of my talk was: climate change isn’t a fad, this isn’t something invented by Al Gore. This is long-standing, well-developed science. It was developed by scientists who for the most part were not environmentalists. They were ordinary, nonpartisan scientists who had stumbled upon something that we would now call an environmental problem. But that wasn’t really how they thought about it at the time.

The point of all this, as I said, was to say to people: “Look, this is not just a fad, this isn’t just the latest craze.” Because often people would say that they thought it was. But it was long-standing, well-established, hard-won scientific knowledge.

After one of these lectures, a man in the audience stood up very aggressively, puffed out his chest, put his hands on his hips, and said: “Well, that’s all very well and good, but why should we trust the science?” I remember this moment very clearly because I recall thinking, “Yeah, that’s a good question.”

One of the things that you learn when you’ve been in the classroom for a long time is that, as we always say to our students, “There’s no such thing as a stupid question.” We always want to be sympathetic and empathetic, and even if our students clearly haven’t done the homework, we try to take on board the questions.

AN: Of course.

NO: I realized that I needed to do the same thing in public lectures. I needed to assume that my audience was asking questions in good faith, and I think most of the time they were. Not always. But most of the questions really were more or less in good faith.

Even when I get a hostile or belligerent interlocutor, as this man was, I still think: even if the particular individual posing the question is hostile and belligerent, there might be other people in the audience who are not belligerent but who still might be wondering the same thing. They might be thinking, “Yeah, that’s a legitimate question.”

I went home and I started to think about it. What would an informed, thoughtful, nonjudgmental response to that question look like? Taking it as a legitimate question: not assuming that science deserves our trust, but really asking the question. Can we make a case, an intellectually robust case, in answer to that question? That’s what the new book tries to do.

AN: It’s very clear in Merchants of Doubt how doubt is produced: who pays to have it produced, as well as the vectors by which doubt enters into the conversation. Conversely, can you talk about what the vectors of trust look like, and how scientific trust is produced?

NO: That’s a great question. It’s not actually the question I answer in the book, except implicitly. There’s a huge social-science literature about trust and how bonds of trust are generated, and I’m not an expert in that literature.

One thing that we do know is that it’s extremely easy to be distrustful of something you don’t understand. Certainly, this has been my experience when I give public lectures. Sometimes when people are skeptical or hostile it’s because they feel that science is a black box, that scientists are arrogant, that scientists don’t explain things very patiently or very well. They’re being asked to accept a lot of things on faith.

If you think about it, that’s a legitimate criticism. Because often if there’s some pronouncement about a scientific finding, whether it’s from the National Academy of Sciences or a press release from your university, the declaration is always about the result. It’s almost never about how that result was obtained.

AN: Part of the book seems to be an extended critique of this lack of public awareness of how scientific consensus is produced. You also suggest that this is partly due to a failure to make those processes visible.

What are the strategies for opening the black box and making this not only apparent but comprehensible to a broader public?

NO: Here’s a recent example. A set of studies was published in 2019 claiming that there was little or no good evidence that eating large amounts of red and processed meat is bad for your health, and therefore recommending that Americans just keep on eating the same way they do. This recommendation was published in a peer-reviewed journal, the Annals of Internal Medicine. But it is almost certainly incorrect.

We have a huge body of evidence telling us that, in general, you will almost certainly be healthier, overall, if you eat a diet with little, if any, red meat. And almost certainly you will be healthier if you avoid processed meat. The evidence for this is very extensive. Now, a set of papers, done by one group of researchers, is claiming that that’s not true, that you should just keep eating red meat, that it’s all fine. And journalists ran with it. It was all over the media, with many articles suggesting that conventional wisdom had been overturned.

The media coverage was misleading in several ways. Journalists should, right from the get-go, have thought, “That doesn’t strike me as highly plausible.” Even if you didn’t know anything about the authors, even if you didn’t know anything about the methodology, there should have been a pretty high bar before journalists ran with that story. Because it’s just not that plausible.

Moreover, it is extremely rare for one paper, or a set of papers by a group of associated researchers, to overturn stabilized knowledge in science. That’s not to say that there’s never a case in which one study could really make you rethink a lot of other stuff. That has happened in the history of science. But not very often.

When one looks closely—and I have looked closely at many cases where science has been overturned—one generally finds that it’s not just the one person, it’s not just the one study that does the overturning. Therefore, we should be suspicious of any one study that claims to overturn well-established science. In the red meat case, it turned out, lo and behold, that several of the authors did have ties to the food industry. Surprise, surprise. Journalists should have looked more closely before they ran with this as a story of science being overturned.

AN: This idea of the one study and the heroic individual scientist connects with a key moment in the book—when you talk about the philosophy of science moving from a focus on the search for an idealized scientific method toward the idea that scientific consensus is socially produced.

NO: If one looks at climate change denial—or any other kind of contrarian narratives that are promoted in popular culture—one almost always finds that the contrarians are drawing on an individualistic trope. They are drawing on the notion of the heroic individual, the one individual who can overturn a scientific consensus. They use the heroic individual trope to dismiss the consensus: it doesn’t matter if hundreds or even thousands of scientists agree, because this one man—me, or this guy I’m promoting—has a different view. He is right and they are all wrong. You see a lot of that in climate change denial.

This is one reason why retired MIT meteorologist Richard Lindzen gets so much attention. Why does anybody even listen to him? He’s one person, he’s retired, he’s old, he’s crabby, and, in my experience, he’s mean. He once wrote an op-ed attacking me in which he didn’t even get my name right. However, he did publish some important papers on cloud feedbacks back in the 1980s. His concerns were taken seriously at the time, but he was shown to be wrong. Those papers have been refuted now by three decades of scientific evidence. He’s been demonstrated to have been wrong on that issue. So why would anybody even pay attention to this guy?

We know why the Cato Institute pays attention to him. It’s because he’s telling a story that they want to hear, one that suits their political and ideological orientation and their economic interests. But why is he effective? Why are they able to use him, and why do journalists, for example, continue to interview him?

It’s because of the power of the individualistic vision of science. If you didn’t think that an individual could overturn 100 years of scientific work, you wouldn’t care what Richard Lindzen thought. You would dismiss him as the crank that in my opinion he is. But he gets enormous amounts of attention. The media can’t wait to interview him, and his arguments come up again and again and again. He’s a zombie scientist that you can’t ever get away from.

I think that’s where the damage comes from. The individualistic trope makes many of us think that we cannot afford to ignore Richard Lindzen. It makes us believe this one person could actually overturn decades of established science. I don’t want to say that it’s never the case that an individual is important in science. Clearly, that would be silly. There have been some individuals—Einstein, Darwin, Newton—who have changed science. There’s three of them, in the entire history of science. And even these men did not change science on their own.

Darwin, in particular, while often cast as an isolated genius, laboring in solitude in Kent, was in fact part of a large network of scientists working on the question of evolution and its mechanisms. It’s also a very gendered ideal, because the “hero” in Western mythology and narrative is almost always male.

Even many scientists buy into the idea of the heroic individual scientist, although they know that it’s false. When you talk to scientists in very competitive areas, particularly people in cancer research, they’ll sometimes say things like, “I can’t take the weekend off because there’s some guy who’s 20 minutes behind me.” But if that’s actually true, then it can’t possibly be the case that what they’re doing is uniquely important. Because if they don’t do it, this other guy will do it 20 minutes from now.

AN: One of the really fascinating counterpoints to this heroic individual science that you offer in the book is the role of feminist scholars. Not just their role in challenging the heroic male individual notion of scientific knowledge, but also in acting to reconstitute the possibility of scientific rationality.

Can you explain what makes the efforts of these feminist scholars so compelling when put into this broader narrative of the development of the philosophy of science over the long 20th century? To what extent do you think that the arguments of feminist scholars have been embraced by the scientific community?

NO: I don’t think they’ve been broadly embraced. That’s one of the goals of this book, to make more people aware of these arguments and their power.

One of the things that was interesting about writing this book, one of the things that was fun about it, was that I had a chance to go back and revisit a lot of things that I had read over the course of my career—things I’d read in graduate school, things I’d read as an assistant professor. Having always been a feminist, I knew about feminist philosophy of science; it was something I taught and something I sometimes used in my work. But it wasn’t what I did centrally.

In rereading this material, I had a key realization. When some of the original feminist philosophy of science was being done, back in the 1980s, alongside feminist work in the social studies of science, many scientists took offense. They saw any social analysis of science as an attack on the objectivity of science. And, to be fair, some people in science studies were anti-science, or at least anti what they saw as the excessive authority of the natural sciences. They were anti the intellectual hegemony of the natural sciences in academia—and, in some cases, in our culture at large.

However, feminists such as Helen Longino always argued that their work was not anti-science. It was, rather, about making science stronger by making it more genuinely objective, by making it less subject to bias. It was in rereading Longino’s argument that I had the realization that if scientists hadn’t been so busy taking offense, they actually could have used feminist perspectives to their advantage.

AN: Right.

NO: What feminist philosophers like Sandra Harding, Helen Longino, and Evelyn Fox Keller have argued (particularly in those arguments developed in the ’80s and ’90s, even though it’s still an ongoing topic of conversation) is that it was a mistake to think of objectivity as something that’s vested in the character of an individual.

We typically talk about a person being objective, but these women argued that this is not the best way to think about it. We all come to a discussion of any topic with our prejudices, our beliefs, our preferences—and that’s just human. There’s no way that that could ever not be the case. If we ever met a person who didn’t have values—this is what I discuss at the end of the book—who didn’t have preferences, we would think they were some kind of automaton at best, and more likely a sociopath.

Therefore, the goal in science shouldn’t be to expunge individual preferences, since that cannot be achieved. Instead, the goal should be to have a sufficiently diverse community such that somebody could flag those preferences and point them out. If you say something that I think is sexist, I could say, “Hey, Andrew, you know what? That felt a little sexist to me.” Or if I notice whole areas of data that scientists are ignoring (philosopher Lisa Lloyd has done important work on this), then saying, “How come you guys are ignoring this information?” gives me the opportunity to point out blind spots.

This will work most effectively if a community is diverse, not just in terms of gender, but in different ways: economically, racially, ethnically, philosophically, demographically. The more diverse the community is, the more likely someone in the community is able to say, “Wait a minute, I want to question that assumption.”

These feminist philosophers were particularly interested in the way women scientists were pointing out blind spots, gaps, and disappearances that, once identified, helped to make the science better. Male scientists were missing important things. While, in theory, men might have pointed out what was being missed, in practice, it was generally women who did. Many of the things that these women scientists pointed out are now accepted and taken for granted. The argument is that science is stronger if the community is diverse. And recent history supports that.

Sandra Harding talks about what she calls strong objectivity: locating objectivity in the individual yields only weak objectivity, because it relies on individual discipline. If you can rely on a diverse community of people to point out blind spots, that’s a better, stronger strategy.

AN: That’s really fascinating.

NO: This way of thinking hasn’t taken hold in the scientific community as much as it should. Implicitly, there is some kind of acknowledgement. For example, we are seeing that, in large-scale scientific assessments, including by the Intergovernmental Panel on Climate Change, scientists make an effort to have diverse author groups. I think most academic scientists who support diversity do so on moral grounds. They think it is the right thing to do, which, of course it is, but it’s also the right move, epistemically.

Science thrives when it is open to anyone who has the talent for it, and the taste for the hard work involved. And society thrives when our institutions are seen to be fair. But my argument is that the case for diversity is epistemic as well as moral. I’ve never heard a scientist say, “Yeah, it’s really great that the feminists pointed this out because once we understood the epistemic benefits of diversity, we realized that we could do better science.” My goal is that, in the future, scientists will say that.


This article was commissioned by Caitlin Zaloom and originally published by Public Books.