Shock waves in the human sciences! Six more of Brian Wansink’s published papers are being retracted, Cornell University announced September 20, bringing the total to 13, and the professor has resigned in disgrace. It is not just scientific peers who are affected as his flawed methodology is exposed and his papers are withdrawn from journals. Millions of ordinary people have also been influenced by his research on “mindless eating,” and nutritionists and marketers alike have based decisions on his findings. But – what do these retractions mean for the methodology of the sciences? And – why should we seize on this example in Theory of Knowledge?
What kind of “shared knowledge” matters in the sciences?
As head of Cornell’s Food and Brand Lab, Wansink appeared to combine scientific study of environmental factors that affect eating behaviour with skill in communicating with the public. His research attracted extensive popular attention as it seemed to illuminate everyday over-eating, with practical implications for controlling it. He has been an influential figure in both science and media: he has hundreds of scientific studies to his name and has held prestigious positions in US organizations for food and nutrition. He has also appeared in popular magazines, TED talks and The Oprah Winfrey Show.
Personally, I’ve read his book Mindless Eating: Why We Eat More Than We Think from cover to cover, reading my favourite bits aloud to family members. I was delighted with his experiments using endlessly replenishing soup bowls in his restaurant lab: half the participants in the meal had normal bowls, but the other half had bowls that were rigged through tubes under the table to refill as the diners ate. For this zany experiment, he won a comic Ig Nobel Prize in 2007. I found his conclusions fascinating – that people will just keep eating if their bowls don’t empty:
“We found that the participants who were unknowingly eating from self–refilling bowls ate 73% more soup than those eating from normal bowls…. We conclude that the amount of food on a plate or in a bowl provides a visual cue or consumption norm that can influence how much one expects to consume and how much one eventually consumes.”
Science magazine identifies two studies that appear to contribute in a similar way to popular wisdom, but are now retracted:
“Among the papers retracted by The Journal of the American Medical Association on 19 September are one finding that people ate more calories while watching a stimulating action movie than a tame interview show and another concluding that people given bigger bowls at a Super Bowl party served themselves more calories.”
Like many others, I was intrigued by his analyses of the environmental cues that affect eating, or trigger over-eating. In my own efforts at weight control I went right out and bought smaller wine glasses and smaller dinner plates, based on his findings — and I was one among the hordes!
Clearly, Wansink has contributed to the “shared knowledge” about which we speak in TOK – widely, widely shared knowledge claims! But it’s the present retraction of his work that makes him a fine example for Theory of Knowledge, for a critical scrutiny of this central concept of our course.
In TOK we draw an essential distinction:
- Knowledge claims that are “shared” in the media may be widely disseminated, but “shared” in this sense means no more than “familiar to the public” or “popular”. True and false knowledge claims alike can travel widely, and plenty of “buzz” doesn’t mean plenty of credibility!
- Knowledge claims that are “shared” in the sciences, however, are expected to be communicated within a rigorous process of testing, peer review, and replication. What gives scientific knowledge claims their credibility is the careful methodology that generates them and then demands that they be perpetually open to further questioning and revision in the face of new evidence. At least, this is the ideal.
In Wansink’s case, however, the ideal seems to have broken down. The Provost of Cornell University issued the following statement September 20, 2018:
“Consistent with the university’s Academic Misconduct policy, a faculty committee conducted a thorough investigation into Professor Wansink’s research. The committee found that Professor Wansink committed academic misconduct in his research and scholarship, including misreporting of research data, problematic statistical techniques, failure to properly document and preserve research results, and inappropriate authorship. As provided in Cornell policy, these findings were thoroughly reviewed by and upheld by Cornell’s dean of the faculty.”
Professor Wansink has tendered his resignation and will be retiring from Cornell at the end of this academic year. He has been removed from all teaching and research.
Does the re-evaluation of Wansink’s work demonstrate failure in the scientific process? If so, is his individual failure to follow careful scientific procedures the important point for science, or is the real story the shared failure involved in inadequate peer review for so many years?
OR, on the contrary, do retraction and academic disciplining demonstrate the scientific process in action, as correctives to earlier failings? After all, other scientists did pick up on some of his manipulative use of statistics (so-called “p-hacking”) and started to ask questions that ultimately led to the full university investigation.
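For readers unfamiliar with “p-hacking”, a tiny simulation can make the idea concrete. The sketch below is purely illustrative – the crude significance rule and all the numbers are invented, with no connection to Wansink’s actual data – but it shows why running many comparisons on pure noise makes a “significant” result almost inevitable:

```python
import random

random.seed(42)

def null_experiment(n=30):
    """One experiment where the null hypothesis is TRUE: both groups are
    drawn from the same distribution, so any 'significant' difference
    between them is a false positive."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_diff = abs(sum(a) / n - sum(b) / n)
    # Crude significance rule: flag a difference larger than about
    # two standard errors (roughly the conventional 5% threshold)
    se = (2 / n) ** 0.5
    return mean_diff > 2 * se

def hit_rate(trials=1000, comparisons=1):
    """Fraction of trials in which AT LEAST ONE of `comparisons`
    null tests comes up 'significant'."""
    hits = sum(
        any(null_experiment() for _ in range(comparisons))
        for _ in range(trials)
    )
    return hits / trials

print("One comparison per dataset:   ", hit_rate(comparisons=1))
print("Twenty comparisons per dataset:", hit_rate(comparisons=20))
```

With a single comparison, the false-positive rate stays near the nominal 5%; with twenty comparisons per dataset it climbs above 60%. That is why testing many relationships and reporting only the striking ones corrupts the statistics, even when no single step looks dishonest.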
“True” and “justified”: Is a retracted finding necessarily false?
The advantages for TOK of this example of Brian Wansink, however, don’t stop here. After all, the Cornell University announcement of his research failings does not include any comment on whether or not his conclusions are accurate. Wansink himself denies deliberate wrongdoing, and declares that his findings will turn out to be right:
“The university’s accusations, he wrote in a statement, ‘can be debated, and I did so for a year without the success I expected. There was no fraud, no intentional misreporting, no plagiarism, or no misappropriation.’ He added, ‘I believe all of my findings will be either supported, extended, or modified by other research groups.’ ”
Indeed, it’s possible that he could turn out to have reached true conclusions about people’s behaviour around eating and their responses to environmental cues. Some of his conclusions seem to be intuitively and imaginatively persuasive. Myself, I’m not about to go back to using big wine glasses and big plates!
But that’s the thing. The scientific process might start with intuition and imagination (TOK WOK) – a canny guess or an insight into relationships among variables. But it doesn’t stop there. The guesses have to be framed as hypotheses and subjected to testing (WOK sense perception/observation and reasoning). It’s the process of science – the methodology – that disciplines human beings to put aside their prior guesses and forces them to examine what the evidence says. It forces them (we hope), even sometimes with understandable human reluctance, to lay aside conjectures that simply are not supported by the data.
The cognitive sciences tell us so much – so very, very much – about confirmation bias, our tendency to notice whatever supports what we believe already and to screen out whatever contradicts it. Scientists don’t stop being human as they enter their labs, but rely on the demands of a careful methodology to compel them to look at what is really there.
In short, it is the methodology of science that makes its conclusions reliable. Those conclusions may eventually turn out not to be true, and may be overturned by future evidence, forcing revisions. But they are justified. That is, they are supported by evidence and by the whole process of looking for it, finding it, interpreting it, and sharing it. This distinction between TRUTH and JUSTIFICATION is core to Theory of Knowledge.
Brian Wansink’s response to having his research invalidated – having it retracted from peer journals and having his work repudiated by Cornell University – seems to indicate that he doesn’t fully accept the requirements of science. He seems to express a dismissive attitude toward some of the stuffy rigour of the method, as he writes to James Hamblin of The Atlantic:
“You can do research for other academics, or you can do research to solve problems,” [Wansink] wrote to me. “Doing it for academics is more prestigious, but doing it to solve real problems in the real world is more gratifying—enriching, as I said. Having people say, ‘I do something differently because of your research, and it works’ takes away the sting of someone pointing out the degrees of freedom in an F-test were wrong.”
It’s easy to be sympathetic to his expressed feeling that the petty details of an “F-test” (whatever that is!) are unimportant compared with appreciation of the results of his work. But…but…but….
but… if expected scientific procedures have not been followed, or if statistical results have been inappropriately interpreted, then why ever should we accept the conclusions? They may turn out to be right, or they may turn out to be wrong – but without a sound scientific methodology behind them, we have no reason to accept them, no reason at all.
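As it happens, those “degrees of freedom” are not a stuffy technicality. In a one-way ANOVA, the F statistic’s degrees of freedom follow directly from the number of groups and the number of observations – so if the degrees of freedom reported in a paper are wrong, the published analysis cannot match the data behind it. That is exactly the kind of inconsistency that critics spotted in Wansink’s papers. A minimal sketch, with invented numbers that are NOT from any of his studies:

```python
def one_way_anova_F(groups):
    """Compute the one-way ANOVA F statistic and its degrees of freedom.

    The degrees of freedom are determined entirely by the design:
    (number of groups - 1) and (total observations - number of groups).
    """
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Variation BETWEEN group means, weighted by group size
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    # Variation WITHIN each group, around its own mean
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
    )
    df_between = k - 1
    df_within = n - k
    F = (ss_between / df_between) / (ss_within / df_within)
    return F, df_between, df_within

# Invented example: ounces of soup eaten by two hypothetical groups of five
normal_bowls = [9.8, 11.2, 10.5, 9.9, 10.6]
refilling_bowls = [14.1, 15.3, 13.8, 16.0, 14.7]

F, df1, df2 = one_way_anova_F([normal_bowls, refilling_bowls])
print(f"F = {F:.1f} on ({df1}, {df2}) degrees of freedom")
# Two groups of five must yield (1, 8) degrees of freedom
```

Two groups of five observations can only yield (1, 8) degrees of freedom; a paper reporting anything else for that design has, somewhere, misdescribed its own data.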
This is bad news for other food researchers whose own work is built on trusting his, bad news for nutritional guidance that has used his results, and bad news for public trust in the processes and institutions of science.
And what about replication?
Finally, the example of Brian Wansink leads in Theory of Knowledge to a further look at the scientific demand for review and replication of results. In recent years, the human sciences, in particular psychology, have been struggling with a number of problems in these areas, facing charges that most articles published in peer journals do not stand up to replication.
Kiera Butler, writing in Mother Jones, gives a good thumbnail of one of the problems facing peer review:
“To see how Wansink’s work eluded the scientific gatekeepers, it helps to understand how journals decide which studies are worthy of publication. Most people know about the system of peer review, wherein research papers are vetted by the author’s academic peers prior to publication. But before that happens, the studies have to attract the attention of a journal editor. That step is key, according to Brian Nosek, a University of Virginia psychology professor who directs the scientific integrity advocacy group Center for Open Science. ‘Novel, exciting, and sexy results are simply much more likely to get published,’ he says…
‘Wansink is exceptional in that way…His results are unfailingly interesting.'”
For a more extensive treatment of the issues and debate that surround replication, I refer you to a post I did in this blog nearly three years ago in response to findings of the Reproducibility Project of the Open Science Collaboration. This article is particularly useful for TOK teachers in that it frames replication in the terms of the Theory of Knowledge course: “Reliability in psychological science: methodology in crisis?”, Oxford Education Blog. November 16, 2015.
Conclusion: Case Study
What’s bad for the sciences or other areas of knowledge is often very good for TOK. The problems that practitioners face in an area of knowledge — and their own debates about methodology — often provide us with some lively discussions and stories for class. The current news about Brian Wansink gives us a particularly good case study: his studies were interesting and easily grasped for their everyday implications; and the retraction of his work illustrates, arguably, both a (short term) failure and a (long term) success in the processes of the sciences. Moreover, an examination of the process of peer review, publication, and retraction allows us in TOK to examine closely the concepts of “shared knowledge” and “justification”, with a stress on the essential role of methodology. A terrific example altogether. But I wouldn’t be surprised if you, like me, also felt a little sad.
“Brian Wansink, researcher behind 100-calorie snacks, discredited after 13 papers retracted”, The Current, CBC radio. September 21, 2018. https://www.cbc.ca/listen/shows/the-current/segment/15602312
“Cornell Scientist Resigns”, Wochit News, September 2018. https://www.youtube.com/watch?v=p0_9FcYx7z0
Kiera Butler, “This Cornell Food Researcher Has Had 13 Papers Retracted. How Were They Published in the First Place?”, Mother Jones, September 25, 2018. https://www.motherjones.com/food/2018/09/cornell-food-researcher-brian-wansink-13-papers-retracted-how-were-they-published/
Eileen Dombrowski, “Reliability in psychological science: methodology in crisis?”, Oxford Education Blog. November 16, 2015. https://educationblog.oup.com/theory-of-knowledge/reliability-in-psychological-science-methodology-in-crisis
James Hamblin, “A Credibility Crisis in Food Science”, The Atlantic. September 24, 2018. https://www.theatlantic.com/health/archive/2018/09/what-is-food-science/571105/
Stephanie M. Lee, “Cornell University Food Scientist Brian Wansink Just Had Six More Studies Retracted”, BuzzFeed. September 20, 2018. https://www.buzzfeednews.com/article/stephaniemlee/brian-wansink-jama-six-retractions-cornell
Eli Rosenberg and Herman Wong, “This Ivy League food scientist was a media darling. He just submitted his resignation, the school says.” Washington Post, September 20, 2018. https://www.washingtonpost.com/health/2018/09/20/this-ivy-league-food-scientist-was-media-darling-now-his-studies-are-being-retracted/?noredirect=on&utm_term=.d1fc1056e6c0
Kelly Servick, “Cornell nutrition scientist resigns after retractions and research misconduct finding”, Science, September 21, 2018. https://www.sciencemag.org/news/2018/09/cornell-nutrition-scientist-resigns-after-retractions-and-research-misconduct-finding
Brian Wansink, “Bottomless Bowls: Why Visual Cues of Portion Size May Influence Intake”, Food and Brand Lab, Cornell University. December 31, 2012. https://foodpsychology.cornell.edu/research/bottomless-bowls-why-visual-cues-portion-size-may-influence-intake