Study released on Dutch researcher’s “culture of fraud”

In a previous Math Drudge blog post on the growth of scientific fraud, we described the case of Dutch social psychologist Diederik Stapel, whom an initial investigation had accused of serious and serial fraud in his research.

Now a more detailed report on the affair has been released. As summarized in a November 29, 2012 article in Science, the report not only paints a picture of widespread fraud, but asserts more generally that “from the bottom to the top there was a general neglect of fundamental scientific standards and methodological requirements.”

Some of the specific findings of the report include:

  1. The panel found clear-cut fraud in 55 of Stapel’s 137 papers, as well as in 10 Ph.D. theses written by students he had supervised.
  2. Fraud was suspected in another 10 papers, due to irregularities found in statistics and data.
  3. Stapel often designed studies with others, but then insisted on collecting the data himself.
  4. Stapel had a reputation as a “Golden Boy,” almost beyond criticism, and did not tolerate critical questions about his data.
  5. Panelists found instances of “verification” bias, for example repeating an experiment until it produced the desired outcome, or discarding results that seemed discordant (see the short simulation after this list).
  6. Panelists found instances where the research procedures described in published papers were different from those actually used in the research.
  7. Statisticians on the panel found “countless flaws” in statistical methods. Stapel’s co-authors in most cases were unfamiliar with elementary statistics.
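
A minimal sketch of the effect in finding 5, assuming nothing beyond standard numpy and scipy (this is our illustration, not code from the report): with no true effect, a single experiment crosses p < 0.05 about 5% of the time, but an experimenter who quietly reruns the study up to 20 times and keeps only a “significant” outcome will report a false positive roughly 64% of the time (1 − 0.95^20 ≈ 0.64).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def null_experiment(n=30):
    """Two groups drawn from the SAME distribution: any apparent effect is noise."""
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(0.0, 1.0, n)
    return stats.ttest_ind(a, b).pvalue

# Honest procedure: run the experiment once and report whatever results.
honest = np.array([null_experiment() for _ in range(10_000)])
print("one-shot false-positive rate:", (honest < 0.05).mean())      # about 0.05

# Verification bias: rerun up to 20 times, report only a "success".
def repeat_until_significant(max_tries=20):
    return any(null_experiment() < 0.05 for _ in range(max_tries))

biased = [repeat_until_significant() for _ in range(1_000)]
print("repeat-until-significant rate:", sum(biased) / len(biased))  # about 0.64
```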

The report was particularly critical of the broader social psychology community’s failure to notice (or perhaps care about) rather obvious mathematical and statistical irregularities:

It is almost inconceivable that co-authors who analysed the data intensively, or reviewers of the international “leading journals”, who are deemed to be experts in their field, could have failed to see that a reported experiment would have been almost infeasible in practice, did not notice the reporting of impossible statistical results, … and did not spot values identical to many decimal places in entire series of means in the published tables.
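
Many such “impossible statistical results” are detectable by elementary arithmetic. For instance, the mean of n integer-valued responses (such as a Likert item), multiplied by n, must lie within rounding error of an integer. The sketch below is our own illustration of this kind of consistency check, not a tool the panel used:

```python
def mean_is_possible(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Could a mean, reported to `decimals` places, arise from n integer responses?

    If the true mean is S/n for an integer sum S, then the reported (rounded)
    mean times n can differ from an integer by at most n * half_ulp.
    """
    half_ulp = 0.5 * 10 ** (-decimals)   # rounding slack in the reported mean
    total = reported_mean * n            # implied sum of the raw responses
    return abs(total - round(total)) <= n * half_ulp + 1e-9

# With n = 17 participants, a reported mean of 3.18 is attainable
# (a raw sum of 54 gives 54/17 = 3.176..., which rounds to 3.18),
# but no integer sum yields a mean that rounds to 3.19.
print(mean_is_possible(3.18, 17))  # True
print(mean_is_possible(3.19, 17))  # False
```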

The world of scientific research already has a very difficult public relations task. In spite of some improvements in scientific education, large fractions of the public, even in technologically advanced nations such as the United States, remain deeply skeptical of such basic scientific notions as old-earth geology, evolution and global warming. Scientific research budgets have already been cut in some first-world nations, and may be cut further as nations struggle—rightly or wrongly—to stem the flow of red ink.

Thus if many more of these cases of outright fraud come to light (and there are indications that this may be merely the tip of an iceberg), we may see a major “tipping point” in the public’s perception of, and trust in, science. This makes the job of the responsible science journalist all the more crucial. If we do not succeed in keeping scientific research relatively clean, and perceived to be so, we will all be losers.

[Note: Readers may also be interested in this Conversation article, which expresses concern about the level of methodological rigor in the social sciences generally, and in psychology in particular.]

[Added 29 Apr 2013:] A New York Times article explores the Stapel case in detail, and discusses its implications for modern social science.
