http://www.nature.com/nature/journal/v435/n7043/full/435737a.html
Nature 435, 737-738 (9 June 2005) | doi: 10.1038/435737a
Scientists behaving badly
Brian C. Martinson, Melissa S. Anderson and Raymond de Vries
Brian C. Martinson is at the HealthPartners Research Foundation, 8100 34th Avenue South, PO Box 1524, Mailstop 21111R, Minneapolis, Minnesota 55440-1524, USA.
Melissa S. Anderson is at the University of Minnesota, Educational Policy and Administration, 330 Wulling Hall, Minneapolis, Minnesota 55455, USA.
Raymond de Vries is at the University of Minnesota, Center for Bioethics, N504 Boynton, Minneapolis, Minnesota 55455, USA.
Abstract
To protect the integrity of science, we must look beyond falsification, fabrication and plagiarism, to a wider range of questionable research practices, argue Brian C. Martinson, Melissa S. Anderson and Raymond de Vries.
Serious misbehaviour in research is important for many reasons, not least because it damages the reputation of, and undermines public support for, science. Historically, professionals and the public have focused on headline-grabbing cases of scientific misconduct, but we believe that researchers can no longer afford to ignore a wider range of questionable behaviour that threatens the integrity of science.
We surveyed several thousand early- and mid-career scientists, who are based in the United States and funded by the National Institutes of Health (NIH), and asked them to report their own behaviours. Our findings reveal a range of questionable practices that are striking in their breadth and prevalence (Table 1). This is the first time such behaviours have been analysed quantitatively, so we cannot know whether the current situation has always been the case or whether the challenges of doing science today create new stresses. Nevertheless, our evidence suggests that mundane 'regular' misbehaviours present greater threats to the scientific enterprise than those caused by high-profile misconduct cases such as fraud.
Table 1: Percentage of scientists who say that they engaged in the behaviour listed within the previous three years (n=3,247)
As recently as December 2000, the US Office of Science and Technology Policy (OSTP) defined research misconduct as "fabrication, falsification, or plagiarism (FFP) in proposing, performing, or reviewing research, or in reporting research results"1. In 2002, the Federation of American Societies for Experimental Biology and the Association of American Medical Colleges objected to a proposal by the US Office of Research Integrity (ORI) to conduct a survey that would collect empirical evidence of behaviours that can undermine research integrity, but which fall outside the OSTP's narrow definition of misconduct2, 3. We believe that a valuable opportunity was wasted as a result.
A proper understanding of misbehaviour requires that attention be given to the negative aspects of the research environment. The modern scientist faces intense competition, and is further burdened by difficult, sometimes unreasonable, regulatory, social, and managerial demands4. This mix of pressures creates many possibilities for the compromise of scientific integrity that extend well beyond FFP.
We are not the first to call attention to these issues: debates have been ongoing since questionable research practices and scientific integrity were linked in a 1992 report by the National Academy of Sciences5. But we are the first to provide empirical evidence, based on self-reports from large and representative samples of US scientists, that documents the occurrence of a broad range of misbehaviours.
The few empirical studies that have explored misbehaviour among scientists rely on confirmed cases of misconduct6 or on scientists' perceptions of colleagues' behaviour7, 8, 9, or have used small, non-representative samples of respondents8, 9. Although inconclusive, previous estimates of the prevalence of FFP range from 1% to 2%. Our 2002 survey was based on large, random samples of scientists drawn from two databases that are maintained by the NIH Office of Extramural Research. The mid-career sample of 3,600 scientists received their first research-project (R01) grant between 1999 and 2001. The early-career sample of 4,160 NIH-supported postdoctoral trainees received either individual (F32) or institutional (T32) postdoctoral training during 2000 or 2001.
Getting data
To assure anonymity, the survey responses were never linked to respondents' identities. Of the 3,600 surveys mailed to mid-career scientists, 3,409 were deliverable and 1,768 yielded usable data, giving a 52% response rate. Of the 4,160 surveys sent to early-career scientists, 3,475 were deliverable, yielding 1,479 usable responses, a response rate of 43%.
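As a minimal illustration of how these rates are derived, the sketch below (in Python) recomputes them from the deliverable and usable counts quoted above; note that the denominator is the number of deliverable surveys, not the number mailed.

```python
# Response rates computed against deliverable surveys, not surveys mailed.
mid_deliverable, mid_usable = 3409, 1768        # mid-career counts from the text
early_deliverable, early_usable = 3475, 1479    # early-career counts from the text

print(f"mid-career:   {mid_usable / mid_deliverable:.0%}")      # ~52%
print(f"early-career: {early_usable / early_deliverable:.0%}")  # ~43%
```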
Our response rates are comparable to those of other mail-based surveys of professional populations (such as a 54% mean response rate from physicians10). But our approach certainly leaves room for potential non-response bias; misbehaving scientists may have been less likely than others to respond to our survey, perhaps for fear of discovery and potential sanction. This, combined with the fact that there is probably some under-reporting of misbehaviours among respondents, would suggest that our estimates of misbehaviour are conservative.
Our survey was carried out independently of, but at around the same time as, the ORI proposal. The specific behaviours we chose to examine arose from six focus-group discussions held with 51 scientists from several top-tier research universities, who told us which misbehaviours were of greatest concern to them. The scientists expressed concern about a broad range of specific, potentially sanctionable behaviours that may affect the integrity of research.
To affirm the serious nature of the behaviours included in the survey, and to separate potentially sanctionable offences from less serious behaviours, we consulted six compliance officers at five major research universities and one independent research organization in the United States. We asked these compliance officers to assess the likelihood that each behaviour, if discovered, would get a scientist into trouble at the institutional or federal level. The first ten behaviours listed in Table 1 were seen as the most serious: all the officers judged them as likely to be sanctionable, and at least four of the six officers judged them as very likely to be sanctionable. Among the other behaviours are several that may best be classified as carelessness (behaviours 14 to 16).
Admitting to misconduct
Survey respondents were asked to report in each case whether or not ('yes' or 'no') they themselves had engaged in the specified behaviour during the past three years. Table 1 reports the percentages of respondents who said they had engaged in each behaviour. For six of the behaviours, reported frequencies are under 2%, including falsification (behaviour 1) and plagiarism (behaviour 5). This finding is consistent with previous estimates derived from less robust evidence about misconduct. However, the frequencies for the remaining behaviours are 5% or above; most exceed 10%. Overall, 33% of the respondents said they had engaged in at least one of the top ten behaviours during the previous three years. Among mid-career respondents, this proportion was 38%; in the early-career group, it was 28%. This is a significant difference (χ²=36.34, d.f.=1, P<0.001). For each behaviour where mid- and early-career scientists' percentages differ significantly, the former are higher than the latter.
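This career-stage comparison can be reproduced approximately from the figures reported above. The sketch below (Python with SciPy) rebuilds a 2x2 contingency table from the group sizes (1,768 and 1,479 usable responses) and the rounded percentages (38% and 28%); because the cell counts are inferred from rounded proportions rather than taken from the raw survey data, it recovers the test statistic only approximately.

```python
# Approximate reconstruction of the mid- vs early-career chi-square test.
# Cell counts are derived from the reported group sizes and rounded
# percentages, so this is an illustrative check, not a recomputation
# from the raw data.
from scipy.stats import chi2_contingency

mid_n, early_n = 1768, 1479            # usable responses per group
mid_yes = round(0.38 * mid_n)          # ~672 admitted to >=1 of the top ten behaviours
early_yes = round(0.28 * early_n)      # ~414

table = [[mid_yes, mid_n - mid_yes],
         [early_yes, early_n - early_yes]]

chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, d.f. = {dof}, P = {p:.2g}")  # roughly chi2 ~ 36, d.f. = 1, P < 0.001
```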
Although we can only speculate about the observed sub-group differences, several explanations are plausible. For example, opportunities to misbehave, and perceptions of the likelihood or consequences of being caught, may change during a scientist's career. Or it may be that these groups received their education, training, and work experience in eras that had different behavioural standards. The mid-career respondents are, on average, nine years older than their early-career counterparts (44 compared with 35 years) and have held doctoral degrees for nine years longer.
Another possible explanation for sub-group differences is the under-reporting of misbehaviours by those in relatively tenuous, early-career positions. Over half (51%) of the mid-career respondents have positions at the associate-professor level or above, whereas 58% of our early-career sample are post-doctoral fellows.
Addressing the problem
Our findings suggest that US scientists engage in a range of behaviours extending far beyond FFP that can damage the integrity of science. Attempts to foster integrity that focus only on FFP therefore miss a great deal. We assume that our reliance on self-reports of behaviour is likely to lead to under-reporting, and therefore to conservative estimates, despite assurances of anonymity. With as many as 33% of our survey respondents admitting to one or more of the top ten behaviours, the scientific community can no longer remain complacent about such misbehaviour.
Early approaches to scientific misconduct focused on 'bad apples'. Consequently, analyses of misbehaviour were limited to discussions of individual traits and local (laboratory and departmental) contexts as the most likely determinants. The 1992 academy report5 helped shift attention from individuals with 'bad traits' towards general scientific integrity and the 'responsible conduct of research'.
Over the past decade, government agencies and professional associations interested in promoting integrity have focused on responsible conduct in research5, 11, 12. However, these efforts still prioritize the immediate laboratory and departmental contexts of scientists' work, and are typically confined to 'fixing' the behaviour of individuals.
Missing from current analyses of scientific integrity is a consideration of the wider research environment, including institutional and systemic structures. A 2002 report from the Institute of Medicine directed attention to the environments in which scientists work, and recommended an institutional (primarily university-level) approach to promoting responsible research13. The institute's report also noted the potential importance of the broader scientific environment, including regulatory and funding agencies, and the peer-review system, in fostering or hindering integrity, but remained mostly silent on this issue owing to a dearth of evidence.
In our view, certain features of the research working environment may have unexpected and potentially detrimental effects on the ethical dimensions of scientists' work. In particular, we are concerned about scientists' perceptions of the functioning of resource distribution processes. These processes are embodied in professional societies, through peer-review systems and other features of the funding and publishing environment, and through markets for research positions, graduate students, journal pages and grants. In ongoing analyses, not yet published, we find significant associations between scientific misbehaviour and perceptions of inequities in the resource distribution processes in science. We believe that acknowledging the existence of such perceptions and recognizing that they may negatively affect scientists' behaviours will help in the search for new ways to promote integrity in science.
Little attention has so far been paid to the role of the broader research environment in compromising scientific integrity. It is now time for the scientific community to consider what aspects of this environment are most salient to research integrity, which aspects are most amenable to change, and what changes are likely to be the most fruitful in ensuring integrity in science.
Acknowledgments
This research was supported by the Research on Research Integrity Program, an ORI/NIH collaboration, with financial support from the National Institute of Nursing Research and an NIH Mentored Research Scientist Award to R.d.V. We thank the three anonymous reviewers, Nick N. Steneck and M. Sheetz for their insightful input and responses to earlier drafts.
References
1. OSTP Federal Policy on Research Misconduct http://www.ostp.gov/html/001207_3.html (2005).
2. Teitelbaum, S. L. Nature 420, 739–740 (2002).
3. Korn, D. Nature 420, 739 (2002).
4. Freeman, R., Weinstein, E., Marincola, E., Rosenbaum, J. & Solomon, F. Science 294, 2293–2294 (2001).
5. Panel on Scientific Responsibility and the Conduct of Research Responsible Science: Ensuring the Integrity of the Research Process Vol. I (Natl Acad., Washington DC, 1992).
6. Steneck, N. H. ORI Introduction to the Responsible Conduct of Research (US Government Printing Office, Washington DC, 2004).
7. Swazey, J. P., Anderson, M. S. & Louis, K. S. Am. Sci. 81, 542–553 (1993).
8. Ranstam, J. et al. Control. Clin. Trials 21, 415–427 (2000).
9. Geggie, D. J. Med. Ethics 27, 344–346 (2001).
10. Asch, D. A., Jedrziewski, M. K. & Christakis, N. A. J. Clin. Epidemiol. 50, 1129–1136 (1997).
11. Committee on Science, Engineering, and Public Policy On Being a Scientist: Responsible Conduct in Research (Natl Acad., Washington DC, 1995).
12. Panel on Scientific Responsibility and the Conduct of Research Responsible Science: Ensuring the Integrity of the Research Process Vol. II (Natl Acad., Washington DC, 1993).
13. Institute of Medicine and National Research Council Committee on Assessing Integrity in Research Environments Integrity in Scientific Research: Creating an Environment that Promotes Responsible Conduct (Natl Acad., Washington DC, 2002).