
Philosophy of Scientific Malpractice

Hanne Andersen
Published/Copyright: September 29, 2021

Abstract

This paper presents current work in philosophy of science in practice that focuses on practices that are detrimental to the production of scientific knowledge. The paper argues that philosophy of scientific malpractice provides both an epistemological complement to research ethics in understanding scientific misconduct and questionable research practices, and a new approach to how training in responsible conduct of research can be implemented.

1 Introducing Philosophy of Science in Practice

Over the last two decades, a new movement called “philosophy of science in practice” has developed within philosophy of science.[1] The double aim of this philosophy of science movement can be described through a distinction introduced by British philosopher of science John Dupré at the first conference for the Society for Philosophy of Science in Practice (SPSP; https://www.philosophy-science-practice.org/) in 2007. Here, Dupré described philosophy of science in practice as, on the one hand, a philosophy of science that engages in depth with the goals, tools, and social structures of the people and communities that produce science, and that examines how these contribute to the production of knowledge (philosophy of science-in-practice), and, on the other hand, a philosophy of science that interacts with practicing scientists, science educators, policy makers, and the general public in order to address philosophical problems in the sciences, as well as questions related to science education and the role and functioning of science in society (philosophy-of-science in practice).[2]

In both of these forms, philosophy of science in practice has opened several new avenues of research for philosophy of science. Philosophy of science-in-practice extends philosophical analyses of how particular practices are conducive to the production of scientific knowledge to areas such as the funding or local organization of research. In this mode, philosophy of science in practice investigates questions such as why funding agencies tend to have a preference for research applications that frame their research description as hypothesis-testing (Haufe 2013; O’Malley et al. 2009), or which material and social conditions scientists need to consider when building a new research community (Ankeny and Leonelli 2016; Leonelli and Ankeny 2015). As philosophy-of-science in practice, philosophical reflections are brought into the sciences with the aim of improving scientific practices and the way in which they contribute to the production of knowledge. Often, there is a strong connection between the two modes of philosophy of science in practice. This implies that research from philosophy of science in practice may be published in philosophy as well as in science journals (see, for example, the dual outlet in the references above). It also implies that many philosophers of science in practice engage in communities such as the Consortium for Socially Relevant Philosophy of/in Science and Engineering (SRPoiSE; http://srpoise.org/) or the Public Philosophy Network (https://www.publicphilosophynetwork.net/), and engage in science policy, in science education (see, for example, the topical collection on how to teach philosophy of science to science students forthcoming in the European Journal for Philosophy of Science), or in various types of research administration (for example, the author of this paper on scientific malpractice also serves on the Danish Committee on Scientific Misconduct). Similarly, because philosophy of science in practice builds on detailed studies of how the practices of science unfold in concrete cases, many philosophers of science also engage in integrated history and philosophy of science (&HPS; http://integratedhps.org/), in empirical philosophy of science (Wagenknecht, Nersessian, and Andersen 2015), or in social studies of science and STS.

This paper briefly presents and outlines one such new area of research that offers a strong connection between philosophy of science-in-practice and philosophy-of-science in practice, namely the study of practices that are detrimental rather than conducive to the production of scientific knowledge: what I shall here call “philosophy of scientific malpractice.”

In presenting philosophy of scientific malpractice as a new area of research, I shall argue that in examining how scientists’ actions and practices during science-in-the-making can be detrimental to the production of knowledge, philosophy of scientific malpractice provides an important epistemological complement to research ethics in understanding the phenomena usually referred to as scientific misconduct and questionable research practices. In this sense, philosophy of scientific malpractice is philosophy of science-in-practice. Further, I shall argue that in investigating how practices that are detrimental to the production of knowledge can be identified and remedied during science-in-the-making, philosophy of scientific malpractice can provide a new approach to how training in responsible conduct of research can be implemented. In this sense, philosophy of scientific malpractice is also philosophy-of-science in practice.

The development of such a new philosophy of scientific malpractice is still a work in progress. Hence, this article will differ from traditional philosophical publications in presenting the rough outlines of a research program rather than the details of a single, focused argument. In the following, I will attempt to sketch out a map of the philosophical landscape within which philosophy of scientific malpractice is situated, refer to other publications as a way of presenting this research program’s initial results, and finally describe how results from the research program can be fed back into scientific practice as a reform of the formalized training in research integrity and responsible conduct of research that is often offered to aspiring scientists.

2 Why is Scientific Malpractice an Important Topic for Philosophy?

Scientific misconduct and questionable research practices are phenomena that undermine the trustworthiness of science. Traditionally, scientific misconduct and questionable research practices have been perceived predominantly as questions about whether scientists report truthfully about their research, and hence as topics primarily of interest for research ethics. However, as I shall explain in more detail in the following, the actions of scientists who are ignorant, negligent, or careless may also undermine the trustworthiness of science, and it is the aim of philosophy of scientific malpractice to examine in what ways such ignorance, negligence, or carelessness is detrimental to the production of scientific knowledge. In this sense, philosophy of scientific malpractice provides an epistemic complement to the current understanding of scientific misconduct and questionable research practices.

Over the last decades there has been an increased focus, in public discourse as well as in science policy, on scientific misconduct and questionable research practices, and on the consequences they have within the scientific community itself as well as for citizens, companies, and public agencies that rely on scientific knowledge in their decision-making. Since the 1970s, numerous news reports and popular books, especially from the US, have described a wealth of spectacular cases of scientists forging data, lying about credentials, or stealing the work of others (see, e.g., Hixson (1976) and Broad and Wade (1983) for some of the first overviews). In many of these cases, there seemed to have been little attempt by the scientific community to stop this behavior. In some cases, investigations seemed to have progressed remarkably slowly. In others, perpetrators were allowed to go unpunished and take up new positions at other institutions where they could continue along the same lines. Sometimes, whistle-blowers were silenced rather than supported. The US was the first country to attempt to improve the situation by introducing regulations that stipulated how institutions should handle allegations of misconduct, and, gradually, more and more countries have implemented similar regulations (see, e.g., Steneck (1994, 1999) for accounts of the US history, Stainthorpe (2007) for a brief review of regulations in the EU, or Apel (2009) for detailed comparisons of regulations in selected countries).

Most regulations have focused primarily on scientific misconduct, usually defined as plagiarism and as fabrication or falsification of data (Price 1994). However, many surveys indicate that while scientists only rarely engage in outright misconduct in the form of fabrication, falsification, and plagiarism, a wide range of questionable research practices are much more widespread. One frequently cited US study found that 0.3% of the scientists who replied to the survey had engaged in the falsification of data, 1.4% had used the ideas of others without obtaining permission or giving due credit, while 6% had failed to present data that contradicted their own research, and 12.5% had overlooked others’ use of flawed data or questionable interpretations of data (Martinson et al. 2005). Another study from the Netherlands showed that around half of the respondents reported engaging frequently in practices that can be classified as questionable. Among the most frequently reported of these practices were insufficient reporting of the flaws and limitations of presented studies as well as insufficient attention to equipment, skills, or expertise (Gopalakrishna et al. 2021). Similarly, a meta-analysis has estimated that while 2% admitted having “fabricated, falsified or modified data or results at least once” (Fanelli 2009, e5738), more than 33% admitted having engaged in various forms of questionable research practices. When asked about the behavior of colleagues, the numbers rose: 14% reported that they knew of colleagues who had falsified data, while 72% reported knowing of colleagues engaging in questionable research practices (Fanelli 2009). Given that questionable research practices therefore seem to be much more widespread than outright misconduct, it is not surprising that, in interview studies, scientists primarily express concern about gray-zone behaviors (De Vries et al. 2006).

The high prevalence of gray-zone behaviors indicates that the scientific community needs better tools for the effective prevention of questionable research practices and for their early detection when they occur. Admittedly, more and more institutions have implemented dedicated training in research integrity and responsible conduct of research, especially for young scholars. However, there is still little evidence that these initiatives are having much effect (Resnik 2014). In this situation, philosophy of scientific malpractice can provide new and detailed analyses of how questionable practices develop during science-in-the-making. Thus, by focusing on the process of science-in-the-making rather than on the final products, philosophy of scientific malpractice can provide a deeper understanding of when and why questionable research practices are employed, and use this knowledge to provide new tools for prevention at an earlier stage of the research process.

3 Philosophy of Scientific Malpractice as Philosophy of Science-in-Practice

In examining the epistemic dimensions of how malpractice in the form of ignorance, negligence, and carelessness can undermine the trustworthiness of science, philosophy of scientific malpractice starts from analyses of what trust in science requires, and then examines various forms of noncompliance with these conditions.

In an influential analysis of trust in scientists and in the knowledge that they produce, Hardwig (1985, 1991) has argued that this trust requires not only that scientists report truthfully about their research; it also requires that individual scientists have good reasons for the scientific claims that they present, that they are knowledgeable about what constitutes good reasons within the domain of their expertise and have kept themselves up to date with those reasons, that they have performed their research carefully and thoroughly, and that they are capable of epistemic self-assessment and are aware of the extent and the limits of their knowledge, its reliability, and its applicability to the question at hand. Hence, one set of questions for philosophy of scientific malpractice to examine is how individual scientists may fail to meet these conditions, not only, as has been the traditional focus of research ethics, by being untruthful, but also by being ignorant of (some of) the scientific knowledge, skills, and competences that are constitutive of good reasoning within the area of research they are dealing with, or by being negligent in not making adequate use of the knowledge, skills, and competences that they possess when carrying out their research. The ignorance or negligence of the individual scientist is therefore one part of philosophy of scientific malpractice that I will briefly outline as a research theme in the sections below.

Trust in science also requires that science, as a social system, is capable of identifying and correcting erroneous claims, whether they are the result of honest mistakes or of fraud or sloppiness.[3] Hence, internally in the scientific community, scientists need to be able to assess not only whether other scientists are truthful in their reporting but also whether they are knowledgeable and work carefully and thoroughly. For this reason, another set of questions for philosophy of scientific malpractice to examine is how scientists assess each other’s moral and epistemic character, what they see as warning signs that trust in a peer is unjustified, and how they react to these warning signs. The collective act of mutually calibrating trust is therefore another part of philosophy of scientific malpractice that I will briefly outline as a research theme in the sections below.

In the final section of the paper, after I have introduced two particular research themes within the larger research program of philosophy of scientific malpractice, I shall show how these two areas of philosophy of scientific malpractice can be used to reform instruction in responsible conduct of research.

4 Malpractice as Ignorance or Negligence

As described above, one set of issues for philosophy of scientific malpractice to examine is how trust in science can be challenged by scientists’ ignorance or negligence. That scientists need to perform their research thoroughly and conscientiously may sound like a truism. However, an increasing number of retractions (Steen et al. 2013) and a growing reproducibility crisis (Baker 2016) indicate that things may be more complicated than that. Similarly, that scientists should be experts within their areas of research may also sound like a truism. However, cases like that of the political scientist Bjørn Lomborg and his book on environmental and sustainability science, The Skeptical Environmentalist, show that, again, things may be more complicated than that.

In this latter case, Lomborg had first argued in a series of feature articles in a major Danish newspaper that the standard litany of a growing environmental crisis was wrong. The feature articles were first transformed into a monograph in Danish (Lomborg 1998), and later into a monograph in English, published by Cambridge University Press (Lomborg 2001). A year later, complaints against Lomborg were lodged with the Danish Committees for Scientific Misconduct. In their verdict, the Committees found, on the one hand, that the monograph was systematically biased. On the other hand, the Committees also stressed that it dealt with an extraordinarily wide range of scientific topics in which Lomborg did not have any special scientific expertise, and that it therefore could not be established that Lomborg had committed scientific misconduct by creating this bias intentionally or through gross negligence. This verdict was controversial. Many Danish researchers signed petitions that either supported or criticized the Committees and the way they worked. Some found that Lomborg was ignorant of environmental and sustainability science, and that he did not understand the limitations of his own work. Others found that Lomborg’s critics were ignorant of economics, and that they did not understand Lomborg’s overall agenda. Some pointed to details in which Lomborg overlooked conceptual or theoretical constraints and argued that, taken together, the many flaws invalidated his conclusions. Others argued that no overall “smoking gun” had been provided showing that Lomborg’s argument was wrong. Some found that Lomborg had been highly selective in the material he included in his analyses and that this made his conclusions heavily biased. Others found it legitimate for researchers to emphasize research that primarily supports their own conclusions and noted that their critics could do just the same. Hence, regardless of one’s inclinations towards Lomborg and his work, this case shows some of the many issues that can arise in the gray zone where scientists transgress their usual area of expertise, deviate from usual standards of care, or both at the same time.

In analyzing ignorance, philosophy of scientific malpractice can build on existing analyses of expertise as a foil. Here, philosophers, sociologists, and historians of science have argued for a distinction between contributory and interactional expertise (Collins and Evans 2008; Goddiksen 2014; Plaisance and Kennedy 2014), and examined the nature and function of the pidgin and creole languages that scientists develop in order to communicate in the trading zone between different areas of expertise (Andersen 2010; Galison 1999; Gorman 2010), as well as the way in which epistemic authority is negotiated in this trading zone (Andersen 2013). Similarly, philosophers of science have examined how experts may engage in scientific imperialism by imposing conventions or procedures from one field onto others, or by seeking to explain phenomena that are traditionally seen as belonging to another field’s domain (Mäki, Walsh, and Pinto 2017). Complementing this work, philosophy of scientific malpractice examines issues such as the epistemic consequences of miscommunication or lack of communication between related domains, of the failure to recognize serious limitations to the research produced, or of the failure to establish agreement on epistemic authority (see, e.g., Andersen (2009) as well as work in progress presented as “Failed Interdisciplinarity as a Foil” by Andersen at the workshop “Inter- und Transdisziplinarität: Neue Herausforderungen für die Wissenschaft,” Greifswald 2017).

In analyzing negligence, philosophy of scientific malpractice can build on existing analyses of scientific methods as a foil. Obviously, this literature is vast, and the following will therefore serve only as a sample. Some philosophers of science have argued that science is characterized by a particularly high systematicity in its descriptions, explanations, predictions, defense of knowledge claims, critical discourse, epistemic connectedness, ideal of completeness, and generation and representation of knowledge (Hoyningen-Huene 2013). Complementing this work, philosophy of scientific malpractice investigates the gray zones, for example scientific publications that do not display the degree of systematicity expected of scientific research because they rely on anecdotal rather than systematically collected evidence, relate only weakly to the existing literature in the domain, or do not consider important limitations to the way the research has been carried out. On this basis, philosophy of scientific malpractice can inform discussions about the varying quality of the research literature, or the distinction between, for example, research publication and public debate (see, e.g., Andersen forthcoming). Similarly, philosophers of science have argued for a taxonomy of material, observational, conceptual, and discursive error types that can support error analysis during science-in-the-making (Allchin 2001). Complementing this work, philosophy of scientific malpractice examines the distinction between honest errors and errors that result from lack of care or from failures to perceive risks (work in progress presented as “Disentangling Discourse on Scientific Misconduct” by Andersen, in collaboration with Allchin, at SPSP 2016).

5 Detecting Malpractice

Another set of issues for philosophy of scientific malpractice to examine is how scientists mutually calibrate their trust in each other, and the warning signs that may lead to distrust. That scientists should react with some skepticism to reports that sound too good to be true may seem obvious. However, reports from scientists who have encountered misconduct (Medawar 1976), as well as analyses of major misconduct cases (Andersen 2014a, 2014b), indicate that when researchers engage in fraudulent practices, even of the most glaring kind, this often goes unrecognized by their peers.

In analyzing how distrust can arise, philosophy of scientific malpractice can build on existing work on trust in science as a foil. Here, philosophers of science have argued that experts calibrate trust in other experts either directly, when both work within the same area of expertise, by comparing research outputs, or, when working in different areas, indirectly by drawing on the expert judgement of others in a chain that starts with direct calibration (Kitcher 1992, 1993). As the areas of expertise become more distant, they may rely on such indicators as argumentative ability, agreement with other experts in the field, appraisal by meta-experts, or past track record (Goldman 2001). Complementing this work, philosophy of scientific malpractice examines what happens when the preconditions for trust are violated, and how distrust in a peer’s truthfulness (moral character) or competence (epistemic character) can emerge and develop. As I have argued at length elsewhere (Andersen 2014a, 2014b), important parts of the processes through which trust among colleagues is calibrated have previously remained opaque. First, scientists can only assess the epistemic character of peers relative to their own expertise. On the one hand, the epistemic asymmetry between junior and senior scientists is an integral and explicit part of normal practices: senior scientists are expected, first, in their role as supervisors, to monitor the work of their junior peers and to intervene if it deviates from accepted standards, and, second, in their roles as supervisors and mentors, to act as references for their junior colleagues and in this role to carefully assess their command of the field and the thoroughness of their work.[4] On the other hand, mutual assessments of epistemic character between equal peers are usually much more intricate. While positive assessments form part of standard practices in the form of prizes and rewards, negative assessments are rarely addressed explicitly, let alone officially. Instead, they may transform into rumors and whisperings in the corridors. Similarly intricate is the assessment of peers’ moral character. Traditionally, it has been tacitly expected that scientists report truthfully, or, at least, training in research integrity as ethical decision-making will remind aspiring scientists of their duty to do so. However, in practice, scientists do notice, in their collaboration with peers, such indicators of moral character as honesty, loyalty, cooperativeness, fairness, consideration for others, and so on (Frost-Arnold 2013), but these considerations are usually not made explicit and instead remain opaque. By contributing to making the mutual calibration of trust among colleagues more transparent, philosophy of scientific malpractice can clarify in which way failures to calibrate trust in peers can be seen as epistemic failures, on a par with insufficient calibration and control of experimental set-ups (Allchin 1999).

6 Philosophy of Scientific Malpractice as Philosophy-of-Science in Practice

As declared in the opening of this article, philosophy of scientific malpractice is both philosophy of science-in-practice and philosophy-of-science in practice. The sections above have described how philosophy of scientific malpractice works as philosophy of science-in-practice by focusing on practices that are conducive to the production of scientific knowledge, and on how deviations from these practices can instead be detrimental to the production of scientific knowledge. But philosophy of scientific malpractice can also work as philosophy-of-science in practice by providing the theoretical framework for the practical training of scientists in research integrity and responsible conduct of research.

Traditionally, dedicated training in research integrity and responsible conduct of research tends to focus on scientific misconduct and on the moral obligation to report truthfully. Often, such training is focused on ethical decision-making (see, e.g., Shamoo and Resnik 2009) and on existing rules and regulations (see, e.g., Jensen et al. 2020). However, since misconduct is a relatively rare phenomenon, while gray-zone questionable research practices are much more widespread, it will be useful for integrity training to focus more on the gray-zone behaviors’ origin in lack of expertise, lack of care, or failure to perceive risks. Further, at many institutions, training in research integrity and responsible conduct of research has been implemented in response to some of the rare but spectacular cases of misconduct characterized by outright fraud. However, it is questionable whether such cases could have been prevented simply by informing the perpetrator, for example through integrity training, that lying and stealing are wrong. In contrast, training in how to detect and react to warning signs that something is “strange,” “off,” or “too good to be true” may be much more effective in protecting scientists from becoming associated with a major research scandal due to the wrongdoing of a colleague.

By presenting philosophical theories on expertise and epistemic authority, and by engaging aspiring scientists in questions such as

– what is the literature to which you are primarily contributing,

– how strong is your background within this literature,

– who are currently the main authorities in this literature, and where do you stand in comparison,

– are your contributions surprising, given the standard literature,

– or how do you explain deviations from received views,

philosophy of scientific malpractice can prompt aspiring scientists to reflect critically on expertise in research and help them articulate when they lack expertise themselves or detect ignorance in others.

Similarly, by presenting philosophical theories on methods and error analysis, and by engaging aspiring scientists in questions such as

– by which principles have you collected your evidence,

– are there any deviations from these principles,

– what are the principles by which you have covered the relevant literature,

– have you covered views opposed to your own,

– have you covered potential criticism,

– what are the limitations of your methods, and are these limitations made transparent to others,

– and how thoroughly have you examined the potential for material, observational, conceptual, and discursive errors,

philosophy of scientific malpractice can prompt aspiring scientists to reflect critically on standards of care and help them articulate when they encounter practices that deviate from these standards.

Finally, by presenting philosophical theories on trust and epistemic dependence, and by engaging aspiring scientists in questions such as

– how well do you know your collaborators,

– what do you know about their level of expertise and about their previous track record,

– how cooperative, considerate, and fair are you and your individual collaborators in your collaboration,

– how readily do you and your collaborators ask each other when there is something you do not understand,

– and how do you react when presented with a result that is surprising, or with a result that fits your expectations perfectly,

philosophy of scientific malpractice can prompt aspiring scientists to reflect critically on epistemic dependence and help them articulate how they calibrate trust in others and what they see as warning signs that can call this trust into question.

In this way, philosophy of scientific malpractice provides exactly the kind of philosophical background that is needed for changing training in the responsible conduct of research, first, to focus much more on gray-zone behaviors, and, second, to focus more on when – and when not – to trust a peer.[5]


Corresponding author: Hanne Andersen, Department of Science Education, University of Copenhagen, Copenhagen, Denmark

References

Allchin, D. 1999. “Do We See through a Social Microscope? Credibility as a Vicarious Selector.” Philosophy of Science 66 (Supplement): S287–98. https://doi.org/10.1086/392732.

Allchin, D. 2001. “Error Types.” Perspectives on Science 9: 38–59. https://doi.org/10.1162/10636140152947786.

Allchin, D. 2016. “Correcting the ‘Self-Correcting’ Mythos of Science.” Filosofia e Historia da Biologia 10 (1): 19–35.

Andersen, H. 2009. “Unexpected Discoveries, Graded Structures, and the Difference Between Acceptance and Neglect.” In Models of Discovery and Creativity, edited by J. Meheus and T. Nickles, 1–27. Dordrecht: Springer. https://doi.org/10.1007/978-90-481-3421-2_1.

Andersen, H. 2010. “Joint Acceptance and Scientific Change: A Case Study.” Episteme 7 (3): 248–65. https://doi.org/10.3366/epi.2010.0206.

Andersen, H. 2013. “Bridging Disciplines.” In Classification and Evolution in Biology, edited by H. Fangerau, H. Geisler, T. Halling, and W. Martin, 33–44. Stuttgart: Franz Steiner Verlag.

Andersen, H. 2014a. “Co-author Responsibility: Distinguishing Between the Moral and Epistemic Aspects of Trust.” EMBO Reports 15: 915–8. https://doi.org/10.15252/embr.201439016.

Andersen, H. 2014b. “Epistemic Dependence in Contemporary Science: Practices and Malpractices.” In Science After the Practice Turn in the Philosophy, History and Sociology of Science, edited by L. Soler, 161–73. New York: Routledge.

Andersen, H. 2014c. Responsible Conduct of Research: Collaboration. RePoss #30, Research Publications in Science Studies, Centre for Science Studies, Aarhus University. https://css.au.dk/fileadmin/reposs/reposs-030.pdf.

Andersen, H. Forthcoming. “Hvad er en (god) forskningspublikation?” [What Is a (Good) Research Publication?] Tidsskrift for Professionsstudier. https://doi.org/10.7146/tfp.v17i33.129161.

Ankeny, R. A., and S. Leonelli. 2016. “Repertoires: A Post-Kuhnian Perspective on Scientific Change and Collaborative Research.” Studies in History and Philosophy of Science Part A 60: 18–28. https://doi.org/10.1016/j.shpsa.2016.08.003.

Apel, L.-M. 2009. Verfahren und Institutionen zum Umgang mit Fällen wissenschaftlichen Fehlverhaltens [Procedures and Institutions for Dealing with Cases of Scientific Misconduct]. Baden-Baden: Nomos Verlag. https://doi.org/10.5771/9783845220871.

Baker, M. 2016. “1,500 Scientists Lift the Lid on Reproducibility.” Nature 533: 452–4. https://doi.org/10.1038/533452a.

Broad, W., and N. Wade. 1983. Betrayers of the Truth. Oxford: Oxford University Press.

Collins, H., and R. Evans. 2008. Rethinking Expertise. Chicago: University of Chicago Press. https://doi.org/10.7208/chicago/9780226113623.001.0001.

De Vries, R., M. Anderson, and B. C. Martinson. 2006. “Normal Misbehavior: Scientists Talk About the Ethics of Research.” Journal of Empirical Research on Human Research Ethics 1 (1): 43–50. https://doi.org/10.1525/jer.2006.1.1.43.

Fanelli, D. 2009. “How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-analysis of Survey Data.” PLoS ONE 4 (5): 1–11. https://doi.org/10.1371/journal.pone.0005738.

Frost-Arnold, K. 2013. “Moral Trust & Scientific Collaboration.” Studies in History and Philosophy of Science 44: 301–10. https://doi.org/10.1016/j.shpsa.2013.04.002.

Galison, P. 1999. “Trading Zone: Coordinating Action and Belief.” In The Science Studies Reader, 137–60.

Goddiksen, M. P. 2014. “Clarifying Interactional and Contributory Expertise.” Studies in History and Philosophy of Science Part A 47: 111–7. https://doi.org/10.1016/j.shpsa.2014.06.001.

Goldman, A. I. 2001. “Experts: Which Ones Should You Trust?” Philosophy and Phenomenological Research 63 (1): 85–110. https://doi.org/10.1111/j.1933-1592.2001.tb00093.x.

Gopalakrishna, G., G. ter Riet, M. J. Cruyff, G. Vink, I. Stoop, J. Wicherts, and L. Bouter. 2021. Prevalence of Questionable Research Practices, Research Misconduct and their Potential Explanatory Factors: A Survey Among Academic Researchers in The Netherlands. MetaArXiv Preprints. https://doi.org/10.31222/osf.io/vk9yt. Also available at https://osf.io/preprints/metaarxiv/vk9yt/.

Gorman, M. E. 2010. Trading Zones and Interactional Expertise. Cambridge: MIT Press. https://doi.org/10.7551/mitpress/9780262014724.001.0001.

Hardwig, J. 1985. “Epistemic Dependence.” Journal of Philosophy 82 (7): 335–49. https://doi.org/10.2307/2026523.

Hardwig, J. 1991. “The Role of Trust in Knowledge.” Journal of Philosophy 88 (12): 693–708. https://doi.org/10.2307/2027007.

Haufe, C. 2013. “Why Do Funding Agencies Favor Hypothesis Testing?” Studies in History and Philosophy of Science Part A 44 (3): 363–74. https://doi.org/10.1016/j.shpsa.2013.05.002.

Hixson, J. S. 1976. The Patchwork Mouse. Garden City: Anchor Press.

Hoyningen-Huene, P. 2013. Systematicity: The Nature of Science. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199985050.001.0001.

Jensen, K., M. Machman, L. Whiteley, and P. Sandøe. 2020. RCR – A Danish Textbook in Responsible Conduct of Research. https://static-curis.ku.dk/portal/files/247440631/RCR_4_ed_2020_update.pdf (accessed August 10, 2021).

Kitcher, P. 1992. “Authority, Deference, and the Role of Individual Reason.” In The Social Dimensions of Science, edited by E. McMullin, 244–71. South Bend: University of Notre Dame Press.

Kitcher, P. 1993. The Advancement of Science. Oxford: Oxford University Press.

Leonelli, S., and R. A. Ankeny. 2015. “Repertoires: How to Transform a Project into a Research Community.” BioScience 65 (7): 701–8. https://doi.org/10.1093/biosci/biv061.

Lomborg, B. 1998. Verdens Sande Tilstand [The True State of the World]. Viby J: Centrum.

Lomborg, B. 2001. The Skeptical Environmentalist: Measuring the Real State of the World. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9781139626378.

Mäki, U., A. Walsh, and M. F. Pinto. 2017. Scientific Imperialism: Exploring the Boundaries of Interdisciplinarity. Milton Park: Routledge. https://doi.org/10.4324/9781315163673.

Martinson, B. C., M. S. Anderson, and R. De Vries. 2005. “Scientists Behaving Badly.” Nature 435: 737–8. https://doi.org/10.1038/435737a.

Medawar, P. B. 1976. “The Strange Case of the Spotted Mouse.” The New York Review.

O’Malley, M., K. C. Elliott, C. Haufe, and R. M. Burian. 2009. “Philosophies of Funding.” Cell 138: 611–5. https://doi.org/10.1016/j.cell.2009.08.008.

Osbeck, L. M., N. J. Nersessian, K. R. Malone, and W. C. Newstetter. 2011. Science as Psychology: Sense-Making and Identity in Science Practice. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511933936.

Plaisance, K. S., and E. B. Kennedy. 2014. “Interactional Expertise: A Pluralistic Approach.” Studies in History and Philosophy of Science Part A 47: 60–8. https://doi.org/10.1016/j.shpsa.2014.07.001.

Price, A. R. 1994. “Definitions and Boundaries of Research Misconduct: Perspectives from a Federal Government Viewpoint.” The Journal of Higher Education 65 (3): 286–97. https://doi.org/10.1080/00221546.1994.11778501.

Resnik, D. B. 2014. “Editorial: Does RCR Education Make Students More Ethical, and is This the Right Question to Ask?” Accountability in Research 21 (4): 211–7. https://doi.org/10.1080/08989621.2013.848800.

Shamoo, A. E., and D. B. Resnik. 2009. Responsible Conduct of Research. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195368246.001.0001.

Stainthorpe, A. C. 2007. Integrity in Research – A Rationale for Community Action. Available at http://www.eurosfaire.prd.fr/7pc/doc/1192441146_integrity_in_research_ec_expert_group_final_report_en1.pdf.

Steen, R. G., A. Casadevall, and F. C. Fang. 2013. “Why Has the Number of Scientific Retractions Increased?” PLoS ONE 8 (7): e68397. https://doi.org/10.1371/journal.pone.0068397.

Steneck, N. H. 1994. “Research Universities and Scientific Misconduct: History, Policies, and the Future.” The Journal of Higher Education 65 (3): 310–30. https://doi.org/10.2307/2943970.

Steneck, N. H. 1999. “Confronting Misconduct in Science in the 1980s and 1990s: What Has and Has Not Been Accomplished?” Science and Engineering Ethics 5: 161–76. https://doi.org/10.1007/s11948-999-0005-x.

Teaching Philosophy of Science to Students from Other Disciplines, Topical Collection. 2021. European Journal for Philosophy of Science. https://doi.org/10.1007/s13194-021-00393.

Wagenknecht, S., N. J. Nersessian, and H. Andersen. 2015. Empirical Philosophy of Science: Introducing Qualitative Methods into Philosophy of Science. London: Springer. https://doi.org/10.1007/978-3-319-18600-9.

Published Online: 2021-09-29
Published in Print: 2021-11-25

© 2021 Walter de Gruyter GmbH, Berlin/Boston
