Science has a problem. Here is how you can help.
[I have received numerous requests from people who want to share Appendix C of my book. The content is copyrighted, of course, but my publisher kindly agreed that I can make it publicly available. You may use this text for non-commercial purposes, so long as you include the copyright disclaimer (see bottom of post).]
Both bottom-up and top-down measures are necessary to improve the current situation. This is an interdisciplinary problem whose solution requires input from the sociology of science, philosophy, psychology, and – most importantly – the practicing scientists themselves. Details differ by research area. One size does not fit all. Here is what you can do to help.
As a scientist:
Learn about social and cognitive biases: Become aware of what they are and under which circumstances they are likely to occur. Tell your colleagues.
Prevent social and cognitive biases: If you organize conferences, encourage speakers to list not only motivations but also shortcomings. Don’t forget to discuss “known problems.” Invite researchers from competing programs. If you review papers, make sure open questions are adequately mentioned and discussed. Flag marketing as scientifically inadequate. Don’t discount research just because it’s not presented excitingly enough or because few people work on it.
Beware the influence of media and social networks: What you read and what your friends talk about affects your interests. Be careful what you let into your head. If you consider a topic for future research, factor in that you might have been influenced by how often you have heard others speak about it positively.
Build a culture of criticism: Ignoring bad ideas doesn’t make them go away; they will still eat up funding. Read other researchers’ work and make your criticism publicly available. Don’t chide colleagues for criticizing others or think of them as unproductive or aggressive. Killing ideas is a necessary part of science. Think of it as community service.
Say no: If a policy affects your objectivity, for example because it makes continued funding dependent on the popularity of your research results, point out that it interferes with good scientific conduct and should be amended. If your university praises its productivity by paper counts and you feel that this promotes quantity over quality, say that you disapprove of such statements.
As a higher ed administrator, science policy maker, journal editor, or representative of a funding body:
Do your own thing: Don’t export decisions to others. Don’t judge scientists by how many grants they won or how popular their research is – these are judgements by others who themselves relied on others. Make up your own mind, carry responsibility. If you must use measures, create your own. Better still, ask scientists to come up with their own measures.
Use clear guidelines: If you have to rely on external reviewers, formulate recommendations for how to counteract biases to the extent possible. Reviewers should not base their judgment on the popularity of a research area or the person. If a reviewer’s continued funding depends on the well-being of a certain research area, they have a conflict of interest and should not review papers in their own area. That will be a problem, because this conflict of interest is presently everywhere. See the next three points for ways to alleviate it.
Make commitments: You have to get over the idea that all science can be done by postdocs on 2-year fellowships. Tenure was institutionalized for a reason and that reason is still valid. If that means fewer people, then so be it. You can either produce loads of papers that nobody will care about 10 years from now, or you can be the seed of ideas that will still be talked about in 1000 years. Take your pick. Short-term funding means short-term thinking.
Encourage a change of field: Scientists have a natural tendency to stick to what they know already. If the promise of a research area declines, they need a way to get out, otherwise you’ll end up investing money into dying fields. Therefore, offer reeducation support: one- to two-year grants that allow scientists to learn the basics of a new field and to establish contacts. During that period they should not be expected to produce papers or give conference talks.
Hire full-time reviewers: Create safe positions for scientists specialized in providing objective reviews in certain fields. These reviewers should not themselves work in the field and have no personal incentive to take sides. Try to reach agreements with other institutions on the number of such positions.
Support the publication of criticism and negative results: Criticism of other people’s work or negative results are presently underappreciated. But these contributions are absolutely essential for the scientific method to work. Find ways to encourage the publication of such communication, for example by dedicated special issues.
Offer courses on social and cognitive biases: This should be mandatory for anybody who works in academic research. We are part of communities and we have to learn about the associated pitfalls. Sit together with people from the social sciences, psychology, and the philosophy of science, and come up with proposals for lectures on the topic.
Allow a division of labor by specialization in task: Nobody is good at everything, so don’t expect scientists to be. Some are good reviewers, some are good mentors, some are good leaders, and some are skilled at science communication. Allow them to shine in what they’re good at and make best use of it, but don’t require the person who spends their evenings in student Q&A to also bring in loads of grant money. Offer them specific titles, degrees, or honors.
As a science writer or member of the public, ask questions:
You’re used to asking about conflicts of interest due to funding from industry. But you should also ask about conflicts of interest due to short-term grants or employment. Does the scientists’ future funding depend on producing the results they just told you about?
Likewise, you should ask if the scientists’ chance of continuing their research depends on their work being popular among their colleagues. Does their present position offer adequate protection from peer pressure?
And finally, just as you are used to scrutinizing statistics, you should also ask whether the scientists have taken measures to address their cognitive biases. Have they provided a balanced account of pros and cons, or have they just advertised their own research?
You will find that for almost all research in the foundations of physics the answer to at least one of these questions is no. This means you can’t trust these scientists’ conclusions. Sad but true.
Reprinted from Lost In Math by Sabine Hossenfelder. Copyright © 2018. Available from Basic Books, an imprint of Perseus Books, a division of PBG Publishing, LLC, a subsidiary of Hachette Book Group, Inc.