Chapter 9 What happens if you don’t have a hypothesis?

So far, we have concentrated on thesis chapters or manuscripts that have hypotheses to test. I have emphasised the need to focus every section of your writing on the hypothesis: framing and motivating it (in the Introduction), explaining how you collected and tested the variables (in the Materials and Methods), presenting the outcomes of your tests (in the Results), and responding to the hypothesis and placing it in context (in the Discussion). But what, I hear you cry, if you don’t have a hypothesis to test?

There are lots of reasons why manuscripts don’t have hypotheses, and the chances are that in a PhD thesis (for example) you will have at least one chapter where there isn’t a hypothesis tested. So, given all the emphasis above, what can be done?

9.1 Central problematic

If your research is trying to solve a particular problem, the outcome may be a new methodology or perspective rather than a tested hypothesis. This work is still suitable for a journal manuscript (and there are journals dedicated specifically to new methodologies). Instead of introducing your hypothesis, you simply introduce the problem that your methodology addresses. A new methodology is usually best presented as filling an existing gap, so your Introduction will likely point out what this gap is, provide evidence for why existing methodologies don’t fill it, and outline the variables in your novel approach.

Similarly, if your research tries to answer a question rather than test a hypothesis, you can take exactly the same approach to introduce your question.

Hence, whatever your central problem is, you still need to identify it and put it at the heart of your manuscript so that all of your sections address this point. Of course, there may be more than one problem, and if so, you will need to articulate each one clearly and explain how they relate to each other. If you are unsure how to move forward, the best thing to do is to look to the literature for other examples of what you’re trying to do: someone has very likely done something similar before. Don’t forget to chat about it with your advisor, as they may have ideas about where to look.

9.2 Does it matter that you don’t have a hypothesis?

Having a hypothesis makes your work stronger than having only a question or a prediction. In either case, while your results might help answer your question or confirm your prediction, they won’t test a mechanism (so you cannot explain why you obtained the results that you did), and they may not be falsifiable/refutable. Indeed, Popper would argue that without a falsifiable hypothesis, your methodology isn’t scientific.

To me, it is important for you to know exactly what your study is about before you start writing it up, and ideally before you do the experiments: it is essential for framing the context of your study. The central reason why you did the study will lie at the heart of the manuscript; in the formulaic approach, it is articulated at the end of the Introduction, and every part of the Methods then explains how you approached this central aim. So if you can’t pin down what it is, make sure you discuss it with your advisor.

9.3 Avoid HARKing

Hypothesising After the Results are Known (HARKing) is the practice of creating your hypothesis once the results are in and you have done some preliminary data analyses to see what is significant (Forstmeier, Wagenmakers & Parker, 2017). HARKing has become prevalent in science because of publication bias at journals (which preferentially accept papers that can statistically reject the null hypothesis), especially those with higher impact factors. This inflates the rate of Type I errors: you become more likely to accept the alternative hypothesis when it is not correct. To remain transparent, you should seriously consider preregistering your hypotheses and analyses.
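The Type I error inflation behind HARKing can be illustrated with a little arithmetic (this sketch and its numbers are my illustration, not from the chapter): if a researcher runs 20 independent tests on data where the null hypothesis is true, and then presents whichever test "worked" as if it were the a-priori hypothesis, the chance of reporting at least one false positive is far higher than the nominal 5%.

```python
# Why HARKing inflates Type I error: under a true null, each test has a
# 5% false-positive rate, but cherry-picking the "significant" result from
# many exploratory tests raises the chance of reporting a false positive.
alpha = 0.05   # per-test significance threshold
n_tests = 20   # number of exploratory tests (illustrative assumption)

# Probability that at least one of the independent null tests is "significant"
p_at_least_one = 1 - (1 - alpha) ** n_tests

print(f"Per-test false-positive rate: {alpha:.2f}")
print(f"Chance of >=1 spurious 'significant' result in "
      f"{n_tests} null tests: {p_at_least_one:.2f}")  # ~0.64
```

In other words, under these assumptions the HARKed "hypothesis" would be confirmed by chance alone roughly 64% of the time, which is why preregistration (committing to hypotheses and analyses before seeing the data) matters.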

References

Forstmeier W, Wagenmakers E-J, Parker TH. 2017. Detecting and avoiding likely false-positive findings – a practical guide. Biological Reviews 92:1941–1968. DOI: https://doi.org/10.1111/brv.12315.