2.1. Before a policy can be designed, it must be built upon a solid foundation of evidence. This chapter will explore the different types of evidence available to a policy analyst, from the most rigorous to the least, and provide a framework for evaluating their quality and relevance.
2.2. The Hierarchy of Evidence
2.2.1. Not all evidence is created equal. The “Hierarchy of Evidence” is a framework used to rank the validity and reliability of different research methodologies, helping to determine the level of confidence one can have in the findings. While this concept originated in medicine, it’s highly applicable to policy analysis.
2.2.2. At the top of the pyramid are the most rigorous and reliable forms of evidence, typically those with the least risk of bias. At the bottom are the least reliable forms.

2.3. Systematic Reviews and Meta-Analyses
2.3.1. At the very top, a systematic review rigorously synthesises all available research on a topic, while a meta-analysis statistically combines the results of multiple studies to produce a single, more powerful finding. They represent the strongest evidence as they are based on a large body of research and are designed to minimise bias.
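The statistical pooling a meta-analysis performs can be sketched in a few lines. The sketch below uses fixed-effect inverse-variance weighting on invented effect sizes and standard errors (all numbers are hypothetical); real meta-analyses also assess heterogeneity and often use random-effects models:

```python
import math

def pool_fixed_effect(effects, std_errors):
    """Combine study effect sizes using inverse-variance weights
    (fixed-effect meta-analysis): precise studies count for more."""
    weights = [1 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Three hypothetical studies of the same intervention:
effects = [0.30, 0.10, 0.25]      # estimated effect sizes
std_errors = [0.10, 0.05, 0.08]   # their standard errors

pooled, pooled_se = pool_fixed_effect(effects, std_errors)
```

Note that the pooled standard error is smaller than that of any single study, which is what makes the combined finding “more powerful” than its parts.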
2.4. Randomised Controlled Trials (RCTs)
2.4.1. RCTs are often considered the “gold standard” for evaluating the effectiveness of an intervention. Participants are randomly assigned to a “treatment group” (receiving the policy or programme) or a “control group” (receiving no intervention or a placebo). Because assignment is random, the two groups are alike on average in every respect except the intervention, which allows analysts to establish a causal link between the policy and the outcome.
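The logic of random assignment can be illustrated with a toy simulation. Everything below is invented for illustration — the participant count, the outcome model, and the built-in treatment effect of 3 points:

```python
import random

random.seed(42)  # fixed seed so the toy example is reproducible

participants = list(range(200))   # hypothetical participant IDs
random.shuffle(participants)
treatment = set(participants[:100])  # random assignment to treatment

def outcome(pid):
    """Simulated outcome: baseline noise plus a 3-point
    treatment effect (assumed, for illustration only)."""
    base = random.gauss(50, 5)
    return base + (3 if pid in treatment else 0)

outcomes = {pid: outcome(pid) for pid in range(200)}
treat_mean = sum(outcomes[p] for p in treatment) / 100
control_mean = sum(outcomes[p] for p in range(200) if p not in treatment) / 100

# Because assignment was random, this simple difference in means
# is an unbiased estimate of the causal effect.
effect = treat_mean - control_mean
```

The estimated effect will hover around the true value of 3, differing only by sampling noise — which is exactly why randomisation supports causal claims that observational designs cannot.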
2.5. Cohort Studies and Case-Control Studies
2.5.1. These are observational studies. In a cohort study, a group of people is followed over time to see who develops an outcome. A case-control study works in reverse, comparing a group with a specific outcome to a similar group without it. These are useful for identifying correlations, but they cannot prove causation.
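The core arithmetic of a case-control study is the odds ratio computed from a 2×2 table. A minimal sketch with hypothetical counts:

```python
# Hypothetical case-control counts (exposure to some risk factor):
cases = {"exposed": 40, "unexposed": 60}      # people with the outcome
controls = {"exposed": 20, "unexposed": 80}   # similar people without it

odds_cases = cases["exposed"] / cases["unexposed"]           # 40/60
odds_controls = controls["exposed"] / controls["unexposed"]  # 20/80

# Odds ratio > 1 suggests exposure is associated with the outcome
odds_ratio = odds_cases / odds_controls
```

Here the odds ratio of roughly 2.7 indicates an association between exposure and outcome — but, as the text notes, an association only; confounding factors could explain it without any causal link.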
2.6. Cross-Sectional Surveys
2.6.1. These are snapshots in time. They collect data from a group of people at a single point to describe the prevalence of a characteristic or the relationship between variables. They’re good for understanding a situation but are limited in their ability to establish cause and effect.
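The headline statistic from a cross-sectional survey is prevalence, usually reported with a confidence interval. A sketch using invented counts and the simple Wald approximation:

```python
import math

n_respondents = 1000  # hypothetical survey size
n_with_trait = 120    # respondents reporting the characteristic

prevalence = n_with_trait / n_respondents

# Approximate 95% confidence interval (Wald interval)
se = math.sqrt(prevalence * (1 - prevalence) / n_respondents)
lower, upper = prevalence - 1.96 * se, prevalence + 1.96 * se
```

This describes how common the characteristic is at one moment; it says nothing about what caused it or how it changes over time.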
2.7. Qualitative Studies
2.7.1. Methods like in-depth interviews, focus groups, and case studies fall into this category. They provide rich, detailed insights into people’s experiences, beliefs, and behaviours. While they don’t produce generalisable data, they are invaluable for understanding the “why” behind a problem and for identifying unanticipated consequences.
2.8. Expert Opinion and Anecdotal Evidence
2.8.1. These sit at the bottom of the pyramid. While expert opinion can be valuable for framing a problem, it’s based on individual knowledge rather than systematic data. Anecdotal evidence, drawn from personal stories or experiences, is the least reliable as it’s subject to a high degree of bias and isn’t representative of the broader population.
2.9. Navigating Data: Quantitative, Qualitative, and Mixed Methods
2.9.1. Evidence for policy comes in many forms, often categorised as either quantitative or qualitative.
2.9.2. Quantitative Data
This is numerical data that can be counted, measured, and analysed statistically. It includes things like economic indicators (GDP, inflation), social statistics (unemployment rates, population demographics), and survey results. Quantitative data is excellent for measuring the scale of a problem and for evaluating the magnitude of a policy’s impact. It provides objective, empirical facts.
2.9.3. Qualitative Data
This is non-numerical information that provides insight into human behaviour, emotions, and motivations. It’s collected through interviews, focus groups, and observation. While it can’t be used to prove a policy’s effectiveness on a large scale, it’s crucial for understanding the lived experience of a policy’s target audience. For example, a quantitative study might show that a new welfare policy has reduced unemployment, while a qualitative study might reveal that the policy is causing significant stress for beneficiaries.
2.9.4. Mixed Methods
The most robust policy analysis often uses a mixed-methods approach, combining both quantitative and qualitative data. This allows for a more holistic understanding of a policy problem. For instance, a policy analyst might use a quantitative survey to identify the scale of a problem and then follow up with qualitative interviews to understand the reasons behind the numbers.
2.10. The Critical Appraisal of Evidence
2.10.1. Simply having evidence isn’t enough; an analyst must also be able to appraise it critically. This involves asking a series of questions to determine its quality, relevance, and potential for bias.
• Is the evidence credible?
	• Where does it come from? Was it published in a peer-reviewed journal or by a reputable research institution?
	• Does the institution have a known ideological leaning, or has it been funded by a party with an interest in the outcome?
	• Are there other signs the source has a vested interest in the findings?
• Is the evidence relevant?
	• Does it apply to your specific policy context? (Note: A study from a different country or time period might not be relevant to the problem.)
• Are there any biases? (Note: All research has some level of bias.)
	• Could the researchers’ funding source or personal beliefs have influenced the findings?
	• Is the sample size of the study too small to be representative?
• Is the evidence recent?
	• Is it based on up-to-date data and research? (Note: Old data may no longer reflect the current reality.)
2.11. By developing a critical eye and understanding the strengths and limitations of different types of evidence, it is possible to build a policy that isn’t just well-intentioned but also well-informed.