• Open access
  • Published: 27 November 2020

Designing process evaluations using case study to explore the context of complex interventions evaluated in trials

  • Aileen Grant 1 ,
  • Carol Bugge 2 &
  • Mary Wells 3  

Trials volume 21, Article number: 982 (2020)

Process evaluations are an important component of an effectiveness evaluation as they focus on understanding the relationship between interventions and context to explain how and why interventions work or fail, and whether they can be transferred to other settings and populations. However, historically, context has not been sufficiently explored and reported, resulting in poor uptake of trial results. Therefore, suitable methodologies are needed to guide the investigation of context. Case study is one appropriate methodology, but there is little guidance about what case study design can offer the study of context in trials. We address this gap in the literature by presenting a number of important considerations for process evaluation using a case study design.

In this paper, we define context, describe the relationship between complex interventions and context, and outline case study design methodology. A well-designed process evaluation using case study should consider the following core components: the purpose; the definition of the intervention; the trial design; the case; the theories or logic models underpinning the intervention; the sampling approach; and the conceptual or theoretical framework. We describe each of these in detail and illustrate them with examples from recently published process evaluations.

Conclusions

There are a number of approaches to process evaluation design in the literature; however, there is a paucity of research on what case study design can offer process evaluations. We argue that case study is one of the best research designs to underpin process evaluations, to capture the dynamic and complex relationship between intervention and context during implementation. We provide a comprehensive overview of the issues for process evaluation design to consider when using a case study design.

Trial registration

DQIP: ClinicalTrials.gov NCT01425502. OPAL: ISRCTN57746448.

Contribution to the literature

We illustrate how case study methodology can explore the complex, dynamic and uncertain relationship between context and interventions within trials.

We depict different case study designs and illustrate that there is no single formula: the design needs to be tailored to the context and the trial design.

Case study can support comparisons between intervention and control arms and between cases within arms to uncover and explain differences in detail.

We argue that case study can illustrate how components have evolved and been redefined through implementation.

Key issues for consideration in case study design within process evaluations are presented and illustrated with examples.

Background

Process evaluations are an important component of an effectiveness evaluation as they focus on understanding the relationship between interventions and context to explain how and why interventions work or fail and whether they can be transferred to other settings and populations. However, historically, not all trials have had a process evaluation component, nor have they sufficiently reported aspects of context, resulting in poor uptake of trial findings [ 1 ]. Considerations of context are often absent from published process evaluations, with few studies acknowledging, taking account of or describing context during implementation, or assessing the impact of context on implementation [ 2 , 3 ]. At present, evidence from trials is not being used in a timely manner [ 4 , 5 ], and this can negatively impact on patient benefit and experience [ 6 ]. It takes on average 17 years for knowledge from research to be implemented into practice [ 7 ]. Suitable methodologies are therefore needed that allow for context to be exposed; one appropriate methodological approach is case study [ 8 , 9 ].

In 2015, the Medical Research Council (MRC) published guidance for process evaluations [ 10 ]. This was a key milestone, legitimising process evaluations and providing tools, methods and a framework for conducting them. Nevertheless, as with all guidance, there is a need for reflection, challenge and refinement. There have been a number of critiques of the MRC guidance, including that interventions should be considered as events in systems [ 11 , 12 , 13 , 14 ]; that there is a need for better use, critique and development of theories [ 15 , 16 , 17 ]; and that more guidance is needed on integrating qualitative and quantitative data [ 18 , 19 ]. Although the MRC process evaluation guidance does consider appropriate qualitative and quantitative methods, it does not mention case study design and what it can offer the study of context in trials.

The case study methodology is ideally suited to real-world, sustainable intervention development and evaluation because it can explore and examine contemporary complex phenomena, in depth, in numerous contexts and using multiple sources of data [ 8 ]. Case study design can capture the complexity of the case, the relationship between the intervention and the context and how the intervention worked (or not) [ 8 ]. There are a number of textbooks on case study methodology within the social sciences [ 8 , 9 , 20 ], but there are no equivalent textbooks, and a paucity of useful texts, on how to design, conduct and report case study research within the health arena. Few examples exist within the trial design and evaluation literature [ 3 , 21 ]. Therefore, guidance to enable well-designed process evaluations using case study methodology is required.

We aim to address this gap in the literature by presenting a number of important considerations for process evaluation using a case study design. First, we define context and describe the relationship between complex health interventions and context.

What is context?

While there is growing recognition that context interacts with the intervention to impact on the intervention’s effectiveness [ 22 ], context is still poorly defined and conceptualised. There are a number of different definitions in the literature, but as Bate et al. explained ‘almost universally, we find context to be an overworked word in everyday dialogue but a massively understudied and misunderstood concept’ [ 23 ]. Ovretveit defines context as ‘everything the intervention is not’ [ 24 ]. This last definition is used by the MRC framework for process evaluations [ 25 ]; however, the problem with this definition is that it is highly dependent on how the intervention is defined. We have found Pfadenhauer et al.’s definition useful:

Context is conceptualised as a set of characteristics and circumstances that consist of active and unique factors that surround the implementation. As such it is not a backdrop for implementation but interacts, influences, modifies and facilitates or constrains the intervention and its implementation. Context is usually considered in relation to an intervention or object, with which it actively interacts. A boundary between the concepts of context and setting is discernible: setting refers to the physical, specific location in which the intervention is put into practice. Context is much more versatile, embracing not only the setting but also roles, interactions and relationships [ 22 ].

Traditionally, context has been conceptualised in terms of barriers and facilitators, but what is a barrier in one context may be a facilitator in another, so it is the relationship and dynamics between the intervention and context which are the most important [ 26 ]. There is a need for empirical research to really understand how different contextual factors relate to each other and to the intervention. At present, research studies often list common contextual factors, such as government or health board policies, organisational structures, and professional and patient attitudes, behaviours and beliefs, but without a depth of meaning and understanding [ 27 ]. The case study methodology is well placed to understand the relationship between context and intervention where these boundaries may not be clearly evident. It offers a means of unpicking the contextual conditions which are pertinent to effective implementation.

The relationship between complex health interventions and context

Health interventions are generally made up of a number of different components and are considered complex due to the influence of context on their implementation and outcomes [ 3 , 28 ]. Complex interventions are often reliant on the engagement of practitioners and patients, so their attitudes, behaviours, beliefs and cultures influence whether and how an intervention is effective or not. Interventions are context-sensitive; they interact with the environment in which they are implemented. In fact, many argue that interventions are a product of their context, and indeed, outcomes are likely to be a product of the intervention and its context [ 3 , 29 ]. Within a trial, there is also the influence of the research context, so the observed outcome could be due to the intervention alone, elements of the context within which the intervention is being delivered, elements of the research process or a combination of all three. Therefore, it can be difficult and unhelpful to separate the intervention from the context within which it was evaluated, because the intervention and context are likely to have evolved together over time. As a result, the same intervention can look and behave differently in different contexts, so it is important this is known, understood and reported [ 3 ]. Finally, the intervention context is dynamic; the people, organisations and systems change over time [ 3 ], which requires practitioners and patients to respond, and they may do this by adapting the intervention or contextual factors. So, to enable researchers to replicate successful interventions, or to explain why an intervention was not successful, it is not enough to describe the components of the intervention; they need to be described in relation to their context and resources [ 3 , 28 ].

What is a case study?

Case study methodology aims to provide an in-depth, holistic, balanced, detailed and complete picture of complex contemporary phenomena in their natural context [ 8 , 9 , 20 ]. In this case, the phenomenon of interest is the implementation of complex interventions in a trial. Case study methodology takes the view that phenomena can be more than the sum of their parts and have to be understood as a whole [ 30 ]. It is differentiated from a clinical case study by its analytical focus [ 20 ].

The methodology is particularly useful when linked to trials because some of the features of the design naturally fill the gaps in knowledge generated by trials. Given the methodological focus on understanding phenomena in the round, case study methodology is typified by the use of multiple sources of data, which are more commonly qualitatively guided [ 31 ]. Unlike realist evaluation, case study methodology is not epistemologically specific and can be used with different epistemologies [ 32 ], and with different theories, such as Normalisation Process Theory (which explores how staff work together to implement a new intervention) or the Consolidated Framework for Implementation Research (which provides a menu of constructs associated with effective implementation) [ 33 , 34 , 35 ]. Realist evaluation can be used to explore the relationship between context, mechanism and outcome, but case study differs from realist evaluation in its focus on a holistic and in-depth understanding of the relationship between an intervention and the contemporary context in which it was implemented [ 36 ]. Case study enables researchers to choose epistemologies and theories which suit the nature of the enquiry and their theoretical preferences.

Designing a process evaluation using case study

An important part of any study is the research design. Due to their varied philosophical positions, the seminal authors in the field of case study have different epistemic views as to how a case study should be conducted [ 8 , 9 ]. Stake takes an interpretative approach (interested in how people make sense of their world), and Yin has more positivistic leanings, arguing for objectivity, validity and generalisability [ 8 , 9 ].

Regardless of the philosophical background, a well-designed process evaluation using case study should consider the following core components: the purpose; the definition of the intervention; the trial design; the case; the theories or logic models underpinning the intervention; the sampling approach; and the conceptual or theoretical framework [ 8 , 9 , 20 , 31 , 33 ]. We now discuss these critical components in turn, with reference to two process evaluations that used case study design, the DQIP and OPAL studies [ 21 , 37 , 38 , 39 , 40 , 41 ].

The purpose of a process evaluation is to evaluate and explain the relationship between the intervention and its components, the context and the outcome. It can help inform judgements about validity, by exploring the intervention components and their relationship with one another (construct validity), the connections between intervention and outcomes (internal validity) and the relationship between intervention and context (external validity). It can also distinguish between implementation failure (where the intervention is poorly delivered) and intervention failure (where the intervention design is flawed) [ 42 , 43 ]. By using a case study to explicitly understand the relationship between context and the intervention during implementation, the process evaluation can explain the intervention effects and the potential for generalisability and optimisation into routine practice [ 44 ].

The DQIP process evaluation aimed to qualitatively explore how patients and GP practices responded to an intervention designed to reduce high-risk prescribing of nonsteroidal anti-inflammatory drugs (NSAIDs) and/or antiplatelet agents (see Table  1 ) and quantitatively examine how change in high-risk prescribing was associated with practice characteristics and implementation processes. The OPAL process evaluation (see Table  2 ) aimed to quantitatively understand the factors which influenced the effectiveness of a pelvic floor muscle training intervention for women with urinary incontinence and qualitatively explore the participants’ experiences of treatment and adherence.

Defining the intervention and exploring the theories or assumptions underpinning the intervention design

Process evaluations should also explore the utility of the theories or assumptions underpinning intervention design [ 49 ]. Not all interventions are underpinned by a formal theory, but all are based on assumptions about how the intervention is expected to work. These can be depicted as a logic model or theory of change [ 25 ]. Capturing how the intervention and context evolve requires the intervention and its expected mechanisms to be clearly defined at the outset [ 50 ]. Hawe and colleagues recommend defining interventions by function (what processes make the intervention work) rather than form (what is delivered) [ 51 ]. However, in some cases, it may be useful to know whether some of the components are redundant in certain contexts or whether there is a synergistic effect between all the intervention components.

The DQIP trial delivered two interventions: one was delivered to professionals with high fidelity, and the professionals then delivered the other intervention to patients by form rather than function, allowing adaptations to the local context as appropriate. The assumptions underpinning intervention delivery were prespecified in a logic model published in the process evaluation protocol [ 52 ].

Case study is well placed to challenge or reinforce the theoretical assumptions, or to redefine these based on the relationship between the intervention and context. Yin advocates the use of theoretical propositions; these direct attention to specific aspects of the study for investigation [ 8 ], can be based on the underlying assumptions and can be tested during the course of the process evaluation. In case studies, adopting an epistemic position more aligned with Yin can enable research questions to be designed which seek to expose patterns of unanticipated as well as expected relationships [ 9 ]. The OPAL trial was more closely aligned with Yin: the research team predefined some of their theoretical assumptions, based on how the intervention was expected to work. The relevant parts of the data analysis then drew on data to support or refute the theoretical propositions. This was particularly useful for the trial as the prespecified theoretical propositions linked to the mechanisms of action on which the intervention was anticipated to have an effect (or not).

Tailoring to the trial design

Process evaluations need to be tailored to the trial, the intervention and the outcomes being measured [ 45 ]. For example, in a stepped wedge design (where the intervention is delivered in a phased manner), researchers should try to ensure process data are captured at relevant time points; in a two-arm or multiple-arm trial, they should ensure data are collected from the control group(s) as well as the intervention group(s). In the DQIP trial, a stepped wedge trial, at least one process evaluation case was sampled per cohort. Trials often continue to measure outcomes after delivery of the intervention has ceased, so researchers should also consider capturing ‘follow-up’ data on contextual factors, which may continue to influence the outcome measure. The OPAL trial had two active treatment arms, so process data were collected from both arms. In addition, as the trial was interested in long-term adherence, the trial and the process evaluation collected data from participants for 2 years after the intervention was initially delivered, providing 24 months of follow-up data, in line with the primary outcome for the trial.

Defining the case

Case studies can include single or multiple cases in their design. Single case studies usually sample typical or unique cases; their advantage is the depth and richness that can be achieved over a long period of time. The advantage of a multiple case study design is that cases can be compared to generate a greater depth of analysis. Multiple case study sampling may be carried out in order to test for replication or contradiction [ 8 ]. Given that trials are often conducted over a number of sites, a multiple case study design is more sensible for process evaluations, as there is likely to be variation in implementation between sites. Case definition may occur at a variety of levels but is most appropriate if it reflects the trial design. For example, a case in an individual patient level trial is likely to be defined as a person/patient (e.g. a woman with urinary incontinence in the OPAL trial), whereas in a cluster trial, a case is likely to be a cluster, such as an organisation (e.g. a general practice in the DQIP trial). Of course, the process evaluation could explore cases with less distinct boundaries, such as communities or relationships; however, the clarity with which these cases are defined is important, in order to scope the nature of the data that will be generated.

Carefully sampled cases are critical to a good case study as sampling helps inform the quality of the inferences that can be made from the data [ 53 ]. In both qualitative and quantitative research, how and how many participants to sample must be decided when planning the study. Quantitative sampling techniques generally aim to achieve a random sample. Qualitative research generally uses purposive samples to achieve data saturation, which occurs when the incoming data produce little or no new information to address the research questions. The term data saturation has evolved from theoretical saturation in conventional grounded theory studies; however, its relevance to other types of studies is contentious, as the term saturation seems to be widely used but poorly justified [ 54 ]. Empirical evidence suggests that for in-depth interview studies, thematic saturation can occur at around 12 interviews, but typically more interviews are needed for a heterogeneous sample or for higher degrees of saturation [ 55 , 56 ]. Both the DQIP and OPAL case studies were large: OPAL was designed to interview each of the 40 individual cases four times, and DQIP to interview the lead DQIP general practitioner (GP) twice (to capture change over time), plus another GP and the practice manager, from each of the 10 organisational cases. Despite the plethora of mixed methods research textbooks, there is very little guidance about sampling, as discussions typically link to method (e.g. interviews) rather than paradigm (e.g. case study).

Purposive sampling can improve the generalisability of the process evaluation by sampling for greater contextual diversity. The typical or average case is often not the richest source of information. Outliers can often reveal more important insights, because they may reflect the implementation of the intervention using different processes. Cases can be selected from a number of criteria, which are not mutually exclusive, to enable a rich and detailed picture to be built across sites [ 53 ]. To avoid the Hawthorne effect, it is recommended that process evaluations sample from both intervention and control sites, which enables comparison and explanation. There is always a trade-off between breadth and depth in sampling, so it is important to note that often quantity does not mean quality and that carefully sampled cases can provide powerful illustrative examples of how the intervention worked in practice, the relationship between the intervention and context and how and why they evolved together. The qualitative components of both DQIP and OPAL process evaluations aimed for maximum variation sampling. Please see Table  1 for further information on how DQIP’s sampling frame was important for providing contextual information on processes influencing effective implementation of the intervention.

Conceptual and theoretical framework

A conceptual or theoretical framework helps to frame data collection and analysis [ 57 ]. Theories can also underpin propositions, which can be tested in the process evaluation. Process evaluations produce intervention-dependent knowledge, and theories help make the research findings more generalisable by providing a common language [ 16 ]. There are a number of mid-range theories which have been designed to be used with process evaluation [ 34 , 35 , 58 ]. The choice of the appropriate conceptual or theoretical framework is, however, dependent on the philosophical and professional background of the researchers. The two examples within this paper used our own framework for the design of process evaluations, which proposes a number of candidate processes which can be explored, for example, recruitment, delivery, response, maintenance and context [ 45 ]. This framework was published before the MRC guidance on process evaluations, and both the DQIP and OPAL process evaluations were designed before the MRC guidance was published. The DQIP process evaluation explored all candidates in the framework, whereas the OPAL process evaluation selected four candidates, illustrating that process evaluations can be selective in what they explore based on the purpose, research questions and resources. Furthermore, as Kislov and colleagues argue, we also have a responsibility to critique the theoretical framework underpinning the evaluation and refine theories to advance knowledge [ 59 ].

Data collection

An important consideration is what data to collect or measure and when. Case study methodology supports a range of data collection methods, both qualitative and quantitative, to best answer the research questions. As the aim of the case study is to gain an in-depth understanding of phenomena in context, methods are more commonly qualitative or mixed method in nature. Qualitative methods such as interviews, focus groups and observation offer rich descriptions of the setting, of the delivery of the intervention in each site and arm, and of how the intervention was perceived by the professionals delivering it and the patients receiving it. Quantitative methods can measure recruitment, fidelity and dose and establish which characteristics are associated with adoption, delivery and effectiveness. To ensure an understanding of the complexity of the relationship between the intervention and context, the case study should rely on multiple sources of data and triangulate these to confirm and corroborate the findings [ 8 ]. Process evaluations might consider using routine data collected in the trial across all sites and additional qualitative data across carefully sampled sites for a more nuanced picture within reasonable resource constraints. Mixed methods allow researchers to ask more complex questions and collect richer data than can be collected by one method alone [ 60 ]. The use of multiple sources of data allows data triangulation, which increases a study’s internal validity and also provides a more in-depth and holistic depiction of the case [ 20 ]. For example, in the DQIP process evaluation, the quantitative component used routinely collected data from all sites participating in the trial and purposively sampled cases for a more in-depth qualitative exploration [ 21 , 38 , 39 ].

The timing of data collection is crucial to study design, especially within a process evaluation where data collection can potentially influence the trial outcome. Process evaluations are generally conducted in parallel with, or retrospectively to, the trial. The advantage of a retrospective design is that the evaluation itself is less likely to influence the trial outcome. However, the disadvantages include recall bias, a lack of sensitivity to nuances and an inability to iteratively explore the relationship between intervention and outcome as it develops. To capture the dynamic relationship between intervention and context, the process evaluation needs to run in parallel with the trial and be longitudinal. Longitudinal methodological design is rare, but it is needed to capture the dynamic nature of implementation [ 40 ]. How the intervention is delivered is likely to change over time as it interacts with context. For example, as professionals deliver the intervention, they become more familiar with it, and it becomes more embedded into systems. The OPAL process evaluation was a longitudinal, mixed methods process evaluation in which the quantitative component had been predefined and built into trial data collection systems. Data collection in both the qualitative and quantitative components mirrored the trial data collection points, which were longitudinal to capture adherence and contextual changes over time.

There has been much attention in the recent literature to a systems approach to understanding interventions in context, which suggests interventions are ‘events within systems’ [ 61 , 62 ]. This framing highlights the dynamic nature of context, suggesting that interventions are an attempt to change the dynamics of systems. This conceptualisation would suggest that the study design should collect contextual data before and after implementation to assess the effect of the intervention on the context and vice versa.

Data analysis

Designing a rigorous analysis plan is particularly important for multiple case studies, where researchers must decide whether their approach to analysis is case or variable based. Case-based analysis is the most common, and analytic strategies must be clearly articulated for both within-case and across-case analysis. A multiple case study design can consist of multiple cases, where each case is analysed at the case level, or of multiple embedded cases, where data from all the cases are pulled together for analysis at some level. For example, OPAL analysis was at the case level, but all the cases for the intervention and control arms were pulled together at the arm level for more in-depth analysis and comparison. For Yin, analytical strategies rely on theoretical propositions, whereas for Stake, analysis works from the data to develop theory. In OPAL and DQIP, case summaries were written to summarise the cases and detail the within-case analysis. Each of the studies structured these differently based on the phenomena of interest and the analytic technique. DQIP applied an approach more akin to Stake [ 9 ], with the cases summarised around inductive themes, whereas OPAL applied a Yin [ 8 ] type approach using theoretical propositions around which the case summaries were structured. As the data for each case had been collected through longitudinal interviews, the case summaries were able to capture changes over time. It is beyond the scope of this paper to discuss different analytic techniques; however, to ensure the holistic examination of the intervention(s) in context, it is important to clearly articulate and demonstrate how data are integrated and synthesised [ 31 ].

Conclusion

There are a number of approaches to process evaluation design in the literature; however, there is a paucity of research on what case study design can offer process evaluations. We argue that case study is one of the best research designs to underpin process evaluations, to capture the dynamic and complex relationship between intervention and context during implementation [ 38 ]. Case study can enable comparisons within and across intervention and control arms and enable the evolving relationship between intervention and context to be captured holistically rather than considering processes in isolation. Utilising a longitudinal design can enable the dynamic relationship between context and intervention to be captured in real time. This information is fundamental to holistically explaining what intervention was implemented, understanding how and why the intervention worked or not and informing the transferability of the intervention into routine clinical practice.

Case study designs are not prescriptive, but process evaluations using case study should consider the purpose, trial design, the theories or assumptions underpinning the intervention, and the conceptual and theoretical frameworks informing the evaluation. We have discussed each of these considerations in turn, providing a comprehensive overview of issues for process evaluations using a case study design. There is no single or best way to conduct a process evaluation or a case study, but researchers need to make informed choices about the process evaluation design. Although this paper focuses on process evaluations, we recognise that case study design could also be useful during intervention development and feasibility trials. Elements of this paper are also applicable to other study designs involving trials.

Availability of data and materials

No data and materials were used.

Abbreviations

DQIP: Data-driven Quality Improvement in Primary Care

MRC: Medical Research Council

NSAIDs: Nonsteroidal anti-inflammatory drugs

OPAL: Optimizing Pelvic Floor Muscle Exercises to Achieve Long-term benefits

Blencowe NB. Systematic review of intervention design and delivery in pragmatic and explanatory surgical randomized clinical trials. Br J Surg. 2015;102:1037–47.

Dixon-Woods M. The problem of context in quality improvement. In: Foundation TH, editor. Perspectives on context: The Health Foundation; 2014.

Wells M, Williams B, Treweek S, Coyle J, Taylor J. Intervention description is not enough: evidence from an in-depth multiple case study on the untold role and impact of context in randomised controlled trials of seven complex interventions. Trials. 2012;13(1):95.

Grant A, Sullivan F, Dowell J. An ethnographic exploration of influences on prescribing in general practice: why is there variation in prescribing practices? Implement Sci. 2013;8(1):72.

Lang ES, Wyer PC, Haynes RB. Knowledge translation: closing the evidence-to-practice gap. Ann Emerg Med. 2007;49(3):355–63.

Ward V, House AF, Hamer S. Developing a framework for transferring knowledge into action: a thematic analysis of the literature. J Health Serv Res Policy. 2009;14(3):156–64.

Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med. 2011;104(12):510–20.

Yin R. Case study research and applications: design and methods. Los Angeles: Sage Publications Inc; 2018.

Stake R. The art of case study research. Thousand Oaks, California: Sage Publications Ltd; 1995.

Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, Moore L, O’Cathain A, Tinati T, Wight D, et al. Process evaluation of complex interventions: Medical Research Council guidance. Br Med J. 2015;350.

Hawe P. Minimal, negligible and negligent interventions. Soc Sci Med. 2015;138:265–8.

Moore GF, Evans RE, Hawkins J, Littlecott H, Melendez-Torres GJ, Bonell C, Murphy S. From complex social interventions to interventions in complex social systems: future directions and unresolved questions for intervention development and evaluation. Evaluation. 2018;25(1):23–45.

Greenhalgh T, Papoutsi C. Studying complexity in health services research: desperately seeking an overdue paradigm shift. BMC Med. 2018;16(1):95.

Rutter H, Savona N, Glonti K, Bibby J, Cummins S, Finegood DT, Greaves F, Harper L, Hawe P, Moore L, et al. The need for a complex systems model of evidence for public health. Lancet. 2017;390(10112):2602–4.

Moore G, Cambon L, Michie S, Arwidson P, Ninot G, Ferron C, Potvin L, Kellou N, Charlesworth J, Alla F, et al. Population health intervention research: the place of theories. Trials. 2019;20(1):285.

Kislov R. Engaging with theory: from theoretically informed to theoretically informative improvement research. BMJ Qual Saf. 2019;28(3):177–9.

Boulton R, Sandall J, Sevdalis N. The cultural politics of ‘Implementation Science’. J Med Humanit. 2020;41(3):379–94. https://doi.org/10.1007/s10912-020-09607-9 .

Cheng KKF, Metcalfe A. Qualitative methods and process evaluation in clinical trials context: where to head to? Int J Qual Methods. 2018;17(1):1609406918774212.

Richards DA, Bazeley P, Borglin G, Craig P, Emsley R, Frost J, Hill J, Horwood J, Hutchings HA, Jinks C, et al. Integrating quantitative and qualitative data and findings when undertaking randomised controlled trials. BMJ Open. 2019;9(11):e032081.

Thomas G. How to do your case study. 2nd ed. London: Sage Publications Ltd; 2016.

Grant A, Dreischulte T, Guthrie B. Process evaluation of the Data-driven Quality Improvement in Primary Care (DQIP) trial: case study evaluation of adoption and maintenance of a complex intervention to reduce high-risk primary care prescribing. BMJ Open. 2017;7(3).

Pfadenhauer L, Rohwer A, Burns J, Booth A, Lysdahl KB, Hofmann B, Gerhardus A, Mozygemba K, Tummers M, Wahlster P, et al. Guidance for the assessment of context and implementation in health technology assessments (HTA) and systematic reviews of complex interventions: the Context and Implementation of Complex Interventions (CICI) framework: Integrate-HTA; 2016.

Bate P, Robert G, Fulop N, Ovretveit J, Dixon-Woods M. Perspectives on context. London: The Health Foundation; 2014.

Ovretveit J. Understanding the conditions for improvement: research to discover which context influences affect improvement success. BMJ Qual Saf. 2011;20.

Medical Research Council: Process evaluation of complex interventions: UK Medical Research Council (MRC) guidance. 2015.

May CR, Johnson M, Finch T. Implementation, context and complexity. Implement Sci. 2016;11(1):141.

Bate P. Context is everything. In: Perspectives on context. London: The Health Foundation; 2014.

Horton TJ, Illingworth JH, Warburton WHP. Overcoming challenges in codifying and replicating complex health care interventions. Health Aff. 2018;37(2):191–7.

O'Connor AM, Tugwell P, Wells GA, Elmslie T, Jolly E, Hollingworth G, McPherson R, Bunn H, Graham I, Drake E. A decision aid for women considering hormone therapy after menopause: decision support framework and evaluation. Patient Educ Couns. 1998;33:267–79.

Creswell J, Poth C. Qualitative inquiry and research design. 4th ed. Thousand Oaks, California: Sage Publications; 2018.

Carolan CM, Forbat L, Smith A. Developing the DESCARTE model: the design of case study research in health care. Qual Health Res. 2016;26(5):626–39.

Takahashi ARW, Araujo L. Case study research: opening up research opportunities. RAUSP Manage J. 2020;55(1):100–11.

Tight M. Understanding case study research, small-scale research with meaning. London: Sage Publications; 2017.

May C, Finch T. Implementing, embedding, and integrating practices: an outline of normalisation process theory. Sociology. 2009;43:535.

Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice. A consolidated framework for advancing implementation science. Implement Sci. 2009;4.

Pawson R, Tilley N. Realist evaluation. London: Sage; 1997.

Dreischulte T, Donnan P, Grant A, Hapca A, McCowan C, Guthrie B. Safer prescribing - a trial of education, informatics & financial incentives. N Engl J Med. 2016;374:1053–64.

Grant A, Dreischulte T, Guthrie B. Process evaluation of the Data-driven Quality Improvement in Primary Care (DQIP) trial: active and less active ingredients of a multi-component complex intervention to reduce high-risk primary care prescribing. Implement Sci. 2017;12(1):4.

Dreischulte T, Grant A, Hapca A, Guthrie B. Process evaluation of the Data-driven Quality Improvement in Primary Care (DQIP) trial: quantitative examination of variation between practices in recruitment, implementation and effectiveness. BMJ Open. 2018;8(1):e017133.

Grant A, Dean S, Hay-Smith J, Hagen S, McClurg D, Taylor A, Kovandzic M, Bugge C. Effectiveness and cost-effectiveness randomised controlled trial of basic versus biofeedback-mediated intensive pelvic floor muscle training for female stress or mixed urinary incontinence: protocol for the OPAL (Optimising Pelvic Floor Exercises to Achieve Long-term benefits) trial mixed methods longitudinal qualitative case study and process evaluation. BMJ Open. 2019;9(2):e024152.

Hagen S, McClurg D, Bugge C, Hay-Smith J, Dean SG, Elders A, Glazener C, Abdel-fattah M, Agur WI, Booth J, et al. Effectiveness and cost-effectiveness of basic versus biofeedback-mediated intensive pelvic floor muscle training for female stress or mixed urinary incontinence: protocol for the OPAL randomised trial. BMJ Open. 2019;9(2):e024153.

Steckler A, Linnan L. Process evaluation for public health interventions and research; 2002.

Durlak JA. Why programme implementation is so important. J Prev Intervent Commun. 1998;17(2):5–18.

Bonell C, Oakley A, Hargreaves J, Strange V, Rees R. Assessment of generalisability in trials of health interventions: suggested framework and systematic review. Br Med J. 2006;333(7563):346–9.

Grant A, Treweek S, Dreischulte T, Foy R, Guthrie B. Process evaluations for cluster-randomised trials of complex interventions: a proposed framework for design and reporting. Trials. 2013;14(1):15.

Yin R. Case study research: design and methods. London: Sage Publications; 2003.

Bugge C, Hay-Smith J, Grant A, Taylor A, Hagen S, McClurg D, Dean S. A 24 month longitudinal qualitative study of women’s experience of electromyography biofeedback pelvic floor muscle training (PFMT) and PFMT alone for urinary incontinence: adherence, outcome and context. ICS Gothenburg 2019. https://www.ics.org/2019/abstract/473 . Accessed 10 Sept 2020.

Hagen S, Elders A, Stratton S, Sergenson N, Bugge C, Dean S, Hay-Smith J, Kilonzo M, Dimitrova M, Abdel-Fattah M, Agur W, Booth J, Glazener C, Guerrero K, McDonald A, Norrie J, Williams LR, McClurg D. Effectiveness of pelvic floor muscle training with and without electromyographic biofeedback for urinary incontinence in women: multicentre randomised controlled trial. BMJ. 2020;371:m3719. https://doi.org/10.1136/bmj.m3719 .

Cook TD. Emergent principles for the design, implementation, and analysis of cluster-based experiments in social science. Ann Am Acad Pol Soc Sci. 2005;599(1):176–98.

Hoffmann T, Glasziou P, Boutron I, Milne R, Perera R, Moher D. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. Br Med J. 2014;348.

Hawe P, Shiell A, Riley T. Complex interventions: how “out of control” can a randomised controlled trial be? Br Med J. 2004;328(7455):1561–3.

Grant A, Dreischulte T, Treweek S, Guthrie B. Study protocol of a mixed-methods evaluation of a cluster randomised trial to improve the safety of NSAID and antiplatelet prescribing: Data-driven Quality Improvement in Primary Care. Trials. 2012;13:154.

Flyvbjerg B. Five misunderstandings about case-study research. Qual Inq. 2006;12(2):219–45.

Thorne S. The great saturation debate: what the “S word” means and doesn’t mean in qualitative research reporting. Can J Nurs Res. 2020;52(1):3–5.

Guest G, Bunce A, Johnson L. How many interviews are enough?: an experiment with data saturation and variability. Field Methods. 2006;18(1):59–82.

Guest G, Namey E, Chen M. A simple method to assess and report thematic saturation in qualitative research. PLoS One. 2020;15(5):e0232076.

Davidoff F, Dixon-Woods M, Leviton L, Michie S. Demystifying theory and its use in improvement. BMJ Qual Saf. 2015;24(3):228–38.

Rycroft-Malone J. The PARIHS framework: a framework for guiding the implementation of evidence-based practice. J Nurs Care Qual. 2004;19(4):297–304.

Kislov R, Pope C, Martin GP, Wilson PM. Harnessing the power of theorising in implementation science. Implement Sci. 2019;14(1):103.

Creswell JW, Plano Clark VL. Designing and conducting mixed methods research. Thousand Oaks: Sage Publications Ltd; 2007.

Hawe P, Shiell A, Riley T. Theorising interventions as events in systems. Am J Community Psychol. 2009;43:267–76.

Craig P, Ruggiero E, Frohlich KL, Mykhalovskiy E, White M. Taking account of context in population health intervention research: guidance for producers, users and funders of research: National Institute for Health Research; 2018. https://www.ncbi.nlm.nih.gov/books/NBK498645/pdf/Bookshelf_NBK498645.pdf .

Acknowledgements

We would like to thank Professor Shaun Treweek for the discussions about context in trials.

Funding

No funding was received for this work.

Author information

Authors and affiliations

School of Nursing, Midwifery and Paramedic Practice, Robert Gordon University, Garthdee Road, Aberdeen, AB10 7QB, UK

Aileen Grant

Faculty of Health Sciences and Sport, University of Stirling, Pathfoot Building, Stirling, FK9 4LA, UK

Carol Bugge

Department of Surgery and Cancer, Imperial College London, Charing Cross Campus, London, W6 8RP, UK

Mary Wells

Contributions

AG, CB and MW conceptualised the study. AG wrote the paper. CB and MW commented on the drafts. All authors have approved the final manuscript.

Corresponding author

Correspondence to Aileen Grant .

Ethics declarations

Ethics approval and consent to participate

Ethics approval and consent to participate is not appropriate as no participants were included.

Consent for publication

Consent for publication is not required as no participants were included.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Grant, A., Bugge, C. & Wells, M. Designing process evaluations using case study to explore the context of complex interventions evaluated in trials. Trials 21 , 982 (2020). https://doi.org/10.1186/s13063-020-04880-4

Received : 09 April 2020

Accepted : 06 November 2020

Published : 27 November 2020

DOI : https://doi.org/10.1186/s13063-020-04880-4

Keywords

  • Process evaluation
  • Case study design


Case study research for better evaluations of complex interventions: rationale and challenges

Sara Paparini

1 Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK

Judith Green

2 Wellcome Centre for Cultures & Environments of Health, University of Exeter, Exeter, UK

Chrysanthi Papoutsi

Jamie Murdoch

3 School of Health Sciences, University of East Anglia, Norwich, UK

Mark Petticrew

4 Public Health, Environments and Society, London School of Hygiene & Tropical Medicine, London, UK

Trish Greenhalgh

Benjamin Hanckel

5 Institute for Culture and Society, Western Sydney University, Penrith, Australia

Associated Data

Not applicable (article based on existing available academic publications)

The need for better methods for evaluation in health research has been widely recognised. The ‘complexity turn’ has drawn attention to the limitations of relying on causal inference from randomised controlled trials alone for understanding whether, and under which conditions, interventions in complex systems improve health services or the public health, and what mechanisms might link interventions and outcomes. We argue that case study research—currently denigrated as poor evidence—is an under-utilised resource for not only providing evidence about context and transferability, but also for helping strengthen causal inferences when pathways between intervention and effects are likely to be non-linear.

Case study research, as an overall approach, is based on in-depth explorations of complex phenomena in their natural, or real-life, settings. Empirical case studies typically enable dynamic understanding of complex challenges and provide evidence about causal mechanisms and the necessary and sufficient conditions (contexts) for intervention implementation and effects. This is essential evidence not just for researchers concerned about internal and external validity, but also research users in policy and practice who need to know what the likely effects of complex programmes or interventions will be in their settings. The health sciences have much to learn from scholarship on case study methodology in the social sciences. However, there are multiple challenges in fully exploiting the potential learning from case study research. First are misconceptions that case study research can only provide exploratory or descriptive evidence. Second, there is little consensus about what a case study is, and considerable diversity in how empirical case studies are conducted and reported. Finally, as case study researchers typically (and appropriately) focus on thick description (that captures contextual detail), it can be challenging to identify the key messages related to intervention evaluation from case study reports.

Whilst the diversity of published case studies in health services and public health research is rich and productive, we recommend further clarity and specific methodological guidance for those reporting case study research for evaluation audiences.

The need for methodological development to address the most urgent challenges in health research has been well-documented. Many of the most pressing questions for public health research, where the focus is on system-level determinants [ 1 , 2 ], and for health services research, where provisions typically vary across sites and are provided through interlocking networks of services [ 3 ], require methodological approaches that can attend to complexity. The need for methodological advance has arisen, in part, as a result of the diminishing returns from randomised controlled trials (RCTs) where they have been used to answer questions about the effects of interventions in complex systems [ 4 – 6 ]. In conditions of complexity, there is limited value in maintaining the current orientation to experimental trial designs in the health sciences as providing ‘gold standard’ evidence of effect.

There are increasing calls for methodological pluralism [ 7 , 8 ], with the recognition that complex intervention and context are not easily or usefully separated (as is often the situation when using trial design), and that system interruptions may have effects that are not reducible to linear causal pathways between intervention and outcome. These calls are reflected in a shifting and contested discourse of trial design, seen with the emergence of realist [ 9 ], adaptive and hybrid (types 1, 2 and 3) [ 10 , 11 ] trials that blend studies of effectiveness with a close consideration of the contexts of implementation. Similarly, process evaluation has now become a core component of complex healthcare intervention trials, reflected in MRC guidance on how to explore implementation, causal mechanisms and context [ 12 ].

Evidence about the context of an intervention is crucial for questions of external validity. As Woolcock [ 4 ] notes, even if RCT designs are accepted as robust for maximising internal validity, questions of transferability (how well the intervention works in different contexts) and generalisability (how well the intervention can be scaled up) remain unanswered [ 5 , 13 ]. For research evidence to have impact on policy and systems organisation, and thus to improve population and patient health, there is an urgent need for better methods for strengthening external validity, including a better understanding of the relationship between intervention and context [ 14 ].

Policymakers, healthcare commissioners and other research users require credible evidence of relevance to their settings and populations [ 15 ], to perform what Rosengarten and Savransky [ 16 ] call ‘careful abstraction’ to the locales that matter for them. They also require robust evidence for understanding complex causal pathways. Case study research, currently under-utilised in public health and health services evaluation, can offer considerable potential for strengthening faith in both external and internal validity. For example, in an empirical case study of how the policy of free bus travel had specific health effects in London, UK, a quasi-experimental evaluation (led by JG) identified how important aspects of context (a good public transport system) and intervention (that it was universal) were necessary conditions for the observed effects, thus providing useful, actionable evidence for decision-makers in other contexts [ 17 ].

The overall approach of case study research is based on the in-depth exploration of complex phenomena in their natural, or ‘real-life’, settings. Empirical case studies typically enable dynamic understanding of complex challenges rather than restricting the focus on narrow problem delineations and simple fixes. Case study research is a diverse and somewhat contested field, with multiple definitions and perspectives grounded in different ways of viewing the world, and involving different combinations of methods. In this paper, we raise awareness of such plurality and highlight the contribution that case study research can make to the evaluation of complex system-level interventions. We review some of the challenges in exploiting the current evidence base from empirical case studies and conclude by recommending that further guidance and minimum reporting criteria for evaluation using case studies, appropriate for audiences in the health sciences, can enhance the take-up of evidence from case study research.

Case study research offers evidence about context, causal inference in complex systems and implementation

Well-conducted and described empirical case studies provide evidence on context, complexity and mechanisms for understanding how, where and why interventions have their observed effects. Recognition of the importance of context for understanding the relationships between interventions and outcomes is hardly new. In 1943, Canguilhem berated an over-reliance on experimental designs for determining universal physiological laws: ‘As if one could determine a phenomenon’s essence apart from its conditions! As if conditions were a mask or frame which changed neither the face nor the picture!’ ([ 18 ] p126). More recently, a concern with context has been expressed in health systems and public health research as part of what has been called the ‘complexity turn’ [ 1 ]: a recognition that many of the most enduring challenges for developing an evidence base require a consideration of system-level effects [ 1 ] and the conceptualisation of interventions as interruptions in systems [ 19 ].

The case study approach is widely recognised as offering an invaluable resource for understanding the dynamic and evolving influence of context on complex, system-level interventions [ 20 – 23 ]. Empirically, case studies can directly inform assessments of where, when, how and for whom interventions might be successfully implemented, by helping to specify the necessary and sufficient conditions under which interventions might have effects and to consolidate learning on how interdependencies, emergence and unpredictability can be managed to achieve and sustain desired effects. Case study research has the potential to address four objectives for improving research and reporting of context recently set out by guidance on taking account of context in population health research [ 24 ], that is to (1) improve the appropriateness of intervention development for specific contexts, (2) improve understanding of ‘how’ interventions work, (3) better understand how and why impacts vary across contexts and (4) ensure reports of intervention studies are most useful for decision-makers and researchers.

However, evaluations of complex healthcare interventions have arguably not exploited the full potential of case study research and can learn much from other disciplines. For evaluative research, exploratory case studies have had a traditional role of providing data on ‘process’, or initial ‘hypothesis-generating’ scoping, but might also have an increasing salience for explanatory aims. Across the social and political sciences, different kinds of case studies are undertaken to meet diverse aims (description, exploration or explanation) and across different scales (from small N qualitative studies that aim to elucidate processes, or provide thick description, to more systematic techniques designed for medium-to-large N cases).

Case studies with explanatory aims vary in terms of their positioning within mixed-methods projects, with designs including (but not restricted to) (1) single N of 1 studies of interventions in specific contexts, where the overall design is a case study that may incorporate one or more (randomised or not) comparisons over time and between variables within the case; (2) a series of cases conducted or synthesised to provide explanation from variations between cases; and (3) case studies of particular settings within RCT or quasi-experimental designs to explore variation in effects or implementation.

Detailed qualitative research (typically done as ‘case studies’ within process evaluations) provides evidence for the plausibility of mechanisms [ 25 ], offering theoretical generalisations for how interventions may function under different conditions. Although RCT designs reduce many threats to internal validity, the mechanisms of effect remain opaque, particularly when the causal pathways between ‘intervention’ and ‘effect’ are long and potentially non-linear: case study research has a more fundamental role here, in providing detailed observational evidence for causal claims [ 26 ] as well as producing a rich, nuanced picture of tensions and multiple perspectives [ 8 ].

Longitudinal or cross-case analysis may be best suited for evidence generation in system-level evaluative research. Turner [ 27 ], for instance, reflecting on the complex processes in major system change, has argued for the need for methods that integrate learning across cases, to develop theoretical knowledge that would enable inferences beyond the single case, and to develop generalisable theory about organisational and structural change in health systems. Qualitative Comparative Analysis (QCA) [ 28 ] is one such formal method for deriving causal claims, using set theory mathematics to integrate data from empirical case studies to answer questions about the configurations of causal pathways linking conditions to outcomes [ 29 , 30 ].
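To illustrate the underlying logic, the following minimal sketch (in Python, with invented site names, conditions and figures) shows the first step of a crisp-set QCA: grouping cases by their configuration of binary conditions and computing each configuration's consistency with the outcome. Real analyses rely on dedicated QCA software and proceed to Boolean minimisation, which this sketch does not attempt.

```python
from collections import defaultdict

# Hypothetical crisp-set data: each case is coded 0/1 on three contextual
# conditions (e.g. strong leadership, dedicated funding, prior experience)
# and on the outcome (intervention routinely embedded in practice).
cases = {
    "site_A": ((1, 1, 0), 1),
    "site_B": ((1, 1, 1), 1),
    "site_C": ((0, 1, 0), 0),
    "site_D": ((1, 0, 1), 1),
    "site_E": ((0, 0, 1), 0),
    "site_F": ((1, 1, 0), 1),
}

def truth_table(cases):
    """Group cases by configuration and compute raw consistency:
    the share of cases with that configuration showing the outcome."""
    rows = defaultdict(lambda: [0, 0])  # configuration -> [n cases, n with outcome]
    for configuration, outcome in cases.values():
        rows[configuration][0] += 1
        rows[configuration][1] += outcome
    return {c: (n, with_y / n) for c, (n, with_y) in rows.items()}

for configuration, (n, consistency) in sorted(truth_table(cases).items()):
    # Configurations above a chosen consistency threshold (often around 0.8)
    # would then be carried into Boolean minimisation to derive causal 'recipes'.
    print(configuration, f"n={n}", f"consistency={consistency:.2f}")
```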

Nonetheless, the single N case study, too, provides opportunities for theoretical development [ 31 ], and theoretical generalisation or analytical refinement [ 32 ]. How ‘the case’ and ‘context’ are conceptualised is crucial here. Findings from the single case may seem to be confined to its intrinsic particularities in a specific and distinct context [ 33 ]. However, if such context is viewed as exemplifying wider social and political forces, the single case can be ‘telling’, rather than ‘typical’, and offer insight into a wider issue [ 34 ]. Internal comparisons within the case can offer rich possibilities for logical inferences about causation [ 17 ]. Further, case studies of any size can be used for theory testing through refutation [ 22 ]. The potential lies, then, in utilising the strengths and plurality of case study to support theory-driven research within different methodological paradigms.

Evaluation research in health has much to learn from a range of social sciences where case study methodology has been used to develop various kinds of causal inference. For instance, Gerring [ 35 ] expands on the within-case variations utilised to make causal claims. For Gerring [ 35 ], case studies come into their own with regard to invariant or strong causal claims (such as X is a necessary and/or sufficient condition for Y) rather than for probabilistic causal claims. For the latter (where experimental methods might have an advantage in estimating effect sizes), case studies offer evidence on mechanisms: from observations of X affecting Y, from process tracing or from pattern matching. Case studies also support the study of emergent causation, that is, the multiple interacting properties that account for particular and unexpected outcomes in complex systems, such as in healthcare [ 8 ].

Finally, efficacy (or beliefs about efficacy) is not the only contributor to intervention uptake, with a range of organisational and policy contingencies affecting whether an intervention is likely to be rolled out in practice. Case study research is, therefore, invaluable for learning about contextual contingencies and identifying the conditions necessary for interventions to become normalised (i.e. implemented routinely) in practice [ 36 ].

The challenges in exploiting evidence from case study research

At present, there are significant challenges in exploiting the benefits of case study research in evaluative health research, which relate to status, definition and reporting. Case study research has been marginalised at the bottom of an evidence hierarchy, seen to offer little by way of explanatory power, if nonetheless useful for adding descriptive data on process or providing useful illustrations for policymakers [ 37 ]. This is an opportune moment to revisit this low status. As health researchers are increasingly charged with evaluating ‘natural experiments’ (the use of face masks in the response to the COVID-19 pandemic being a recent example [ 38 ]), rather than interventions that take place in settings that can be controlled, research approaches that strengthen causal inference without requiring randomisation become more relevant.

A second challenge for improving the use of case study evidence in evaluative health research is that, as we have seen, what is meant by ‘case study’ varies widely, not only across but also within disciplines. There is indeed little consensus amongst methodologists as to how to define ‘a case study’. Definitions focus, variously, on small sample size or lack of control over the intervention (e.g. [ 39 ] p194), on in-depth study and context [ 40 , 41 ], on the logic of inference used [ 35 ] or on distinct research strategies which incorporate a number of methods to address questions of ‘how’ and ‘why’ [ 42 ]. Moreover, definitions developed for specific disciplines do not capture the range of ways in which case study research is carried out across disciplines. Multiple definitions of case study reflect the richness and diversity of the approach. However, evidence suggests that a lack of consensus across methodologists results in some of the limitations of published reports of empirical case studies [ 43 , 44 ]. Hyett and colleagues [ 43 ], for instance, reviewing reports in qualitative journals, found little match between methodological definitions of case study research and how authors used the term.

This raises the third challenge we identify: that case study reports are typically not written in ways that are accessible or useful for the evaluation research community and policymakers. Case studies may not appear in journals widely read by those in the health sciences, either because space constraints preclude the reporting of rich, thick descriptions, or because of the reported lack of willingness of some biomedical journals to publish research that uses qualitative methods [ 45 ], signalling the persistence of the aforementioned evidence hierarchy. Where they do, however, the term ‘case study’ is used to indicate, interchangeably, a qualitative study, an N of 1 sample, or a multi-method, in-depth analysis of one example from a population of phenomena. Definitions of what constitutes the ‘case’ are frequently lacking and appear to be used as a synonym for the settings in which the research is conducted. Even where case studies offer insights for evaluation, their primary aims may not have been evaluative, so the implications may not be explicitly drawn out. Indeed, some case study reports might properly be aiming for thick description without necessarily seeking to inform about context or causality.

Acknowledging plurality and developing guidance

We recognise that definitional and methodological plurality is not only inevitable, but also a necessary and creative reflection of the very different epistemological and disciplinary origins of health researchers, and the aims they have in doing and reporting case study research. Indeed, to provide some clarity, Thomas [ 46 ] has suggested a typology of subject/purpose/approach/process for classifying aims (e.g. evaluative or exploratory), sample rationale and selection and methods for data generation of case studies. We also recognise that the diversity of methods used in case study research, and the necessary focus on narrative reporting, does not lend itself to straightforward development of formal quality or reporting criteria.

Existing checklists for reporting case study research from the social sciences—for example Lincoln and Guba’s [ 47 ] and Stake’s [ 33 ]—are primarily orientated to the quality of narrative produced, and the extent to which they encapsulate thick description, rather than the more pragmatic issues of implications for intervention effects. Those designed for clinical settings, such as the CARE (CAse REports) guidelines, provide specific reporting guidelines for medical case reports about single, or small groups of patients [ 48 ], not for case study research.

The Design of Case Study Research in Health Care (DESCARTE) model [ 44 ] suggests a series of questions to be asked of a case study researcher (including clarity about the philosophy underpinning their research), study design (with a focus on case definition) and analysis (to improve process). The model resembles toolkits for enhancing the quality and robustness of qualitative and mixed-methods research reporting, and it is usefully open-ended and non-prescriptive. However, even if it does include some reflections on context, the model does not fully address aspects of context, logic and causal inference that are perhaps most relevant for evaluative research in health.

Hence, for evaluative research where the aim is to report empirical findings in ways that are intended to be pragmatically useful for health policy and practice, this may be an opportune time to consider how to best navigate plurality around what is (minimally) important to report when publishing empirical case studies, especially with regards to the complex relationships between context and interventions, information that case study research is well placed to provide.

The conventional scientific quest for certainty, predictability and linear causality (maximised in RCT designs) has to be augmented by the study of uncertainty, unpredictability and emergent causality [ 8 ] in complex systems. This will require methodological pluralism, and openness to broadening the evidence base to better understand both causality in, and the transferability of, system change interventions [ 14 , 20 , 23 , 25 ]. Case study research evidence is essential, yet is currently underexploited in the health sciences. If evaluative health research is to move beyond the current impasse on methods for understanding interventions as interruptions in complex systems, we need to consider in more detail how researchers can conduct and report empirical case studies which do aim to elucidate the contextual factors which interact with interventions to produce particular effects. To this end, supported by the UK’s Medical Research Council, we are embracing the challenge to develop guidance for case study researchers studying complex interventions. Following a meta-narrative review of the literature, we are planning a Delphi study to inform guidance that will, at minimum, cover the value of case study research for evaluating the interrelationship between context and complex system-level interventions; for situating and defining ‘the case’, and generalising from case studies; as well as provide specific guidance on conducting, analysing and reporting case study research. Our hope is that such guidance can support researchers evaluating interventions in complex systems to better exploit the diversity and richness of case study research.

Acknowledgements

Not applicable

Abbreviations

QCA: Qualitative comparative analysis
QED: Quasi-experimental design
RCT: Randomised controlled trial

Authors’ contributions

JG, MP, SP, JM, TG, CP and SS drafted the initial paper; all authors contributed to the drafting of the final version, and read and approved the final manuscript.

Funding

This work was funded by the Medical Research Council - MRC Award MR/S014632/1 HCS: Case study, Context and Complex interventions (TRIPLE C). SP was additionally funded by the University of Oxford's Higher Education Innovation Fund (HEIF).

Availability of data and materials

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.



Qualitative Research: Case study evaluation

  • Justin Keen, research fellow, health economics research group
  • Tim Packwood
  • Brunel University, Uxbridge, Middlesex UB8 3PH
  • Correspondence to: Dr Keen

Case study evaluations, using one or more qualitative methods, have been used to investigate important practical and policy questions in health care. This paper describes the features of a well designed case study and gives examples showing how qualitative methods are used in evaluations of health services and health policy.

This is the last in a series of seven articles describing non-quantitative techniques and showing their value in health research

Introduction

The medical approach to understanding disease has traditionally drawn heavily on qualitative data, and in particular on case studies to illustrate important or interesting phenomena. The tradition continues today, not least in regular case reports in this and other medical journals. Moreover, much of the everyday work of doctors and other health professionals still involves decisions that are qualitative rather than quantitative in nature.

This paper discusses the use of qualitative research methods, not in clinical care but in case study evaluations of health service interventions. It is useful for doctors to understand the principles guiding the design and conduct of these evaluations, because they are frequently used by both researchers and inspectorial agencies (such as the Audit Commission in the United Kingdom and the Office of Technology Assessment in the United States) to investigate the work of doctors and other health professionals.

We briefly discuss the circumstances in which case study research can usefully be undertaken in health service settings and the ways in which qualitative methods are used within case studies. Examples show how qualitative methods are applied, both in purely qualitative studies and alongside quantitative methods.

Case study evaluations

Doctors often find themselves asking important practical questions, such as: should we be involved in the management of hospitals and, if so, how? How will new government policies affect the lives of our patients? And how can we cope with changes …


Case Study Evaluation Approach

A case study evaluation approach can be an incredibly powerful tool for monitoring and evaluating complex programs and policies. By identifying common themes and patterns, this approach allows us to better understand the successes and challenges faced by the program. In this article, we’ll explore the benefits of using a case study evaluation approach in the monitoring and evaluation of projects, programs, and public policies.

Table of Contents

  • Introduction to Case Study Evaluation Approach
  • The Advantages of a Case Study Evaluation Approach
  • Types of Case Studies
  • Potential Challenges with a Case Study Evaluation Approach
  • Guiding Principles for Successful Implementation of a Case Study Evaluation Approach
  • Benefits of Incorporating the Case Study Evaluation Approach in the Monitoring and Evaluation of Projects and Programs

A case study evaluation approach is a great way to gain an in-depth understanding of a particular issue or situation. This type of approach allows the researcher to observe, analyze, and assess the effects of a particular situation on individuals or groups.

An individual, a location, or a project may serve as the focal point of a case study’s attention. Quantitative and qualitative data are frequently used in conjunction with one another.

It also allows the researcher to gain insights into how people react to external influences. By using a case study evaluation approach, researchers can gain insights into how certain factors such as policy change or a new technology have impacted individuals and communities. The data gathered through this approach can be used to formulate effective strategies for responding to changes and challenges. Ultimately, this monitoring and evaluation approach helps organizations make better decisions about the implementation of their plans.

This approach can be used to assess the effectiveness of a policy, program, or initiative by considering specific elements such as implementation processes, outcomes, and impact. A case study evaluation approach can provide an in-depth understanding of the effectiveness of a program by closely examining the processes involved in its implementation. This includes understanding the context, stakeholders, and resources to gain insight into how well a program is functioning or has been executed. By evaluating these elements, it can help to identify areas for improvement and suggest potential solutions. The findings from this approach can then be used to inform decisions about policies, programs, and initiatives for improved outcomes.

It is also useful for determining whether other policies, programs, or initiatives could be applied to similar situations in order to achieve similar or improved outcomes. By researching and analyzing the successes of previous cases, evaluators can identify approaches worth transferring to comparable settings.

A case study evaluation approach offers the advantage of providing in-depth insight into a particular program or policy. This can be accomplished by analyzing data and observations collected from a range of stakeholders such as program participants, service providers, and community members. The monitoring and evaluation approach is used to assess the impact of programs and inform the decision-making process to ensure successful implementation. The case study monitoring and evaluation approach can help identify any underlying issues that need to be addressed in order to improve program effectiveness. It also provides a reality check on how well programs are actually working, allowing organizations to make adjustments as needed. Overall, a case study monitoring and evaluation approach helps to ensure that policies and programs are achieving their objectives while providing valuable insight into how they are performing overall.

By taking a qualitative approach to data collection and analysis, case study evaluations are able to capture nuances in the context of a particular program or policy that can be overlooked when relying solely on quantitative methods. Using this approach, insights can be gleaned from looking at the individual experiences and perspectives of actors involved, providing a more detailed understanding of the impact of the program or policy than is possible with other evaluation methodologies. As such, case study monitoring evaluation is an invaluable tool in assessing the effectiveness of a particular initiative, enabling more informed decision-making as well as more effective implementation of programs and policies.

Furthermore, this approach is an effective way to uncover experiential information that can help to inform the ongoing improvement of policy and programming over time. By analyzing the data gathered through this systematic approach, stakeholders can gain deeper insight into how best to make meaningful and long-term changes in their respective organizations.

Case studies come in a variety of forms, each of which can be put to a unique set of evaluation tasks. Evaluators commonly describe six distinct types of case studies: illustrative, exploratory, critical instance, program implementation, program effects, and cumulative.

Illustrative Case Study

An illustrative case study is a type of case study that is used to provide a detailed and descriptive account of a particular event, situation, or phenomenon. It is often used in research to provide a clear understanding of a complex issue, and to illustrate the practical application of theories or concepts.

An illustrative case study typically uses qualitative data, such as interviews, surveys, or observations, to provide a detailed account of the unit being studied. The case study may also include quantitative data, such as statistics or numerical measurements, to provide additional context or to support the qualitative data.

The goal of an illustrative case study is to provide a rich and detailed description of the unit being studied, and to use this information to illustrate broader themes or concepts. For example, an illustrative case study of a successful community development project may be used to illustrate the importance of community engagement and collaboration in achieving development goals.

One of the strengths of an illustrative case study is its ability to provide a detailed and nuanced understanding of a particular issue or phenomenon. By focusing on a single case, the researcher is able to provide a detailed and in-depth analysis that may not be possible through other research methods.

However, one limitation of an illustrative case study is that the findings may not be generalizable to other contexts or populations. Because the case study focuses on a single unit, it may not be representative of other similar units or situations.

A well-executed case study can shed light on wider research topics or concepts through its thorough and descriptive analysis of a specific event or phenomenon.

Exploratory Case Study

An exploratory case study is a type of case study that is used to investigate a new or previously unexplored phenomenon or issue. It is often used in research when the topic is relatively unknown or when there is little existing literature on the topic.

Exploratory case studies are typically qualitative in nature and use a variety of methods to collect data, such as interviews, observations, and document analysis. The focus of the study is to gather as much information as possible about the phenomenon being studied and to identify new and emerging themes or patterns.

The goal of an exploratory case study is to provide a foundation for further research and to generate hypotheses about the phenomenon being studied. By exploring the topic in-depth, the researcher can identify new areas of research and generate new questions to guide future research.

One of the strengths of an exploratory case study is its ability to provide a rich and detailed understanding of a new or emerging phenomenon. By using a variety of data collection methods, the researcher can gather a broad range of data and perspectives to gain a more comprehensive understanding of the phenomenon being studied.

However, one limitation of an exploratory case study is that the findings may not be generalizable to other contexts or populations. Because the study is focused on a new or previously unexplored phenomenon, the findings may not be applicable to other situations or populations.

Exploratory case studies are an effective research strategy for learning about novel occurrences, developing research hypotheses, and gaining a deep familiarity with a topic of study.

Critical Instance Case Study

A critical instance case study is a type of case study that focuses on a specific event or situation that is critical to understanding a broader issue or phenomenon. The goal of a critical instance case study is to analyze the event in depth and to draw conclusions about the broader issue or phenomenon based on the analysis.

A critical instance case study typically uses qualitative data, such as interviews, observations, or document analysis, to provide a detailed and nuanced understanding of the event being studied. The data are analyzed using various methods, such as content analysis or thematic analysis, to identify patterns and themes that emerge from the data.

The critical instance case study is often used in research when a particular event or situation is critical to understanding a broader issue or phenomenon. For example, a critical instance case study of a successful disaster response effort may be used to identify key factors that contributed to the success of the response, and to draw conclusions about effective disaster response strategies more broadly.

One of the strengths of a critical instance case study is its ability to provide a detailed and in-depth analysis of a particular event or situation. By focusing on a critical instance, the researcher is able to provide a rich and nuanced understanding of the event, and to draw conclusions about broader issues or phenomena based on the analysis.

However, one limitation of a critical instance case study is that the findings may not be generalizable to other contexts or populations. Because the case study focuses on a specific event or situation, the findings may not be applicable to other similar events or situations.

A critical instance case study is a valuable research method that can provide a detailed and nuanced understanding of a particular event or situation and can be used to draw conclusions about broader issues or phenomena based on the analysis.

Program Implementation Case Study

A program implementation case study is a type of case study that focuses on the implementation of a particular program or intervention. The goal of the case study is to provide a detailed and comprehensive account of the program implementation process, and to identify factors that contributed to the success or failure of the program.

Program implementation case studies typically use qualitative data, such as interviews, observations, and document analysis, to provide a detailed and nuanced understanding of the program implementation process. The data are analyzed using various methods, such as content analysis or thematic analysis, to identify patterns and themes that emerge from the data.

The program implementation case study is often used in research to evaluate the effectiveness of a particular program or intervention, and to identify strategies for improving program implementation in the future. For example, a program implementation case study of a school-based health program may be used to identify key factors that contributed to the success or failure of the program, and to make recommendations for improving program implementation in similar settings.

One of the strengths of a program implementation case study is its ability to provide a detailed and comprehensive account of the program implementation process. By using qualitative data, the researcher is able to capture the complexity and nuance of the implementation process, and to identify factors that may not be captured by quantitative data alone.

However, one limitation of a program implementation case study is that the findings may not be generalizable to other contexts or populations. Because the case study focuses on a specific program or intervention, the findings may not be applicable to other programs or interventions in different settings.

An effective research tool, a case study of program implementation may illuminate the intricacies of the implementation process and point the way towards future enhancements.

Program Effects Case Study

A program effects case study is a research method that evaluates the effectiveness of a particular program or intervention by examining its outcomes or effects. The purpose of this type of case study is to provide a detailed and comprehensive account of the program’s impact on its intended participants or target population.

A program effects case study typically employs both quantitative and qualitative data collection methods, such as surveys, interviews, and observations, to evaluate the program’s impact on the target population. The data is then analyzed using statistical and thematic analysis to identify patterns and themes that emerge from the data.

The program effects case study is often used to evaluate the success of a program and identify areas for improvement. For example, a program effects case study of a community-based HIV prevention program may evaluate the program’s effectiveness in reducing HIV transmission rates among high-risk populations and identify factors that contributed to the program’s success.

One of the strengths of a program effects case study is its ability to provide a detailed and nuanced understanding of a program’s impact on its intended participants or target population. By using both quantitative and qualitative data, the researcher can capture both the objective and subjective outcomes of the program and identify factors that may have contributed to the outcomes.

However, a limitation of the program effects case study is that it may not be generalizable to other populations or contexts. Since the case study focuses on a particular program and population, the findings may not be applicable to other programs or populations in different settings.

A program effects case study is a valuable research method because it provides a detailed view of how a program affects its intended participants. This kind of case study can help identify what needs to change and how to design programs that work better.

Cumulative Case Study

A cumulative case study is a type of case study that involves the collection and analysis of multiple cases to draw broader conclusions. Unlike a single-case study, which focuses on one specific case, a cumulative case study combines multiple cases to provide a more comprehensive understanding of a phenomenon.

The purpose of a cumulative case study is to build up a body of evidence through the examination of multiple cases. The cases are typically selected to represent a range of variations or perspectives on the phenomenon of interest. Data is collected from each case using a range of methods, such as interviews, surveys, and observations.

The data is then analyzed across cases to identify common themes, patterns, and trends. The analysis may involve both qualitative and quantitative methods, such as thematic analysis and statistical analysis.
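As a simple, hypothetical illustration of the cross-case step, the sketch below tallies coded themes across cases to surface those that recur; the case names and themes are invented, and in practice such counts would only complement, not replace, qualitative interpretation.

```python
from collections import Counter

# Hypothetical coded themes from a cumulative case study of three clinics.
case_themes = {
    "clinic_1": {"community engagement", "stable funding", "staff turnover"},
    "clinic_2": {"community engagement", "leadership support"},
    "clinic_3": {"community engagement", "stable funding"},
}

# Count how many cases each theme appears in; themes present across most
# cases become candidates for cross-case conclusions or theory-building.
theme_counts = Counter(theme for themes in case_themes.values() for theme in themes)

for theme, n in theme_counts.most_common():
    print(f"{theme}: {n}/{len(case_themes)} cases")
```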

The cumulative case study is often used in research to develop and test theories about a phenomenon. For example, a cumulative case study of successful community-based health programs may be used to identify common factors that contribute to program success, and to develop a theory about effective community-based health program design.

One of the strengths of the cumulative case study is its ability to draw on a range of cases to build a more comprehensive understanding of a phenomenon. By examining multiple cases, the researcher can identify patterns and trends that may not be evident in a single case study. This allows for a more nuanced understanding of the phenomenon and helps to develop more robust theories.

However, one limitation of the cumulative case study is that it can be time-consuming and resource-intensive to collect and analyze data from multiple cases. Additionally, the selection of cases may introduce bias if the cases are not representative of the population of interest.

In summary, a cumulative case study is a valuable research method that can provide a more comprehensive understanding of a phenomenon by examining multiple cases. This type of case study is particularly useful for developing and testing theories and identifying common themes and patterns across cases.

When conducting a case study evaluation approach, one of the main challenges is the need to establish a contextually relevant research design that accounts for the unique factors of the case being studied. This requires close monitoring of the case, its environment, and relevant stakeholders. In addition, the researcher must build a framework for the collection and analysis of data that is able to draw meaningful conclusions and provide valid insights into the dynamics of the case. Ultimately, an effective case study monitoring evaluation approach will allow researchers to form an accurate understanding of their research subject.

Additionally, depending on the size and scope of the case, there may be concerns regarding the availability of resources and personnel that could be allocated to data collection and analysis. To address these issues, a case study monitoring evaluation approach can be adopted, which would involve a mix of different methods such as interviews, surveys, focus groups and document reviews. Such an approach could provide valuable insights into the effectiveness and implementation of the case in question. Additionally, this type of evaluation can be tailored to the specific needs of the case study to ensure that all relevant data is collected and respected.

When dealing with a highly sensitive or confidential subject matter within a case study, researchers must take extra measures to prevent bias during data collection as well as protect participant anonymity, while also collecting valid data in order to ensure reliable results.

Moreover, when conducting a case study evaluation it is important to consider the potential implications of the data gathered. Maintaining confidentiality and deploying ethical research practices are essential when conducting a case study to ensure an unbiased and accurate evaluation.

When planning and implementing a case study evaluation approach, it is important to ensure the guiding principles of research quality, data collection, and analysis are met. To ensure these principles are upheld, it is essential to develop a comprehensive monitoring and evaluation plan. This plan should clearly outline the steps to be taken during the data collection and analysis process. Furthermore, the plan should provide detailed descriptions of the project objectives, target population, key indicators, and timeline. It is also important to include metrics or benchmarks to monitor progress and identify any potential areas for improvement. By implementing such an approach, it will be possible to ensure that the case study evaluation approach yields valid and reliable results.

To ensure successful implementation, it is essential to establish a reliable data collection process that includes detailed information such as the scope of the study, the participants involved, and the methods used to collect data. Additionally, it is important to have a clear understanding of what will be examined through the evaluation process and how the results will be used. Ultimately, effective planning is key to ensuring that the evaluation process yields meaningful insights.
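As a minimal sketch only, the plan elements described above (objective, target population, timeline, and indicators with baselines and targets) can be captured in a small data structure against which progress is benchmarked; all names and figures below are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Indicator:
    name: str
    baseline: float
    target: float

    def progress(self, current: float) -> float:
        """Share of the baseline-to-target distance achieved so far."""
        span = self.target - self.baseline
        return (current - self.baseline) / span if span else 1.0

@dataclass
class EvaluationPlan:
    objective: str
    target_population: str
    start: date
    end: date
    indicators: list = field(default_factory=list)

plan = EvaluationPlan(
    objective="Increase uptake of the screening programme",
    target_population="Adults aged 50-74 in the pilot district",
    start=date(2024, 1, 1),
    end=date(2025, 12, 31),
    indicators=[Indicator("Screening coverage (proportion)", baseline=0.42, target=0.60)],
)

# A mid-term check against the benchmark: roughly half-way to target here.
print(f"{plan.indicators[0].progress(current=0.51):.0%}")
```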

Benefits of Incorporating the Case Study Evaluation Approach in the Monitoring and Evaluation of Projects and Programmes

Using a case study approach in monitoring and evaluation allows for a more detailed and in-depth exploration of the project’s success, helping to identify key areas of improvement and successes that may have been overlooked through traditional evaluation. Through this case study method, specific data can be collected and analyzed to identify trends and different perspectives that can support the evaluation process. This data can allow stakeholders to gain a better understanding of the project’s successes and failures, helping them make informed decisions on how to strengthen current activities or shape future initiatives. From a monitoring and evaluation standpoint, this approach can provide an increased level of accuracy in terms of accurately assessing the effectiveness of the project.

This can provide valuable insights into what works (and what doesn’t) when it comes to implementing projects and programs, aiding decision-makers in making future plans that better meet their objectives. However, monitoring and evaluation is just one approach to assessing the success of a case study. It does provide a useful insight into which initiatives may be successful, but it is important to note that there are other effective research methods, such as surveys and interviews, that can also help to further evaluate the success of a project or program.

In conclusion, a case study evaluation approach can be incredibly useful in monitoring and evaluating complex programs and policies. By exploring key themes, patterns and relationships, organizations can gain a detailed understanding of the successes, challenges and limitations of their program or policy. This understanding can then be used to inform decision-making and improve outcomes for those involved. With its ability to provide an in-depth understanding of a program or policy, the case study evaluation approach has become an invaluable tool for monitoring and evaluation professionals.


Evaluation Research Design: Examples, Methods & Types

busayo.longe

As you engage in tasks, you will need to take intermittent breaks to determine how much progress has been made and if any changes need to be effected along the way. This is very similar to what organizations do when they carry out  evaluation research.  

The evaluation research methodology has become one of the most important approaches for organizations as they strive to create products, services, and processes that speak to the needs of target users. In this article, we will show you how your organization can conduct successful evaluation research using Formplus.

What is Evaluation Research?

Also known as program evaluation, evaluation research is a common research design that entails carrying out a structured assessment of the value of resources committed to a project or specific goal. It often adopts social research methods to gather and analyze useful information about organizational processes and products.  

As a type of applied research, evaluation research is typically associated with real-life scenarios within organizational contexts. This means that the researcher will need to leverage common workplace skills, including interpersonal skills and teamwork, to arrive at objective research findings that will be useful to stakeholders.

Characteristics of Evaluation Research

  • Research Environment: Evaluation research is conducted in the real world; that is, within the context of an organization. 
  • Research Focus: Evaluation research is primarily concerned with measuring the outcomes of a process rather than the process itself. 
  • Research Outcome: Evaluation research is employed for strategic decision making in organizations. 
  • Research Goal: The goal of program evaluation is to determine whether a process has yielded the desired result(s). 
  • This type of research protects the interests of stakeholders in the organization. 
  • It often represents a middle-ground between pure and applied research. 
  • Evaluation research is both detailed and continuous. It pays attention to performative processes rather than descriptions. 
  • Research Process: This research design utilizes qualitative and quantitative research methods to gather relevant data about a product or action-based strategy. These methods include observation, tests, and surveys.

Types of Evaluation Research

The Encyclopedia of Evaluation (Mathison, 2004) treats forty-two different evaluation approaches and models ranging from “appreciative inquiry” to “connoisseurship” to “transformative evaluation”. Common types of evaluation research include the following: 

  • Formative Evaluation

Formative evaluation or baseline survey is a type of evaluation research that involves assessing the needs of the users or target market before embarking on a project.  Formative evaluation is the starting point of evaluation research because it sets the tone of the organization’s project and provides useful insights for other types of evaluation.  

  • Mid-term Evaluation

Mid-term evaluation entails assessing how far a project has come and determining if it is in line with the set goals and objectives. Mid-term reviews allow the organization to determine whether a change or modification of the implementation strategy is necessary, and they also serve to track the project.

  • Summative Evaluation

This type of evaluation is also known as end-term evaluation or project-completion evaluation, and it is conducted immediately after the completion of a project. Here, the researcher examines the value and outputs of the program within the context of the projected results.

Summative evaluation allows the organization to measure the degree of success of a project. Such results can be shared with stakeholders, target markets, and prospective investors. 

  • Outcome Evaluation

Outcome evaluation is primarily target-audience oriented because it measures the effects of the project, program, or product on the users. This type of evaluation views the outcomes of the project through the lens of the target audience and it often measures changes such as knowledge-improvement, skill acquisition, and increased job efficiency. 

  • Appreciative Enquiry

Appreciative inquiry is a type of evaluation research that pays attention to result-producing approaches. It is predicated on the belief that an organization will grow in whatever direction its stakeholders pay primary attention to such that if all the attention is focused on problems, identifying them would be easy. 

In carrying out appreciative inquiry, the researcher identifies the factors directly responsible for the positive results realized in the course of a project, analyses the reasons for these results, and intensifies the utilization of these factors.

Evaluation Research Methodology 

There are four major evaluation research methods, namely: output measurement, input measurement, impact assessment, and service quality.

  • Output/Performance Measurement

Output measurement is a method employed in evaluative research that shows the results of an activity undertaken by an organization. In other words, performance measurement pays attention to the results achieved by the resources invested in a specific activity or organizational process.

More than investing resources in a project, organizations must be able to track the extent to which these resources have yielded results, and this is where performance measurement comes in. Output measurement allows organizations to pay attention to the effectiveness and impact of a process rather than just the process itself. 

Other key indicators of performance measurement include user-satisfaction, organizational capacity, market penetration, and facility utilization. In carrying out performance measurement, organizations must identify the parameters that are relevant to the process in question, their industry, and the target markets. 

5 Performance Evaluation Research Questions Examples

  • What is the cost-effectiveness of this project?
  • What is the overall reach of this project?
  • How would you rate the market penetration of this project?
  • How accessible is the project? 
  • Is this project time-efficient? 


  • Input Measurement

In evaluation research, input measurement entails assessing the amount of resources committed to a project or goal in any organization. This is one of the most common indicators in evaluation research because it allows organizations to track their investments.

The most common indicator of input measurement is the budget, which allows organizations to evaluate and limit expenditure for a project. It is also important to measure non-monetary investments such as human capital (that is, the number of persons needed for successful project execution) and production capital.
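As a toy illustration (all figures hypothetical), the sketch below relates the input measures discussed here, budget and staff time, to the output measures discussed above, units of service delivered and people reached, to produce simple efficiency indicators.

```python
# Hypothetical inputs and outputs for a single project.
budget_spent = 120_000.0        # monetary input
staff_days = 340                # non-monetary input (human capital)
sessions_delivered = 480        # output: units of service delivered
people_reached = 1_950          # output: reach

# Simple efficiency indicators derived by relating inputs to outputs.
cost_per_session = budget_spent / sessions_delivered
cost_per_person_reached = budget_spent / people_reached
sessions_per_staff_day = sessions_delivered / staff_days

print(f"Cost per session delivered: {cost_per_session:,.2f}")
print(f"Cost per person reached:    {cost_per_person_reached:,.2f}")
print(f"Sessions per staff day:     {sessions_per_staff_day:.2f}")
```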

5 Input Evaluation Research Questions Examples

  • What is the budget for this project?
  • What is the timeline of this process?
  • How many employees have been assigned to this project? 
  • Do we need to purchase new machinery for this project? 
  • How many third-parties are collaborators in this project? 


  • Impact/Outcomes Assessment

In impact assessment, the evaluation researcher focuses on how the product or project affects target markets, both directly and indirectly. Outcomes assessment is somewhat challenging because many times, it is difficult to measure the real-time value and benefits of a project for the users. 

In assessing the impact of a process, the evaluation researcher must pay attention to the improvement recorded by the users as a result of the process or project in question. Hence, it makes sense to focus on cognitive and affective changes, expectation-satisfaction, and similar accomplishments of the users. 

5 Impact Evaluation Research Questions Examples

  • How has this project affected you? 
  • Has this process affected you positively or negatively?
  • What role did this project play in improving your earning power? 
  • On a scale of 1-10, how excited are you about this project?
  • How has this project improved your mental health? 


  • Service Quality

Service quality assessment is the evaluation research method that accounts for any gap between the expectations of the target market and its impression of the project as delivered. Hence, it pays attention to the overall quality assessment carried out by users.

It is not uncommon for organizations to raise the expectations of target markets as they embark on specific projects. Service quality evaluation allows these organizations to track the extent to which the actual product or service delivery fulfils those expectations.

5 Service Quality Evaluation Questions

  • On a scale of 1-10, how satisfied are you with the product?
  • How helpful was our customer service representative?
  • How satisfied are you with the quality of service?
  • How long did it take to resolve the issue at hand?
  • How likely are you to recommend us to your network?


Uses of Evaluation Research 

  • Evaluation research is used by organizations to measure the effectiveness of activities and identify areas needing improvement. Findings from evaluation research are key to project and product advancements and are very influential in helping organizations realize their goals efficiently.     
  • The findings from evaluation research serve as evidence of the impact of a project an organization has embarked on. This information can be presented to stakeholders and customers, and can also help your organization secure investment for future projects. 
  • Evaluation research helps organizations to justify their use of limited resources and choose the best alternatives. 
  •  It is also useful in pragmatic goal setting and realization. 
  • Evaluation research provides detailed insights into projects embarked on by an organization. Essentially, it allows all stakeholders to understand multiple dimensions of a process, and to determine strengths and weaknesses. 
  • Evaluation research also plays a major role in helping organizations to improve their overall practice and service delivery. This research design allows organizations to weigh existing processes through feedback provided by stakeholders, and this informs better decision making. 
  • Evaluation research is also instrumental to sustainable capacity building. It helps you to analyze demand patterns and determine whether your organization requires more funds, upskilling or improved operations.

Data Collection Techniques Used in Evaluation Research

In gathering useful data for evaluation research, the researcher often combines quantitative and qualitative research methods. Qualitative research methods allow the researcher to gather information relating to intangible values such as market satisfaction and perception.

Quantitative methods, on the other hand, are used by the evaluation researcher to assess numerical patterns, that is, quantifiable data. These methods help you measure impact and results, although they are less useful for understanding the context of the process.

Quantitative Methods for Evaluation Research

  • Surveys

A survey is a quantitative method that allows you to gather information about a project from a specific group of people. Surveys are largely context-based: they are limited to target groups who are asked a set of structured questions in line with a predetermined context.

Surveys usually consist of closed-ended questions that allow the evaluation researcher to gain insight into several variables, including market coverage and customer preferences. Surveys can be carried out physically using paper forms or online through data-gathering platforms like Formplus.

  • Questionnaires

A questionnaire is a common quantitative research instrument deployed in evaluation research. Typically, it is an aggregation of different types of questions or prompts which help the researcher to obtain valuable information from respondents. 

  • Polls

A poll is a common method of opinion sampling that allows you to gauge the public's perception of issues that affect them. The best way to achieve accuracy in polling is to conduct polls online using platforms like Formplus.

Polls are often structured as Likert questions, and the options provided account for neutrality or indecision. Conducting a poll allows the evaluation researcher to understand the extent to which a product or service satisfies the needs of users.
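
As a small, purely illustrative example (not taken from the article), the snippet below tallies responses to a single five-point Likert poll item and reports a mean score; the response labels and answers are hypothetical.

```python
from collections import Counter

# Hypothetical responses to a single five-point Likert item
responses = [
    "Strongly agree", "Agree", "Agree", "Neutral", "Disagree",
    "Agree", "Strongly agree", "Neutral", "Agree", "Strongly disagree",
]

# Map labels to scores so a mean score can be reported alongside the counts
scale = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3, "Agree": 4, "Strongly agree": 5}

counts = Counter(responses)
mean_score = sum(scale[r] for r in responses) / len(responses)

for label in scale:
    print(f"{label}: {counts.get(label, 0)}")
print(f"Mean score (1-5): {mean_score:.1f}")
```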

Qualitative Methods for Evaluation Research

  • One-on-One Interview

An interview is a structured conversation involving two participants, usually the researcher and a user or member of the target market. One-on-one interviews can be conducted in person, via telephone or through video conferencing apps like Zoom and Google Meet.

  • Focus Groups

A focus group is a research method that involves interacting with a limited number of persons within your target market, who can provide insights on market perceptions and new products. 

  • Qualitative Observation

Qualitative observation is a research method that allows the evaluation researcher to gather useful information from the target audience through a variety of subjective approaches. This method is more in-depth than quantitative observation because it works with a smaller sample size and relies on inductive analysis.

  • Case Studies

A case study is a research method that helps the researcher to gain a better understanding of a subject or process. Case studies involve in-depth research into a given subject, to understand its functionalities and successes. 

How to Use the Formplus Online Form Builder for an Evaluation Survey

  • Sign into Formplus

In the Formplus builder, you can easily create your evaluation survey by dragging and dropping preferred fields into your form. To access the Formplus builder, you will need to create an account on Formplus. 

Once you do this, sign in to your account and click on “Create Form” to begin.


  • Edit Form Title

Click on the field provided to input your form title, for example, “Evaluation Research Survey”.


Click on the edit button to edit the form, then build the survey as follows:

  • Add fields: drag and drop preferred form fields into your form from the inputs column of the Formplus builder. There are several field input options for surveys.
  • Edit the fields as needed.
  • Click on “Save”.
  • Preview the form.

  • Form Customization

With the form customization options in the form builder, you can easily change the look of your form and make it more unique and personalized. Formplus allows you to change your form theme, add background images, and even change the font according to your needs.


  • Multiple Sharing Options

Formplus offers multiple form sharing options that enable you to easily share your evaluation survey with respondents. You can use the direct social media sharing buttons to share your form link to your organization’s social media pages.

You can send out your survey form as email invitations to your research subjects too. If you wish, you can share your form’s QR code or embed it on your organization’s website for easy access. 

Conclusion  

Conducting evaluation research allows organizations to determine the effectiveness of their activities at different phases. This type of research can be carried out using qualitative and quantitative data collection methods including focus groups, observation, telephone and one-on-one interviews, and surveys. 

Online surveys created and administered via data collection platforms like Formplus make it easier for you to gather and process information during evaluation research. With Formplus multiple form sharing options, it is even easier for you to gather useful data from target markets.


Evaluation design

Evidence and Evaluation Support

Shae Johnson


About this resource

Once you have decided on evaluation questions - what you want to know - then you need to decide how you are going to answer those questions. An 'evaluation design' is the overall structure or plan of an evaluation - the approach taken to answering the main evaluation questions. Evaluation design is not the same as the 'research methods' but it does help to clarify which research methods are best suited to gathering the information (data) needed to answer the evaluation questions.

This resource gives a quick overview of some of the main evaluation designs used for outcomes evaluations or impact evaluations. These are evaluations that aim to answer questions about whether a program, service or treatment (often called the 'intervention') is working as intended, or if it is having a positive or negative effect on its intended audience. We also briefly discuss some other types of evaluation design that are sometimes used in outcomes evaluations but are also commonly used to evaluate how programs or services are being delivered.

This resource is intended for use by program managers or practitioners who want a basic understanding of the different types of evaluation design.

Deciding on an evaluation design

Different evaluation designs are suitable for answering different evaluation questions, so the design of an evaluation usually depends on its purpose and the key evaluation questions it is meant to answer. This guide focuses on evaluations that measure a program or intervention's effectiveness or results. An evaluation design with a focus on effectiveness may include questions such as, 'To what extent did the program achieve its expected outcomes?' or 'What changes occurred as a result of this program?'. However, evaluations can also have a different purpose, such as determining if a program or service was implemented as intended, if it was appropriate for its intended client group or what the cost versus benefit was. These different types of evaluation can require different kinds of evaluation design.

There are also other factors to consider when deciding on an evaluation design, and these are listed below. Working through these factors will help to inform the design and methods that will be most suitable for your evaluation. The last two factors in this list will also help establish the scope of the evaluation. Additional support to work through these factors is provided under Further reading.

Important points to consider when deciding on an evaluation design are:

  • the questions you want to answer
  • the audience for the evaluation
  • the maturity of your program (i.e. is it ready to evaluate outcomes or has it only just started?)
  • the type of program or intervention you are seeking to evaluate
  • your client or target group (e.g. who the program is for, how many people are in the program or receive a service and what their characteristics are)
  • what data are already available
  • your resources (e.g. funding, staff, skills) and time frame
  • whether you will conduct an evaluation internally or contract an external evaluator.

The following designs are most appropriate for conducting an outcomes evaluation. However, an outcomes evaluation is most useful if it is accompanied by a detailed understanding of how the program was delivered and to whom. For example, did the program reach the intended participants? Were all components of the program delivered? These types of questions are explored in a different type of evaluation called a 'process evaluation', and it can be useful to combine process evaluations with those that look at outcomes. Knowing a program was delivered as planned will then allow you to link the program activities to the outcomes.

Evaluation designs

Researchers and evaluators sometimes refer to a 'hierarchy of evidence' for assessing the effectiveness of a program or intervention. The evaluation designs that are thought to produce the most powerful evidence that a program or intervention works are usually situated at or near the top of this hierarchy.

The hierarchies usually have randomised controlled trials (RCTs) at or near the top. These are usually followed by 'quasi-experimental' designs using comparison groups. These types of evaluation designs aim to measure changes for participants before and after the program or intervention and may compare these changes to other groups of participants that did not attend the program or intervention. There are also a range of other non-experimental designs such as pre- and post-test studies or case studies; these may not be able to produce such strong evidence for program effectiveness but can be more appropriate depending on the situation.

Experimental

  • Randomised controlled trial

Quasi-experimental

  • Comparison groups

Non-experimental

  • Pre- and post-test studies
  • Case studies

If you are planning an evaluation, you can use these hierarchies to guide your decisions about which evaluation design to use, but the choice of design should also be guided by the key questions outlined in the section above. RCTs may be considered to provide the most powerful evidence, but they are not always possible or appropriate. So, what do some of the main designs look like?

Experimental designs

Randomised controlled trials (RCTs) are the main experimental evaluation design. RCTs are a method of systematically testing for differences between two or more groups of participants. This usually means one group receives the intervention, treatment or service that is being evaluated or tested (the 'intervention group') and the other does not (the 'control group'). Differences in results between the groups can indicate whether an intervention is effective or not.

Besides comparing the results between the groups, the main distinctive feature of an RCT is the random allocation of participants to the control and intervention groups. Randomisation provides each participant with an equal chance of being allocated to receive or not receive the intervention. 1 This is important because it means there is a greater chance that the people in the intervention and control groups will have a similar mix of attributes such as gender, health, attitudes, past history or life circumstances. Without randomisation there is more chance of systematic bias; that is, where one group is different to the other and this difference can affect the results. An example of systematic bias would be if the people in the treatment group for an anger management intervention already had lower-conflict relationships than the people in the control group. If this were so, it would not be possible to tell if any positive results were due to the intervention or to the pre-existing differences between the groups.

In RCTs, data are collected from participants before and after (and sometimes during) the program. If there is no bias in the way individuals are allocated to the groups, you can probably conclude that any differences between the groups after completing the program are due to the intervention rather than to pre-existing differences among participants. Since RCTs are typically conducted under conditions that provide a high degree of control over factors that might provide alternative explanations for findings, RCTs can provide a relatively high degree of certainty that the outcomes for participants are a direct result of the program.
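
As a purely illustrative sketch (not part of the original resource), the Python snippet below randomises a hypothetical pool of participants to intervention and control arms and then compares post-program outcome scores between the arms; all numbers, variable names and the assumed effect size are invented for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Hypothetical participant pool
n_participants = 200

# Random allocation: each participant has an equal chance of either arm
allocation = rng.permutation(np.repeat(["intervention", "control"], n_participants // 2))

# Simulated post-program outcome scores (purely illustrative numbers):
# the intervention arm is given an assumed average benefit of 3 points
baseline = rng.normal(loc=50, scale=10, size=n_participants)
effect = np.where(allocation == "intervention", 3.0, 0.0)
post_scores = baseline + effect + rng.normal(scale=5, size=n_participants)

intervention_scores = post_scores[allocation == "intervention"]
control_scores = post_scores[allocation == "control"]

# Compare the two arms: difference in means and an independent-samples t-test
diff = intervention_scores.mean() - control_scores.mean()
t_stat, p_value = stats.ttest_ind(intervention_scores, control_scores)
print(f"Mean difference (intervention - control): {diff:.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

Because allocation is random, any systematic difference between the arms at the end of the program can more plausibly be attributed to the intervention itself.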

Although RCTs are good at answering questions about intervention effectiveness (i.e. 'does it work?') they are less useful for answering questions about how or why an intervention works. From a child and family services perspective, RCTs cannot always accommodate the complex and challenging nature of service delivery (Tomison, 2000). In order to link participant outcomes to a program, RCTs need to be conducted under tightly controlled conditions. This can be difficult to do in real life situations and the evidence that RCTs produce is sometimes difficult to apply to everyday practice.

There are some RCT designs, such as cluster RCTs, that can be more useful for generating practice-based evidence than traditional RCTs (Ammerman, Smith, & Calancie, 2014). In cluster RCTs, groups - or clusters - of individuals such as those within schools, medical practices or entire communities are randomised to treatment or control conditions. For example, six schools may be selected to take part in an RCT, with three allocated to be treatment groups and three allocated to be control groups.

There are also other experimental study designs that offer alternatives to traditional RCTs, such as time series analyses (Bernal, Cummins, & Gasparrini, 2017) and natural experiments (Dunning, 2012). However, these experimental designs, like most RCTs, require sophisticated statistical and methodological expertise.

As RCTs are not always practicable or appropriate, evaluators and researchers often employ the next best thing - comparison groups as part of quasi-experimental designs.

Quasi-experimental designs

A quasi-experimental design differs from an RCT in that it does not randomly assign participants to an intervention or control group. Quasi-experimental designs identify a comparison group that is as similar as possible to the treatment group in terms of baseline (pre-intervention) characteristics. There are statistical techniques for creating a valid comparison group - for example, regression discontinuity design and propensity score matching - which reduce the risk of bias (White & Sabarwal, 2014).
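
The following sketch illustrates, in a simplified form, how propensity score matching might be carried out. It is not taken from the resource; the dataset, variable names and effect sizes are simulated, and a real analysis would add refinements such as calipers, matching without replacement and balance checks.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(seed=0)

# Hypothetical observational data: 'treated' marks program participants,
# covariates are baseline characteristics, 'outcome' is the post-program measure
n = 500
df = pd.DataFrame({
    "age": rng.normal(35, 10, n),
    "baseline_need": rng.normal(0, 1, n),
})
df["treated"] = (rng.random(n) < 1 / (1 + np.exp(-0.5 * df["baseline_need"]))).astype(int)
df["outcome"] = 2.0 * df["treated"] + 1.5 * df["baseline_need"] + rng.normal(0, 1, n)

# Step 1: estimate propensity scores (probability of treatment given covariates)
covariates = ["age", "baseline_need"]
ps_model = LogisticRegression().fit(df[covariates], df["treated"])
df["pscore"] = ps_model.predict_proba(df[covariates])[:, 1]

# Step 2: match each treated participant to the nearest untreated participant by propensity score
treated = df[df["treated"] == 1]
control = df[df["treated"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]

# Step 3: compare outcomes between treated participants and their matched comparisons
att = treated["outcome"].mean() - matched_control["outcome"].mean()
print(f"Estimated effect after matching: {att:.2f}")
```

The design choice here is that matching on the propensity score stands in for random allocation: it tries to construct a comparison group with a similar mix of baseline characteristics, although unmeasured differences can still bias the result.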

Comparison groups are often used when the random allocation of program participants to control and intervention groups is not possible for practical or ethical reasons. Comparison groups can include waiting lists for an intervention and participants attending other programs where the participants are not able to be randomly allocated into groups. Participants on a waiting list are a good source of comparison data, because (a) they are available to you, and (b) you can collect the same data from them as you do from those participating in the program. The two groups are likely to be reasonably well-matched in terms of demographic characteristics as long as participants in the program group have not been given prioritised entry over the waiting list group.

Comparison groups may also be found in population data that have already been collected; for example, from health datasets. In this instance, it is important that they can be statistically matched to your program group to take into account any differences between the two groups. The outcome measures used would also need to be comparable.

Evidence of greater benefits to those who participated in an intervention compared to a comparison group can suggest the program is effective, but it is more difficult to say with certainty that the program caused the change. Because there has not been a random assignment of participants, it is not always possible to say with certainty that any differences or benefits observed in the evaluation are the result of the intervention rather than pre-program differences between the groups of participants. For example, the clients in one comparison group might experience less severe problems, be from a particular cultural group, be older or have a different family type from those who participate in your program. Therefore, they might have better or worse outcomes than the other group that are not explained by the intervention. Nonetheless, if consistent results are found in repeated studies of a given type of program using a variety of quasi-experimental (and other, non-experimental) methods, then it is possible to have greater confidence in the effectiveness of the program.

Non-experimental designs

Most other evaluation designs fall under the broad heading of 'non-experimental' designs. When the use of control or comparison groups is not feasible, non-experimental designs can be appropriate.

Some common non-experimental designs (and approaches) are:

  • pre- and post-test studies
  • case studies
  • most significant change (MSC)
  • developmental
  • empowerment

Some of these approaches, such as pre- and post-test studies, usually focus on an intervention's effectiveness or outcomes. However, others may more often be used for other forms of evaluation, such as understanding how a program has been implemented or whether it is appropriate for its intended audience. We list a few here that are sometimes used for measuring outcomes. More detail on these and other designs can be found in the Further reading section.

Pre- and post-test studies examine the effect of a program without the use of either a control or comparison group. In this evaluation design, data are ideally collected (e.g. via a survey or outcome measure) from participants immediately before the program starts and again at its completion, and any change is then measured. If a program is ongoing, data might be collected from a client when they start the program, with the post-test data collected when the client leaves.

Outcomes can be measured at additional timepoints during or after the program, as well as pre and post. For example, if a client is expected to attend a program for an extended time, taking measurements mid-program can provide an opportunity to measure if it is having the expected outcomes for that client. If positive changes are found, this can also be an opportunity to provide feedback directly to the client. Outcomes measured at follow-up timepoints, such as three or six months after the program, can provide additional evidence about the long-term effectiveness of the intervention.

If the program works, the program logic would lead you to expect any changes recorded will be in the direction that supports the program goals. For example, participants completing a program may show increased self-esteem or a reduction in behaviour problems.

Pre- and post-test designs are often relatively easy to run and can require less specialised expertise than experimental or quasi-experimental designs. However, there are some important limitations to pre- and post-test designs. In these studies, even if there are differences between the pre- and post-test measures, it is difficult to say for certain whether the effects are due to participation in the program. This is because we cannot know if similar changes might have occurred anyway, even if the program had not been run. All that can be said is that some aspect of this group's behaviour (or attitudes, knowledge, skills, etc.) changed in the period between the start of the program and its conclusion.
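
As a minimal illustration (again, not from the resource itself), the snippet below compares hypothetical pre- and post-program scores for the same clients using a paired test; the scores and the assumed average gain are invented for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Hypothetical pre- and post-program scores for the same 40 clients
# (e.g. a self-esteem measure administered at intake and at completion)
pre_scores = rng.normal(loc=40, scale=8, size=40)
post_scores = pre_scores + rng.normal(loc=4, scale=5, size=40)  # assumed average gain of 4 points

# Paired comparison: mean change and a paired-samples t-test
change = post_scores - pre_scores
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"Mean change: {change.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# Note: even a statistically significant change cannot, by itself, show that the
# program caused it, because there is no control or comparison group.
```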

Thinking about other explanations that may impact client outcomes is useful when looking at the findings of any evaluation. For example, improvements in child development may happen naturally as children age over the period of a program. In complex settings or when there may be other possible causes for the observed outcomes, further investigation can be useful.

Case studies are another common evaluation design. These are often used to get an in-depth understanding of a single activity or instance within a program setting. This is useful when an evaluation aims to capture information on more explanatory 'how', 'what' and 'why' questions (Crowe et al., 2011). Case studies can be used to show personal experiences or unique program processes with both qualitative and quantitative data. For example, a case study evaluation for a parenting program may evaluate a small number of clients who provide detailed stories of their experiences. In this way, case studies do allow for a richness of information, but they are not able to provide a generalisation about the program as a whole. A case study is often combined with other evaluation designs.

Most significant change

When there is a focus on identifying what the outcomes of an intervention are (i.e. what changes result from an intervention) the most significant change design may be suitable. This story-based technique involves a form of continuous inquiry whereby designated groups of stakeholders search for significant program outcomes and then deliberate on the value of these outcomes in a systematic and transparent manner (Dart & Davies, 2003).

Developmental evaluation

Developmental evaluation is a structured way to monitor, assess and provide feedback on the development of a program while it is being designed or modified (Child Family Community Australia [CFCA], 2018). The focus here is not on fully developed interventions but on programs or services where inputs, activities and outputs are not yet entirely decided on or are changing. Developmental evaluations attempt to address the challenges of evaluating developing or changeable programs and services by adopting a more responsive and adaptive approach. This is done by asking evaluative questions, applying evaluation logic, and gathering and reporting on evaluative data to support project, program, product and/or organisational development with timely feedback (Patton, 2012). Although this approach can measure outcomes it is less useful for undertaking a rigorous assessment of whether an intervention 'works'.

Realist evaluation

A realist evaluation is an approach to evaluation that uses qualitative methods (such as interviews or focus groups) to understand in detail the underlying mechanisms of a program or intervention. This may be the case when an experimental or quasi-experimental design cannot provide the level of understanding needed of the mechanisms of a program. Realist evaluation is less often used to understand whether a program is effective (i.e. did it achieve the desired outcomes) and more often used for evaluating new initiatives or programs that seem to work but where 'how and for whom' they work is not yet understood. This can include programs that have previously demonstrated inconsistent outcomes as well as those that will be scaled up or implemented in new contexts (Westhorp, 2014).

Empowerment evaluation

Empowerment evaluation is more a set of principles that guide the evaluation at every stage than an evaluation design (CFCA, 2015). This approach is drawn from the participatory or collaborative field of evaluation and seeks to involve all stakeholders (i.e. evaluators, management, practitioners, participants and the community) in the evaluation process. This approach can potentially be combined with other evaluation designs.

In conclusion

This resource has provided a basic overview of the different types of evaluation design used for outcomes evaluations. There are a range of evaluation designs that allow for different types of evaluation questions to be answered. However, evaluations that focus on effectiveness - how well a program works - differ in their strength of evidence. These include experimental, quasi-experimental and pre- and post-test evaluations. Evaluation designs that are non-experimental may focus more on the 'why' and 'how' of a program. Identifying the right evaluation design for the right situation is the first step to a successful evaluation.

Further reading

Identifying an evaluation design: CDC (Centers for Disease Control and Prevention), Program Evaluation Framework Checklist

Evaluation designs and approaches: Better Evaluation

WK Kellogg Foundation (2017). The Step-by-Step Guide to Evaluation.

Haynes, L., Goldacre, B., & Torgerson, D. (2012). Test, learn, adapt: Developing public policy with randomised controlled trials. Cabinet Office Behavioural Insights Team.

The Indigenous Evaluation Strategy: Productivity Commission (2020). A Guide to Evaluation under the Indigenous Evaluation Strategy. Developmental, realist and participatory evaluations are particularly suited to allowing Aboriginal and Torres Strait Islander knowledges, perspectives and world views to be incorporated into the design and delivery of evaluations (Productivity Commission, 2020). Culturally valid methods, such as yarning (storytelling), ganma (knowledge sharing) and dadirri (listening), can also be used to engage Aboriginal and Torres Strait Islander people throughout the evaluation process.

Evaluation methods: Once you know what you want to collect for your evaluation, the next step is to decide how you will collect the data. Further information on research or data collection methods, and more detail on conducting an evaluation, can be found in Planning for evaluation II: Getting into detail.

  • Ammerman, A., Smith, T. W., & Calancie, L. (2014). Practice-based evidence in public health: Improving reach, relevance, and results. Annual Review of Public Health, 35, 47-63.
  • Bernal, J. L., Cummins, S., & Gasparrini, A. (2017). Interrupted time series regression for the evaluation of public health interventions: A tutorial. International Journal of Epidemiology, 46(1), 348-355.
  • Child Family Community Australia (CFCA). (2015). Empowerment evaluation (CFCA Practitioner Resource). Melbourne: Child Family Community Australia, Australian Institute of Family Studies.
  • Child Family Community Australia (CFCA). (2018). Developmental evaluation (CFCA Resource Sheet). Melbourne: Child Family Community Australia, Australian Institute of Family Studies.
  • Crowe, S., Cresswell, K., Robertson, A., Huby, G., Avery, A., & Sheikh, A. (2011). The case study approach. BMC Medical Research Methodology, 11(1), 1-9. doi.org/10.1186/1471-2288-11-100
  • Dart, J., & Davies, R. (2003). A dialogical, story-based evaluation tool: The Most Significant Change technique. American Journal of Evaluation, 24(2), 137-155. doi.org/10.1177/109821400302400202
  • Dunning, T. (2012). Natural experiments in the social sciences: A design-based approach. Cambridge, UK: Cambridge University Press.
  • Patton, M. Q. (2012). Planning and evaluating for social change: An evening at SFU with Michael Quinn Patton [Web video]. Retrieved from www.youtube.com/watch?v=b7n64JEjUUk&a;list=UUUi_6IJ8IgUAzI6JczJUVPA...
  • Tomison, A. (2000). Evaluating child abuse protection programs (Issues in Child Abuse Prevention No. 12). Melbourne: National Child Protection Clearinghouse. Retrieved from www.aifs.gov.au/nch/pubs/issues/issues12/issues12.html
  • Westhorp, G. (2014). Realist impact evaluation: An introduction. London: Overseas Development Institute.
  • White, H., & Sabarwal, S. (2014). Quasi-experimental design and methods. Methodological Briefs: Impact Evaluation, 8, 1-16.

1 Participants allocated to the non-intervention group may receive an alternative intervention or receive the intervention at a later time.

This resource was authored by  Shae Johnson , Research Fellow at the Australian Institute of Family Studies.

This document has been produced as part of AIFS Evidence and Evaluation Support funded by the Australian Government through the Department of Social Services.


Evaluation of Coal-Seam Roof-Water Richness Based on Improved Weight Method: A Case Study in the Dananhu No. 7 Coal Mine, China

  • The middle section of the Xishangyao Group is a water-bearing layer composed of fractured and porous conglomerate sandstone that lies directly above the roof of the third coal seam, posing a flooding threat to mining safety. Six factors - the aquifer thickness, recharge index, dip angle of the coal seam, core take rate, sand–mud interbed index and lithological coefficient of sandstone - were selected as the main indicators for evaluating the water abundance of the roof of the third coal seam;
  • To address the limitations of the entropy method, which focuses on local differences and lacks inheritability and transitivity, the indicator conflict correlation coefficient was employed to weight the information entropy, thus improving the entropy method used to obtain the weights of individual indicators;
  • Before obtaining the weights of each indicator using the scatter degree method, a subjective optimization method was employed to pre-weight the original values of each indicator, thereby enhancing the method. The resulting weight coefficients better differentiate the relative importance of each indicator and its significance in evaluating the target, enabling a more comprehensive assessment;
  • The weights of the indicators were then combined, and a water-richness zoning model was established using GIS software (a simplified sketch of this combined weighting follows this list). The evaluation model predicted higher water richness in the northeastern part of the mining area; the prediction was found to be consistent with actual conditions, providing a reference for hydrological control measures in other coal-seam roofs.
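
The bullets above describe the weighting logic only in outline. The sketch below is a rough, simplified illustration of how entropy weights might be adjusted by an indicator conflict (correlation) term and then coupled with a second weight set; it uses a handful of the borehole values reproduced in the tables further below, and it does not reproduce the paper's exact improvement formulas, so it should be read as an approximation rather than the authors' method.

```python
import numpy as np

# Hypothetical normalised indicator matrix: rows are boreholes, columns are the six
# indicators (aquifer thickness, recharge index, dip angle of coal seam, core take
# rate, sand-mud interbed index, lithological coefficient of sandstone).
X = np.array([
    [1.000, 0.277, 0.824, 0.876, 0.043, 1.000],
    [0.929, 0.554, 0.588, 0.915, 0.031, 0.854],
    [0.906, 0.770, 0.588, 0.770, 0.167, 0.721],
    [0.810, 0.732, 0.412, 0.786, 0.148, 0.796],
])  # in practice all 41 boreholes would be included

n, m = X.shape

# Classic entropy weights: indicators whose values vary more carry more information
P = X / X.sum(axis=0)                               # column-wise proportions
entropy = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(n)
info_utility = 1.0 - entropy

# CRITIC-style conflict term: an indicator weakly correlated with the others
# is treated as carrying more independent information
corr = np.corrcoef(X, rowvar=False)
conflict = (1.0 - corr).sum(axis=0)

# "Improved" entropy weights: information utility scaled by the conflict term
h = info_utility * conflict
h = h / h.sum()

# Placeholder for the (improved) scatter-degree weights; here simply assumed uniform
r = np.full(m, 1.0 / m)

# Coupled (comprehensive) weights: an assumed equal-weight combination of the two sets
w = 0.5 * h + 0.5 * r
print(np.round(w, 3))
```

For what it is worth, an equal-weight combination of the two weight sets reported in the tables below (h′ and r′) reproduces the published comprehensive weights w to within about 0.001, although the paper may well use a different coupling rule.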




l/S | 0~0.2 | 0.2~0.4 | 0.4~0.6 | 0.6~0.8 | 0.8~1
p | 1.2 | 1.4 | 1.6 | 1.8 | 2

Total sandstone thickness / Total thickness of bed | 0~0.2 | 0.2~0.4 | 0.4~0.6 | 0.6~0.8 | 0.8~1
e | 0.2 | 0.4 | 0.6 | 0.8 | 1

Boreholes | Aquifer Thickness | Recharge Index | Dip Angle of Coal Seam | Core Take Rate | Sand–Mud Interbed Index | Lithological Coefficient of Sandstone
ZK512 | 1.000 | 0.277 | 0.824 | 0.876 | 0.043 | 1.000
ZKJ504 | 0.929 | 0.554 | 0.588 | 0.915 | 0.031 | 0.854
ZK531 | 0.906 | 0.770 | 0.588 | 0.770 | 0.167 | 0.721
ZK504 | 0.810 | 0.732 | 0.412 | 0.786 | 0.148 | 0.796
ZKJ402 | 0.710 | 0.333 | 0.882 | 0.798 | 0.022 | 0.694
ZKJ307 | 0.680 | 0.406 | 0.471 | 0.785 | 0.019 | 0.572
ZK5111 | 0.587 | 0.580 | 0.353 | 0.928 | 0.015 | 0.433
ZK533 | 0.584 | 0.672 | 0.294 | 0.856 | 0.228 | 0.329
ZKJ505 | 0.543 | 0.805 | 0.588 | 0.946 | 0.172 | 0.349
ZKJ405 | 0.501 | 0.437 | 0.706 | 0.939 | 0.099 | 0.543
ZKJ506 | 0.479 | 0.831 | 0.529 | 0.824 | 0.381 | 0.195
ZK505 | 0.472 | 0.664 | 0.765 | 0.818 | 0.263 | 0.348
ZK5112 | 0.462 | 0.685 | 0.588 | 0.777 | 0.155 | 0.342
ZK513 | 0.450 | 0.598 | 0.941 | 0.722 | 0.132 | 0.314
ZKJ207 | 0.430 | 0.212 | 0.294 | 0.958 | 0.020 | 0.369
ZKJ212 | 0.430 | 0.447 | 0.412 | 0.524 | 0.031 | 0.303
ZKJ501 | 0.325 | 0.742 | 0.353 | 0.843 | 0.292 | 0.383
ZKJ502 | 0.268 | 0.604 | 0.647 | 0.620 | 0.356 | 0.333
ZK4812 | 0.263 | 0.381 | 0.294 | 0.923 | 0.070 | 0.240
ZKJ211 | 0.262 | 0.674 | 0.294 | 0.718 | 0.107 | 0.236
ZKJ308 | 0.257 | 0.291 | 0.647 | 0.889 | 0.150 | 0.257
ZKJ401 | 0.254 | 0.609 | 0.412 | 0.881 | 0.351 | 0.165
ZKJ404 | 0.251 | 0.720 | 0.412 | 1.000 | 0.018 | 0.203
ZK506 | 0.248 | 0.680 | 1.000 | 0.977 | 0.402 | 0.201
ZKJ206 | 0.240 | 0.481 | 0.529 | 0.648 | 0.061 | 0.169
ZK4910 | 0.214 | 0.689 | 0.176 | 0.854 | 0.016 | 0.153
ZK514 | 0.205 | 0.913 | 0.471 | 0.810 | 0.444 | 0.218
ZK486 | 0.199 | 0.510 | 0.647 | 0.938 | 0.084 | 0.225
ZK525 | 0.195 | 0.949 | 0.353 | 0.835 | 0.595 | 0.357
ZK508 | 0.184 | 0.522 | 0.588 | 0.885 | 0.208 | 0.092
ZK509 | 0.183 | 0.906 | 0.529 | 0.882 | 0.410 | 0.070
ZKJ406 | 0.177 | 0.862 | 0.588 | 0.875 | 0.256 | 0.085
ZK532 | 0.164 | 1.000 | 0.647 | 0.830 | 1.000 | 0.239
ZKJ503 | 0.147 | 0.912 | 0.647 | 0.834 | 0.899 | 0.103
ZKJ103 | 0.144 | 0.288 | 0.294 | 0.748 | 0.049 | 0.098
ZKJ306 | 0.140 | 0.525 | 0.529 | 0.862 | 0.440 | 0.167
ZKJ208 | 0.139 | 0.390 | 0.471 | 0.840 | 0.099 | 0.083
ZK497 | 0.131 | 0.844 | 0.353 | 0.717 | 0.458 | 0.136
ZK4912 | 0.117 | 0.650 | 0.412 | 0.972 | 0.440 | 0.079
ZKJ303 | 0.078 | 0.851 | 0.294 | 0.938 | 0.603 | 0.031
ZKJ403 | 0.046 | 0.900 | 0.412 | 0.794 | 0.633 | 0.026

Weight | Aquifer Thickness | Recharge Index | Dip Angle of Coal Seam | Core Take Rate | Sand–Mud Interbed Index | Lithological Coefficient of Sandstone
h′ | 0.234 | 0.158 | 0.123 | 0.168 | 0.064 | 0.253
r′ | 0.285 | 0.133 | 0.135 | 0.172 | 0.044 | 0.230

Comprehensive Weight | Aquifer Thickness | Recharge Index | Dip Angle of Coal Seam | Core Take Rate | Sand–Mud Interbed Index | Lithological Coefficient of Sandstone
w | 0.259 | 0.145 | 0.129 | 0.171 | 0.053 | 0.242
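
To make the use of these weights concrete: assuming the water-richness index is computed as the standard linear weighted sum of the normalised indicator values (the usual form of the GIS water-richness index method; the paper's exact zoning formula is not reproduced in this excerpt), the index for borehole ZK512 (first row of the borehole table above) would be

$$
\mathrm{WI}_{\mathrm{ZK512}} = \sum_{i=1}^{6} w_i x_i = 0.259\times1.000 + 0.145\times0.277 + 0.129\times0.824 + 0.171\times0.876 + 0.053\times0.043 + 0.242\times1.000 \approx 0.80
$$

Boreholes with higher index values would then fall into the more water-rich zones when the indices are interpolated across the mining area.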

Boreholes | Inflow (m³/h) | Hydraulic Pressure (MPa) | Comparison of Projected Results
S1-1 | 30 | 0.9 | Disagree
S1-2 | 23 | 0.9 | Disagree
S2-4 | 8.6 | 0.8 | Agree
S2-5 | 19 | 0.8 | Disagree
S3-4 | 6 | 0.9 | Agree
S3-5 | 11 | 0.9 | Agree
S4-1 | 6 | 0.9 | Agree
S4-2 | 23 | 0.9 | Disagree
S4-3 | 10 | 0.9 | Agree
S4-4 | 17 | 0.9 | Agree
S5-3 | 12 | 0.9 | Agree
S6-2 | 30.5 | 0.9 | Agree
k4 | 45 | \ | Agree
k5 | 28 | \ | Agree
S7-3 | 15 | \ | Agree
S8-3 | 15 | \ | Agree
S9-1 | 16 | \ | Agree
S10-1 | 12 | \ | Disagree
S10-3 | 15 | \ | Disagree
S11-1 | 7.2 | \ | Agree
S11-3 | 9 | \ | Agree
S12-1 | 5.3 | \ | Agree
S12-3 | 5 | \ | Disagree
S14-2 | 26 | 1 | Disagree
S14-3 | 11 | 1 | Agree
S15-2 | 12.6 | 1 | Agree
S16-2 | 7.5 | 0.9 | Agree
S16-3 | 4.8 | 0.9 | Agree
S16-6 | 9.5 | 0.9 | Agree
S17-4 | 5.5 | 0.9 | Agree
S18-1 | 7 | 0.9 | Agree
S18-4 | 4.5 | 0.9 | Agree
S19-2 | 4.2 | 1.2 | Agree
S19-4 | 5 | 0.9 | Agree
S2-1 | 1.1 | 0.19 | Agree
S2-2 | 1.1 | \ | Agree
S2-3 | 1.1 | 0.19 | Agree
S2-4 | 1.4 | \ | Agree
S3-2 | 1.2 | 0.2 | Agree
S3-4 | 1.6 | 0.26 | Agree
S5-2 | 0.8 | 0.13 | Agree
S5-4 | 0.7 | 0.12 | Agree
SF1 | 79 | \ | Agree
SF2 | 75.3 | \ | Agree

Xu, J.; Wang, Q.; Zhang, Y.; Li, W.; Li, X. Evaluation of Coal-Seam Roof-Water Richness Based on Improved Weight Method: A Case Study in the Dananhu No. 7 Coal Mine, China. Water 2024, 16, 1847. https://doi.org/10.3390/w16131847


  • Open access
  • Published: 10 November 2020

Case study research for better evaluations of complex interventions: rationale and challenges

  • Sara Paparini   ORCID: orcid.org/0000-0002-1909-2481 1 ,
  • Judith Green 2 ,
  • Chrysanthi Papoutsi 1 ,
  • Jamie Murdoch 3 ,
  • Mark Petticrew 4 ,
  • Trish Greenhalgh 1 ,
  • Benjamin Hanckel 5 &
  • Sara Shaw 1  

BMC Medicine volume  18 , Article number:  301 ( 2020 ) Cite this article

18k Accesses

45 Citations

35 Altmetric

Metrics details

The need for better methods for evaluation in health research has been widely recognised. The ‘complexity turn’ has drawn attention to the limitations of relying on causal inference from randomised controlled trials alone for understanding whether, and under which conditions, interventions in complex systems improve health services or the public health, and what mechanisms might link interventions and outcomes. We argue that case study research—currently denigrated as poor evidence—is an under-utilised resource for not only providing evidence about context and transferability, but also for helping strengthen causal inferences when pathways between intervention and effects are likely to be non-linear.

Case study research, as an overall approach, is based on in-depth explorations of complex phenomena in their natural, or real-life, settings. Empirical case studies typically enable dynamic understanding of complex challenges and provide evidence about causal mechanisms and the necessary and sufficient conditions (contexts) for intervention implementation and effects. This is essential evidence not just for researchers concerned about internal and external validity, but also research users in policy and practice who need to know what the likely effects of complex programmes or interventions will be in their settings. The health sciences have much to learn from scholarship on case study methodology in the social sciences. However, there are multiple challenges in fully exploiting the potential learning from case study research. First are misconceptions that case study research can only provide exploratory or descriptive evidence. Second, there is little consensus about what a case study is, and considerable diversity in how empirical case studies are conducted and reported. Finally, as case study researchers typically (and appropriately) focus on thick description (that captures contextual detail), it can be challenging to identify the key messages related to intervention evaluation from case study reports.

Whilst the diversity of published case studies in health services and public health research is rich and productive, we recommend further clarity and specific methodological guidance for those reporting case study research for evaluation audiences.

Peer Review reports

The need for methodological development to address the most urgent challenges in health research has been well-documented. Many of the most pressing questions for public health research, where the focus is on system-level determinants [ 1 , 2 ], and for health services research, where provisions typically vary across sites and are provided through interlocking networks of services [ 3 ], require methodological approaches that can attend to complexity. The need for methodological advance has arisen, in part, as a result of the diminishing returns from randomised controlled trials (RCTs) where they have been used to answer questions about the effects of interventions in complex systems [ 4 , 5 , 6 ]. In conditions of complexity, there is limited value in maintaining the current orientation to experimental trial designs in the health sciences as providing ‘gold standard’ evidence of effect.

There are increasing calls for methodological pluralism [ 7 , 8 ], with the recognition that complex intervention and context are not easily or usefully separated (as is often the situation when using trial design), and that system interruptions may have effects that are not reducible to linear causal pathways between intervention and outcome. These calls are reflected in a shifting and contested discourse of trial design, seen with the emergence of realist [ 9 ], adaptive and hybrid (types 1, 2 and 3) [ 10 , 11 ] trials that blend studies of effectiveness with a close consideration of the contexts of implementation. Similarly, process evaluation has now become a core component of complex healthcare intervention trials, reflected in MRC guidance on how to explore implementation, causal mechanisms and context [ 12 ].

Evidence about the context of an intervention is crucial for questions of external validity. As Woolcock [ 4 ] notes, even if RCT designs are accepted as robust for maximising internal validity, questions of transferability (how well the intervention works in different contexts) and generalisability (how well the intervention can be scaled up) remain unanswered [ 5 , 13 ]. For research evidence to have impact on policy and systems organisation, and thus to improve population and patient health, there is an urgent need for better methods for strengthening external validity, including a better understanding of the relationship between intervention and context [ 14 ].

Policymakers, healthcare commissioners and other research users require credible evidence of relevance to their settings and populations [ 15 ], to perform what Rosengarten and Savransky [ 16 ] call ‘careful abstraction’ to the locales that matter for them. They also require robust evidence for understanding complex causal pathways. Case study research, currently under-utilised in public health and health services evaluation, can offer considerable potential for strengthening faith in both external and internal validity. For example, in an empirical case study of how the policy of free bus travel had specific health effects in London, UK, a quasi-experimental evaluation (led by JG) identified how important aspects of context (a good public transport system) and intervention (that it was universal) were necessary conditions for the observed effects, thus providing useful, actionable evidence for decision-makers in other contexts [ 17 ].

The overall approach of case study research is based on the in-depth exploration of complex phenomena in their natural, or ‘real-life’, settings. Empirical case studies typically enable dynamic understanding of complex challenges rather than restricting the focus on narrow problem delineations and simple fixes. Case study research is a diverse and somewhat contested field, with multiple definitions and perspectives grounded in different ways of viewing the world, and involving different combinations of methods. In this paper, we raise awareness of such plurality and highlight the contribution that case study research can make to the evaluation of complex system-level interventions. We review some of the challenges in exploiting the current evidence base from empirical case studies and conclude by recommending that further guidance and minimum reporting criteria for evaluation using case studies, appropriate for audiences in the health sciences, can enhance the take-up of evidence from case study research.

Case study research offers evidence about context, causal inference in complex systems and implementation

Well-conducted and described empirical case studies provide evidence on context, complexity and mechanisms for understanding how, where and why interventions have their observed effects. Recognition of the importance of context for understanding the relationships between interventions and outcomes is hardly new. In 1943, Canguilhem berated an over-reliance on experimental designs for determining universal physiological laws: ‘As if one could determine a phenomenon’s essence apart from its conditions! As if conditions were a mask or frame which changed neither the face nor the picture!’ ([ 18 ] p126). More recently, a concern with context has been expressed in health systems and public health research as part of what has been called the ‘complexity turn’ [ 1 ]: a recognition that many of the most enduring challenges for developing an evidence base require a consideration of system-level effects [ 1 ] and the conceptualisation of interventions as interruptions in systems [ 19 ].

The case study approach is widely recognised as offering an invaluable resource for understanding the dynamic and evolving influence of context on complex, system-level interventions [ 20 , 21 , 22 , 23 ]. Empirically, case studies can directly inform assessments of where, when, how and for whom interventions might be successfully implemented, by helping to specify the necessary and sufficient conditions under which interventions might have effects and to consolidate learning on how interdependencies, emergence and unpredictability can be managed to achieve and sustain desired effects. Case study research has the potential to address four objectives for improving research and reporting of context recently set out by guidance on taking account of context in population health research [ 24 ], that is to (1) improve the appropriateness of intervention development for specific contexts, (2) improve understanding of ‘how’ interventions work, (3) better understand how and why impacts vary across contexts and (4) ensure reports of intervention studies are most useful for decision-makers and researchers.

However, evaluations of complex healthcare interventions have arguably not exploited the full potential of case study research and can learn much from other disciplines. In evaluative research, exploratory case studies have traditionally been used to provide data on ‘process’ or for initial ‘hypothesis-generating’ scoping, but they also have increasing salience for explanatory aims. Across the social and political sciences, different kinds of case studies are undertaken to meet diverse aims (description, exploration or explanation) and across different scales (from small N qualitative studies that aim to elucidate processes or provide thick description, to more systematic techniques designed for medium-to-large N cases).

Case studies with explanatory aims vary in terms of their positioning within mixed-methods projects, with designs including (but not restricted to) (1) single N of 1 studies of interventions in specific contexts, where the overall design is a case study that may incorporate one or more (randomised or not) comparisons over time and between variables within the case; (2) a series of cases conducted or synthesised to provide explanation from variations between cases; and (3) case studies of particular settings within RCT or quasi-experimental designs to explore variation in effects or implementation.

Detailed qualitative research (typically done as ‘case studies’ within process evaluations) provides evidence for the plausibility of mechanisms [ 25 ], offering theoretical generalisations for how interventions may function under different conditions. Although RCT designs reduce many threats to internal validity, the mechanisms of effect remain opaque, particularly when the causal pathways between ‘intervention’ and ‘effect’ are long and potentially non-linear: case study research has a more fundamental role here, in providing detailed observational evidence for causal claims [ 26 ] as well as producing a rich, nuanced picture of tensions and multiple perspectives [ 8 ].

Longitudinal or cross-case analysis may be best suited for evidence generation in system-level evaluative research. Turner [ 27 ], for instance, reflecting on the complex processes in major system change, has argued for the need for methods that integrate learning across cases, to develop theoretical knowledge that would enable inferences beyond the single case, and to develop generalisable theory about organisational and structural change in health systems. Qualitative Comparative Analysis (QCA) [ 28 ] is one such formal method for deriving causal claims, using set theory mathematics to integrate data from empirical case studies to answer questions about the configurations of causal pathways linking conditions to outcomes [ 29 , 30 ].
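
To illustrate the core truth-table step in QCA, the short Python sketch below may help. It is purely illustrative: the case data, condition names and consistency threshold are hypothetical, and the Boolean minimisation that a full QCA would then apply (typically via dedicated QCA software) is omitted. It shows how crisp-set (binary) case data are grouped into configurations and how each configuration’s consistency with the outcome is computed.

```python
from collections import defaultdict

# Hypothetical crisp-set data for six cases: three binary conditions
# (the names are invented for illustration, not taken from any study
# cited here) and a binary outcome (e.g. implementation succeeded).
cases = [
    {"leadership": 1, "funding": 1, "training": 0, "outcome": 1},
    {"leadership": 1, "funding": 1, "training": 1, "outcome": 1},
    {"leadership": 0, "funding": 1, "training": 1, "outcome": 0},
    {"leadership": 1, "funding": 0, "training": 0, "outcome": 0},
    {"leadership": 1, "funding": 1, "training": 0, "outcome": 1},
    {"leadership": 0, "funding": 0, "training": 1, "outcome": 0},
]
conditions = ["leadership", "funding", "training"]

# Step 1: build the truth table by grouping cases that share the same
# configuration of conditions.
truth_table = defaultdict(list)
for case in cases:
    configuration = tuple(case[c] for c in conditions)
    truth_table[configuration].append(case["outcome"])

# Step 2: compute each configuration's consistency with the outcome,
# i.e. the share of cases showing that configuration which also show
# the outcome.
for configuration, outcomes in sorted(truth_table.items(), reverse=True):
    consistency = sum(outcomes) / len(outcomes)
    label = ", ".join(f"{c}={v}" for c, v in zip(conditions, configuration))
    print(f"{label}  n={len(outcomes)}  consistency={consistency:.2f}")

# Step 3 (omitted here): configurations passing a chosen consistency
# threshold (often 0.8 or higher) would be passed to Boolean
# minimisation to derive the simplified causal 'recipes' reported in a
# full QCA.
```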

Nonetheless, the single N case study, too, provides opportunities for theoretical development [ 31 ], and theoretical generalisation or analytical refinement [ 32 ]. How ‘the case’ and ‘context’ are conceptualised is crucial here. Findings from the single case may seem to be confined to its intrinsic particularities in a specific and distinct context [ 33 ]. However, if such context is viewed as exemplifying wider social and political forces, the single case can be ‘telling’, rather than ‘typical’, and offer insight into a wider issue [ 34 ]. Internal comparisons within the case can offer rich possibilities for logical inferences about causation [ 17 ]. Further, case studies of any size can be used for theory testing through refutation [ 22 ]. The potential lies, then, in utilising the strengths and plurality of case study to support theory-driven research within different methodological paradigms.

Evaluation research in health has much to learn from a range of social sciences where case study methodology has been used to develop various kinds of causal inference. For instance, Gerring [ 35 ] expands on the within-case variations utilised to make causal claims. For Gerring [ 35 ], case studies come into their own with regard to invariant or strong causal claims (such as X is a necessary and/or sufficient condition for Y) rather than for probabilistic causal claims. For the latter (where experimental methods might have an advantage in estimating effect sizes), case studies offer evidence on mechanisms: from observations of X affecting Y, from process tracing or from pattern matching. Case studies also support the study of emergent causation, that is, the multiple interacting properties that account for particular and unexpected outcomes in complex systems, such as in healthcare [ 8 ].
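
Such claims are often written in the set-theoretic notation used across the QCA and comparative case study literature; the formulation below is the standard textbook one (as in set-theoretic methods texts [ 28 , 30 ]) rather than anything specific to the studies discussed here. Writing X for the set of cases exhibiting a condition and Y for the set of cases exhibiting the outcome:

$$X \text{ is necessary for } Y \iff Y \subseteq X, \qquad X \text{ is sufficient for } Y \iff X \subseteq Y$$

For crisp sets, the degree to which the observed cases support a sufficiency claim, and how much of the outcome it accounts for, are commonly summarised as

$$\operatorname{consistency}(X \Rightarrow Y) = \frac{\lvert X \cap Y \rvert}{\lvert X \rvert}, \qquad \operatorname{coverage}(X \Rightarrow Y) = \frac{\lvert X \cap Y \rvert}{\lvert Y \rvert}$$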

Finally, efficacy (or beliefs about efficacy) is not the only contributor to intervention uptake, with a range of organisational and policy contingencies affecting whether an intervention is likely to be rolled out in practice. Case study research is, therefore, invaluable for learning about contextual contingencies and identifying the conditions necessary for interventions to become normalised (i.e. implemented routinely) in practice [ 36 ].

The challenges in exploiting evidence from case study research

At present, there are significant challenges in exploiting the benefits of case study research in evaluative health research, and these relate to status, definition and reporting. Case study research has been marginalised at the bottom of an evidence hierarchy, seen to offer little by way of explanatory power even if useful for adding descriptive data on process or providing illustrations for policymakers [ 37 ]. This is an opportune moment to revisit that low status. As health researchers are increasingly charged with evaluating ‘natural experiments’ (the use of face masks in the response to the COVID-19 pandemic being a recent example [ 38 ]) rather than interventions delivered in settings that can be controlled, research approaches that strengthen causal inference without requiring randomisation become more relevant.

A second challenge for improving the use of case study evidence in evaluative health research is that, as we have seen, what is meant by ‘case study’ varies widely, not only across but also within disciplines. There is indeed little consensus amongst methodologists as to how to define ‘a case study’. Definitions focus, variously, on small sample size or lack of control over the intervention (e.g. [ 39 ] p194), on in-depth study and context [ 40 , 41 ], on the logic of inference used [ 35 ] or on distinct research strategies which incorporate a number of methods to address questions of ‘how’ and ‘why’ [ 42 ]. Moreover, definitions developed for specific disciplines do not capture the range of ways in which case study research is carried out across disciplines. Multiple definitions of case study reflect the richness and diversity of the approach. However, evidence suggests that a lack of consensus across methodologists results in some of the limitations of published reports of empirical case studies [ 43 , 44 ]. Hyett and colleagues [ 43 ], for instance, reviewing reports in qualitative journals, found little match between methodological definitions of case study research and how authors used the term.

This raises the third challenge we identify: case study reports are typically not written in ways that are accessible or useful for the evaluation research community and policymakers. Case studies may not appear in journals widely read by those in the health sciences, either because space constraints preclude the reporting of rich, thick descriptions, or because of the reported reluctance of some biomedical journals to publish research that uses qualitative methods [ 45 ], signalling the persistence of the aforementioned evidence hierarchy. Where case studies do appear, the term ‘case study’ is used to indicate, interchangeably, a qualitative study, an N of 1 sample, or a multi-method, in-depth analysis of one example from a population of phenomena. Definitions of what constitutes the ‘case’ are frequently lacking, and the term is often used simply as a synonym for the setting in which the research is conducted. Even where such studies offer insights for evaluation, their primary aims may not have been evaluative, so the implications may not be explicitly drawn out. Indeed, some case study reports might properly be aiming for thick description without necessarily seeking to inform about context or causality.

Acknowledging plurality and developing guidance

We recognise that definitional and methodological plurality is not only inevitable, but also a necessary and creative reflection of the very different epistemological and disciplinary origins of health researchers, and of the aims they have in doing and reporting case study research. Indeed, to provide some clarity, Thomas [ 46 ] has suggested a subject/purpose/approach/process typology for classifying the aims of a case study (e.g. evaluative or exploratory), the rationale for selecting and sampling cases, and the methods used for data generation. We also recognise that the diversity of methods used in case study research, and the necessary focus on narrative reporting, does not lend itself to the straightforward development of formal quality or reporting criteria.

Existing checklists for reporting case study research from the social sciences—for example Lincoln and Guba’s [ 47 ] and Stake’s [ 33 ]—are primarily orientated to the quality of narrative produced, and the extent to which they encapsulate thick description, rather than the more pragmatic issues of implications for intervention effects. Those designed for clinical settings, such as the CARE (CAse REports) guidelines, provide specific reporting guidelines for medical case reports about single, or small groups of patients [ 48 ], not for case study research.

The Design of Case Study Research in Health Care (DESCARTE) model [ 44 ] suggests a series of questions to be asked of a case study researcher (including clarity about the philosophy underpinning their research), study design (with a focus on case definition) and analysis (to improve process). The model resembles toolkits for enhancing the quality and robustness of qualitative and mixed-methods research reporting, and it is usefully open-ended and non-prescriptive. However, even if it does include some reflections on context, the model does not fully address aspects of context, logic and causal inference that are perhaps most relevant for evaluative research in health.

Hence, for evaluative research where the aim is to report empirical findings in ways that are pragmatically useful for health policy and practice, this may be an opportune time to consider how best to navigate plurality around what is (minimally) important to report when publishing empirical case studies. This applies especially to the complex relationships between context and interventions, information that case study research is well placed to provide.

The conventional scientific quest for certainty, predictability and linear causality (maximised in RCT designs) has to be augmented by the study of uncertainty, unpredictability and emergent causality [ 8 ] in complex systems. This will require methodological pluralism, and openness to broadening the evidence base to better understand both causality in, and the transferability of, system change interventions [ 14 , 20 , 23 , 25 ]. Case study research evidence is essential, yet it is currently under-exploited in the health sciences. If evaluative health research is to move beyond the current impasse on methods for understanding interventions as interruptions in complex systems, we need to consider in more detail how researchers can conduct and report empirical case studies which do aim to elucidate the contextual factors that interact with interventions to produce particular effects. To this end, supported by the UK’s Medical Research Council, we are embracing the challenge of developing guidance for case study researchers studying complex interventions. Following a meta-narrative review of the literature, we are planning a Delphi study to inform guidance that will, at a minimum, cover the value of case study research for evaluating the interrelationship between context and complex system-level interventions; approaches to situating and defining ‘the case’ and to generalising from case studies; and specific guidance on conducting, analysing and reporting case study research. Our hope is that such guidance can support researchers evaluating interventions in complex systems to better exploit the diversity and richness of case study research.

Availability of data and materials

Not applicable (article based on existing available academic publications)

Abbreviations

QCA: Qualitative comparative analysis

QED: Quasi-experimental design

RCT: Randomised controlled trial

Diez Roux AV. Complex systems thinking and current impasses in health disparities research. Am J Public Health. 2011;101(9):1627–34.


Ogilvie D, Mitchell R, Mutrie N, Petticrew M, Platt S. Evaluating health effects of transport interventions: methodologic case study. Am J Prev Med. 2006;31:118–26.

Walshe C. The evaluation of complex interventions in palliative care: an exploration of the potential of case study research strategies. Palliat Med. 2011;25(8):774–81.

Woolcock M. Using case studies to explore the external validity of ‘complex’ development interventions. Evaluation. 2013;19:229–48.

Cartwright N. Are RCTs the gold standard? BioSocieties. 2007;2(1):11–20.

Deaton A, Cartwright N. Understanding and misunderstanding randomized controlled trials. Soc Sci Med. 2018;210:2–21.

Salway S, Green J. Towards a critical complex systems approach to public health. Crit Public Health. 2017;27(5):523–4.

Greenhalgh T, Papoutsi C. Studying complexity in health services research: desperately seeking an overdue paradigm shift. BMC Med. 2018;16(1):95.

Bonell C, Warren E, Fletcher A. Realist trials and the testing of context-mechanism-outcome configurations: a response to Van Belle et al. Trials. 2016;17:478.

Pallmann P, Bedding AW, Choodari-Oskooei B. Adaptive designs in clinical trials: why use them, and how to run and report them. BMC Med. 2018;16:29.

Curran G, Bauer M, Mittman B, Pyne J, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217–26. https://doi.org/10.1097/MLR.0b013e3182408812 .

Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015 [cited 2020 Jun 27];350. Available from: https://www.bmj.com/content/350/bmj.h1258 .

Evans RE, Craig P, Hoddinott P, Littlecott H, Moore L, Murphy S, et al. When and how do ‘effective’ interventions need to be adapted and/or re-evaluated in new contexts? The need for guidance. J Epidemiol Community Health. 2019;73(6):481–2.

Shoveller J. A critical examination of representations of context within research on population health interventions. Crit Public Health. 2016;26(5):487–500.

Treweek S, Zwarenstein M. Making trials matter: pragmatic and explanatory trials and the problem of applicability. Trials. 2009;10(1):37.

Rosengarten M, Savransky M. A careful biomedicine? Generalization and abstraction in RCTs. Crit Public Health. 2019;29(2):181–91.

Green J, Roberts H, Petticrew M, Steinbach R, Goodman A, Jones A, et al. Integrating quasi-experimental and inductive designs in evaluation: a case study of the impact of free bus travel on public health. Evaluation. 2015;21(4):391–406.

Canguilhem G. The normal and the pathological. New York: Zone Books; 1991. (1949).


Hawe P, Shiell A, Riley T. Theorising interventions as events in systems. Am J Community Psychol. 2009;43:267–76.

King G, Keohane RO, Verba S. Designing social inquiry: scientific inference in qualitative research: Princeton University Press; 1994.

Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82(4):581–629.

Yin R. Enhancing the quality of case studies in health services research. Health Serv Res. 1999;34(5 Pt 2):1209.


Raine R, Fitzpatrick R, Barratt H, Bevan G, Black N, Boaden R, et al. Challenges, solutions and future directions in the evaluation of service innovations in health care and public health. Health Serv Deliv Res. 2016 [cited 2020 Jun 30];4(16). Available from: https://www.journalslibrary.nihr.ac.uk/hsdr/hsdr04160#/abstract .

Craig P, Di Ruggiero E, Frohlich KL, Mykhalovskiy E, White M, et al. Taking account of context in population health intervention research: guidance for producers, users and funders of research. NIHR Evaluation, Trials and Studies Coordinating Centre; 2018.

Grant RL, Hood R. Complex systems, explanation and policy: implications of the crisis of replication for public health research. Crit Public Health. 2017;27(5):525–32.

Mahoney J. Strategies of causal inference in small-N analysis. Sociol Methods Res. 2000;4:387–424.

Turner S. Major system change: a management and organisational research perspective. In: Raine R, Fitzpatrick R, Barratt H, Bevan G, Black N, Boaden R, et al. Challenges, solutions and future directions in the evaluation of service innovations in health care and public health. Health Serv Deliv Res. 2016;4(16). https://doi.org/10.3310/hsdr04160.

Ragin CC. Using qualitative comparative analysis to study causal complexity. Health Serv Res. 1999;34(5 Pt 2):1225.

Hanckel B, Petticrew M, Thomas J, Green J. Protocol for a systematic review of the use of qualitative comparative analysis for evaluative questions in public health research. Syst Rev. 2019;8(1):252.

Schneider CQ, Wagemann C. Set-theoretic methods for the social sciences: a guide to qualitative comparative analysis: Cambridge University Press; 2012. 369 p.

Flyvbjerg B. Five misunderstandings about case-study research. Qual Inq. 2006;12:219–45.

Tsoukas H. Craving for generality and small-N studies: a Wittgensteinian approach towards the epistemology of the particular in organization and management studies. Sage Handb Organ Res Methods. 2009:285–301.

Stake RE. The art of case study research. London: Sage Publications Ltd; 1995.

Mitchell JC. Typicality and the case study. In: Ellen RF, editor. Ethnographic research: a guide to general conduct. Academic Press; 1984. p. 238–41.

Gerring J. What is a case study and what is it good for? Am Polit Sci Rev. 2004;98(2):341–54.

May C, Mort M, Williams T, Mair F, Gask L. Health technology assessment in its local contexts: studies of telehealthcare. Soc Sci Med. 2003;57:697–710.

McGill E. Trading quality for relevance: non-health decision-makers’ use of evidence on the social determinants of health. BMJ Open. 2015;5(4):e007053.

Greenhalgh T. We can’t be 100% sure face masks work – but that shouldn’t stop us wearing them | Trish Greenhalgh. The Guardian. 2020 [cited 2020 Jun 27]; Available from: https://www.theguardian.com/commentisfree/2020/jun/05/face-masks-coronavirus .

Hammersley M. So, what are case studies? In: What’s wrong with ethnography? New York: Routledge; 1992.

Crowe S, Cresswell K, Robertson A, Huby G, Avery A, Sheikh A. The case study approach. BMC Med Res Methodol. 2011;11(1):100.

Luck L, Jackson D, Usher K. Case study: a bridge across the paradigms. Nurs Inq. 2006;13(2):103–9.

Yin RK. Case study research and applications: design and methods: Sage; 2017.

Hyett N, Kenny A, Dickson-Swift V. Methodology or method? A critical review of qualitative case study reports. Int J Qual Stud Health Well-Being. 2014;9:23606.

Carolan CM, Forbat L, Smith A. Developing the DESCARTE model: the design of case study research in health care. Qual Health Res. 2016;26(5):626–39.

Greenhalgh T, Annandale E, Ashcroft R, Barlow J, Black N, Bleakley A, et al. An open letter to the BMJ editors on qualitative research. BMJ. 2016;352:i563.

Thomas G. A typology for the case study in social science following a review of definition, discourse, and structure. Qual Inq. 2011;17(6):511–21.

Lincoln YS, Guba EG. Judging the quality of case study reports. Int J Qual Stud Educ. 1990;3(1):53–9.

Riley DS, Barber MS, Kienle GS, Aronson JK, Schoen-Angerer T, Tugwell P, et al. CARE guidelines for case reports: explanation and elaboration document. J Clin Epidemiol. 2017;89:218–35.


Acknowledgements

Not applicable

Funding

This work was funded by the Medical Research Council - MRC Award MR/S014632/1 HCS: Case study, Context and Complex interventions (TRIPLE C). SP was additionally funded by the University of Oxford's Higher Education Innovation Fund (HEIF).

Author information

Authors and Affiliations

Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK

Sara Paparini, Chrysanthi Papoutsi, Trish Greenhalgh & Sara Shaw

Wellcome Centre for Cultures & Environments of Health, University of Exeter, Exeter, UK

Judith Green

School of Health Sciences, University of East Anglia, Norwich, UK

Jamie Murdoch

Public Health, Environments and Society, London School of Hygiene & Tropical Medicine, London, UK

Mark Petticrew

Institute for Culture and Society, Western Sydney University, Penrith, Australia

Benjamin Hanckel


Contributions

JG, MP, SP, JM, TG, CP and SS drafted the initial paper; all authors contributed to the drafting of the final version, and read and approved the final manuscript.

Corresponding author

Correspondence to Sara Paparini.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Paparini, S., Green, J., Papoutsi, C. et al. Case study research for better evaluations of complex interventions: rationale and challenges. BMC Med 18, 301 (2020). https://doi.org/10.1186/s12916-020-01777-6


Received: 03 July 2020

Accepted: 07 September 2020

Published: 10 November 2020

DOI: https://doi.org/10.1186/s12916-020-01777-6


Keywords

  • Qualitative
  • Case studies
  • Mixed-method
  • Public health
  • Health services research
  • Interventions

