Abstract

Implementation is an emerging research topic in the field of health promotion. Most implementation research adheres to one of two paradigms: implementing interventions with maximum fidelity or designing interventions that are responsive to the needs of a local community. While fidelity and adaptation are often considered contradictory, they are both essential elements of preventive interventions. An innovative program design strategy is therefore to develop hybrid programs that ‘build in’ adaptation to enhance program fit, while also maximizing implementation fidelity. The present article presents guidelines for this hybrid approach to program implementation and illustrates them with a concrete psycho-educational group intervention. The approach, referred to as ‘empowerment implementation’ by analogy with empowerment evaluation, builds on implementation fidelity theory and community-based participatory research. To demonstrate the use of these guidelines, a psycho-educational course aimed at stress reduction and the prevention of depression and anxiety was implemented according to them. The main focus lies on how an intervention can benefit from adaptations guided by local expertise while preserving its core components and, with them, implementation fidelity.

INTRODUCTION

Implementation is an emerging research topic in the field of public health. This growing interest reflects a changing view of what counts as evidence. For a long time, ‘evidence’ was a synonym for empirical data supporting either the scale or cause of a health problem, or the causal relations between interventions and outcomes. For both kinds, the level of evidence provided is a key quality feature, with the randomized controlled trial (RCT) traditionally regarded as the ‘gold’, but often unachievable, standard for evaluation. However, over the last decade it has become clear that public health needs other types of research evidence as well (Rychetnik et al., 2002; Aro et al., 2005). To inform public health policy and practice, it is not sufficient to know the magnitude, severity and causes of public health problems and the relative effectiveness of specific interventions. It is also necessary to know how a specific intervention should be implemented and under which circumstances it can be successful. Accordingly, Rychetnik et al. (Rychetnik et al., 2004) distinguish between three types of evidence in public health: evidence that ‘something should be done’, evidence determining ‘what should be done’ and evidence on ‘how it should be done’.

Attention to the latter kind of evidence has given rise to a body of research investigating the quality and processes of implementation (Glasgow et al., 2003; Breitenstein et al., 2010; Rabin et al., 2010; Palinkas et al., 2011). Although there is no consensus on the conceptual and methodological frameworks to be used to study implementation, various strategies have been proposed to enhance implementation quality. These strategies often draw upon the literature on the diffusion of innovation from the 1970s and 1980s (Dusenbury et al., 2003). The most influential strategy is to maximize the fidelity of intervention delivery (Dumas et al., 2001). The concept of implementation fidelity refers to ‘the degree to which an intervention or program is delivered as intended’ (Carroll et al., 2007). Specifically, a successful implementation is one that abides by four components of fidelity: adherence, exposure, quality of program delivery and participant responsiveness (Dane and Schneider, 1998). Mihalic (Mihalic, 2002) describes each of these components as follows: (i) adherence refers to whether interventions are delivered as intended; (ii) exposure refers to the number of sessions implemented, session length and the frequency with which program techniques are implemented; (iii) quality of program delivery refers to the manner in which staff deliver a program; and (iv) participant responsiveness refers to the extent to which participants are involved in program content. Hasson (Hasson, 2010) has suggested two additional factors that moderate implementation fidelity, namely ‘recruitment’ and ‘context’: (v) recruitment refers to the procedures used to attract potential program participants, whereas (vi) context refers to surrounding social systems, such as the structures and cultures of groups, inter-organizational linkages and historical as well as concurrent events. All these factors should be evaluated when conducting a process evaluation.
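For process-evaluation purposes, the six components above can be thought of as a structured checklist. The sketch below illustrates this idea in Python; the component names follow the sources cited above, but the 0–2 rating scale, class name and helper methods are our own illustrative assumptions, not part of any published instrument.

```python
from dataclasses import dataclass, field

# The six components follow Dane and Schneider (1998), Mihalic (2002) and
# Hasson (2010); the 0-2 rating scale and helper methods are illustrative
# assumptions, not part of any published instrument.
COMPONENTS = (
    "adherence",            # delivered as intended?
    "exposure",             # number, length and frequency of sessions
    "quality_of_delivery",  # manner in which staff deliver the program
    "responsiveness",       # extent of participant involvement
    "recruitment",          # procedures used to attract participants
    "context",              # surrounding social systems and events
)

@dataclass
class FidelityAssessment:
    """Process-evaluation record for one implementation site."""
    site: str
    ratings: dict = field(default_factory=dict)  # component -> 0 (absent) .. 2 (full)

    def rate(self, component: str, score: int) -> None:
        """Record a rating for one fidelity component."""
        if component not in COMPONENTS:
            raise ValueError(f"unknown fidelity component: {component}")
        if score not in (0, 1, 2):
            raise ValueError("score must be 0, 1 or 2")
        self.ratings[component] = score

    def unassessed(self) -> list:
        """Components still to be covered by the process evaluation."""
        return [c for c in COMPONENTS if c not in self.ratings]

    def summary(self) -> float:
        """Mean rating over the components assessed so far."""
        return sum(self.ratings.values()) / len(self.ratings) if self.ratings else 0.0
```

A site rated 2 on adherence and 1 on exposure, for example, would have a running mean of 1.5, with the four remaining components flagged as still unassessed.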

Multiple methodologies have already been developed to measure implementation fidelity (Blakely et al., 1987). Their main goal is to identify the factors that lead to (the lack of) intervention success. In addition, they also focus on the mechanisms and processes that must be taken into consideration when implementing complex interventions (Campbell et al., 2000; Toroyan et al., 2004; Oakley et al., 2006; Breitenstein et al., 2010). Several attempts have also been made to increase implementation fidelity (Basen-Engquist et al., 1994; Macaulay et al., 2005; Vitale and Romance, 2005; Burns et al., 2008).

On the other hand, focusing on fidelity has also been criticized for being rigid, as it assumes full compliance with the program as prescribed by the program developer (Gresham et al., 1993). The fact that any change made to the program by implementers is considered bias and a threat to implementation quality is at odds with the value placed on stakeholder involvement and participation in health promotion (WHO, 1986; Levy et al., 2003). An alternative approach to program implementation is therefore to encourage adaptation, rather than limit it. Drawing on the principles of community-based health promotion, which emphasizes the participation of the community in program planning, implementation, evaluation and dissemination, the adaptation approach holds that users or local adopters should be allowed to reinvent or change programs to meet their own needs and derive a sense of ownership.

The tension between the fidelity and adaptation approaches is a recurrent theme in the implementation literature (Blakely et al., 1987; Dusenbury et al., 2003). Berman (Berman, 1981) proposed a contingency model of implementation, in which the choice between fidelity and adaptation should depend on the nature of the intervention: the fidelity approach would be most suited to highly structured interventions, whereas the adaptation approach would work better for less structured ones. Yet the dominant view remains that the two approaches have competing objectives: implementing interventions with maximum fidelity versus designing interventions that are responsive to the cultural needs of a local community. However, these objectives are not necessarily contradictory (Weissberg, 1990; Castro et al., 2004). In fact, fidelity and adaptation are both essential elements of preventive interventions, and both are best addressed by a planned, organized and structured approach (Shen et al., 2008).

An interesting attempt to unite both approaches has been proposed by Backer (Backer, 2001). Based on a literature review, this author formulated six recommendations for implementers. For a successful intervention implementation one should (i) identify and understand the theory behind the program; (ii) locate or conduct a core components analysis of the program; (iii) assess fidelity/adaptation concerns for the implementation site; (iv) consult with the program developer; (v) consult with the organization and/or community in which the implementation will take place; and (vi) develop an overall implementation plan based on these inputs. Adding to this discourse, Castro et al. (Castro et al., 2004) suggest that an innovative program design strategy would be to develop hybrid programs that ‘build in’ adaptation to enhance program fit, while also maximizing the fidelity of implementation and program effectiveness. Unfortunately, however, the authors do not provide any guidelines as to how exactly such programs should be developed and validated, stating only that this would require ‘rigorous science-based evaluation and testing’. The basis for such guidelines may, however, be found in related theoretical frameworks such as empowerment evaluation.

Empowerment evaluation is grounded in empowerment theory (Zimmerman, 1995). Empowerment can be defined as ‘ … an intentional, ongoing process through which people lacking an equal share of valued resources gain greater access to and control over those resources’. It can exist at the community, organizational and individual levels and can be viewed both as a process and as an outcome, reflecting the achieved level of empowerment. This process offers individuals the opportunity to gain control over their lives and over democratic participation in the life of their community (Berger and Neuhaus, 1977, cited in Zimmerman and Rappaport, 1988). Zimmerman's view of an empowerment approach to intervention design, implementation and evaluation redefines the professional's role as that of a collaborator and facilitator, rather than an expert and counselor (Zimmerman, 2000). Fetterman (Fetterman, 1996) applied these principles to the evaluation of interventions, referring to this as empowerment evaluation. It is defined as ‘ … an evaluation approach that aims to increase the probability of achieving program success by providing program stakeholders with tools for assessing the planning, implementation and self-evaluation of their program, and mainstreaming evaluation as a part of the planning and management of the program’ [(Wandersman et al., 2005), p. 28]. In this sense, empowerment evaluation is closely linked to capacity building. According to Fetterman, building the capacities of others to evaluate their own programs involves several steps: (i) determining where the program stands, including strengths and weaknesses; (ii) establishing goals with an explicit emphasis on program improvement; (iii) helping participants determine their own strategies to accomplish program goals and objectives; and (iv) helping program participants determine the type of evidence required to credibly document progress toward their goals.

The present article aims to extend these guidelines to program implementation. In the next section, this theoretical framework will be introduced, followed by the presentation of the guidelines. Finally, a psycho-educational course aimed at reducing stress and at preventing depression and anxiety will serve as an example of how these guidelines can be put into practice.

EMPOWERMENT IMPLEMENTATION

While empowerment evaluation is mainly concerned with the evaluation of a program, the same principles can also be applied to implementation. Such an ‘empowerment implementation’ could well reconcile the opposing fidelity and adaptation approaches to implementation. Indeed, involving the community in program implementation as an equal partner does not have to be at the cost of fidelity. It only requires providing the community with the concepts, tools and skills to identify the core components of the intervention, to adapt the intervention to their context and culture, and to assess, monitor and maintain the implementation quality.

The steps of an empowerment implementation approach would be as follows: (i) developing a core component; (ii) selecting partners; (iii) assessing the fidelity/adaptation concerns with partners; and (iv) developing an overall implementation plan. The way in which these steps are executed is inspired by empowerment evaluation and by community-based participatory research (CBPR), a collaborative approach to research that involves all partners equitably in the research process and recognizes the unique strengths that each brings [(Minkler and Wallerstein, 2003), p. 4; Minkler et al., 2006]. The content of each step is defined by the key elements of high-fidelity implementation and by research on balancing program fidelity and adaptation. The consecutive steps will now be explained in detail.

Developing a core component

Prior to implementation, an intervention program is developed based on a sound theoretical framework. One example is Intervention Mapping (IM), a tool for the planning and development of health promotion interventions. It maps the path from the recognition of a need or problem to the identification of a solution (Kok et al., 2004). The first four steps of IM could be run through as follows: (i) needs assessment; (ii) preparing matrices of change objectives; (iii) selecting theory-informed intervention methods and practical strategies; and (iv) producing program components and materials. The resulting intervention is tested in a controlled setting and empirically adapted until researchers end up with an effective intervention. Subsequently, researchers (empirically) define which aspects of the intervention are especially important for its efficacy and label these as the core components of the intervention.

Selecting partners who will implement the intervention

Implementing an intervention requires the participation of partners in the field in order to guarantee its success. Given that the main goal is to adapt a previously developed intervention to the unique conditions of the real-life context in which it is implemented, the notion of participation here does not refer to the involvement of the end users, but to that of the implementers. In terms of Fetterman's (Fetterman, 1996) framework, implementers can indeed be considered primary stakeholders, whose role is not that of an expert or professional, but of a facilitator and enabler (Laverack and Wallerstein, 2001). As such, the focus of participation lies more on the active ‘can affect’ stakeholders than on the ‘affected’ parties (Freeman, 1984, cited in Achterkamp and Vos, 2008). These partners preferably have existing networks related to the intervention topic at their disposal. For example, an interesting partner for cancer-related interventions in the USA would be the Cancer Information Service, a network of health education offices (Glasgow et al., 2004). However, if potential partners show limited interest in the program, their participation is of little use. Even if they can be persuaded to implement the program, they will be unlikely to strive for quality, which will probably lead to poor results: inadequate implementation, weak fidelity and limited evidence-based action. To avoid this, it is important to carefully select the partners who will implement the intervention. A first step is therefore to draw up an overview of the local partners who are available. Subsequently, these potential partners must be consulted to ascertain whether they are interested in participating and whether they subscribe to the scientific basis of the intervention to be implemented. The most suitable partners can then be selected through an evaluation of their strengths and weaknesses in relation to the intervention.

Assessing the fidelity/adaptation concerns with partners

The next step is to assess the concerns with regard to fidelity and adaptation together with the partners. This implies two aspects.

Deciding on practical intervention aspects

To implement an intervention, a large number of practical aspects need to be taken into consideration. Although not all of these are crucial for the intervention to be effective, they play a large role in how the intervention is perceived and received by the target population. Aspects that are not included in the core components of the program as defined in step 1 should therefore certainly be open for discussion, as they can have a significant impact on intervention dissemination. In the discussion about these non-core components, local partners should take the lead, as they know the situation and the target population best. Researchers should acknowledge their expertise, but remain available as resource persons. Together, researchers and implementers can tailor the intervention to current needs. Examples of more ‘practical’ aspects that, depending on the particular program and situation, might be open to discussion are the recruitment of participants, the intervention location and context, and the number, length and frequency of sessions.

Deciding on content-related intervention aspects

It is very important that partner participation is not limited to practical aspects only and that partners are also offered the possibility to give advice on content, as the impetus for partners to participate depends on the opportunities presented by a project (van der Velde et al., 2009). In this regard, researchers and local partners are considered equals, each with their unique expertise. Both often have a substantial theoretical background, experience and personal affinity with the intervention, although these may vary in degree and in specific content. For intervention implementation, researchers can mainly rely on their theoretical background, whereas local partners usually have a better understanding of the specific implementation setting. Letting local partners change content may be a sensitive matter for researchers, but if this is done through an open and respectful dialogue, the intervention can benefit greatly from the collaboration. The starting point should be that local partners can change any aspect of the intervention, as long as the core components remain untouched. Examples of more ‘content-related’ aspects are participant responsiveness, means of program delivery, and cultural sensitivities or preferences.

Only some examples of aspects that are open to change were highlighted above. This list is therefore not exhaustive: there may be additional aspects to be taken into account, depending on the target group or context of the intervention. These can easily be chosen in collaboration between researchers and partners.

Developing an overall implementation plan

To ensure a successful implementation of the intervention, it is necessary to develop an implementation plan. This plan should specify the role of all partners involved and provide a clear timeline with an overview of what actions need to be undertaken when. One way to assure such quality management and avoid unintended effects during intervention realization is for researchers to actively monitor intervention implementation by the partners in the field, document potential deviations and subsequently go through these together with the partners to avoid similar mistakes at later stages of the implementation process.
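The plan-and-monitor loop described above, recording agreed roles and timelines, documenting deviations and reviewing them with partners, can be sketched as a minimal data structure. Everything here (class name, fields, review flow) is a hypothetical illustration under our own assumptions, not a prescribed tool from the framework.

```python
import datetime

# Minimal sketch of the plan-and-monitor loop described above. The class
# name, fields and review flow are hypothetical illustrations.
class ImplementationPlan:
    def __init__(self, roles, timeline):
        self.roles = roles          # partner -> agreed role in the plan
        self.timeline = timeline    # list of (date, action) pairs
        self.deviations = []        # documented departures from the plan

    def document_deviation(self, session, description):
        """Record a departure from the plan for later review with partners."""
        self.deviations.append({
            "logged": datetime.date.today().isoformat(),
            "session": session,
            "description": description,
            "reviewed_with_partners": False,
        })

    def review_deviations(self):
        """Go through open deviations with partners and mark them reviewed."""
        pending = [d for d in self.deviations if not d["reviewed_with_partners"]]
        for d in pending:
            d["reviewed_with_partners"] = True
        return pending
```

The design choice is deliberate: deviations are never deleted, only marked as reviewed, so the full record remains available for the process evaluation.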

Given the commensurability between efficacy and effectiveness (Stricker, 2000), this framework increases the chances that an intervention will be effective when implemented in practice. Furthermore, it helps to avoid certain problems that are common to CBPR, such as community members' lack of skills in research methods, differences between the community and funding agencies in the appraisal of intervention criteria, and shifting levels of community involvement throughout the research process (Levy et al., 2003; Wallerstein and Duran, 2006).

In the following sections, empowerment implementation will be illustrated by describing the implementation process of a psycho-educational group intervention aimed at reducing stress and preventing depression and anxiety. This will show how each step of empowerment implementation can be put into practice. First, the context is described, followed by the method and the intervention. These descriptions are followed by an overview of the implementation process, in which attention is drawn to specific points of interest that illustrate the main aspects of the framework.

EXAMPLE: IMPLEMENTATION OF A PSYCHO-EDUCATIONAL GROUP INTERVENTION

Context

In 2007, the Flemish government commissioned the project on the implementation of a psycho-educational group intervention from the Policy Research Centre Welfare, Health and Family (SWVG), a consortium of Flemish research and expertise centers on health and well-being whose task is to support the Flemish minister in pursuing an effective evidence-based policy. The aim was to determine whether a psycho-educational group intervention to reduce stress and prevent depression could help primary mental healthcare organizations reduce the growing burden of mental health problems. The primary mental healthcare sector in Flanders consists of a large number of organizations, each with their own goals and ways of working. Because of this diversity, the Flemish government was concerned not only with evaluating the effectiveness of the intervention, but also with whether such a course could be organized through ad hoc partnerships. A feasible implementation would open up perspectives for large-scale implementation in the future, with local organizations at liberty to select the partners they consider most appropriate to achieve their (mutual) goals.

Three Flemish cities and their communities were selected as intervention sites: one larger (Antwerp) and two smaller ones (Ypres, Genk). All organizations participated voluntarily and with their own funding. Although there were some differences between regions, key partners were the provinces, local organizations for health consultation (LOGO), local governments and local centers for ambulatory mental healthcare (CGG). The location for the course was provided either by the provinces or by the local governments. Teachers were psychologists from the local centers for ambulatory mental healthcare. Although all partners were involved in course promotion, the local organizations for health consultation had the most suitable and extensive network at their disposal to promote the intervention and took the lead by, for example, distributing the majority of the flyers.

Method

Questionnaires were administered to the participants before and after the intervention. These covered socio-demographic variables; depression, anxiety and stress scores (DASS-42; Lovibond and Lovibond, 1995; Dutch version by de Beurs et al., 2001); and a course evaluation at the level of participant reactions (Kirkpatrick, 1975; Dutch version by Baert et al., 2001). The course evaluation also served as the basis for a semi-structured interview conducted with the course teachers after course completion.
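The pre/post design lends itself to a simple paired-change summary. The sketch below is purely illustrative: the function is our own and the scores are invented, not data from this study; it merely shows how within-person change on a symptom scale such as the DASS-42 could be summarized.

```python
from statistics import mean, stdev

def mean_change(pre, post):
    """Mean and SD of within-person change (post - pre) for paired scores.

    On symptom scales such as the DASS-42, where lower scores mean fewer
    symptoms, a negative mean change indicates improvement.
    """
    if len(pre) != len(post):
        raise ValueError("pre and post must be paired per participant")
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs), (stdev(diffs) if len(diffs) > 1 else 0.0)

# Hypothetical stress-subscale scores for five participants (invented
# numbers, not data from the study):
pre_stress = [24, 30, 18, 27, 21]
post_stress = [18, 25, 16, 20, 19]
```

The pairing check matters: dropping participants who completed only one of the two questionnaires must happen before the scores are passed in, or the differences are meaningless.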

Intervention

Background

The psycho-educational course used for the intervention is a Flemish adaptation of a Scottish program called ‘Stress Control’. It was first described by White [(White, 2000), p. 57] as ‘ … a six-session didactic cognitive-behavioral “evening class”. It aims to: [1] teach students about anxiety and associated problems—depression, panic and insomnia; [2] teach self-assessment skills to allow individuals to learn how these problems affect them; [3] teach a range of techniques designed to enable individuals to tailor their own treatment with minimal therapist contact’. The program was developed within the cognitive behavioral therapy (CBT) approach, close to Beck (Beck, 1981) and Meichenbaum (Meichenbaum, 1985). The Flemish version is not an exact replica of the original course: its goal is to preserve mental health rather than restore it, and it therefore focuses primarily on stress rather than on anxiety. Since 2003, this intervention has been offered to the Flemish population, primarily by one of the major public health insurance companies. After paying an entry fee, both members and non-members can attend the course.

Core component

The core component of the intervention is the course material, comprising a relaxation CD and various booklets. Each of the six weekly lessons has its own booklet containing all (and even more) of the information presented during the evening course. An additional booklet contains all homework assignments. Teachers received the course material from SWVG and were instructed to distribute it to course participants in its original format, without any changes.

Implementation

Developing a core component

In this particular case, the stress control course was developed within a CBT approach, tested in a setting with college students and found efficacious in this controlled environment. For the subsequent larger intervention with multiple partners in different regions, the course material was defined as the core component of the intervention, which had to guarantee the quality of the intervention.

Selecting partners who will implement the intervention

The psycho-educational course was implemented in three Flemish regions, selected from the regions in which the Policy Research Centre Welfare, Health and Family is active (KU Leuven, UGent, VUG, and KHK, 2007). To assure an optimal reception of the intervention by all partners, concept mapping was used, a research method based on Gray's (Gray, 1989) collaboration model. A selection of potential partners for each region was brought together in focus groups. By prioritizing the partners' goals and interests, the three regions most suited for the implementation of this intervention were determined. The partners in these regions showed positive attitudes toward psycho-education and prevention. For each region, researchers and partners together determined the strengths and weaknesses of all involved. After this evaluation, four main tasks were determined and distributed among the partners: (i) organizing administration and data collection; (ii) providing the teacher; (iii) providing the location; and (iv) promoting the intervention. One example from the strengths and weaknesses analysis is that in each region all involved agreed that the centers for ambulatory mental healthcare were best placed to provide teachers for the course. However, researchers, partners and the centers themselves did not consider it wise to also involve them in promoting the intervention. Because of the stigma still often associated with mental healthcare (providers) in Flanders (Reynders et al., 2011), it was decided to look for alternative promotion channels.

Assessing the fidelity/adaptation concerns with partners

In each of the three regions, researchers and partners gathered to address both practical and content-related intervention aspects. The researchers already had suggestions for aspects that were open for discussion, but some also emerged during the meetings. The partners were considered the experts; the researchers only took the role of facilitators, moderating the discussions between them. The goal was not to create homogeneous implementation conditions across regions, but rather to determine and adapt the aspects specific and relevant to each region. Because each of the three regions had its unique context, this resulted, as expected, in some heterogeneity across regions.

Deciding on practical intervention aspects

These aspects were: (i) the time of course commencement. The course was set up as an evening course, but partners decided themselves on which day and at which exact time they preferred the course to start, taking into account the habits and possibilities of the target population in their region. All regions opted for Tuesday, with commencement times of 6 p.m., 7 p.m. and 8 p.m. (ii) The course location. Both researchers and partners quickly agreed that the location should be neutral, so as not to raise a threshold for people with any specific background to participate. Governmental buildings (respectively, a cultural center, a lecture hall and even a city hall) were chosen as the most suitable locations. (iii) The channels and means used to promote the course. In each region, partners had specific ideas and opportunities to promote the course: distributing flyers in public locations such as libraries and to their personnel, posting advertisements on their websites, publishing announcements in official city papers, contacting specific organizations aimed at the target population, and setting up meetings between the researchers and local stakeholders. (iv) Which of their staff members would be most suited to teach the course. The researchers left this completely open to the partners. They all chose psychologists with considerable professional experience, both in psychopathology and in teaching (psycho-educational) courses. Finally, (v) the manner of distribution of the course material. The material could be distributed to the participants in parts, one for each lesson, or as a whole at the commencement of the course. For didactic purposes, two regions decided to distribute the course material in parts, whereas one region distributed it as a whole, mainly for practical reasons.

Deciding on content-related intervention aspects

To support teachers during the course, default PowerPoint presentations were made available by the researchers. However, if preferred, (i) teachers could change the presentations to suit their own needs and the particular circumstances. The other content-related aspects also applied to the teachers, since they could exert the most influence on the intervention content: (ii) they could decide whether to read the relaxation exercise aloud themselves or play it from the CD; (iii) they could introduce interaction into the course, provided they felt the participants needed it; and finally (iv) they could add their own examples during the course to make it more relevant for the group.

Developing an overall implementation plan

Together with the partners, the researchers wrote down all arrangements, decisions and agreements in an overall implementation plan. The researchers took final responsibility and ensured, in each region and for each lesson, that everything was implemented as agreed. If, for one reason or another, there were deviations from the original plan, these were carefully documented and subsequently communicated to the partners, so that they could be taken into consideration when the course was evaluated.

Actual implementation

The course was subsequently implemented in the three regions, for two groups of 34 and one group of 18 participants. The total number of participants was modest, but partners indicated that it was still larger than for similar courses they had set up in the past.

The average age of participants was 43.04 years (SD = 10.34) and the majority was currently employed (85%). Close to 80% of participants were women, which is a high number, but not uncommon for this type of intervention (Van Daele et al., 2012). Participants' depression, anxiety and stress measures were high, approaching the clinical threshold (normative data provided by E. de Beurs, personal communication, 28 October 2007).

In general, both course participants and course trainers were pleased with the outcome. In a written questionnaire, 70% of course participants agreed that the course was useful and 91% indicated that they would recommend the course to a friend. The three trainers were interviewed and were also favorably disposed toward the intervention. Based on their experiences, they did have some remarks, concerning both practical aspects and content. All these remarks were discussed during the interview, carefully written down and will be taken into consideration for future implementations.

DISCUSSION

In this paper, we have introduced a framework for program implementation that reconciles the competing paradigms of maximizing implementation fidelity and adapting programs to the needs of local stakeholders. Starting from the premise that fidelity and adaptation are both essential elements of implementation, ideally addressed in a planned, organized and structured way, we have proposed a four-step framework for implementing prevention programs that balances program fidelity with adaptation. The framework is based on CBPR and on empowerment evaluation, which we have extended to program implementation.

This ‘empowerment implementation’ was illustrated by applying the framework to the implementation of a psycho-educational group intervention in Flanders. The example showed that empowerment implementation makes it possible to implement the core components of an intervention with high fidelity, while allowing the intervention to be adapted to local needs, thus enhancing ownership by local stakeholders. It also showed that local partners not only prefer this flexibility, but consider it necessary for any intervention they are offered or required to implement. Whereas adaptations made by local stakeholders to existing ‘standard’ intervention programs were previously mostly considered ‘flaws’ in the implementation process, empowerment implementation provides an opportunity to redefine these adaptations as useful additions with high ecological validity and relevance, which do not interfere with the core elements of the intervention.

The aim of the psycho-educational course was to strengthen the resilience of participants in dealing with daily stressors, and to empower them to take charge of their own mental health. Whether this aim was achieved was not addressed in this study. What this study did show is that the participatory approach to implementation followed for this program led to a better understanding of the intervention, its goals and its core elements by the local health workers who implemented it, and stimulated them to develop, adapt and implement future interventions. As such, the effects may extend beyond the stated outcomes of the program, despite the fact that it was essentially conceived as a top-down intervention. In that way, the approach can be considered truly empowering.

The fact that empowerment implementation is characterized by a high level of collaboration, mutual respect and program flexibility does not mean that anything goes. It remains important for researchers and stakeholders to monitor the implementation to ensure that the intervention is carried out according to plan, perhaps even more so than in ‘traditional’ implementation frameworks. The main difference is that implementation fidelity is determined not by the strict implementation of the entire intervention, but by that of its core components. Deviating from the original intervention is allowed, even required and stimulated, as long as the core components remain untouched.

Such an implementation is ideally followed by empowerment evaluation. In the current study this was not possible, because policy makers had already formulated clear research objectives prior to the start of the project. However, even in that situation it remains important to involve partners in the evaluation process. Taking time to compare the strengths and weaknesses of the actual implementation with the ideal scenario can serve as leverage for improvement. It also creates strong intrinsic motivation for change among the partners and may offer opportunities for further collaboration.

CONCLUSION

Empowerment implementation provides a new perspective on implementation fidelity and intervention effectiveness. In this framework, an intervention consists of two parts: core components and less essential intervention aspects. The core components have been proven effective in clinical trials and remain untouched throughout the implementation process. The less essential aspects are decided upon through intensive collaboration between researchers and local partners. The framework consists of three main phases: partner selection, deciding upon practical aspects, and deciding upon content-related aspects. As such, it addresses and overcomes the apparent contradiction between implementation fidelity and adaptation.

Current research on implementation fidelity often treats partner input and variability as bias. The strength of the current framework is that it turns this apparent disadvantage into an advantage. All those involved in the program benefit from increased stakeholder participation: researchers can evaluate the implementation of an intervention under more realistic circumstances, whereas local partners gain the ability to control and adapt an intervention (as much as possible) to their needs, to enhance their ownership of the intervention, and to increase their capacity to develop, adapt and implement interventions in the future.

FUNDING

This work was supported by the Policy Research Centre Welfare, Health and Family.

ACKNOWLEDGEMENTS

The authors would like to thank all local partners, without whom this project would not have been possible.

REFERENCES

Achterkamp, M. C. and Vos, J. F. J. (2008) Investigating the use of the stakeholder notion in project management literature, a meta-analysis. International Journal of Project Management, 26, 749–757.

Aro, A., Van den Broucke, S. and Rätly, S. (2005) Towards European consensus based tools to review the evidence and enhance the quality of health promotion practice. Promotion and Education, 12, 10–14.

Backer, T. E. (2001) Finding the Balance—Program Fidelity and Adaptation in Substance Abuse Prevention: A State-of-the-Art Review. Center for Substance Abuse Prevention, Rockville, MD.

Baert, H., De Witte, K. and Sterck, G. (2001) Vorming, training en opleiding. Handboek voor een kwaliteitsvol VTO-beleid in welzijnsvoorzieningen [Education, training and development. Handbook for a quality ETD policy in welfare services]. Garant, Leuven.

Basen-Engquist, K., O'Hara-Tompkins, N., Lovato, C. Y., Lewis, M. J., Parcel, G. S. and Gingiss, P. (1994) The effect of two types of teacher training on implementation of Smart Choices: a tobacco prevention curriculum. Journal of School Health, 64, 334–339.

Beck, A. T. (1981) Cognitive Therapy in Depression. Wiley, Chichester.

Berman, P. (1981) Educational change: an implementation paradigm. In Lehming, R. and Kane, M. (eds), Improving Schools: Using What We Know. Sage, London, pp. 253–286.

Blakely, C. H., Mayer, J. P., Gottschalk, R. G., Schmitt, N., Davidson, W., Roitman, D. B. et al. (1987) The fidelity-adaptation debate: implications for the implementation of public sector social programs. American Journal of Community Psychology, 15, 253–268.

Breitenstein, S. M., Gross, D., Garvey, C., Hill, C., Fogg, L. and Resnick, B. (2010) Implementation fidelity in community-based interventions. Research in Nursing & Health, 33, 164–173.

Burns, M. K., Peters, R. and Noell, G. H. (2008) Using performance feedback to enhance implementation fidelity of the problem-solving team process. Journal of School Psychology, 46, 537–550.

Campbell, M., Fitzpatrick, R., Haines, A., Kinmonth, A. L., Sandercock, P., Spiegelhalter, D. and Tyrer, P. (2000) Framework for design and evaluation of complex interventions to improve health. BMJ, 321, 694–696.

Carroll, C., Patterson, M., Wood, S., Booth, A., Rick, J. and Balain, S. (2007) A conceptual framework for implementation fidelity. Implementation Science, 2, 40.

Castro, F. G., Barrera, M. and Martinez, C. R. (2004) The cultural adaptation of prevention interventions: resolving tensions between fidelity and fit. Prevention Science, 5, 41–45.

Dane, A. V. and Schneider, B. H. (1998) Program integrity in primary and early secondary prevention: are implementation effects out of control? Clinical Psychology Review, 18, 23–45.

de Beurs, E., Van Dyck, R., Marquenie, L. A., Lange, A. and Blonk, R. W. B. (2001) De DASS: een vragenlijst voor het meten van depressie, angst en stress [The DASS: a questionnaire for measuring depression, anxiety and stress]. Gedragstherapie, 34, 35–53.

Dumas, J. E., Lynch, A. M., Laughlin, J. E., Smith, E. P. and Prinz, R. J. (2001) Promoting intervention fidelity: conceptual issues, methods, and preliminary results from the EARLYALLIANCE prevention trial. American Journal of Preventive Medicine, 20, 38–47.

Dusenbury, L., Brannigan, R., Falco, M. and Hansen, W. B. (2003) A review of research on fidelity of implementation: implications for drug abuse prevention in school settings. Health Education Research, 18, 237–256.

Fetterman, D. M. (1996) Empowerment evaluation: an introduction to theory and practice. In Fetterman, D., Kaftarian, S. and Wandersman, A. (eds), Empowerment Evaluation: Knowledge and Tools for Self-assessment and Accountability. Sage, Thousand Oaks, CA, pp. 3–46.

Glasgow, R. E., Lichtenstein, E. and Marcus, A. C. (2003) Why don't we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. American Journal of Public Health, 93, 1261–1267.

Glasgow, R. E., Marcus, A. C., Bull, S. S. and Wilson, K. M. (2004) Disseminating effective cancer screening interventions. Cancer, 101, 1239–1250.

Gray, B. (1989) Collaborating: Finding Common Ground for Multiparty Problems. Jossey-Bass Publishers, San Francisco.

Gresham, F. M., Gansle, K. A. and Noell, G. H. (1993) Treatment integrity in applied behavior analysis with children. Journal of Applied Behavior Analysis, 26, 257–263.

Hasson, H. (2010) Systematic evaluation of implementation fidelity of complex interventions in health and social care. Implementation Science, 5, 67.

Kirkpatrick, D. L. (1975) Evaluating Training Programs. Madison, Wisconsin.

Kok, G., Schaalma, H., Ruiter, R. A. C. and Van Empelen, P. (2004) Intervention mapping: a protocol for applying health psychology theory to prevention programmes. Journal of Health Psychology, 9, 85–98.

KU Leuven, UGent, VUB, & KHK (2007) Proposal for funding: Policy Research Centre for Welfare, Public Health and Family. Author, Leuven. http://steunpuntwvg.be/swvg/_docs/Multiannual%20program.pdf

Laverack, G. and Wallerstein, N. (2001) Measuring community empowerment: a fresh look at organizational domains. Health Promotion International, 16, 179–185.

Levy, S. R., Baldyga, W. and Jurkowski, J. M. (2003) Developing community health promotion interventions: selecting partners and fostering collaboration. Health Promotion Practice, 4, 314–322.

Lovibond, S. H. and Lovibond, P. F. (1995) Manual for the Depression Anxiety Stress Scales. The Psychology Foundation of Australia, Sydney, Australia.

Macaulay, A. P., Gronewold, E., Griffin, K. W., Williams, C. and Samoulis, J. (2005) Evaluation of an Online Implementation and Enrichment Program for Providers of a Drug Prevention Program for Adolescents. Paper presented at the American Public Health Association 133rd Annual Meeting & Exposition, Philadelphia, PA, December 2005.

Meichenbaum, D. (1985) Stress Inoculation Training. Pergamon, New York.

Mihalic, S. (2002) Blueprints for Violence Prevention Violence Initiative: Summary of Training and Implementation (Final Process Evaluation Report). University of Colorado at Boulder, Center for the Study and Prevention of Violence, Boulder, CO.

Minkler, M. and Wallerstein, N. (2003) Community Based Participatory Research in Health. Jossey-Bass, San Francisco.

Minkler, M., Vásquez, V. B., Warner, J. R., Steussey, H. and Facente, S. (2006) Sowing the seeds for sustainable change: a community-based participatory research partnership for health promotion in Indiana, USA and its aftermath. Health Promotion International, 21, 293–300.

Oakley, A., Strange, V., Bonell, C., Allen, C. B., Stephenson, J. and RIPPLE Study Team (2006) Process evaluation in randomised controlled trials of complex interventions. BMJ, 332, 413–416.

Palinkas, L. A., Aarons, G. A., Horwitz, S., Chamberlain, P., Hurlburt, M. and Landsverk, J. (2011) Mixed method designs in implementation research. Administration and Policy in Mental Health and Mental Health Services Research, 38, 44–53.

Rabin, B. A., Glasgow, R. E., Kerner, J. F., Klump, M. P. and Brownson, R. C. (2010) Dissemination and implementation research on community-based cancer prevention: a systematic review. American Journal of Preventive Medicine, 38, 443–456.

Reynders, A., Scheerder, G., Molenberghs, G. and Van Audenhove, C. (2011) Suïcide in Vlaanderen en Nederland. Een verklaring vanuit sociaal cognitieve factoren en hulpzoekend gedrag [Suicide in Flanders and the Netherlands. An explanation based on social cognitive factors and help-seeking behavior]. LUCAS, Leuven.

Rychetnik, L., Frommer, M., Hawe, P. and Shiell, A. (2002) Criteria for evaluating evidence on public health interventions. Journal of Epidemiology & Community Health, 56, 119–127.

Rychetnik, L., Hawe, P., Waters, E., Barratt, A. and Frommer, M. (2004) A glossary for evidence based public health. Journal of Epidemiology & Community Health, 58, 538–545.

Shen, J., Yang, H., Cao, H. and Warfield, C. (2008) The fidelity-adaptation relationship in non-evidence-based programs and its implication for program evaluation. Evaluation, 14, 467–481.

Stricker, G. (2000) The relationship between efficacy and effectiveness. Prevention & Treatment, 3, Article 10.

Toroyan, T., Oakley, A., Laing, G., Roberts, L., Mugford, M. and Turner, J. (2004) The impact of day care on socially disadvantaged families: an example of the use of process evaluation within a randomized controlled trial. Child: Care, Health & Development, 30, 691–698.

Van Daele, T., Hermans, D., Van Audenhove, C. and Van den Bergh, O. (2012) Stress reduction through psychoeducation: a meta-analytic review. Health Education & Behavior, 39, 474–485.

van der Velde, J., Williamson, D. L. and Ogilvie, L. D. (2009) Participatory action research: practical strategies for actively engaging and maintaining participation in immigrant and refugee communities. Qualitative Health Research, 19, 1293–1302.

Vitale, M. R. and Romance, N. R. (2005) A Multi-phase Model for Scaling Up a Research-Validated Instructional Intervention: Implications for Leadership of Systemic Educational Reform. Paper presented at the annual meeting of the American Education Research Association, Montreal, Canada, April 2005.

Wallerstein, N. B. and Duran, B. (2006) Using community-based participatory research to address health disparities. Health Promotion Practice, 7, 312–323.

Wandersman, A., Snell-Johns, J., Lentz, B. E., Fetterman, D. M., Keener, D. C., Livet, M. et al. (2005) The principles of empowerment evaluation. In Fetterman, D. and Wandersman, A. (eds), Empowerment Evaluation: Principles in Practice. The Guilford Press, New York, pp. 27–41.

Weissberg, R. P. (1990) Fidelity and adaptation: combining the best of both perspectives. In Tolan, P., Keys, C., Chertok, F. and Jason, L. (eds), Researching Community Psychology: Issues of Theory and Methods. American Psychological Association, Washington, DC, pp. 186–189.

White, J. (2000) Treating Anxiety and Stress: A Group Psychoeducational Approach using Brief CBT. Wiley, Chichester.

WHO (1986) Ottawa Charter for Health Promotion. WHO, Geneva.

Zimmerman, M. A. (1995) Psychological empowerment: issues and illustrations. American Journal of Community Psychology, 23, 581–599.

Zimmerman, M. A. (2000) Empowerment theory: psychological, organizational, and community levels of analysis. In Rappaport, J. and Seidman, E. (eds), Handbook of Community Psychology. Plenum, New York.

Zimmerman, M. A. and Rappaport, J. (1988) Citizen participation, perceived control and psychological empowerment. American Journal of Community Psychology, 16, 725–750.