
Clinical Scholars: Using Program Evaluation to Inform Leadership Development

Written By

Gaurav Dave, Cheryl Noble, Caroline Chandler, Giselle Corbie-Smith and Claudia S.P. Fernandez

Reviewed: 18 May 2021 Published: 08 September 2021

DOI: 10.5772/intechopen.98451


Abstract

Leadership development programs are notoriously difficult to evaluate, and when evaluations are attempted, they often do not go beyond measuring low-level, short-term outcomes, such as the immediate impacts experienced by participants. Many leadership development programs do not systematically assess changes that are catalyzed within the organizations, communities, and systems in which participants lead. To address these challenges, evaluators of the Clinical Scholars National Leadership Institute (CSNLI) have designed a comprehensive, mixed-methods evaluation approach to determine the effectiveness of the training and explore the impacts of participants in the spheres in which they lead. Guided by Michael Patton’s Developmental Evaluation approach and framed by Kirkpatrick’s Training Evaluation Model, the CSNLI evaluation collects data on multiple levels to provide a robust picture of the program’s multiple outcomes. The approach focuses on individual participant outcomes by measuring competency changes over time and exploring how participants use the competencies gained through the training in their work. Social network analysis is utilized to measure the development and expansion of participants’ networks and collaboration within teams, within cohorts, and across sectors and disciplines throughout their time in the CSNLI. The Most Significant Change methodology and semi-structured alumni interviews are used to capture the impacts participants identify as resulting from their participation. Finally, Concept Mapping is implemented to explore how Fellows make meaning of the foundational concepts and values of the CSNLI. The outcome and impact evaluation activities employed by the CSNLI, in combination with quality improvement-focused process evaluation, support innovation and excellence in the provision of a health equity-grounded leadership development program.

Keywords

  • leadership
  • process evaluation
  • outcome evaluation
  • impact evaluation
  • mixed method evaluation

1. Introduction

As discussed in depth in “Chapter 1: Clinical Scholars: Effective Approaches to Leadership Development,” leadership training has been identified as an essential component in talent development in a wide range of sectors, including health care and public health. In the case of the Clinical Scholars National Leadership Institute (CSNLI), online at ClinicalScholarsNLI.org, more broadly known as Clinical Scholars (CS) or the Clinical Scholars Program, leadership training is considered a catalyst to address social determinants of health and mitigate disparities (see Chapter 2). The Clinical Scholars Program is funded by the Robert Wood Johnson Foundation (RWJF), as a component of their investment to build a Culture of Health in the United States – by equipping clinical leaders with the necessary skills, commitment, and mindset to tackle some of the most complex health issues of our time, to ensure good health is available to all [1].

A common challenge facing leadership training programs is how to meaningfully measure program outcomes. It is widely accepted that leadership development programs are difficult to evaluate beyond process-type measures. There are a number of reasons why this is so – difficulty finding suitable control groups, lack of evaluation funding, attribution errors, biases introduced by self-report methods, and others [2, 3, 4, 5]. In fact, a recent report found that only 24% of organizations utilize some form of impact measurement of their leadership programs, and that the most popular measurement tool is the satisfaction survey [6, 7]. However, there are ways to gather valuable outcome data for multi-level leadership development programs that can be used to provide a well-rounded picture of the longer term outcomes and possible impacts of the training efforts.

In the case of the Clinical Scholars Program, a comprehensive, multi-level evaluation plan is essential. Evaluation aims of a program of this scope include monitoring whether the program is being delivered effectively, determining whether the program’s intended outcomes are being reached, determining the cost-effectiveness of dollars spent on leadership training, and, ultimately (in the case of RWJF’s Culture of Health initiative), identifying whether good health is becoming more accessible to all people in the US [1].

As with any training program, when measuring leadership development programs that address complex leadership challenges such as health disparities, expanding quality health care services to marginalized populations, and building interdisciplinary teams to address social determinants of health, among others, evaluation plans must be robust and comprehensive, addressing three key types of evaluation:

  1. Process evaluation, to inform program staff of participants’ levels of satisfaction with the training and short-term learning;

  2. Outcome evaluation, to measure the stickiness of the program and changes in learning and engagement over the course of the training program; and

  3. Impact evaluation, as a marker of the translation of the program into population-level outcomes and changes that may occur after the training is complete.

This chapter uses the Clinical Scholars Program as a case study of how to design and implement an evaluation of a multi-level leadership development program. In this chapter, we will provide a brief description of the Clinical Scholars Program, describe theoretical underpinnings of our approach, outline the methods we have selected to measure goal achievement, and explore implications of our approach.

1.1 Program description

The Clinical Scholars Program is described in detail in Chapter 1. It is a three-year, multidimensional leadership development program created for mid-career clinicians who practice in a wide variety of disciplines (e.g., medicine, social work, veterinary medicine, and nursing, among others) and who address health across multiple levels of the social ecological model [8, 9]. Funded by the Robert Wood Johnson Foundation (RWJF), the Clinical Scholars Program is part of a broader Foundation-led initiative to build a Culture of Health in the United States to ensure that all people have access to good health [10]. Candidates apply for the program in interdisciplinary teams of 2-5 clinicians, and selected teams implement a “Wicked Problem Impact Project” (WPIP) throughout the three-year program. Each team’s WPIP is designed to provide an intervention that addresses a specific, complex health problem in their home community. Participants in the program are referred to as “Fellows”.

The Clinical Scholars Program curriculum focuses on four main goals:

  1. improving leadership skills,

  2. developing and strengthening interdisciplinary collaboration,

  3. strengthening engagement and partnership with community stakeholders, and

  4. expanding skills to make equity, diversity, and inclusion (EDI) actionable in their leadership projects and activities.

The curriculum addresses learning through multiple modalities, including biannual intensive onsite leadership retreats, robust distance learning activities, team and personal executive coaching, mentoring, and active utilization of leadership skills through the implementation of the Wicked Problem Impact Project. Through the Clinical Scholars Program, participants also develop a nationwide network of clinicians who are working to create a culture of health in communities across the United States.

As described in detail in Chapter 1, the pedagogical focus of the Clinical Scholars Program is to equip clinicians with boundary spanning leadership skills, which add to the discipline-specific skills obtained through formal clinical education programs. The Clinical Scholars Program has identified 25 evidence-based Leadership Competencies that guide curriculum development. The 25 competencies are organized into four practice domains: Personal, Interpersonal, Organizational, and Community and Systems (see Figure 1). Because the Clinical Scholars Program places high value on the development of Equity, Diversity, and Inclusion (EDI) as a foundational aspect of building a Culture of Health (Chapter 2), EDI competencies are interwoven throughout all four domains.

Figure 1.

The 25 Core competencies of the Clinical Scholars Program.

1.2 Evaluation theoretical approach

The Clinical Scholars Program recognizes that individual participants work in the context of their team environment, community environment, and training environment. These different contexts require us to define and measure multiple potential domains influenced by the Clinical Scholars Program that are based on the social ecological model [8, 9]. Throughout the Clinical Scholars Program curriculum, participants are provided with opportunities to build leadership capacity, apply knowledge, develop networks, and engage with communities with the goal of developing a new cadre of clinical leaders at the individual participant level, social/program level, and community/systems level to improve the culture of health in each participant’s home community.

We employ Kirkpatrick’s Four Level Training Evaluation model to evaluate the individual, social, and program-level impact of the Clinical Scholars Program [11, 12]. Kirkpatrick’s model includes four domains. For the purposes of the Clinical Scholars Program evaluation, we have defined each level as follows:

  • Level 1: Reaction – Clinical Scholars participants’ rating of their experience of all program components in regard to satisfaction, relevance, and utility.

  • Level 2: Learning – participants’ self-report of gains in knowledge, self-efficacy, skills, and attitudes as a result of participation in the Clinical Scholars Program.

  • Level 3: Behavior – tangible actions participants report taking as a result of the knowledge and skills obtained through participation in the Clinical Scholars Program and

  • Level 4: Results – the impacts experienced by participants in their individual leadership, organizations, and communities.

Our evaluation design mirrors the theory and conceptual framework of the Clinical Scholars Program to measure program-attributable change in key areas – competencies, community engagement, networks and other complementary assessments.

Michael Patton’s Developmental Evaluation provides the principles for how we approach the evaluation of the Clinical Scholars Program. This approach gives evaluators the role of a long-term partner with those who are delivering innovative initiatives, where evaluative questions are designed to “provide feedback and support developmental decision-making and course corrections along the emergent path” [13]. Such partnerships between evaluators and program staff support real-time learning in complex and emergent situations and are useful for programs such as Clinical Scholars, which have high levels of innovation, fast-paced decision making, and areas of uncertainty. Developmental Evaluation is more flexible and adaptive than traditional evaluation; as such, the Clinical Scholars Program’s evaluation plans regularly evolve to support emerging program outcomes.

Each of the theories briefly described above guides and provides a foundation for the methods employed to evaluate the multiple components of the Clinical Scholars Program. Our evaluation logic model (Appendix A) outlines the various short-, intermediate-, and long-term outcomes addressed through our approach.

1.3 Evaluation methods

Data collection is dispersed throughout the three-year Clinical Scholars curriculum, with particular consideration given to the spacing of data collection, in order to avoid overwhelming participants with evaluation asks. Our main evaluation activities include:

  • Process Evaluation

  • Competency Assessment

  • Social Network Analysis (SNA)

  • Community Stakeholder Assessment

  • Concept Mapping

  • Most Significant Change (MSC)

  • Alumni Evaluation

In order to ensure we address all levels of training evaluation, we have mapped each of our evaluation activities onto the Kirkpatrick Four-Level Training Evaluation Model (Figure 2). A large portion of our evaluation effort is directed at measuring changes at Levels 3 (Behavior) and 4 (Results), in order to provide a deeper understanding of how the Clinical Scholars Program is affecting participants’ leadership growth and impact. The following sections describe each of our evaluation activities in detail.

Figure 2.

The evaluation of the Clinical Scholars Program using Kirkpatrick’s Training Evaluation Model.


2. Process evaluation

As with any training program, process evaluation is an essential piece in helping programs assess the success of an intervention. Process evaluation enhances the likelihood of success by providing indications of what happened during the program and whether those activities were successful for various stakeholders. Process evaluation assists in informing program improvement, increasing participant satisfaction, and understanding the human capital involved in implementing the multiple components of a training program [14]. The two main process areas we focus on in the Clinical Scholars Program are onsite trainings and exit interviews.

2.1 Onsite trainings

Participants attend seven onsite retreats throughout the Clinical Scholars Program. During each retreat, participants are required to attend all learning sessions (range: 9-20 sessions per retreat) designed to teach and engage participants with leadership skills and equity topics (see Chapter 1 for a detailed description of the onsite curriculum). Immediately following each session, participants are asked to complete an 11-item survey that includes two open-ended questions for additional feedback about the session and the retreat overall. Items on overall session satisfaction, relevance, presenter delivery, presenter knowledge, usefulness of information, and knowledge and ability before and after each session are asked on 7-point Likert scales. We calculate means for all quantitative items and draw common themes from the open-ended feedback for each session. Summary reports for each session support rapid-cycle learning by informing retreat debriefing sessions and discussions about aspects of sessions that can be improved for future cohorts.

Two of the items included on the session evaluation survey address learning: knowledge and ability. Knowledge and ability questions are developed using the session-specific learning objectives provided by presenters. A retrospective pre/post design is utilized for the knowledge and ability questions. This approach is widely used in training programs, and the literature suggests it is often more reliable than the standard pretest-posttest approach and can help decrease response shift bias [15, 16, 17, 18, 19, 20, 21, 22]. Knowledge and ability items are analyzed using a paired-sample t-test to measure whether the difference between knowledge and ability before and after each session is statistically significant. Further analyses are conducted to identify trends that surface over time, such as the impact of topic relevance or speaker ability on changes in knowledge or ability.
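To make the retrospective pre/post analysis concrete, the sketch below runs a paired-sample t-test on hypothetical knowledge ratings from a single session; the column names and data are illustrative only and do not reflect the actual survey export or analysis scripts.

```python
# A minimal sketch of the retrospective pre/post analysis for one session's
# knowledge item. The column names and ratings below are hypothetical.
import pandas as pd
from scipy import stats

# Hypothetical 7-point Likert ratings, collected after the session:
# "before" = retrospective self-rating, "after" = current self-rating.
responses = pd.DataFrame({
    "knowledge_before": [3, 4, 2, 5, 3, 4, 3, 2],
    "knowledge_after":  [5, 6, 4, 6, 5, 6, 5, 4],
})

# Paired-sample t-test: is the within-person change in self-rated knowledge
# statistically significant?
t_stat, p_value = stats.ttest_rel(
    responses["knowledge_after"], responses["knowledge_before"]
)
mean_change = (responses["knowledge_after"] - responses["knowledge_before"]).mean()
print(f"Mean change: {mean_change:.2f} points, t = {t_stat:.2f}, p = {p_value:.4f}")
```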

2.2 Exit interviews

Exit interviews are utilized to provide an opportunity for participants to reflect on their experience in the Clinical Scholars Program. Six months after completion of the Clinical Scholars Program, interviews are scheduled with the most recently graduated participants. The exit interviews were designed using Moustakas’ Phenomenological Research Approach [23, 24]. This approach was chosen for its ability to help the evaluation team understand the phenomenon of participants’ shared experiences throughout the Clinical Scholars Program. A detailed protocol was developed by the Evaluation team to guide data collection through semi-structured interviews, data analysis using a grounded theory approach, and dissemination [25].

The semi-structured interviews focus on participants’ experiences and reflection in four main areas:

  1. Individual leadership changes

  2. Organizational leadership changes

  3. Community leadership changes

  4. Experience since graduating from Clinical Scholars

Themes, examples, and other data obtained through the exit interview process will guide discussions about curriculum needs that may surface, the skills and learning participants continue to use, and how best to connect with program alumni.


3. Outcome evaluation

As discussed above, attribution error is a common concern when evaluating leadership development programs, because the training does not exist in isolation and there is a high likelihood there are additional factors outside of Clinical Scholars impacting participants’ actions and behaviors, as well as observed outcomes [11]. We address such challenges by utilizing a multi-level and mixed-method approach, which provides a well-rounded picture of how participants report utilizing the skills learned through the training, what changes participants attribute to the training, and how community partners and stakeholders experience participants’ leadership. For the purposes of the Clinical Scholars Program, our outcome evaluation assesses changes and trends that occur during the three-year training program.

3.1 Competency assessment

The competency assessment is designed to measure changes in individual participants’ knowledge, attitude, self-efficacy, and use of the 25 Clinical Scholars competencies (see Figure 1) over the course of the 3-year program as participants work to implement solutions to wicked problems in their communities. In this survey, we address four measures: knowledge, attitude, self-efficacy, and use [26, 27, 28, 29, 30, 31, 32]. Each measure is adapted from previously validated items and is measured on a 7-point Likert-type scale.

This survey is administered at four time points to model trends in change over time: 0 months, 6 months, 18 months, and 36 months. At the 0- and 6-month time points, items are asked using a retrospective pre/post-test approach, in which participants are directed to provide a rating for each item both for 6 months ago and for the current day. At the remaining time points, 18 and 36 months, participants provide only current-day ratings for each measure. This data collection timeline yields data for six time points, including 6 months prior to the start of the program, allowing us to compare differences in the magnitude of change before and after the start of the program. After each data collection, reports on the self-efficacy scores for each participant are provided to individual executive coaches to discuss during coaching meetings. All other data are shared only in the aggregate for each cohort. Data are analyzed with a paired-sample t-test measuring the differences in scores at each time point. Trends in change across time are measured using a generalized linear model.
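As a concrete illustration, the sketch below pairs a t-test between two time points with a simple generalized linear model of score on time; the long-format layout, column names, and ratings are hypothetical, and the production analysis may differ (for example, by modeling repeated measures explicitly).

```python
# A minimal sketch of the competency-assessment analysis; data are invented.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical self-efficacy ratings (7-point scale) for three participants
# at three of the modeled time points (months; -6 is the retrospective rating).
data = pd.DataFrame({
    "participant_id": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "months":         [-6, 0, 18, -6, 0, 18, -6, 0, 18],
    "self_efficacy":  [3, 4, 6, 2, 3, 5, 4, 4, 6],
})

# Paired-sample t-test between two adjacent time points (-6 vs. 0 months).
wide = data.pivot(index="participant_id", columns="months", values="self_efficacy")
t_stat, p_value = stats.ttest_rel(wide[0], wide[-6])
print(f"-6 to 0 months: t = {t_stat:.2f}, p = {p_value:.3f}")

# Generalized linear model of score on time to summarize the overall trend
# (a mixed-effects or GEE model could additionally account for repeated measures).
trend = smf.glm("self_efficacy ~ months", data=data).fit()
print(trend.params)
```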

This longitudinal approach allows us to observe short- and intermediate-term outcomes in competency skill, usage, and self-efficacy - ultimately providing insight into the “stickiness,” or sustainability, of the leadership learning and growth experienced by participants during their time in the Clinical Scholars Program.

3.2 Online leadership logs (OLLs)

In order to better understand the behavioral changes to which the Clinical Scholars Program contributes, participants’ submissions to Online Leadership Logs (OLLs) are analyzed. Throughout their three years in the Clinical Scholars Program, participants are asked to describe how they have used each of the 25 Core Competencies in their leadership. In this web-based skills inventory, participants self-assess their competence in the Core Competencies of the Clinical Scholars Program using a method known as the Behavioral Event Interview (BEI), which provides a means for gathering specific examples of behavior [33]. As part of their personal reflection and development, participants are asked to provide examples of how they are using each of the 25 Core Competencies by submitting “STAR” statements, in which they are prompted to describe the Situation, Task, Action, and Results for the event in which they used each specific competency. The OLL gives Clinical Scholars participants practical experience in developing behavioral statements related to performance of the top 25 competencies taught throughout the Clinical Scholars Program curriculum.

As each cohort completes their three-year program, OLLs will be obtained from program staff. Data will be analyzed using a grounded theory approach to identify themes related to how the Core Competencies are operationalized in the real-world settings in which participants work and lead [25].

3.3 Social network analysis

The networks that participants build within their cohort, across cohorts, and within their home community are an essential component of the Clinical Scholars experience. Collaboration across disciplines and sectors builds social capital in the form of shared knowledge and experience. Social Network Analysis (SNA) is a set of tools, developed over the past century, for measuring network density and the centrality of individual actors in order to understand how information (or, in other settings, behaviors or diseases) spreads between individuals [34]. We measure the network that each cohort is building at three time points: 0 months, 24 months, and 48 months. By measuring at these three time points, we can track how networks grow and deepen throughout the program. The complexity of the SNA utilized for the Clinical Scholars Program requires specific skills and expertise; as such, we have contracted with an outside agency to conduct the SNA [35].

Our SNA is unbounded, asking about collaboration within each participant’s team, within the Clinical Scholars cohort, and across the Clinical Scholars Program cohorts, as well as about team satisfaction and social capital in each participant’s broader community. Networks are measured by the frequency of interaction, the types of activities that participants engage in with identified connections, the number of collaborative activities, the strength of relationships, and satisfaction with team members. Social capital is measured by asking participants to indicate whether they have helpful contacts with individuals in 14 occupations and, if so, the closeness and length of each relationship. Changes in the strength and density of relationships within and across the Clinical Scholars Program cohorts are indicative of network growth.
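The sketch below illustrates, with the networkx library, the kind of density and centrality metrics such an analysis produces; the edge lists are invented, and the actual SNA is conducted by an external partner using richer, survey-based tie data.

```python
# A minimal sketch of cohort network metrics at two time points; ties are invented.
import networkx as nx

ties_baseline = [("A", "B"), ("A", "C")]
ties_24_months = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]

for label, ties in [("0 months", ties_baseline), ("24 months", ties_24_months)]:
    g = nx.Graph(ties)
    density = nx.density(g)               # proportion of possible ties that exist
    centrality = nx.degree_centrality(g)  # relative connectedness of each person
    most_central = max(centrality, key=centrality.get)
    print(f"{label}: density = {density:.2f}, most central = {most_central}")
```

Rising density and shifts in who is most central between time points are one simple way to summarize how a cohort’s network grows and deepens.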

3.4 Community stakeholder assessment

An essential factor in building an equitable Culture of Health in a community is the ability to build relationships with community stakeholders [36]. The Community Stakeholder Assessment is intended to explore community engagement between participants and key stakeholders during the Clinical Scholars Program. The Community Stakeholder Assessment seeks to answer six primary questions:

  1. To what extent are participants engaging with stakeholders as they implement their Wicked Problem Impact Projects (WPIP)?

  2. Is trust being established between participants and stakeholders?

  3. To what extent is a sense of collaboration being developed between participants and stakeholders?

  4. Are the relationships between participants and stakeholders based on principles of community engagement?

  5. Are participants engaging with stakeholders whose cultures differ from their own?

  6. To what extent do stakeholders agree/disagree with participants’ assessment of engagement?

Participants in the Clinical Scholars Program are asked to provide names and contact information for their community stakeholders 12 months and 36 months into the program. Community stakeholders are defined as:

  • Anybody outside the Clinical Scholars participant team who is actively contributing to, has contributed, or is integral to their team’s WPIP

  • A community partner who has been involved in the WPIP above and beyond a phone conversation or meeting, serving in an advisory role or meaningfully contributing to a tangible component of the planning, development, implementation, dissemination, or marketing of the WPIP (e.g., a school council member)

  • Someone who the Clinical Scholars participant team has partnered with explicitly for the purposes of the WPIP.

A survey is sent to each identified community partner or stakeholder at 12 and 37 months. The instrument asks respondents to use a Likert-type scale to respond to items in the following domains, each of which is derived from validated evaluation measures [37, 38].

  1. Collaboration (how collaboration impacts the program and community)

  2. Resources (how resources impact the program and community)

  3. Bridging (how members of the community and the members of the Clinical Scholars Project Team interact)

  4. Alignment with Community Engagement Principles (learning how the WPIP aligns with principles for community-engagement)

  5. Trust (trust between team members and the community stakeholders)

  6. Health Outcomes (how the WPIP is improving health outcomes in the community)

  7. Demographics*

  8. General Feedback*

Measures of community engagement are analyzed to look for trends in the community engagement domains identified above from 12 months to 36 months. Given the evolving nature of WPIPs and the often high turnover at small community-based organizations, we cannot expect to survey the same individual community partners at each time point. Rather, by surveying more than one partner from each team and examining trends, we aim to observe any differences in how community engagement principles play out in these relationships.
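As a simple illustration of this trend comparison, the sketch below computes mean domain ratings by survey wave; the domain columns follow the list above, but the scores, layout, and wave labels are illustrative only.

```python
# A minimal sketch of aggregate trends across stakeholder survey waves; data are invented.
import pandas as pd

surveys = pd.DataFrame({
    "wave":          ["12 months"] * 3 + ["36 months"] * 3,
    "collaboration": [4, 5, 5, 6, 6, 5],
    "bridging":      [3, 4, 4, 5, 6, 5],
    "trust":         [4, 4, 5, 6, 5, 6],
})

# Mean Likert rating per domain at each wave. Because different stakeholders
# may respond at each wave, this compares aggregate trends rather than
# within-person change.
print(surveys.groupby("wave").mean(numeric_only=True))
```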

3.5 Most significant change

In order to gather data regarding changes experienced and observed by participants as a result of their participation in Clinical Scholars, we utilize the Most Significant Change (MSC) methodology [39, 40]. Additionally, data gathered through the MSC process will serve to illustrate the operationalization of the concepts participants identified in the Concept Mapping project (see below).

MSC is a form of participatory monitoring and evaluation. In short, the approach collects change stories directly from participants and then goes through a systematic process to select and present the stories that indicate the greatest impact [39]. This approach is best utilized in programs that have diverse and emerging outcomes, that contain programmatic elements focused on social change, and that do not have predefined outcomes against which to evaluate [39]. Because of the Clinical Scholars Program’s unique, multi-level approach to leadership development aimed at cultural shifts in how health is approached, MSC is an appropriate complement to our additional evaluative efforts.

MSC includes multiple steps to gather and analyze results. The Clinical Scholars Program evaluation utilizes the following components:

  • Obtaining Most Significant Change stories from participants in Clinical Scholars. As part of their final program report, Clinical Scholars participants are asked to submit at least one story responding to this prompt: “Please describe in one or two paragraphs the most significant change that has resulted from your involvement with the Clinical Scholars Program. Think about this like telling a story. Please describe the situation, task, actions, results, or other details you can that are related to the change.”

    Participants receive detailed instructions on how to answer the question, and are asked to select whether the impact occurred on an individual leadership, organizational, or community level.

  • Selection of MSC stories. The evaluation team recruits a selection committee of Clinical Scholars Program staff, stakeholders, and participants to engage in a systematic process of selecting the stories that represent the most significant changes at each level (individual, organizational, or community).

  • Analysis. Multi-level analysis is conducted with the data obtained through the stories. Qualitative analysis uses ATLAS.ti to code and identify themes utilizing a grounded theory approach [41]. In addition, secondary analyses are conducted on the data to identify additional insights (e.g., differences between disciplines, demographic groups, or cohorts).
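As an illustration of the kind of secondary tabulation described above, the sketch below cross-tabulates coded stories by level and discipline; the records and field names are invented, and the primary qualitative coding is done in ATLAS.ti rather than in code.

```python
# A minimal sketch of secondary analysis on coded MSC stories; data are invented.
import pandas as pd

stories = pd.DataFrame({
    "level":      ["individual", "organizational", "community", "community"],
    "discipline": ["nursing", "medicine", "social work", "medicine"],
    "theme":      ["confidence", "culture change", "new partnerships", "policy change"],
})

# Cross-tabulate the level of reported change by discipline to surface
# patterns across subgroups (cohort or demographics could be added similarly).
print(pd.crosstab(stories["level"], stories["discipline"]))
```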


4. Impact evaluation

RWJF’s mission to build a Culture of Health in the United States includes many lofty and important goals to address health equity and ensure good health is available to all [36]. Such goals, which aim to create impact in culture and institutions, are difficult to evaluate and largely beyond the scope of the Clinical Scholars Program evaluation plan. In addition, as with measuring outcomes, attribution is a large concern [2, 3, 4, 5]. To address such challenges, we implement two evaluation activities – Concept Mapping and an Alumni Survey – to obtain data about how the Clinical Scholars Program trainings may be contributing to eventually achieving a Culture of Health, and to track the activities and changes participants continue to report across multiple levels of the Social Ecological Model [8, 9].

4.1 Concept mapping

In order to determine participants’ conceptual understanding of what it means to build a Culture of Health, we conduct a Concept Mapping project during the third training year of each cohort. Concept Mapping is “a structured process, focused on a topic or construct of interest, involving input from one or more participants, that produces an interpretable pictorial view (concept map) of their ideas and concepts and how these are interrelated [42].” For the purposes of the Clinical Scholars Program, the results from the concept mapping process illustrate participants’ perceptions of various concepts represented in the Culture of Health (COH) Action Framework [41]. Understanding participants’ perceptions of building health can give us insight into what they value and are committed to in their work. Because building a Culture of Health is ultimately about creating cultural shifts in how people perceive health, values and commitment are important factors to explore [36].

During each cohort’s third year, we initiate the concept mapping project, which consists of six specific steps. More detailed descriptions of the Concept Mapping methodology are widely available; what follows is the Clinical Scholars Program’s specific implementation of this methodology:

  1. Preparation – Evaluation staff and partners determine the main aims of the project and develop a protocol and timeline.

  2. Generation - Participants are asked to complete an online brainstorming activity where they provide as many statements as come to mind to complete a specific focal prompt. Each cohort focuses on a different topic. For example, Cohort 1 focused on components that are essential to building a Culture of Health. Cohort 2 focused on a specific topic within the COH Action Area (“Making Health a Shared Value”) [43].

  3. Structuring – Participants are asked to complete two additional online activities: sorting and rating. For the sorting activity, they are asked to sort the statements into groupings that make sense to them, based on similar meanings or themes, and then to name each of the groups. For the rating activity, participants rate each statement on each of three Likert-type scales developed based on each year’s topic.

  4. Representation – Multi-level analysis is then conducted with the sorting and rating activity data using statistical software [44]. These analyses generate a cluster map to visually communicate the relationships between the statements provided (a sketch of this type of analysis appears after this list).

  5. Interpretation – At an onsite retreat during each cohort’s third year, an in-person group discussion develops an understanding of the meaning of the cluster maps. During this discussion, the group comes to consensus on names for each of the clusters on the map and provides insight into how these clusters relate to and impact various aspects of participants’ experience in building a Culture of Health.

  6. Utilization – The findings from this project will contribute to increasing the understanding of how participants’ perceptions relate to the overall conceptualization of building a Culture of Health. Findings will not only be used to identify possible perceptual outcomes of the Clinical Scholars curriculum, but will also inform the Clinical Scholars Program of any potential program improvements needed to better align with the COH Action Framework.
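To make the Representation step more concrete, the sketch below shows the core computation that concept-mapping software typically performs: building a co-sort similarity matrix, projecting it to two dimensions with multidimensional scaling, and clustering the resulting points. The statements and sort data are invented, and the program itself uses a dedicated platform [44] rather than this code.

```python
# A minimal sketch of the concept-mapping computation; statements and sorts are invented.
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster

statements = ["access to care", "shared values", "community trust", "safe housing"]

# Each participant's sorting: statement index -> the pile label they assigned.
sorts = [
    {0: "services", 1: "values", 2: "values", 3: "services"},
    {0: "systems", 1: "mindset", 2: "mindset", 3: "systems"},
]

# Similarity = number of participants who placed two statements in the same pile.
n = len(statements)
similarity = np.zeros((n, n))
for sort in sorts:
    for i in range(n):
        for j in range(n):
            if sort[i] == sort[j]:
                similarity[i, j] += 1

# Convert to a distance matrix, project to 2-D with multidimensional scaling,
# then group the points with hierarchical (Ward) clustering.
distance = len(sorts) - similarity
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(distance)
clusters = fcluster(linkage(coords, method="ward"), t=2, criterion="maxclust")
print(dict(zip(statements, clusters)))
```

In the actual process, the resulting cluster map is brought back to participants, who name the clusters and interpret their meaning (the Interpretation step above).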

4.2 Alumni evaluation

Our Alumni Evaluation is evolving over time and is being developed in partnership with other programs funded by RWJF as part of its Leadership for Better Health initiative [45]. At the time of writing, the Clinical Scholars Program had just graduated its first cohort, and we are in the data-gathering stages of the internal and Initiative-wide alumni evaluation. In addition, we plan to work with RWJF partners to develop methods to obtain and analyze long-term impacts related to the expansion of a Culture of Health and improved health outcomes. Specific to the Clinical Scholars Program, we plan to continue monitoring the following topics in order to understand participants’ future activities and influence:

  • Individual leadership practice and growth;

  • Network development and expansion;

  • Leadership within organizations; and

  • Leadership within home communities.


5. Conclusion

It is our ultimate aim that the results of each of the evaluation approaches listed in this chapter be used for two purposes: 1) to inform and improve the implementation of the Clinical Scholars Program and 2) to better understand how leadership development training can best equip health practitioners to build a Culture of Health – to expand health equity and ensure that good health is available to all. Results are disseminated to all stakeholders, including Clinical Scholars program staff, RWJF staff and the participants of Clinical Scholars themselves. It is our hope that our findings will also be of benefit to the broader field of leader and leadership development and, as such, we will focus on disseminating these results via a variety of methods, including academic publication, professional conferences, media outlets, and widely available web-based platforms, among others.

In addition, it is our goal to learn as the program and evaluation unfolds. Owing to the novel nature of RWJF’s mission to build a Culture of Health, the Clinical Scholars Program’s innovative approach to combining leadership and equity, diversity, and inclusion training, and our developmental evaluation approach, we anticipate changes and modifications to occur. Part of our internal evaluation is to document the rationale behind such changes and continue to refine our approach in order to ensure the best, and ultimately, most useful data is available to assist the Clinical Scholars Program to reach its goal of equipping clinicians to influence their communities in order to build a Culture of Health.

References

  1. Robert Wood Johnson Foundation. About a Culture of Health [Internet]. 2020. Available from: https://www.rwjf.org/en/cultureofhealth/about.html
  2. Dvir T, Eden D, Avolio BJ, Shamir B. Impact of transformational leadership on follower development and performance: A field experiment. Acad Manag J. 2002;45(4):735-744. doi:10.2307/3069307
  3. Malling B, Mortensen L, Bonderup T, Scherpbier A, Ringsted C. Combining a leadership course and multi-source feedback has no effect on leadership skills of leaders in postgraduate medical education. An intervention study with a control group. BMC Med Educ. 2009;9(1):1-7. doi:10.1186/1472-6920-9-72
  4. Frese M, Beimel S, Schoenborn S. Action Training for Charismatic Leadership: Two Evaluations of Studies of a Commercial Training Module on Inspirational Communication of a Vision. Pers Psychol. 2006;56(3):671-698. doi:10.1111/j.1744-6570.2003.tb00754.x
  5. Packard T, Jones L. An outcomes evaluation of a leadership development initiative. J Manag Dev. 2015;34(2):153-168. doi:10.1108/JMD-05-2013-0063
  6. Measuring The Impact Of Leadership Development: Getting Back To Basics. Harvard Business Publishing. https://www.harvardbusiness.org/measuring-the-impact-of-leadership-development-getting-back-to-basics/. Accessed March 8, 2020.
  7. The State of Leadership Development Report. Harvard Business Publishing. https://www.harvardbusiness.org/insight/the-state-of-leadership-development-report/. Accessed March 8, 2020.
  8. Social Ecological Model. In: Encyclopedia of Behavioral Medicine; 2013. doi:10.1007/978-1-4419-1005-9_101634
  9. McLeroy KR, Bibeau D, Steckler A, Glanz K. An Ecological Perspective on Health Promotion Programs. Health Educ Behav. 1988. doi:10.1177/109019818801500401
  10. Robert Wood Johnson Foundation. What Drives Health. http://www.commissiononhealth.org/WhatDrivesHealth.aspx. Published 2016. Accessed September 24, 2018.
  11. Kirkpatrick DL, Kirkpatrick JD. Evaluating Training Programs. 3rd ed. San Francisco: Berrett-Koehler Publishers; 2006. https://www.bkconnection.com/static/Evaluating_Training_Programs_EXCERPT.pdf
  12. Kirkpatrick DL, Kirkpatrick JD. Implementing the Four Levels: A Practical Guide for Effective Evaluation of Training Programs. San Francisco: Berrett-Koehler Publishers; 2007. https://www.bkconnection.com/static/Implementing-the-Four-Levels-EXCERPT.pdf
  13. Patton MQ. Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use; 2011.
  14. Curnan S, LaCava L, Sharpsteen D, Lelle M, Reece M. W.K. Kellogg Foundation evaluation handbook. Evaluation. 1998. doi:10.1007/s13398-014-0173-7.2
  15. Fernandez CSP, Noble CC, Jensen E, Steffen D. Moving the Needle: A Retrospective Pre- and Post-analysis of Improving Perceived Abilities Across 20 Leadership Skills. Matern Child Health J. 2014. doi:10.1007/s10995-014-1573-1
  16. Lam TCM, Bengo P. A comparison of three retrospective self-reporting methods of measuring change in instructional practice. Am J Eval. 2003. doi:10.1016/S1098-2140(02)00273-4
  17. Pratt CC, McGuigan WM, Katzev AR. Measuring program outcomes: Using retrospective pretest methodology. Am J Eval. 2000. doi:10.1177/109821400002100305
  18. Rockwell SK, Kohn H. Post-Then-Pre Evaluation. J Ext. 1989.
  19. Rohs FR. Response Shift Bias: A Problem In Evaluating Leadership Development With Self-Report Pretest-Posttest Measures. J Agric Educ. 1999. doi:10.5032/jae.1999.04028
  20. Saleh SS, Williams D, Balougan M. Evaluating the effectiveness of public health leadership training: The NEPHLI experience. Am J Public Health. 2004. doi:10.2105/AJPH.94.7.1245
  21. Sprangers M, Hoogstraten J. Pretesting Effects in Retrospective Pretest-Posttest Designs. J Appl Psychol. 1989. doi:10.1037/0021-9010.74.2.265
  22. Fernandez CSP, Noble CC, Jensen ET, Chapin J. Improving Leadership Skills in Physicians: A 6-Month Retrospective Study. J Leadersh Stud. 2016. doi:10.1002/jls.21420
  23. Moustakas C. Phenomenological Research Methods; 2011. doi:10.4135/9781412995658
  24. Creswell JW. Qualitative Inquiry and Research Design; 2013.
  25. Strauss A, Corbin JM. Grounded Theory in Practice. SAGE Publications; 1997.
  26. Schnall R, Stone P, Currie L, Desjardins K, John RM, Bakken S. Development of a self-report instrument to measure patient safety attitudes, skills, and knowledge. J Nurs Scholarsh. 2008. doi:10.1111/j.1547-5069.2008.00256.x
  27. University of Oregon. Discussion of 7-point Expertise Scale [Internet]. Available from: http://pages.uoregon.edu/moursund/ICT-planning/discussion_of_7-point.htm
  28. Vagias W. Likert-type scale response anchors. Clemson Int Inst Tour …. 2006. doi:10.1525/auk.2008.125.1.225
  29. Bandura A. Guide for constructing self-efficacy scales. Self-efficacy beliefs Adolesc. 2006. doi:10.1017/CBO9781107415324.004
  30. Mencl J, Tay L, Schwoerer CE, Drasgow F. Evaluating Quantitative and Qualitative Types of Change: An Analysis of the Malleability of General and Specific Self-Efficacy Constructs and Measures. J Leadersh Organ Stud. 2012. doi:10.1177/1548051812442968
  31. Schwoerer CE, May DR, Hollensbe EC, Mencl J. General and specific self-efficacy in the context of a training intervention to enhance performance expectancy. Hum Resour Dev Q. 2005. doi:10.1002/hrdq.1126
  32. Russo D. Competency Measurement Model. In: European Conference on Quality in Official Statistics; 2016.
  33. McClelland DC. Identifying Behavioral-Event Interviews. Psychol Sci. 1998;9(5):331-339.
  34. Valente TW, Palinkas LA, Czaja S, Chu KH, Hendricks Brown C. Social network analysis for program implementation. PLoS One. 2015;10(6):1-18. doi:10.1371/journal.pone.0131712
  35. Duke Network Analysis Center. About: The Duke Network Analysis Center [Internet]. Available from: https://dnac.ssri.duke.edu/about.php
  36. Robert Wood Johnson Foundation. Taking Action [Internet]. 2020. Available from: https://www.rwjf.org/en/cultureofhealth/taking-action.html
  37. Oetzel JG, Zhou C, Duran B, et al. Establishing the psychometric properties of constructs in a community-based participatory research conceptual model. Am J Health Promot. 2015. doi:10.4278/ajhp.130731-QUAN-398
  38. Arora PG, Krumholz LS, Guerra T, Leff SS. Measuring community-based participatory research partnerships: The initial development of an assessment instrument. Prog Community Health Partnerships Res Educ Action. 2015. doi:10.1353/cpr.2015.0077
  39. Davies R, Dart J. The ‘Most Significant Change’ (MSC) Technique: A Guide to Its Use; 2005. doi:10.1104/pp.110.159269
  40. Choy S, Lidstone J. Evaluating leadership development using the Most Significant Change technique. Stud Educ Eval. 2013. doi:10.1016/j.stueduc.2013.09.001
  41. Strauss A, Corbin J. Basics of Qualitative Research: Grounded Theory Procedures and Techniques (2nd Ed); 1998. doi:10.4135/9781452230153
  42. Trochim WMK. An introduction to concept mapping for planning and evaluation. Eval Program Plann. 1989. doi:10.1016/0149-7189(89)90016-5
  43. Robert Wood Johnson Foundation. Making Health a Shared Value [Internet]. 2020. Available from: https://www.rwjf.org/en/cultureofhealth/taking-action/making-health-a-shared-value.html
  44. The Concept System® groupwisdom™ (Build 2019.24.01) [Web-based statistical platform]. Concept Systems, Incorporated; 2019. Available from: https://www.groupwisdom.tech
  45. Robert Wood Johnson Foundation. Leadership for Better Health [Internet]. 2020. Available from: https://www.rwjf.org/en/our-focus-areas/focus-areas/health-leadership.html

Notes

  • * These domains (Demographics and General Feedback) include items with open-ended or measure-specific response options.
