Open access peer-reviewed chapter

Assessment of Clinical Nursing Competencies: Literature Review

Written By

Nataša Mlinar Reljić, Mateja Lorber, Dominika Vrbnjak, Brian Sharvin and Maja Strauss

Submitted: 14 September 2016 Reviewed: 27 December 2016 Published: 17 May 2017

DOI: 10.5772/67362

From the Edited Volume

Teaching and Learning in Nursing

Edited by Majda Pajnkihar, Dominika Vrbnjak and Gregor Stiglic


Abstract

Introduction: In Slovene nursing higher education, there is a lack of empirical evidence to support the choice of tools for assessing clinical skills and competencies. This literature review aims to critically discuss the identified methods of clinical nursing skills and competencies assessment currently used in nursing higher education in other countries.

Keywords

  • assessment
  • clinical skill
  • clinical competence
  • nursing competencies

1. Introduction

Nursing students have to develop clinical knowledge, skills, and attitudes for professional practice, and nursing educators have to assess and evaluate students’ core skills readiness for clinical practice [1], and the assessment should be a real indicator of knowledge [2]. Assessment in clinical practice can either be formative or summative [3], with the formative often used to discuss and analyze students’ performance [4] and the summative examining practical performance in the clinical or simulation environment [5]. Both methods should ensure that the criteria for assessment reference the intended learning outcomes [6].

Three approaches to assessing nursing students' competencies were identified in the literature: observation methods [1, 7], self-perception methods [8], and methods combining both approaches [9]. Of these, observation of student performance and the use of skills checklists appear to be the most common [10, 11]. This can be done either by direct observation in the clinical environment [7, 12] or by observing the student in the clinical skills laboratory (CSL) using scenarios and clinical skills checklists to measure performance [1]. Other multimethod approaches are used, including clinical portfolio evaluation [13], along with critical incident reports, case-based assessment, peer assessment [9], and reflection [14]. Reflection is important because nurses need to think critically, and reflection develops responsibility in clinical practice [15].

The last decade has seen the emergence of new measurement tools being developed and tested for validity and reliability [16]. These include the objective structured clinical examination (OSCE) [1], which has numerous advantages over other observation tools [17], such as the development of students' self-confidence [18], the promotion of deeper, more meaningful learning [19], and the assessment not only of psychomotor skills but also of knowledge and attitudes [20]. The OSCE, however, is not the only assessment tool used in nursing education: there are a number of different scales for assessing students' competencies, with varying psychometric properties [21–23]. This literature review, therefore, set out to identify and critically analyze current methods of assessing clinical nursing skills and competencies used in nursing higher education in other countries, with a view to developing a comprehensive and effective method for assessing clinical competency in Slovene nursing higher education.

1.1. Aim

The aim of this literature review is to identify methods for assessing clinical nursing skills and competencies currently used in nursing higher education in other countries.


2. Methods

2.1. Eligibility criteria

Studies were included if they met the following inclusion criteria: empirical research primarily focused on methods of assessing clinical nursing skills and competencies and on their reliability and validity; full-text articles published in peer-reviewed journals, written in English, and published between 2010 and 2016. Exclusion criteria were systematic review articles, assessment of clinical nursing skills in vocational training, assessment of special clinical nursing skills, editorial and commentary pieces, and all other literature not meeting the inclusion criteria.
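
To make the screening rule concrete, the criteria above can be expressed as a simple predicate. The sketch below is illustrative only: all record fields are hypothetical, and the authors screened records manually rather than programmatically.

```python
# Illustrative sketch of the eligibility criteria as a screening predicate.
# Every field name here is hypothetical; the review's screening was manual.

def is_eligible(record: dict) -> bool:
    """Return True if a bibliographic record meets the inclusion criteria."""
    included = (
        record.get("empirical", False)                # empirical research on skills/competencies assessment
        and record.get("peer_reviewed", False)        # published in a peer-reviewed journal
        and record.get("full_text_available", False)  # full text available
        and record.get("language") == "en"            # written in English
        and 2010 <= record.get("year", 0) <= 2016     # publication window
    )
    excluded = (
        record.get("systematic_review", False)        # systematic review articles
        or record.get("vocational_training", False)   # vocational training context
        or record.get("special_skills_only", False)   # special clinical nursing skills
        or record.get("editorial_or_commentary", False)
    )
    return included and not excluded
```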

2.2. Search strategy and study identification

Three electronic databases were searched for relevant literature: Medline, CINAHL, and PubMed. Keyword combinations used included competency, competence, clinical competency, clinical competencies, clinical skill, clinical competence, professional competence, competency based education, assessment, measuring, measurement, test, scale, standards, validity, reliability, generalizability, and nursing student. Literature published within the last five years was searched because of the contemporary interest in the assessment of clinical skills and competencies in nursing.
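
As an illustration of how such keyword combinations might be assembled into a database query, the following sketch shows a hypothetical PubMed-style Boolean search string. The exact search strings used in the review are not reported, so this is an assumption, not a reconstruction.

```python
# Hypothetical example of combining the listed keywords into one Boolean
# query; the review does not report its exact search strings.
query = (
    '("clinical competence" OR "clinical competency" OR "clinical skill*" '
    'OR "professional competence" OR "competency based education" '
    'OR competenc*) '
    'AND (assessment OR measur* OR test OR scale OR standards '
    'OR validity OR reliability OR generalizability) '
    'AND "nursing student*"'
)
print(query)
```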

2.3. Study selection and extraction

Identified references were merged using the reference management software EndNote, and duplicates were removed. The titles and abstracts of the identified records were then assessed against the eligibility criteria by two of the authors (DV, ML), and studies not relevant to this review were removed. After retrieval of the full texts, two of the authors (DV, ML) independently screened the studies and made decisions concerning their final inclusion. A further two reviewers were then consulted (NMR, MS), and disagreements were resolved by discussion. Data were extracted using predefined criteria, which included source, country, objectives, methods, and main findings.
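
A minimal sketch of the duplicate-removal step is given below, matching records on a normalized DOI or title. The authors used EndNote for this step, so the logic shown is only an assumed equivalent, not their actual procedure.

```python
# Minimal sketch of duplicate removal across database exports: keys on a
# normalized DOI when present, otherwise on the lowercased title.
# The review itself performed this step in EndNote.

def deduplicate(records: list[dict]) -> list[dict]:
    seen: set[str] = set()
    unique: list[dict] = []
    for rec in records:
        key = (rec.get("doi") or rec.get("title", "")).strip().lower()
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# e.g. the 160 retrieved records reduced to 129 after duplicate removal,
# as reported in Section 3.1.
```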

2.4. Assessment of study quality

The Mixed Methods Appraisal Tool (MMAT) was used to assess the quality of the included studies. The tool is suitable for appraising quantitative, qualitative, and mixed methods studies [24]. Methodological quality criteria are scored on a nominal scale. The tool includes two screening questions and four criteria each for qualitative studies, quantitative randomized controlled trials, quantitative nonrandomized studies, and quantitative descriptive studies, and three criteria for mixed methods studies. The score is based on the number of criteria met divided by four, from one criterion met (25%) to all criteria met (100%) [24]. Each study was checked for quality by one author (NMR) and then rechecked by two other authors (ML, MS). Disagreements were resolved by discussion until consensus was reached.
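
The scoring rule described above amounts to simple arithmetic: each met criterion contributes 25%. The sketch below illustrates the rule exactly as stated in the text (always dividing by four); it is a simplification, not an official MMAT implementation.

```python
# Worked sketch of the MMAT scoring rule as described above: the score is
# the number of criteria met divided by four, reported as a percentage.
# This follows the rule as stated in the text, not the full MMAT manual.

def mmat_score(criteria_met: int) -> str:
    if not 0 <= criteria_met <= 4:
        raise ValueError("the number of criteria met must be between 0 and 4")
    return f"{criteria_met / 4:.0%}"

# Examples matching Table 1: mmat_score(3) -> '75%' (Athlin et al. [28]),
# mmat_score(4) -> '100%' (Hengameh et al. [29]).
```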

2.5. Data synthesis

A convergent qualitative synthesis design was selected and results from the identified studies were transformed into qualitative findings [25], using a narrative synthesis as described by Harrison et al. [26] and Dixon-Woods et al. [27]. This approach was selected as studies were heterogeneous.


3. Results

3.1. Study selection and its characteristics

The search revealed a total of 160 records. Figure 1 provides a flow diagram of the literature selection process.

Figure 1.

Flow diagram of literature selection.

The flow diagram (Figure 1) shows that after removing duplicates, 129 records were screened by title and abstract for their relevance, leading to the exclusion of 77 records. The remaining 52 full texts were assessed for eligibility. Critical reading of the full text led to 12 studies being retained for inclusion in the review.

3.2. Methodological quality of studies

The selected studies were conducted in Australia, Sweden, Iran, Canada, Ireland, Spain, Pakistan, and Taiwan. The studies utilized different designs, and a number of different methods were identified, including a variety of assessment tools, the OSCE, complex approaches, and others. The selected studies' objectives, designs, main findings, and MMAT scores are presented in Table 1.

Source and country | Objectives | Design | Main findings | MMAT
Athlin et al. [28]
Sweden
To describe the development and evaluation of a model for a National Clinical Final Examination in bachelor nursing education. Collaborative project between four universities and affiliated healthcare areas supplying clinical placements, using the Delphi technique and a literature review, followed by evaluation.
73 students included in the theoretical test and 68 students in the bedside test.
  • Theoretical test: problem-solving in character, consisting of two patient cases describing realistic situations in medical, surgical, or geriatric care in which the patient is followed throughout the care trajectory, with a template of criteria for each case.

  • Bedside test: the student takes care of one patient (unknown to the student) while being observed by an “observing nurse” using an assessment tool covering (1) assessment of needs and problems, analysis, and planning; (2) implementation and evaluation of nursing activities; and (3) reflections and final judgment.

  • Evaluation of the value, relevance, and usability of the model.

  • The model was highly appreciated, and its relevance, usability, and validity were considered quite good for assessing nursing students’ clinical competence at the final stage of their education, especially as it does not focus solely on technical skills.

  • Several deficiencies were identified; the model needs further evaluation.

75%
Hengameh et al. [29]
Iran
To compare the effect of applying direct observation procedural skills and routine evaluation method on clinical skills of nursing students. Randomized clinical trial.
Nursing students included.
  • Routine evaluation method: a subjective judgment by an instructor of the student’s general skills during the clinical course, on which the scoring is based.

  • Direct observation procedural skills: clinical activities evaluated based on direct observation using the checklists.

  • Evaluation in the control group: in one stage, by the routine method.

  • Evaluation in the intervention group: (a) first stage: observation of the skills for 15 min and feedback for 5 min; (b) second stage: repetition of the first test after two weeks, with emphasis on feedback on the students’ strengths and weaknesses; (c) third stage: repetition of the first stage test after four weeks and assignment of the final scores.

  • Final evaluation: prepared checklist.

  • No significant difference observed between the two groups in terms of demographic variables (p > 0.05).

  • A significant difference observed between intervention and control scores (p = 0.000).

  • Application of direct observation of procedural skills has improved clinical skills of the students significantly.

100%
Hsu and Hsieh [21]
Taiwan
To develop a competency inventory to measure learning outcomes of baccalaureate nursing students and to test its psychometric properties. Cross-sectional survey.
599 nursing students included.
  • Instrument measuring six factors.

  • Ethics and accountability were found to be the most important factor contributing to nursing student’s competencies.

  • Satisfactory psychometric properties.

  • Useful instrument for measuring learning outcomes of nursing students.

75%
Iglesias-Parra et al. [30]
Spain
To develop an evaluation system of clinical competencies for the practicum of nursing students based on the Nursing Interventions Classification (NIC). Psychometric validation study: the first two phases addressed definition and content validation, and the third phase consisted of a cross-sectional study for analyzing reliability.
The population of undergraduate nursing students and clinical tutors.
  • Competencies were designed for second-year clinical placement, and using the same methodology, 18 additional interventions were identified to describe more clinical competencies to be achieved in the third year, reaching a total of 91 interventions.

  • A competency system for the nursing practicum, structured on the NIC, was found to be a reliable method for assessing and evaluating clinical competencies.

  • Further evaluations in other contexts are needed.

  • The NIC-based tool is used for competency assessment in combination with a portfolio that includes a reflective diary kept as a blog, and with objective structured clinical examinations.

100%
Imanipour and Jalili [31]
Iran
To develop a comprehensive assessment system for nursing students in their critical care rotation based on a programmatic approach. Development in three phases, followed by assessment: determination of the educational objectives based on the nursing curriculum; identification and selection of appropriate assessment methods; determination of content validity.
38 bachelor nursing students included.
  • All items of the assessment system had high content validity ratio (CVR) and content validity index (CVI) values.

  • The findings showed that 87.5% of the instructors and 89.47% of students believed that the new assessment system had a positive impact on learning.

  • A programmatic approach should be used for effective evaluation of clinical performance of nursing students in critical care settings because of high validity and reliability, multidimensionality, positive educational impact, and acceptability.

75%
Khan et al. [14]
Pakistan
To identify nursing students’ perceptions about the effectiveness of utilized teaching and learning strategies of clinical education, in improving students’ knowledge, skills, and attitudes. A descriptive cross-sectional study design using both qualitative and quantitative approaches.
74 nursing students included.
  • The findings revealed that the demonstration was the most effective strategy for improving students’ skills; reflection, for improving attitudes; and problem-based learning and concept map for improving their knowledge.

  • Students’ responses to open-ended questions confirmed the effectiveness of these strategies in improving their learning outcomes.

  • Identified perceptions about the effectiveness of the utilized teaching and learning strategies from students’ point of view.

  • Problem-based learning and concept map were both viewed as very effective teaching and learning strategies for the development of students’ knowledge, whereas the demonstration was perceived as an effective strategy for the development of their skills.

  • Reflection was felt to be more effective in the development of students’ knowledge and for bringing about positive changes in attitudes.

50%
Levett-Jones et al. [32]
Australia
To describe the design, implementation, and evaluation of the SOAP, a model used to assess third-year undergraduate nursing students’ clinical competence. Evaluation of the Structured Observation and Assessment of Practice (SOAP) model using quantitative and qualitative approaches.
1031 nursing students included.
  • Four components showed acceptable factor loadings and together accounted for 77.65% of the variance: perceived learning outcomes, consistency with general clinical performance, quality of assessors, and anxiety/stress impact.

  • Students’ evaluative feedback each semester has been consistently positive.

  • For many students, the SOAP process provokes anxiety and stress.

  • While significant improvements have been identified in students’ overall performance, the SOAP approach has uncovered a deficit in the learning outcomes of some students.

75%
Meskell et al. [33]
Ireland
To explore electronic objective structured clinical examination (OSCE) delivery and evaluate the benefits of using an electronic OSCE management system.
To explore assessors’ perceptions of and attitudes to the computer-based package.
A study was conducted using electronic software to manage a four-station OSCE assessment with a cohort of first-year undergraduate nursing students, delivered over two consecutive years.
The quantitative descriptive survey methodology was used to obtain the views of the assessors on the process and outcome of using the software.
203 undergraduate students included.
  • The overall outcome of the User Acceptance Test was good, with more than 80% of the examiners agreeing that the functionalities made sense and that they accepted the online OSCE solution.

  • Electronic software facilitated the storage and analysis of overall group and individual results, thereby offering considerable time savings.

  • Submission of electronic forms was allowed only when fully completed, thus removing the potential for missing data.

  • The feedback facility allowed the student to receive timely evaluation on their performance and to benchmark their performance against the class.

  • Analysis of assessment results can highlight issues such as moderate internal consistency and examiner variability.

50%
Nilsson et al. [8]
Sweden
To develop and validate a new tool intended for measuring self-reported professional competence among both nursing students prior to graduation and among practicing nurses. Construction of a new scale and evaluation of its psychometric properties.
1086 newly graduated nursing students included.
  • NPC scale shows satisfactory psychometric properties in a sample of newly graduated nurses.

  • Tool can be used to evaluate the outcomes of nursing education programs, to assess nurses’ competences in relation to the needs in healthcare organizations, to identify self-reported competences, and might be used in tailoring introduction programs for newly employed nurses.

100%
Ossenberg et al. [12]
Australia
To advance the assessment properties of a new instrument, the ANSAT, and investigate the acceptability of this instrument for the evaluation of the professional competence of nursing students. Mixed method approach to collect evidence of validity supporting the instrument.
23 clinical assessors included.
  • Principal components analysis extracted one factor: professional practice competence.

  • A comparison of total instrument scores between year levels demonstrated a significant difference in each of the clinical domains (p = 0.000), suggesting that the instrument is sensitive to differing levels of performance across different year levels.

  • The ANSAT demonstrated high internal consistency.

  • Posttest evaluation completed by assessors demonstrated high usability and acceptability for use in common practice settings.

  • The results of the statistical analysis strongly support the ANSAT as a valid instrument with high internal consistency and sensitivity to student progression.

25%
Ulfvarson and Oxelmark [22]
Sweden
To develop a new criterion-based reference tool, the Assessment of Clinical Education (AClEd), to assess nursing knowledge and competence in clinical practice. Development of an instrument using a social constructivist process, followed by an assessment.
Focus group of 5 students and 80 nurses from clinical settings.
  • The tool showed validity in assessing not only the nursing student’s ability to perform a task but also, most importantly, the quality of nursing care.

  • The validity of the tool relies on the judgment from the profession.

25%
Walsh et al. [7]
Canada
To test the psychometric properties of the Novice Objective Structured Clinical Evaluation Tool. An instrument-testing design.
565 nursing students included.
  • The tool was found to have adequate construct validity and reliability.

  • Its stability should be tested by conducting test-retest analysis.

  • Equivalency dimensions of reliability should be evaluated by looking at interrater reliability.

  • This tool shows merit for assessing elements of quality and safety education.

50%

Table 1.

Characteristics of studies included in the literature review.

Table 1 provides a detailed description of the individual studies included in the review. There are five columns in the table. The first column provides details of the source and origin of each study; studies are presented in alphabetical order. The second and third columns list the key objectives and the research design. The main findings are presented in the fourth column, and the final column lists the MMAT score.

The quality of the identified studies was mixed (Table 1). Two of the twelve studies received a low quality score (25%), the main reasons being the use of nonrepresentative samples and uncontrolled testing. Three studies were evaluated as being of moderate quality (50%), four of high quality (75%), and three of very high quality (100%).

The studies identified in Table 1 were heterogeneous, which is why their results were transformed into qualitative findings using a narrative synthesis [25]. The results were grouped thematically into four categories: assessment tools, objective structured clinical examination (OSCE), complex assessment approaches, and other approaches.

3.3. Assessment tools

Hsu and Hsieh [21] developed the Competency Inventory for Nursing Students (CINS) for measuring nursing students’ competencies and tested its psychometric properties among baccalaureate nursing students in Taiwan, using a cross-sectional survey of 599 nursing students. The inventory assesses eight categories: ethics and accountability, general clinical skills, lifelong learning, biomedical science, caring, critical thinking, communication, and teamwork. Ulfvarson and Oxelmark [22] used a social constructivist process to develop a tool for assessing knowledge and clinical practice that contains four domains: nursing, documentation, caring, and skills and manual handling. The tool was tested and found valid for measuring not only the nursing student’s ability to perform a task but also the quality of nursing care. This Assessment of Clinical Education (AClEd) tool evaluated learning outcomes during clinical practice. The MMAT score for this study was very low, only 25%, and the reliability of the assessment tool was not reported. Nilsson et al. [8] developed the Nurse Professional Competence (NPC) scale for measuring self-reported professional competence; it covers eight factors: nursing care, value-based nursing care, medical/technical care, teaching/learning and support, documentation and information technology, legislation in nursing and safety planning, leadership in the development of nursing care, and education and supervision of staff/students. They developed the scale and evaluated its psychometric properties on a large sample of newly graduated nursing students (n = 1086) from 11 educational institutions in Sweden. The tool can be used to evaluate the outcomes of nursing education programs, to assess nurses’ competence in relation to the needs of healthcare organizations, to identify self-reported capabilities, and to assist in tailoring introduction programs for newly employed nurses [8]. Face validity was evaluated by asking students to critically review each item and their understanding of it within the questionnaire. Data quality was described by mean scores, and construct validity and reliability were examined using orthogonal rotation [8]. The MMAT score for Nilsson et al.’s [8] study was very high (100%). Ossenberg et al. [12] based their Australian Nursing Standards Assessment Tool (ANSAT) on the National Competency Standards for the Registered Nurse in Australia, covering professional practice, critical thinking and analysis, provision and coordination of care, and collaborative and therapeutic practice. The validation and acceptability of the ANSAT were examined in a pilot study with 23 clinical assessors, using interviews and a posttest survey. The MMAT score for this study was 25%, and more psychometric testing is needed to address current deficits [34]. Iglesias-Parra et al. [30] developed an evaluation system of clinical competencies for the practicum of nursing students based on the Nursing Interventions Classification (NIC). They prepared a list of 73 NIC interventions mapped to 26 competencies in nine domains. They conducted a psychometric validation study in two phases, followed by a cross-sectional study of undergraduate nursing students and clinical tutors. The competency system, structured on the NIC, was found to be a reliable method for assessing and evaluating nursing interventions. Reliability and construct validity were tested by clinical mentors on 107 students, and the survey was conducted using the Delphi technique. The MMAT score was very high (100%). The assessment tool represents a multidimensional approach to formative and combined assessment [30].

3.4. Objective structured clinical examination

Meskell et al. [33] and Walsh et al. [7] both examined the OSCE. Meskell et al. [33] evaluated the benefits of using an electronic OSCE management system with undergraduate students (n = 203). The electronic software facilitated the storage and analysis of results, thereby offering significant time savings. Walsh et al. [7] focused on the development of a Novice OSCE covering the following competencies: safety, asepsis, knowledge, organization, and caring. An instrument-testing design with a sample of nursing students (n = 565) was used. The MMAT score of both papers was 50%. Further psychometric analysis of the OSCE tools, including reliability and stability testing, is needed. The OSCE is shown to be a formative assessment tool, and it is argued that students should also be assessed on critical thinking and problem-solving skills in addition to clinical skills performance [1, 35].

3.5. Complex assessment approaches

Three studies focused on more complex approaches. Athlin et al. [28] developed a model for a National Clinical Final Examination (NCFE). Their model integrates knowledge from theoretical and practical studies and covers knowledge, skills, capacity for critical thinking, problem-solving, ethical reasoning, independence, and readiness to act. They prepared a two-part examination: a written theoretical test with a problem-solving character and a bedside test in which nursing care is performed under observation. The model was used to assess both theoretical and practical knowledge. They found that the model was highly appreciated, and its relevance, usability, and validity were considered “quite good” for assessing nursing students’ clinical competence at the final stage of their education. This study recorded a high MMAT score (75%). The model needs to be evaluated with larger student groups, as the study used relatively small samples in the theoretical test (n = 73) and the bedside test (n = 68). The model for evaluating theoretical and practical knowledge used a holistic approach with opportunities for feedback and reflection for students. Imanipour and Jalili [31] developed an assessment system comprising multiple methods, combining an oral examination with direct observation of procedural skills. Cognitive knowledge was evaluated by oral exam, and clinical skills were evaluated by direct observation using a global rating scale. The exam included some generic procedures and two specific procedures. Clinical work sampling was used to evaluate undergraduate bachelor of nursing students’ (n = 38) professional behavior. They found that the students and instructors were very satisfied with the comprehensive clinical performance assessment system. Levett-Jones et al. [32] described the design, implementation, and evaluation of the Structured Observation and Assessment of Practice (SOAP) model used to assess third-year undergraduate nursing students’ (n = 1031) clinical competence. While significant improvements were identified in students’ overall performance, the SOAP approach uncovered a deficit in the learning outcomes of some students.

3.6. Other approaches

Khan et al. [14] evaluated nursing students’ perceptions of the effectiveness of the teaching and learning strategies used in clinical education (demonstration, reflection, problem-based learning, and concept mapping) in improving students’ knowledge, skills, and attitudes. They used both qualitative and quantitative methods in a descriptive cross-sectional study of 74 nursing students. Problem-based learning and the use of concept maps were perceived to be effective teaching and learning strategies. Hengameh et al. [29] compared the routine evaluation method (a subjective judgment by an instructor of the student’s general skills during the clinical course, on which the scoring is based) with direct observation of procedural skills (DOPS), in which clinical activities are evaluated by direct observation using checklists. They found that applying DOPS significantly enhanced students’ clinical skills and their scores in clinical procedures.


4. Discussion

The aim of this chapter was to review the literature and critically discuss the identified methods of clinical nursing skills and competencies assessment currently used in nursing higher education. Multidimensional approaches to nursing assessment should be based on a number of differing assessment methods [1]. Assessment should combine knowledge, critical thinking, caring and communication [1, 7, 30], problem-solving, and reflection [36]. Holistic assessment was found to encourage students to be more person-centered [37], rather than purely task-oriented [32]. The literature review identified a wide variety of tools and assessment methods, each with their own advantages and disadvantages. Some were evaluated by nursing students, others by nurses and clinical experts. The studies reviewed were conducted in different countries with differing nursing education curricula, which, along with the range of sample sizes and approaches used, makes direct comparison difficult. Nurse educators have a responsibility to ensure that graduates are well prepared for the demands and challenges they will encounter in practice [32]. There is a current imperative to implement modern and appropriate methods of clinical evaluation in nursing education [9, 29]. The current trend requires moving from a generic, technical approach to a more holistic model of clinical assessment, which supports the nurturing and development of competent nursing professionals [34].

The OSCE is a practical test [17, 38] in a simulation setting in which the student demonstrates skills [22] and technical performance [7]. It is a well-established method of assessing clinical skills [33], using a checklist [1] to assess all students against the same set of criteria in order to determine the level of competency achieved in their performance [17, 39]. It provides a level of objectivity in how competency is assessed [32]. The review identified a number of benefits of the OSCE, including the achievement of deeper, meaningful learning [19, 20] and an increase in students’ confidence in practice [33]. The OSCE was also identified as a means of assessing not only psychomotor skills but also knowledge and attitudes [20]. As an assessment method, the OSCE helps to identify strengths and weaknesses and can focus on giving the student constructive feedback, with or without the consequence of a subsequent examination [40]. In addition to the advantages already outlined, Ulfvarson and Oxelmark [22] found that the OSCE can also be used to examine learning outcomes, especially those comprising practical skills such as medical techniques and the interpretation of results. It has been recognized as a reliable and valid method of assessing clinical skills competency [16, 39–41], and Carraccio and Englander [42] have suggested that the OSCE become a key standard for assessing clinical competence. Some criticisms of the OSCE have, however, been identified.

Levett-Jones et al. [32] identified a lack of authenticity due to students not being observed in a real clinical context, and further criticized the OSCE for measuring technical skills rather than the whole caring situation, including the examination of empathy and interpersonal relationships. The OSCE should therefore be used in conjunction with other evaluation methods [36, 43]. Evaluation methods should be coherent with the curriculum and learning outcomes. Holistic evaluation methods motivate nursing students’ learning, stimulate critical reflective thinking, and better prepare students for professional practice. Good assessment tools should also be valid and reliable [44].

4.1. Implications for nursing education

Assessment of clinical nursing skills requires collaboration between clinical partners and academia to enhance the clinical experiences of students, the professional development of preceptors or mentors, and the clinical credibility of academics [34]. The findings from this literature review represent a first opportunity to prepare our own assessment tools, adapted to the cultural and clinical environment, material and economic conditions, national nursing standards, and the capabilities and purposes of nursing care in Slovenia. There is now an opportunity for all educational institutions offering nursing study programs in the country to prepare assessment tools in cooperation with students, educational experts, and clinical nursing experts.

4.2. Limitations

The findings from the literature review must be considered with respect to the limitations of the studies reviewed and the methods used. Some relevant work may have been omitted because only English-language material was included. The methodological quality of the included studies varied from very low [12, 22] to very high [8, 29]. The validity and reliability of the different approaches used were not always discussed; our conclusions should therefore be drawn with caution. The MMAT is considered an efficient tool, although its reliability could be further improved, as it appraises only the methodological quality of the included studies and not the quality of their reporting [45, 46]. Narrative summary is considered a more informal approach and can, therefore, be criticized for its lack of transparency [27].


5. Conclusion

Despite the heterogeneity of designs and methodology, the findings from the literature review present an overview of current clinical skills assessment tools in practice and in the simulation environment. The assessment of nursing students should include a variety of methods and procedures. It should include the assessment of knowledge, clinical skills, and critical problem-solving in nursing care. There is a need for further research to develop a holistic clinical assessment tool with a reasonable level of validity and reliability, and it must be tested before being applied to the nursing curriculum.

References

  1. Cant R, McKenna L, Cooper S. Assessing preregistration nursing students’ clinical competence: a systematic review of objective measures. International Journal of Nursing Practice. 2013;19(2):162–176. DOI: 10.1111/ijn.12053
  2. Murray E, Gruppen L, Catton P, Hays R, Woolliscroft JO. The accountability of clinical education: its definition and assessment. Medical Education. 2000;34(10):871–879. DOI: 10.1046/j.1365-2923.2000.00757.x
  3. McCarthy B, Murphy S. Assessing undergraduate nursing students in clinical practice: do preceptors use assessment strategies? Nurse Education Today. 2008;28(3):301–313. DOI: 10.1016/j.nedt.2007.06.002
  4. Nicol DJ, Macfarlane-Dick D. Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education. 2006;31(2):199–218. DOI: 10.1080/03075070600572090
  5. Townsend AH, McLlvenny S, Miller CJ, Dunn EV. The use of an objective structured clinical examination (OSCE) for formative and summative assessment in a general practice clinical attachment and its relationship to final medical school examination performance. Medical Education. 2001;35(9):841–846. DOI: 10.1046/j.1365-2923.2001.00957.x
  6. Goodwin LD, Leech NL. The meaning of validity in the new standards for educational and psychological testing: implications for measurement courses. Measurement and Evaluation in Counseling and Development. 2003;36(3):181–191.
  7. Walsh M, Bailey PH, Mossey S, Koren I. The novice objective structured clinical evaluation tool: psychometric testing. Journal of Advanced Nursing. 2010;66(12):2807–2818. DOI: 10.1111/j.1365-2648.2010.05421.x
  8. Nilsson J, Johansson E, Egmar AC, Florin J, Leksell J, Lepp M, et al. Development and validation of a new tool measuring nurses self-reported professional competence—the Nurse Professional Competence (NPC) scale. Nurse Education Today. 2014;34(4):574–580. DOI: 10.1016/j.nedt.2013.07.016
  9. Fotheringham D. Triangulation for the assessment of clinical nursing skills: a review of theory, use and methodology. International Journal of Nursing Studies. 2010;47(3):386–391. DOI: 10.1016/j.ijnurstu.2009.09.004
  10. Oermann MH, Yarbrough SS, Saewert KJ, Ard N, Charasika ME. Clinical evaluation and grading practices in schools of nursing: national survey findings part II. Nursing Education Perspectives. 2009;30(6):352–357.
  11. Norcini J, Burch V. Workplace-based assessment as an educational tool: AMEE guide no. 31. Medical Teacher. 2007;29(9–10):855–871. DOI: 10.1080/01421590701775453
  12. Ossenberg C, Dalton M, Henderson A. Validation of the Australian Nursing Standards Assessment Tool (ANSAT): a pilot study. Nurse Education Today. 2016;36:23–30. DOI: 10.1016/j.nedt.2015.07.012
  13. Hartigan I, Murphy S, Flynn AV, Walshe N. Acute nursing episodes which challenge graduate’s competence: perceptions of registered nurses. Nurse Education in Practice. 2010;10(5):291–297. DOI: 10.1016/j.nepr.2010.01.005
  14. Khan BA, Ali F, Vazir N, Barolia R, Rehan S. Students’ perceptions of clinical teaching and learning strategies: a Pakistani perspective. Nurse Education Today. 2012;32(1):85–90. DOI: 10.1016/j.nedt.2011.01.016
  15. Kok J, Chabeli MM. Reflective journal writing: how it promotes reflective thinking in clinical nursing education: a students’ perspective. Curationis. 2002;25(3):35–42.
  16. Yanhua C, Watson R. A review of clinical competence assessment in nursing. Nurse Education Today. 2011;31(8):832–836. DOI: 10.1016/j.nedt.2011.05.003
  17. Oranye NO, Ahmad C, Ahmad N, Bakar RA. Assessing nursing clinical skills competence through objective structured clinical examination (OSCE) for open distance learning students in Open University Malaysia. Contemporary Nurse. 2012;41(2):233–241. DOI: 10.5172/conu.2012.41.2.233
  18. Alinier G. Nursing students’ and lecturers’ perspectives of objective structured clinical examination incorporating simulation. Nurse Education Today. 2003;23(6):419–426. DOI: 10.1016/S0260-6917(03)00044-3
  19. Barry M, Noonan M, Bradshaw C, Murphy-Tighe S. An exploration of student midwives’ experiences of the objective structured clinical examination assessment process. Nurse Education Today. 2012;32(6):690–694. DOI: 10.1016/j.nedt.2011.09.007
  20. Baid H. The objective structured clinical examination within intensive care nursing education. Nursing in Critical Care. 2011;16(2):99–105. DOI: 10.1111/j.1478-5153.2010.00396.x
  21. Hsu LL, Hsieh SI. Development and psychometric evaluation of the competency inventory for nursing students: a learning outcome perspective. Nurse Education Today. 2013;33(5):492–497. DOI: 10.1016/j.nedt.2012.05.028
  22. Ulfvarson J, Oxelmark L. Developing an assessment tool for intended learning outcomes in clinical practice for nursing students. Nurse Education Today. 2012;32(6):703–708. DOI: 10.1016/j.nedt.2011.09.010
  23. Nielsen C, Sommer I, Larsen K, Bjørk IT. Model of practical skill performance as an instrument for supervision and formative assessment. Nurse Education in Practice. 2013;13(3):176–180. DOI: 10.1016/j.nepr.2012.08.014
  24. Pluye P, Robert E, Cargo M, Bartlett G, O’Cathain A, Griffiths F, et al. Proposal: a Mixed Methods Appraisal Tool for systematic mixed studies reviews [Internet]. 2011. Available from: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/84371689/MMAT 2011 criteria and tutorial 2011-06-29updated2014.08.21.pdf [Accessed: 2016-09-22]
  25. Pluye P, Hong QN. Combining the power of stories and the power of numbers: mixed methods research and mixed studies reviews. Annual Review of Public Health. 2014;35:29–45. DOI: 10.1146/annurev-publhealth-032013-182440
  26. Harrison R, Birks Y, Hall J, Bosanquet K, Harden M, Iedema R. The contribution of nurses to incident disclosure: a narrative review. International Journal of Nursing Studies. 2014;51(2):334–345. DOI: 10.1016/j.ijnurstu.2013.07.001
  27. Dixon-Woods M, Agarwal S, Jones D, Young B, Sutton A. Synthesising qualitative and quantitative evidence: a review of possible methods. Journal of Health Services Research & Policy. 2005;10(1):45–53.
  28. Athlin E, Larsson M, Söderhamn O. A model for a National Clinical Final Examination in the Swedish bachelor programme in nursing. Journal of Nursing Management. 2012;20(1):90–101. DOI: 10.1111/j.1365-2834.2011.01278.x
  29. Hengameh H, Afsaneh R, Morteza K, Hosein M, Marjan SM, Abbas E. The effect of applying direct observation of procedural skills (DOPS) on nursing students’ clinical skills: a randomized clinical trial. Global Journal of Health Science. 2015;7(7):17–21. DOI: 10.5539/gjhs.v7n7p17
  30. Iglesias-Parra MR, García-Guerrero A, García-Mayor S, Kaknani-Uttumchandani S, León-Campos Á, Morales-Asencio JM. Design of a competency evaluation model for clinical nursing practicum, based on standardized language systems: psychometric validation study. Journal of Nursing Scholarship. 2015;47(4):371–376. DOI: 10.1111/jnu.12140
  31. Imanipour M, Jalili M. Development of a comprehensive clinical performance assessment system for nursing students: a programmatic approach. Japan Journal of Nursing Science. 2016;13(1):46–54. DOI: 10.1111/jjns.12085
  32. Levett-Jones T, Gersbach J, Arthur C, Roche J. Implementing a clinical competency assessment model that promotes critical reflection and ensures nursing graduates’ readiness for professional practice. Nurse Education in Practice. 2011;11(1):64–69. DOI: 10.1016/j.nepr.2010.07.004
  33. Meskell P, Burke E, Kropmans TJ, Byrne E, Setyonugroho W, Kennedy KM. Back to the future: an online OSCE management information system for nursing OSCEs. Nurse Education Today. 2015;35(11):1091–1096. DOI: 10.1016/j.nedt.2015.06.010
  34. Wu XV, Enskär K, Lee CC, Wang W. A systematic review of clinical assessment for undergraduate nursing students. Nurse Education Today. 2015;35(2):347–359. DOI: 10.1016/j.nedt.2014.11.016
  35. Cowan DT, Norman I, Coopamah VP. Competence in nursing practice: a controversial concept—a focused review of literature. Nurse Education Today. 2005;25(5):355–362. DOI: 10.1016/j.nedt.2005.03.002
  36. Mitchell ML, Henderson A, Groves M, Dalton M, Nulty D. The objective structured clinical examination (OSCE): optimising its value in the undergraduate nursing curriculum. Nurse Education Today. 2009;29(4):398–404. DOI: 10.1016/j.nedt.2008.10.007
  37. Laird EA, McCance T, McCormack B, Gribben B. Patients’ experiences of in-hospital care when nursing staff were engaged in a practice development programme to promote person-centredness: a narrative analysis study. International Journal of Nursing Studies. 2015;52(9):1454–1462. DOI: 10.1016/j.ijnurstu.2015.05.002
  38. Ahuja J. OSCE: a guide for students, part 1. Practice Nurse. 2009;37(1):37–39.
  39. Robbins LK, Hoke MM. Using objective structured clinical examinations to meet clinical competence evaluation challenges with distance education students. Perspectives in Psychiatric Care. 2008;44(2):81–88. DOI: 10.1111/j.1744-6163.2008.00157.x
  40. McWilliam P, Botwinski C. Developing a successful nursing objective structured clinical examination. Journal of Nursing Education. 2010;49(1):36–41. DOI: 10.3928/01484834-20090915-01
  41. Jones A, Pegram A, Fordham-Clarke C. Developing and examining an objective structured clinical examination. Nurse Education Today. 2010;30(2):137–141. DOI: 10.1016/j.nedt.2009.06.014
  42. Carraccio C, Englander R. The objective structured clinical examination: a step in the direction of competency-based evaluation. Archives of Pediatrics & Adolescent Medicine. 2000;154(7):736–741. DOI: 10.1001/archpedi.154.7.736
  43. van der Vleuten CP, Schuwirth LW, Driessen EW, Dijkstra J, Tigelaar D, Baartman LK, et al. A model for programmatic assessment fit for purpose. Medical Teacher. 2012;34(3):205–214. DOI: 10.3109/0142159X.2012.652239
  44. Whiting PF, Rutjes AW, Westwood ME, Mallett S, Deeks JJ, Reitsma JB, et al. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Annals of Internal Medicine. 2011;155(8):529–536. DOI: 10.7326/0003-4819-155-8-201110180-00009
  45. Souto RQ, Khanassov V, Hong QN, Bush PL, Vedel I, Pluye P. Systematic mixed studies reviews: updating results on the reliability and efficiency of the Mixed Methods Appraisal Tool. International Journal of Nursing Studies. 2015;52(1):500–501. DOI: 10.1016/j.ijnurstu.2014.08.010
  46. Vrbnjak D, Denieffe S, O'Gorman C, Pajnkihar M. Barriers to reporting medication errors and near misses among nurses: a systematic review. International Journal of Nursing Studies. 2016;63:162–178. DOI: 10.1016/j.ijnurstu.2016.08.019
