Mentored Simulation Training Improves Procedural Skills in Cardiac Catheterization
Although virtual reality simulators are valuable supplemental training resources for surgical skill acquisition, their utility in improving skills relevant to performing cardiac catheterization has not been evaluated.
Methods and Results—
After a baseline cardiac catheterization performance assessment, 27 cardiology trainees were randomized to either mentored training on a virtual reality simulator (n=12) or no simulator training (control; n=15). Cardiac catheterization performance was reassessed 1 week after the baseline assessment. Performance scores at 1 week were compared with baseline within each group, and change in score from baseline to 1 week was compared between groups. Linear regression modeling was performed to assess the effect of simulator training as a function of baseline performance. Technical performance improved postintervention in the simulator group (24 versus 18; P=0.008) and changed marginally in the control group (20 versus 18; P=0.054). Improvement in technical performance was greater in the simulator group (6 versus 1; P=0.04). Global performance improved postintervention in both groups (simulator, 24 versus 17, P=0.01; control, 20 versus 18, P=0.02), with a trend toward greater improvement in the simulator group (5 versus 2; P=0.11). Lower scores at baseline were associated with larger differences in postintervention scores between the simulator and control groups (technical performance, P=0.0006; global performance, P<0.0001).
Skills required to perform cardiac catheterization can be learned via mentored simulation training and are transferable to actual procedures in the catheterization laboratory. Less proficient operators derive greater benefit from simulator training than more proficient operators.
Training to perform cardiac catheterization (CC) has traditionally been based on the apprenticeship model, with an instructor guiding a student. The trainee practices on real cases and gradually learns to perform the procedure. After mastering the procedure, the apprentice becomes the teacher and the cycle repeats. In the current model, training occurs directly in the catheterization laboratory during the course of actual patient care.1 Teaching in the catheterization laboratory adds time to the procedure, which may lead to increased costs and decreased efficiency.2 Moreover, the catheterization laboratory may not be the ideal educational environment: training often occurs by chance, leading to increased trainee stress, which may have a negative effect on the learning process.3 Although this old apprenticeship model (see one, do one, teach one) has been successful in transferring skills and knowledge from 1 generation to the next, several authors suggest that this model is no longer acceptable to either the medical profession or the well-informed public.1
| Variable, n | Simulator (n=11) | Control (n=15) |
|---|---|---|
| Age, y (median, IQR) | 29 (28.5, 32) | 31 (29, 32) |
| Use of eyeglasses | 9 | 11 |
| Year of cardiology fellowship | | |
| No. of prior CC rotations | | |
| No. of prior CC cases | | |
| Experience with musical instruments, y | | |
| Experience with sports, y | | |
| Experience with video games, y | | |
| Ability to type | | |
| Very slow/somewhat slow | 2 | 6 |
| Somewhat fast/very fast | 9 | 9 |
| Familiarity with computers | | |
| Not familiar at all/not too familiar | 1 | 0 |
| Somewhat familiar/very familiar | 10 | 15 |
| Comfort with advanced technology | | |
| Not comfortable at all/not too comfortable | 0 | 0 |
| Somewhat comfortable/very comfortable | 11 | 15 |
During the last decade, the use of virtual reality simulation in training has extended beyond the airline industry and military operations into the medical realm. The major driving force for its use in the medical arena is the potential to train physicians in a non–error-sensitive environment resulting in improved patient safety. During this period, the data to support the use of simulation in training of medical procedures have been accumulating steadily. Virtual reality simulators have been shown to be effective in reducing intraoperative errors during laparoscopy,4 improving performance and reducing procedure times for laparoscopic salpingectomy,5 improving performance in lower extremity endovascular intervention,6 reducing procedure time and improving catheter-based technique in carotid stenting,7 and reducing errors and excess needle manipulations in intracorporeal suturing and knot tying.8 Such evidence has resulted in uptake of simulation training in core training curriculums including general surgery, as well as for new skill acquisition such as carotid stenting for experienced operators.
Currently, a variety of CC simulators are available for clinicians and marketed especially to interventional cardiologists for practicing complex interventions, including bifurcation lesions and chronic total occlusions. Although used in some centers for training novice cardiology fellows in guidewire and catheter manipulation, the effectiveness of simulation in skill acquisition for diagnostic CC has not been studied. The aim of this pilot study was to evaluate the effectiveness of mentored simulator training on skill acquisition and the transferability of skills from a simulated environment to the CC laboratory.
WHAT IS KNOWN
Performing cardiac catheterization is traditionally learned via the apprenticeship model: an instructor guides the student, the student practices on real patients, and after mastering the procedure, the apprentice becomes the teacher and the cycle repeats.
Virtual reality simulation training has been shown to improve laparoscopic surgery and endovascular intervention performance, but its utility in skill acquisition for cardiac catheterization has not been studied.
WHAT THE STUDY ADDS
Skills required to perform diagnostic coronary angiography can be learned via mentored simulation training and are transferable to actual procedures in the catheterization laboratory.
Less proficient operators derive greater benefit from simulator training than more proficient operators.
Further studies are required to determine time and cost implications and most efficient utilization of mentored simulator training in medical education programs.
Twenty-seven trainees from the adult cardiology training program rotating on the CC service at a tertiary teaching hospital were enrolled from January 2010 to November 2011 (Figure 1). The study was approved by the hospital Institutional Review Board, and all participants were enrolled after providing written informed consent. Participants completed an entrance survey to determine demographics as well as previous CC experience. Baseline differences in the capability of participants to perceive 3-dimensional structures, an ability that is important when performing catheter-based procedures, were determined using a visuospatial evaluation consisting of a card rotation and a cube comparison test (Education Testing Service, Princeton, NJ).
On the first day of their respective CC rotation, each participant performed 2 consecutive diagnostic CC procedures supervised by an attending interventional cardiologist. Procedures were restricted to the femoral arterial access site because the catheterization simulator had only a femoral arterial port. Participants performed procedures only on elective outpatients and stable patients with non–ST-segment elevation myocardial infarction. All participants were verbally guided through the steps of the procedure as required by the attending interventional cardiologist while their technical skills and global performance were evaluated. Participants received a single score on each of the components of the 2 checklists (Figures 2 and 3) based on their combined performance on the 2 cases. Participants were shown the evaluation forms by which they were being scored before performing the cases.
After completion of baseline cases, all participants received didactic teaching in the form of a lecture on the tools most commonly used to perform CC and stepwise sequencing of the procedure. Specific instruction was provided on the following topics: arterial puncture, catheter shapes and sizes, catheter and guidewire manipulation, catheter exchange, manifold setup, coronary artery cannulation, coronary artery injection, coronary anatomy, standard angiographic views, and performance of left ventricular angiography. Participants were then randomized using sealed envelopes to receive either CC simulation training (n=12) or no simulation training (n=15; control group). To ensure that trainees’ baseline catheterization experience was similar between the 2 groups, randomization was block stratified by the year of cardiology fellowship training. Participants in both arms continued with standard clinical education with the usual apprenticeship model training.
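The block-stratified randomization described above can be sketched as follows. This is an illustrative reconstruction only: the study used sealed envelopes stratified by fellowship year, and the block size is not reported, so a block size of 2 is assumed here; the trainee labels and cohort structure are hypothetical.

```python
import random

def block_stratified_assign(trainees_by_year, block_size=2, seed=None):
    """Assign trainees to arms within each stratum (fellowship year),
    balancing assignments within shuffled blocks (assumed block size 2)."""
    rng = random.Random(seed)
    assignment = {}
    for year, trainees in trainees_by_year.items():
        arms = []
        while len(arms) < len(trainees):
            # each block contains equal numbers of both arms, in random order
            block = ["simulator", "control"] * (block_size // 2)
            rng.shuffle(block)
            arms.extend(block)
        for trainee, arm in zip(trainees, arms):
            assignment[trainee] = arm
    return assignment

# hypothetical cohort: trainees grouped by fellowship year
cohort = {1: ["A", "B", "C"], 2: ["D", "E"], 3: ["F", "G", "H", "I"]}
print(block_stratified_assign(cohort, seed=0))
```

Because assignments are balanced within each block, strata with an even number of trainees end up with equal arm sizes, which keeps baseline experience comparable between groups.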
One week after randomization, each participant completed another 2 consecutive diagnostic CC procedures. The participant’s performance was again evaluated by the same interventional cardiologist blinded to the randomization assignment of the participant.
The MENTICE VIST (Vascular Intervention Simulation Trainer), a high-fidelity virtual reality CC simulator, comprises a mechanical unit housed within a mannequin cover, a high-performance desktop computer, and 2 display screens. Modified instruments are inserted through the access port using a haptic interface device. The operator is able to select and use appropriate angiographic tools such as coronary catheters and guidewires, inject contrast dye, and perform diagnostic and interventional procedures using the simulated fluoroscopic screen. The operator is able to adjust the table height, image detector angle and projection, and zoom, as in a real catheterization laboratory. In addition, vital signs and electrocardiogram parameters are displayed on the screen and change consistent with the clinical scenario.
Interventional Arm: Mentored Simulator Training
In addition to continued participation in apprenticeship-based training, participants randomized to simulator training received specific mentored training on a standardized CC model on the MENTICE VIST simulator until they reached a predesignated level of proficiency, defined as independently completing at least 1 procedure without being prompted or helped. Mentoring was performed by an interventional cardiology fellow proficient at performing CC. Simulator training focused on safe and efficient handling of guidewires and catheters, performing catheter exchange, coronary artery cannulation, manipulating the table and image detector to obtain standard angiographic views, injection of contrast dye, performing fluoroscopy and obtaining cine images, and interpretation of cine images. Training was achieved in 1 to 2 sessions, lasting a total of 2 to 4 hours. Participants interested in additional practice had access to the simulator training modules for the entire week before reevaluation.
Control Arm
Participants randomized to no simulator training continued to participate in apprenticeship-based training. After completion of the 1-week follow-up evaluation, they were offered mentored training on the simulator.
CC Performance Assessment Tools
Participant performance was measured using 2 tools, technical performance score and global performance rating score.
Technical Performance Score
The technical performance checklist is a CC procedure-specific evaluation adapted from Chaer et al.6 It identifies separate tasks that are fundamental to performing a CC (Figure 2). These include mounting the catheter on the guidewire, cannulating the coronary arteries, exchanging catheters, and obtaining and interpreting standard angiographic views. As obtaining arterial access cannot be simulated on the MENTICE VIST, this task was not included in the checklist. The examiner scored the participant on each of the defined checklist questions on a scale of 0 to 4, where 0=fail; 1=success, not very good; 2=success, good; 3=success, very good; and 4=success, excellent. A participant’s technical performance score is the sum of scores of the 11 questions (maximum participant score=44).
Global Performance Rating Score
Global rating score, a marker of general overall performance, is not specific to CC and may apply to other endovascular procedures. It has been adapted from a validated scoring system9 and used previously.6 This evaluation includes overall assessment of wire and catheter skills, time and efficiency, ability to complete the case, and need for verbal prompts and attending takeover (Figure 3). There are 12 questions, each graded from 0 to 4, with 0=poor performance and 4=good performance. A participant’s score is the sum of the scores on the 12 questions (maximum participant score=48).
The primary study outcome was the change in technical performance score from baseline to 1 week. Secondary study outcome was the change in global performance rating score from baseline to 1 week.
Participant characteristics and performance scores were summarized as medians for continuous variables and percentages for categorical variables. One participant (at baseline, very familiar with computers, technical performance score=5, global performance rating score=3) randomized to simulator training did not complete the 1-week postintervention evaluation (for personal reasons) and was excluded from the analysis. Identical analyses were performed for the technical and global performance rating end points. Scores at 1 week (ie, postintervention) were compared with baseline scores within each group separately using the Wilcoxon signed-rank test. The change in score from baseline to 1 week (ie, 1 week−baseline) was compared between randomized groups using the Wilcoxon rank-sum test. P values <0.05 were considered significant.
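The paired within-group and unpaired between-group comparisons above can be sketched as follows. The scores below are hypothetical (the individual-level study data are not reproduced here); the study reports using R, but the same tests are shown with SciPy for illustration.

```python
from scipy.stats import wilcoxon, ranksums

# Hypothetical baseline and 1-week technical performance scores
sim_base = [18, 15, 20, 12, 22, 17, 19, 14, 21, 16, 18]
sim_week = [24, 22, 26, 20, 27, 23, 25, 21, 26, 22, 24]
ctl_base = [18, 16, 21, 13, 23, 17, 20, 15, 22, 16, 19, 18, 20, 14, 21]
ctl_week = [19, 17, 22, 14, 24, 18, 21, 16, 23, 17, 20, 19, 21, 15, 22]

# Within-group: 1 week vs baseline (paired), Wilcoxon signed-rank test
stat_sim, p_sim = wilcoxon(sim_week, sim_base)
stat_ctl, p_ctl = wilcoxon(ctl_week, ctl_base)

# Between-group: change scores (1 week - baseline), Wilcoxon rank-sum test
sim_change = [w - b for w, b in zip(sim_week, sim_base)]
ctl_change = [w - b for w, b in zip(ctl_week, ctl_base)]
stat_between, p_between = ranksums(sim_change, ctl_change)

print(p_sim, p_ctl, p_between)
```

The nonparametric tests are appropriate here because scores are ordinal sums of checklist items and the sample per group is small.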
As a subsequent sensitivity analysis, a linear regression model was used to assess whether the effect of simulator training differs as a function of the participant’s baseline performance score. The participant’s 1-week performance score (dependent variable) was modeled as a linear function of the baseline performance score (independent variable). A separate intercept and slope were estimated for each group (simulator and control). The null hypothesis of equal slopes between the groups was tested. Inequality of the slope parameters implies that the expected impact of the intervention (ie, the difference in expected 1-week performance scores between the simulator and control group) is not constant for all trainees, but rather depends on the trainee’s baseline performance score. Using the estimated intercept and slope parameters from this model, the predicted intervention effect (IE), in other words the expected outcome with simulator training compared with no simulator training, for a participant with baseline performance score “x” was defined as:

IE(x) = μ1(x) − μ0(x)
where μ1(x) denotes the predicted 1-week performance score for a participant with baseline performance score “x” who receives simulator training and μ0(x) denotes the predicted 1-week performance score for a participant with baseline score “x” who receives no simulator training.
Furthermore, we explored whether the predicted IE is greater in less experienced compared with more experienced trainees. For this analysis, novice trainees were classified as having performed <50 prior cases, whereas experienced trainees had performed ≥50 prior cases. The average predicted IE in each group, that is, the novice and experienced groups was obtained by evaluating IE(x) for each study participant (by plugging in his or her own value of x) and averaging these values among participants in each group. All statistical analyses were performed using R for Windows (Version 2.13.1, http://www.R-project.org/).
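The group-specific regression and the predicted intervention effect IE(x) can be sketched as follows. All data below are hypothetical, constructed only to illustrate the model form (an interaction between group and baseline score); the study itself fit this model in R.

```python
import numpy as np

# Hypothetical (baseline, 1-week) scores; group: 1=simulator, 0=control
base = np.array([12, 14, 16, 18, 20, 22, 13, 15, 17, 19, 21, 23], float)
week = np.array([22, 23, 24, 25, 26, 27, 14, 16, 18, 20, 22, 24], float)
group = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0])

# Design matrix encoding a separate intercept and slope per group:
# week = a0 + a1*group + b0*base + b1*(group*base)
X = np.column_stack([np.ones_like(base), group, base, group * base])
coef, *_ = np.linalg.lstsq(X, week, rcond=None)
a0, a1, b0, b1 = coef  # b1 != 0 means unequal slopes

def intervention_effect(x):
    """IE(x) = mu1(x) - mu0(x): predicted benefit of simulator training
    for a trainee with baseline score x."""
    mu1 = (a0 + a1) + (b0 + b1) * x  # simulator group prediction
    mu0 = a0 + b0 * x                # control group prediction
    return mu1 - mu0                 # simplifies to a1 + b1 * x

print(intervention_effect(12), intervention_effect(22))
```

With these illustrative data the slopes differ, so IE(x) shrinks as the baseline score rises, mirroring the study's finding that less proficient trainees benefit more. Averaging IE(x) over the observed baseline scores in the novice (<50 cases) and experienced (≥50 cases) subgroups yields the group-level predicted effects.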
Sample Size Considerations
The study was designed to last 2 years and was expected to enroll most adult cardiology trainees undergoing a CC rotation at our institution during that timeframe. On the basis of the size of our training program, we expected to enroll ≈24 to 28 participants. With 26 participants (13 per group), we calculated that we would have ≈80% power to detect a difference of 1.1 SDs in change scores (improvement from baseline to 1 week) between participants randomized to the simulator versus control groups, using a 2-sample t test with 2-sided α=0.05. A difference of 1.1 SDs has the following interpretation: for a hypothetical pair consisting of 1 participant trained on the simulator and 1 control participant, there is a 78% probability that the participant trained on the simulator will have greater improvement in score than the control participant. A difference of this magnitude was considered to be both highly meaningful and plausible.
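The 78% figure follows from the normal model behind the power calculation: if change scores in the two groups are normal with a common SD and means 1.1 SDs apart, the difference between one draw from each group is normal with mean 1.1σ and SD σ√2, so the probability of superiority is Φ(1.1/√2). A quick check, assuming that normal model:

```python
import math

def prob_superiority(effect_size_sd):
    """P(simulator trainee improves more than control trainee) when the
    two change-score distributions are normal with common SD and means
    separated by effect_size_sd standard deviations."""
    z = effect_size_sd / math.sqrt(2)       # difference has SD sqrt(2)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

print(round(prob_superiority(1.1), 2))
```

This reproduces the 78% probability quoted in the sample size justification.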
Twenty-six participants (96.3%) completed the study. Participants in both groups were of similar age (simulator, 29 years versus control, 31 years). There were fewer men in the simulator group (46% versus 87%). Both groups were comparable in years of training, previous catheterization experience, and other previous experiences that may be relevant to a participant’s ability to assimilate catheter techniques (Table). Performance on the visuospatial tests was not different between the 2 groups (card rotation test, P=0.31; cube comparison test, P=0.77). No procedural complications occurred as a result of fellow participation in this study.
Technical Performance Score
Technical performance scores at baseline and 1 week for each participant in the 2 groups are presented in Figure 4A. At baseline, the technical performance score in both groups was equivalent (median, simulator 18 versus control 18, P=0.68). Technical performance score at 1 week was significantly higher compared with baseline in the simulator group (median, 24 versus 18, P=0.008) and marginally higher in the control group (median, 20 versus 18, P=0.054). The change in technical performance score from baseline to 1 week was greater in the simulator group compared with the control group (median, 6 versus 1, P=0.04).
Global Performance Rating Score
Global rating performance scores at baseline and 1 week for each participant in the 2 groups are presented in Figure 4B. At baseline, the global performance rating score in both groups was equivalent (median, simulator 17 versus control 18, P=0.90). Global rating score at 1 week was higher compared with baseline in both the simulator (median, 24 versus 17, P=0.01) and control groups (median, 20 versus 18, P=0.02). The simulator-trained group demonstrated a greater improvement in global rating score compared with the control group; however, this difference did not reach statistical significance (median, 5 versus 2, P=0.11).
Effect of Simulator Training as Function of Baseline Proficiency and Experience
Figure 5 shows postintervention scores at 1 week as a function of scores at baseline. The difference in postintervention technical performance score between the simulator and control groups was greater for trainees with lower baseline scores than for trainees with higher baseline scores, as evidenced by unequal slopes (P=0.0006; Figure 5A). Similarly, trainees with lower baseline global performance rating scores had a larger difference in postintervention scores between the simulator and control groups than trainees with higher baseline scores, also evidenced by unequal slopes (P<0.0001; Figure 5B).
We further examined the effect of simulator training as a function of trainee prior CC experience. At baseline, compared with experienced trainees (≥50 cases), novice trainees (<50 cases) had lower technical (median, 11.5 versus 21.5) and global (median, 10.5 versus 23) performance scores. The average predicted IE in technical performance score was 1.5-fold greater in novice trainees compared with experienced trainees (5.2 versus 3.3). Similarly, the average predicted IE for the global performance rating score was 2.3-fold greater in novice trainees compared with experienced trainees (5.8 versus 2.5).
In this randomized controlled pilot study, mentored training on a CC simulator was associated with greater skill acquisition and transferability to patient procedures than conventional training. Cardiology trainees randomized to simulation training had significant improvement in technical performance at 1 week, whereas trainees in the control group had no significant change. Improvement in procedural performance with simulation training was greater in less proficient and less experienced trainees compared with more proficient and experienced trainees.
Performance of CC requires cognitive and technical skills. Most current training programs employ the traditional mentored approach, exposing trainees directly to procedures performed on actual patients under the guidance of an experienced operator. Rather than a structured learning curriculum comprising methodical exposure to fundamental procedural skills and clinical scenarios, this method is dependent on the random case mix of patients presenting to the catheterization laboratory. The opportunity for the trainee to ask questions and receive feedback is also frequently suboptimal because of the time constraints of the catheterization laboratory. In addition, catheterization procedures performed by novice trainees may expose patients to unwarranted risks. Given the increasing focus on clinical errors and safety, specifically as related to decrease in trainee working hours and adequacy of training, there is a growing need for novel methods in medical education.
The development of technology during the past decade has led to introduction of simulators in the medical field as tools for acquisition of fundamental skills before performing procedures on patients. Procedural techniques can be attained in a structured fashion by repeated practice of complicated maneuvers. The trainee can ask questions and receive personalized feedback. In addition, simulators can function as an objective tool for assessment of performance and certification. Although the numerous potential benefits of simulation training have been well recognized, the evidence for transfer of these skills from the simulated environment to actual practice on patients has been mixed.10,11 Some possible reasons for these inconsistent results include considerable variability in simulator designs and functionality, the way the simulator was used as a training device (training time or goals on the simulator), type of mentoring and feedback, lack of standardization of comparators, assessment methods, and study end points.
The present study is the first to demonstrate improvement in performance of CC with mentored simulator training. There are probably several reasons for the significant impact on performance observed in this study. First, the intervention arm of the study represented a curriculum containing both the physical simulator, which allowed for learning and practicing proper sequencing of catheterization steps, and an expert mentor who provided one-on-one teaching, personalized feedback to identify and correct errors, and a resource for questions. The benefit observed with this intervention is a function of both components rather than practice on the simulator alone. Other studies confirm the importance of feedback in simulator training. Mahmood et al12 showed that in the absence of feedback there was no improvement in performance on a colonoscopy simulator, whereas Boyle et al13 demonstrated that the provision of standardized proximate feedback during laparoscopic colectomy simulator training was associated with significantly fewer errors and an improved learning curve.
Second, the simulator curriculum was proficiency based; instead of a fixed number of cases or a fixed time, participants randomized to the simulator trained to a predesignated level of proficiency, defined as independently completing at least 1 procedure on the simulator without being prompted or helped. Requiring participants to train until they reached this benchmark ensured that all participants randomized to the simulator acquired the skills to perform at minimum the basic stepwise sequencing needed for CC. Training on the simulator to a higher benchmark, for example, completing at least 5 cases independently, completing more difficult cases, or performing procedures more rapidly within a set time, may have translated into greater improvement of performance in the study. Finally, the technical and global performance checklists used in this study were representative of the skills taught and practiced on the simulator and were evaluated in a standardized fashion in the catheterization laboratory.
Lower performance scores at baseline were associated with a greater difference in postintervention performance score between the simulator and control groups than higher baseline performance scores. In other words, a greater magnitude of benefit with simulator training was observed in trainees less proficient at baseline compared with more proficient trainees. These results are not surprising, given that the skills learned and practiced on the simulator were basic fundamental skills for performing CC, and therefore, trainees already proficient at baseline were less likely to improve as much with the intervention. Novice trainees with little prior CC experience are 1 such group likely to be less proficient at baseline. This was evident in this study, with lower baseline technical and global performance scores in trainees who had performed <50 prior CCs compared with those who had performed ≥50 prior cases. The predicted IE of simulator training on technical performance was 1.5-fold greater and on global performance was 2.3-fold greater for novice compared with experienced trainees. Similar observations were made in the study by Dayal et al,7 where novice participants derived the greatest benefit from carotid stenting simulator training in a mentored program, whereas experienced interventionalists did not derive much benefit. It has been suggested14 that simulation-based training allows for the development of the “pre-trained novice,” an individual who has been trained to the point where many psychomotor skills and spatial judgments have been automated, allowing focus on learning operative strategy and how to handle intraoperative complications, rather than spending operating room time on the initial refinement of psychomotor skills. This has significant implications for integrating simulator training into core teaching curriculums, where the greatest benefit from this intervention would be derived during the early phases of trainee education.
The results of this study should be interpreted taking into consideration certain limitations. First, this study was not designed to investigate impact on patient safety, and therefore, conclusions on whether these results would translate into improved patient outcomes cannot be drawn. Second, despite restricting the procedure to femoral arterial access site to make comparisons easier, there was tremendous patient heterogeneity (eg, body habitus, arterial tortuosity, coronary ostia location, extent of coronary disease), which may confound results if the distribution of difficult cases is unequal between the 2 groups in this relatively small study population. Third, the amount of time spent on simulator training to attain the predesignated level of proficiency was not recorded. This information would have provided important insight regarding the magnitude of time and financial investment required to implement simulator training into training curriculums. Fourth, other skills required to perform CC including obtaining arterial access, communication, decision-making, team work, leadership, and instituting alternative plans could not be simulated and were not assessed. Fifth, although we believe that the technical, communication, and teaching skills of the mentor are an integral component of the intervention, the impact of the quality of these skills on the training effect was not evaluated. Finally, the long-term durability of the results of improved performance with simulator training was not ascertained.
Skills required to perform CC can be learned via mentored simulation training and are transferable to actual procedures in the catheterization laboratory. Less proficient operators derive greater benefit from simulator training than more proficient operators. Further studies are required to determine time and cost implications, and most efficient utilization of mentored simulator training as a complementary educational tool in general cardiology training programs.
We are grateful to all the trainees who participated in this study.
1. Grantcharov TP. Is virtual reality simulation an effective training method in surgery? Nat Clin Pract Gastroenterol Hepatol. 2008;5:232–233.
2. Babineau TJ, Becker J, Gibbons G, Sentovich S, Hess D, Robertson S, Stone M. The ‘cost’ of operative training for surgical residents. Arch Surg. 2004;139:366–369.
3. Wetzel CM, Kneebone RL, Woloshynowych M, Nestel D, Moorthy K, Kidd J, Darzi A. The effects of stress on surgical performance. Am J Surg. 2006;191:5–10.
4. Grantcharov TP, Kristiansen VB, Bendix J, Bardram L, Rosenberg J, Funch-Jensen P. Randomized clinical trial of virtual reality simulation for laparoscopic skills training. Br J Surg. 2004;91:146–150.
5. Larsen CR, Soerensen JL, Grantcharov TP, Dalsgaard T, Schouenborg L, Ottosen C, Schroeder TV, Ottesen BS. Effect of virtual reality training on laparoscopic surgery: randomised controlled trial. BMJ. 2009;338:b1802.
6. Chaer RA, Derubertis BG, Lin SC, Bush HL, Karwowski JK, Birk D, Morrissey NJ, Faries PL, McKinsey JF, Kent KC. Simulation improves resident performance in catheter-based intervention: results of a randomized, controlled study. Ann Surg. 2006;244:343–352.
7. Dayal R, Faries PL, Lin SC, Bernheim J, Hollenbeck S, DeRubertis B, Trocciola S, Rhee J, McKinsey J, Morrissey NJ, Kent KC. Computer simulation as a component of catheter-based training. J Vasc Surg. 2004;40:1112–1117.
8. Van Sickle KR, Ritter EM, Baghai M, Goldenberg AE, Huang IP, Gallagher AG, Smith CD. Prospective, randomized, double-blind trial of curriculum-based training for intracorporeal suturing and knot tying. J Am Coll Surg. 2008;207:560–568.
9. Reznick R, Regehr G, MacRae H, Martin J, McCulloch W. Testing technical skill via an innovative ‘bench station’ examination. Am J Surg. 1997;173:226–230.
10. Sutherland LM, Middleton PF, Anthony A, Hamdorf J, Cregan P, Scott D, Maddern GJ. Surgical simulation: a systematic review. Ann Surg. 2006;243:291–300.
11. Sturm LP, Windsor JA, Cosman PH, Cregan P, Hewett PJ, Maddern GJ. A systematic review of skills transfer after surgical simulation training. Ann Surg. 2008;248:166–179.
12. Mahmood T, Darzi A. The learning curve for a colonoscopy simulator in the absence of any feedback: no feedback, no learning. Surg Endosc. 2004;18:1224–1230.
13. Boyle E, Al-Akash M, Gallagher AG, Traynor O, Hill AD, Neary PC. Optimising surgical training: use of feedback to reduce errors during a simulated surgical procedure. Postgrad Med J. 2011;87:524–528.
14. Gallagher AG, Ritter EM, Champion H, Higgins G, Fried MP, Moses G, Smith CD, Satava RM. Virtual reality simulation for the operating room: proficiency-based training as a paradigm shift in surgical skills training. Ann Surg. 2005;241:364–372.