Scaling a Simulation-Based Surgical Airway Workshop for Anesthesia Providers: Implementation Lessons from a Multi-Center Academic Health System
AUTHORS
Charles Lin, MD, MSc1; John O’Donnell, DrPH, CRNA2
1University of Pittsburgh Medical Center, Pittsburgh, PA
2University of Pittsburgh School of Nursing, Pittsburgh, PA
Conflict of interest statement
The authors have no conflicts of interest to declare.
Corresponding Author:
Charles Lin, linc4@upmc.edu
Please cite this article as: Lin, C., & O’Donnell, J. (2026). Scaling a Simulation-Based Surgical Airway Workshop for Anesthesia Providers: Implementation Lessons from a Multi-Center Academic Health System. Simulation Technology & Operations Resource Magazine, 5(1), 1-5. ISSN: 3070-3506.
SUMMARY
Surgical airway management is a rare but critical skill for anesthesia providers, representing the final step in the “cannot intubate, cannot oxygenate” algorithm. To address gaps in staff clinician training, we developed and scaled a simulation-based, scalpel-bougie front-of-neck access (SB-FONA) workshop across a large academic health system. Adapted from trainee curricula, the workshop emphasized evidence-based, skills-focused training for practicing anesthesiologists. Guided by the Adapted Implementation Model for Simulation (AIM-SIM) and the Consolidated Framework for Implementation Research (CFIR), the program was implemented across seven University of Pittsburgh Medical Center hospitals. Evaluation focused on implementation outcomes, including acceptability, adoption, appropriateness, feasibility, and fidelity. Although interest in surgical airway training was high (83%), participation was limited by logistical barriers, particularly those related to timing and location. In response, we redesigned the program to enable mobile delivery at local sites, integration into existing departmental meetings, and a dyadic teaching model to reduce instructor burden. Formal knowledge assessments during training were removed to enhance psychological safety, while a final competency check was retained. Equipment was left on site to promote deliberate practice and sustain engagement. Key lessons emphasized the importance of flexible delivery, efficient resource use, clinician-centered design, and iterative feedback. This work demonstrates that implementation science-guided adaptation can support successful, scalable procedural simulation training within complex healthcare systems.
BACKGROUND
Three years ago, our team developed a focused surgical airway workshop specifically for staff anesthesia providers. We adapted our training from existing trainee workshops and redesigned it to accommodate the unique needs and time constraints of practicing clinicians. Although anesthesia providers rarely perform surgical airway management, it is a vital, high-stakes skill. It represents the final step in the “Cannot Intubate, Cannot Ventilate” pathway of the American Society of Anesthesiologists’ Difficult Airway Algorithm. Over the past three years, we successfully designed, piloted, implemented, and scaled this simulation-based scalpel-bougie front-of-neck access (SB-FONA) workshop. Initially tested with trainees, the workshop was subsequently tailored for staff clinicians and has now been implemented across seven hospitals within the University of Pittsburgh Medical Center, a large multi-center academic healthcare system. This paper shares the key lessons we learned in successfully scaling simulation-based training for staff clinicians.
Now, imagine you created a simulation workshop at your hospital. If someone asked you five years later how effective it was, how would you answer? The answer depends on how “effectiveness” is defined. According to the Kirkpatrick Model, effective training occurs in four stages: (1) learners find the experience relevant and engaging, (2) knowledge and skills are acquired, (3) clinical practice changes as a result, and (4) patient outcomes improve. If outcomes do not improve, the issue could lie in either the quality of the training or the way it was implemented (Kirkpatrick, 1998). The latter—poor implementation despite a well-designed program—is known as a Type III error (Dubrowski, Barwick, & Dubrowski, 2018).
To guide our implementation and scaling efforts, we used the Adapted Implementation Model for Simulation (AIM-SIM). This model, built from existing implementation science frameworks, organizes simulation implementation into three phases: (1) stakeholder engagement and context exploration, (2) pre-implementation planning, and (3) program implementation with ongoing monitoring and evaluation (Dubrowski, Barwick, & Dubrowski, 2018). One key framework within AIM-SIM is the Consolidated Framework for Implementation Research (CFIR), which categorizes the factors that influence implementation success into four domains: (1) characteristics of the intervention, (2) the setting and available resources, (3) the individuals involved, and (4) the implementation process (Damschroder et al., 2009). These domains shape key considerations, including user perception, adaptability, complexity, resource requirements, local culture and climate, implementation readiness, engagement, execution, and evaluation. These CFIR-guided insights informed our approach and became the foundation for the lessons we describe in this report.
Lesson 1: Identify Implementation Barriers Unique to Each Site
Although interest in surgical airway training is high among anesthesia providers, interest alone does not ensure participation. The primary challenge is identifying and addressing implementation barriers that outweigh this interest. Each department, team, and hospital has a unique culture and distinct obstacles that can hinder the adoption of new training programs. Jee et al. described these barriers in a qualitative study on simulation training in emergency departments, identifying both tangible obstacles (e.g., lack of funding, time, and space) and intangible obstacles (e.g., resistance to change, limited engagement, and cultural mindset) (Jee et al., 2023).
Before launching a workshop, it is essential to gauge the level of interest and ensure there is a receptive audience. Surgical airway training is considered essential by anesthesia providers, who agree that airway experts must be proficient in the skill. However, because surgical airway skills are infrequently required, many clinicians may lack the emotional urgency to pursue training, even if they recognize its importance. We initially announced the workshop system-wide and hosted it on weekends at a community hospital. Despite high reported interest, only 24 of more than 200 anesthesia providers attended. To better understand this discrepancy, we surveyed our department. Among the 83 respondents:
- 83% agreed that surgical airway training is important.
- 77% wanted more training.
- 87% were interested in simulation-based education.
- 75% cited the location of the workshop as a key barrier, and 72% cited timing as a key barrier to attendance.
In response, we adapted our approach. We moved the workshop on-site to each local hospital and scheduled it during monthly departmental meetings, which included both anesthesiologists and certified nurse anesthetists. To shorten the workshop duration, didactic education was completed at home, while live hospital sessions focused on hands-on practice. In total, the expected in-hospital time commitment is less than 10 minutes. This shift in approach exemplifies microlearning—the delivery of educational content in small, focused units—which has been shown to improve procedural performance and knowledge retention (De Gagne et al., 2019). By aligning the workshop with clinicians’ schedules and removing barriers to participation, we significantly improved engagement and uptake.
Lesson 2: Engagement and Motivation – Either It’s There, or It Isn’t
Engagement is the cornerstone of successful implementation and encompasses several key outcomes: acceptability, appropriateness, and feasibility (Proctor et al., 2023). While these elements are closely related, they all depend on one crucial factor—motivation. For trainees, simulation workshops are often required. In contrast, staff clinicians typically engage in refresher training voluntarily. Their participation depends more on personal motivation and a commitment to maintaining clinical competency. However, motivation among staff clinicians can be unpredictable. Behavioral economics research shows that external incentives do not reliably drive participation. In fact, offering incentives like compensation may even undermine intrinsic motivation (Cerasoli, Nicklin, & Ford, 2014). Instead of focusing on incentivizing attendance, efforts should be directed toward removing barriers. Making the workshop more convenient, accessible, and relevant is far more effective in encouraging participation than offering rewards (Cerasoli, Nicklin, & Ford, 2014). When staff feel the training is practical, easy to access, and valuable to their clinical practice, they are more likely to engage, regardless of whether incentives are provided.
Lesson 3: Adoption – Be Flexible and Inflexible at the Right Times
Be prepared to design, evaluate, and redesign your workshop. Once the workshop demonstrates efficacy, the next set of outcomes to examine relates to implementation. Implementation outcomes include acceptability, adoption, appropriateness, feasibility, fidelity, implementation cost, penetration, and sustainability (Proctor et al., 2011). Each of these metrics requires assessment. For example, when considering fidelity (accuracy in reproducing the real-life clinical event), which aspect is most critical to the retention of knowledge or skill? Is physical fidelity or process fidelity more important? Understanding this allows a tailored approach to curriculum redesign that maximizes learning and retention.
In product design terms, the initial version of our workshop was a Minimum Viable Product (MVP)—a basic but functional model intended for iterative refinement based on user feedback (Ries, 2011). Like a startup, we approached development by treating clinicians as early adopters and using their input to improve the experience. Once our pilot study demonstrated that the workshop improved technical performance and confidence with the scalpel-bougie front-of-neck access technique, we modified our workshop according to user preferences. For instance, we learned that staff clinicians did not want to undergo extensive knowledge evaluation, so we limited assessments to a final rigorous test of the SB-FONA skill. We remained committed to our core goal of broadening exposure to effective SB-FONA training, while staying flexible in delivery. Key adaptations from the original MVP included:
- Dyadic teaching model: Pairs of participants alternate roles as learner and teacher, addressing instructor shortages (Ding et al., 2022).
- Concise video instruction: A short procedural video meets the time constraints of busy clinicians.
- On-site equipment: Leaving airway trainers and materials on-site encourages ongoing practice and teaching.
- Integrated scheduling: Hosting workshops during routine morning conferences improved accessibility.
- Simplified design: The workshop accommodates tight schedules and local hospital culture while enabling scalability (Damschroder et al., 2009).
Lesson 4: Resource Constraints – Time Is Money, and Space Isn’t Cheap
Implementation involves both tangible and intangible costs, including monetary expenses, scheduling challenges, and demands on clinicians’ time and convenience. Through ongoing conversations with department leaders, we made practical adjustments to reduce these barriers and enhance access for busy clinicians.
Lesson 5: Staffing Constraints – A Champion Helps but Isn’t Enough
While local champions are valuable, they are not sufficient for sustainable implementation. Faculty turnover and scheduling conflicts often limit their long-term impact. To address this, we moved from an instructor-led model to a dyad teaching approach, which requires no external facilitator (Ding et al., 2022). This model also helps counter the doorway effect—the tendency to forget training after leaving the learning environment (Swallow, Zacks, & Abrams, 2009). By keeping trainers and materials in the clinical setting, we promoted deliberate practice, habit formation, and long-term retention.
Lesson 6: Feedback and Evaluation – Communicate and Ask the Right Questions
Clear communication with stakeholders, including users and those involved in implementation, is critical. Throughout our workshop, ongoing engagement with stakeholders ensured the workshop was convenient and focused to maximize adoption.
Evaluation efforts should focus on meaningful metrics. For example, we found that tracking attendance provides more actionable insight than gauging interest alone. When attendance was low, we surveyed faculty to identify barriers and adjusted future sessions accordingly. Continuous feedback loops and responsive design helped us refine and expand the program successfully.
Final Note: Sustain Your Motivation
The greatest threat to implementation is not a lack of resources—it is a lack of motivation among participants and project leaders. Maintaining momentum requires a committed team that supports one another through the challenges of achieving long-term change. Epidemiologic studies have previously shown that demonstrating efficacy in a study is easier than demonstrating effectiveness in the real world, creating what is known as the efficacy-effectiveness gap (Cramer-van der Welle et al., 2018; Eichler et al., 2011; Wilson, Hanna, & Booth, 2024). Sustained adherence to the lessons presented in our work has the potential to substantially enhance healthcare professional education and, ultimately, patient outcomes.
REFERENCES
Cerasoli, C. P., Nicklin, J. M., & Ford, M. T. (2014). Intrinsic motivation and extrinsic incentives jointly predict performance: a 40-year meta-analysis. Psychol Bull, 140(4), 980-1008. https://doi.org/10.1037/a0035661
Cramer-van der Welle, C. M., Peters, B. J. M., Schramel, F., Klungel, O. H., Groen, H. J. M., van de Garde, E. M. W., & Santeon NSCLC Study Group. (2018). Systematic evaluation of the efficacy-effectiveness gap of systemic treatments in metastatic nonsmall cell lung cancer. Eur Respir J, 52(6). https://doi.org/10.1183/13993003.01100-2018
Damschroder, L. J., Aron, D. C., Keith, R. E., Kirsh, S. R., Alexander, J. A., & Lowery, J. C. (2009). Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci, 4, 50. https://doi.org/10.1186/1748-5908-4-50
De Gagne, J. C., Park, H. K., Hall, K., Woodward, A., Yamane, S., & Kim, S. S. (2019). Microlearning in Health Professions Education: Scoping Review. JMIR Med Educ, 5(2), e13997. https://doi.org/10.2196/13997
Ding, J., Xiao, X., Biagi, S., & Varkey, T. (2022). Dyad learning versus individual learning under medical simulation conditions: a systematic review. MedEdPublish (2016), 12, 73. https://doi.org/10.12688/mep.19285.1
Dubrowski, R., Barwick, M., & Dubrowski, A. (2018). “I Wish I Knew This Before…”: An Implementation Science Primer and Model to Guide Implementation of Simulation Programs in Medical Education. In O. Safir, R. Sonnadara, P. Mironova, & R. Rambani (Eds.), Boot Camp Approach to Surgical Training (pp. 103-121). Springer International Publishing. https://doi.org/10.1007/978-3-319-90518-1_10
Eichler, H. G., Abadie, E., Breckenridge, A., Flamion, B., Gustafsson, L. L., Leufkens, H., Rowland, M., Schneider, C. K., & Bloechl-Daum, B. (2011). Bridging the efficacy-effectiveness gap: a regulator's perspective on addressing variability of drug response. Nat Rev Drug Discov, 10(7), 495-506. https://doi.org/10.1038/nrd3501
Jee, M., Murphy, E., Umana, E., O'Connor, P., Khamoudes, D., McNicholl, B., O'Donnell, J. J., James, B., & Irish Trainee Emergency Research Network. (2023). Exploring barriers and enablers to simulation-based training in emergency departments: an international qualitative study (BEST-ED Study). BMJ Open, 13(9), e073099. https://doi.org/10.1136/bmjopen-2023-073099
Kirkpatrick, D. (1998). Evaluating training programs: the four levels (2nd ed.). Berrett-Koehler Publishers.
Proctor, E., Silmere, H., Raghavan, R., Hovmand, P., Aarons, G., Bunger, A., Griffey, R., & Hensley, M. (2011). Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health, 38(2), 65-76. https://doi.org/10.1007/s10488-010-0319-7
Proctor, E. K., Bunger, A. C., Lengnick-Hall, R., Gerke, D. R., Martin, J. K., Phillips, R. J., & Swanson, J. C. (2023). Ten years of implementation outcomes research: a scoping review. Implement Sci, 18(1), 31. https://doi.org/10.1186/s13012-023-01286-z
Ries, E. (2011). The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses. Crown Currency.
Swallow, K. M., Zacks, J. M., & Abrams, R. A. (2009). Event boundaries in perception affect memory encoding and updating. J Exp Psychol Gen, 138(2), 236-257. https://doi.org/10.1037/a0015631
Wilson, B. E., Hanna, T. P., & Booth, C. M. (2024). Efficacy-effectiveness gaps in oncology: Looking beyond survival. Cancer, 130(3), 335-338. https://doi.org/10.1002/cncr.35075