This process helps institutions prioritize the regular collection of data that support their commitment to providing culturally competent care. Evaluating the effectiveness of a training against established benchmarks or goals can show organizational leadership that trainings are helpful. This kind of commitment usually requires an organizational champion to come forward and make it happen. A champion should solicit buy-in from the facility to commit the resources needed to evaluate practice changes after training.
What to measure
A widely recognized approach to evaluation of training and education programs is the “Kirkpatrick Model,” developed by Dr. Donald Kirkpatrick in the 1950s. The model consists of four levels of evaluation:
- The reactions of participants in the training (for instance, their expressed degree of satisfaction)
- The degree of learning that participants experienced (for instance, whether their knowledge, attitudes or skills improved)
- Changes in the behavior of participants after the training (for instance, how their interactions with patients were affected by what they learned)
- What results the training had on the organization (for instance, whether and how the way patients were treated changed, and what differences the changes made on the patients)
Another way to conceptualize evaluations is to look at different measurements of training processes, outputs, and outcomes. Potential measurements include the following:
| Type of Evaluation | What Is Measured |
| --- | --- |
| Process measures | Number registered for the training; number completing the training; learner professional roles and demographics |
| Output measures | Learner satisfaction with training content; learner assessment of trainers; learner feedback for improvements |
| Outcome measures | Knowledge change; learner confidence in their skills; intention to change; actual behavior change |
Multiple methods of obtaining information on training results. We suggest collecting quantitative and qualitative feedback from learners, using multiple methods. Tests administered at the beginning and at the conclusion of a training (generally referred to as “pre- and post-tests”) are useful for measuring immediate changes in attitudes, knowledge, and awareness resulting from the training. Learners should always have the option to submit evaluations anonymously but should be invited to provide their contact information if they want to make themselves available for follow-up. Any post-test should ask learners to answer questions focused on their knowledge, their attitudes, their assessment of their own skills, and their behavioral intentions. They should also be given the opportunity to offer general comments and recommendations for improvements.
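The pre- and post-test comparison described above can be sketched in code. This is a minimal illustration, not a prescribed method; the function name, learner IDs, and scores are hypothetical, and it simply pairs each learner's two scores and reports the per-learner and average change.

```python
# A minimal sketch of pre-/post-test score comparison.
# All names and data below are hypothetical illustrations.

def score_changes(pre_scores, post_scores):
    """Return per-learner score change and the mean change.

    pre_scores, post_scores: dicts mapping an anonymous learner ID
    to a test score (e.g., percent correct on a knowledge test).
    Only learners who took both tests are compared.
    """
    changes = {lid: post_scores[lid] - pre_scores[lid]
               for lid in pre_scores if lid in post_scores}
    mean_change = sum(changes.values()) / len(changes) if changes else 0.0
    return changes, mean_change

# Example: four learners registered; one skipped the post-test.
pre = {"L01": 60, "L02": 70, "L03": 55, "L04": 80}
post = {"L01": 85, "L02": 75, "L03": 70}

changes, mean_change = score_changes(pre, post)
print(changes)      # per-learner change, e.g. {'L01': 25, ...}
print(mean_change)  # average improvement across paired learners
```

Keeping learner IDs anonymous, as in this sketch, preserves the option for anonymous evaluation while still allowing paired comparison.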
Generally, shorter evaluations get more responses from learners and make good use of limited training time. When possible, however, follow-up measurements – for instance, 30, 60, or 90 days after the training – can help assess how well the training changed knowledge, skills, and behavioral intentions over time. Most evaluations of skills-building trainings ask learners about their confidence in using their newly acquired skills and their intention to use them. As an option, evaluations can also collect learners' assessments of each trainer's proficiency.
Evaluation tools. Some standardized tools are available to evaluate cultural competence training and could be used or adapted for LGBTQIA cultural competence trainings. For example, the Association of American Medical Colleges' Tool for Assessing Cultural Competency Training (TACCT) can be used to evaluate medical education and training programs (including continuing medical education) focused on cultural competence. Another tool for assessing clinical skills is the LGBTQ Development of Clinical Skills Scale. The Association of American Medical Colleges' MedEdPORTAL has a number of useful training resources, each accompanied by evaluation tools.
In addition to collecting learner demographic and professional information, administering short pre- and post-tests, and offering opportunities for follow-up evaluations at a later date, trainers can invite learners to submit anonymous questions during the training and to stay afterward to ask questions or offer observations one-on-one.
Conducting evaluations in a culturally competent way also requires considering accessibility and acceptability of methods. Trainers should ensure that every learner has the necessary accommodations to fully participate in the trainings and evaluations.
A note on the goal of evaluations. Cultural competence training is often expected to have a direct, causal impact on patient health outcomes. While the outcomes of some trainings may be relatively straightforward to measure – for instance, reductions in hospital-acquired infections, patient complaints, and malpractice suits – professional oversight boards do not generally require evidence of direct causal benefits before supporting trainings that inform providers of advances in their field or increase their diagnostic skills. Accordingly, it is most appropriate to focus on measures of improved knowledge, attitudes, and skills among learners. Cultural competence training programs can, however, encourage learners to create action plans oriented toward systems change; measuring that change and assessing patient satisfaction before and after it takes place is one way to assess the impact of cultural competence training on patient experience.