A Personal Journey Towards Innovation
During my secondment in Australia, I grappled with how users would interact with an intricate training programme. The client's Head of Education candidly remarked that they had never encountered a genuinely competency-based curriculum because of its perceived complexity. That comment resonated profoundly, prompting me to consider what competency-based training, a methodology increasingly advocated by regulatory bodies such as the General Medical Council (GMC), genuinely requires. Later, during my time with the Royal College of Paediatrics and Child Health (RCPCH), I attended a meeting that revealed a gap between what an institution expects of its curriculum and how that curriculum can practically be delivered on the ground. This experience underscored a pervasive issue in medical education: the disconnect between the intended curriculum and how it is actually delivered and assessed in practice.
Challenges of Traditional Competency-Based Training Approaches
The conventional approach to competency-based medical education relies heavily on manual processes that are time-consuming and susceptible to human error. A typical work-based assessment scenario involves several crucial steps:
- Identifying the type of assessment being conducted.
- Mapping the assessment against the relevant competencies outlined in the curriculum.
- Ensuring the mapping is accurate by selecting the appropriate curriculum areas.
The assessor then evaluates the student based on their interaction and the mapped competencies. However, assessors may not have in-depth knowledge of the curriculum, yet they are expected to assess students based on the provided mapping information.
When these assessments are compiled, supervisors and reviewers must evaluate a student's progress based on them. Crucially, they must ensure that each assessment is correctly mapped to the training programme or curriculum. Currently, this mapping relies heavily on the student's own understanding of the curriculum, which can lead to inconsistencies and gaps.
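As a rough sketch, the record-keeping described above can be modelled as assessments carrying competency tags, with a simple check for curriculum areas left uncovered. All codes, names, and assessment types here are hypothetical, chosen only for illustration:

```python
from dataclasses import dataclass, field

# Hypothetical curriculum areas; real frameworks such as the GMC's GPCs
# define their own domains and learning outcomes.
CURRICULUM = {"COM": "Communication", "PRO": "Professionalism", "SAF": "Patient safety"}

@dataclass
class Assessment:
    assessment_type: str                               # e.g. "Case-Based Discussion"
    mapped_competencies: list = field(default_factory=list)

def mapping_gaps(assessments):
    """Return curriculum areas with no assessment mapped to them."""
    covered = {code for a in assessments for code in a.mapped_competencies}
    return sorted(set(CURRICULUM) - covered)

records = [
    Assessment("Case-Based Discussion", ["COM"]),
    Assessment("Mini-CEX", ["COM", "PRO"]),
]
print(mapping_gaps(records))  # -> ['SAF']
```

Even this toy model makes the core problem visible: if the student never tags an area, nothing in the manual process flags the gap until a reviewer happens to notice it.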
Supervisors, often overburdened, struggle to adequately map training activities to relevant competencies, leading to inconsistencies and gaps in trainee evaluations. Consequently, this traditional approach is prone to overlooking vital competencies, ultimately compromising the quality of medical education and patient care.
This manual, error-prone process highlights the need for a more streamlined and consistent approach to competency mapping and assessment in medical training programs. Leveraging advanced technologies like AI can automate and enhance this process, ensuring accurate mapping while reducing the administrative burden on educators.
Exacerbating these issues is the challenge of timely response and assessment complexity. Response time is an inherent problem: a gap of five days or more makes it very difficult for an assessor to accurately recall the details of an observed interaction or event, and anything beyond a fifteen-day window becomes virtually impossible to assess reliably.
Supervisors, often overburdened, must digest a comprehensively written curriculum, which can span several pages and contain more than 30 learning outcomes, and use student-generated, manually mapped evidence to decide how well each student is performing against it. They had no hand in writing this document, yet they are expected not only to understand it accurately from an academic point of view but also to deliver it on the ground to students.
Harnessing AI: Transforming Competency Assessments
AI possesses the power to revolutionise the way we conduct assessments. While we are merely scratching the surface of AI’s applications, the focus in education has primarily been on utilising AI to assist with writing tasks, generating questions, or addressing concerns about academic integrity. However, the emphasis should shift towards leveraging AI to augment the assessment process rather than replacing human educators.
Consider a typical work-based assessment in medical education. Traditionally, students or administrators map these assessments to the curriculum, often relying on their in-depth knowledge of the framework. However, this approach is susceptible to inconsistencies and gaps in understanding.
For example, the GMC's Generic Professional Capabilities (GPCs) include a learning outcome on communication. When students tagged their own assessments, almost everything was linked to communication, because almost every clinical interaction involves an element of it. This was a consistent problem across the RCPCH trainee base: a large number of tags attached to a small amount of text or content. It gave the impression of substantial coverage, when in reality multiple curriculum areas could be "covered" by a single, minor case-based discussion that had been tagged incorrectly.
AI can be employed to map assessments to relevant curriculum areas without requiring specialised knowledge. By acting as a virtual “curriculum expert,” the AI can analyse assessment inputs and accurately align them with the appropriate competencies, outputting the information in a user-friendly format. This sophisticated solution leverages advanced language models to ensure precise mapping of educational activities to competency frameworks.
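One minimal way to sketch this idea, substituting a simple bag-of-words cosine similarity for a real language model, is to score an assessment's text against each competency description and keep only the best matches. The competency descriptions and the `top_k` cap below are invented for illustration, not drawn from any actual framework:

```python
import math
from collections import Counter

# Illustrative competency descriptions; a production system would use a
# language model or embedding service rather than bag-of-words matching.
COMPETENCIES = {
    "Communication": "communication listening explaining consent handover",
    "Patient safety": "safety risk error incident escalation",
    "Leadership": "leadership team delegation management",
}

def _vec(text):
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def map_assessment(text, top_k=1):
    """Rank competencies by similarity to the text; keep the top_k matches."""
    doc = _vec(text)
    scores = {name: _cosine(doc, _vec(desc)) for name, desc in COMPETENCIES.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(map_assessment("reported a near-miss incident and escalated the risk"))
# -> ['Patient safety']
```

Capping `top_k` also addresses the over-tagging problem described earlier: the student cannot attach every competency to a single short case discussion, because the mapper only keeps the strongest matches.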
Below is a graph showing the number of tags against certain areas of the curriculum, based on the AI's understanding of that curriculum. This gives a numeric account of coverage.
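The coverage counts behind such a graph amount to a simple tally of tags per curriculum area. The tag lists below are invented for illustration:

```python
from collections import Counter

# Hypothetical per-assessment tag lists produced by an AI mapping step.
tagged = [
    ["Communication"],
    ["Patient safety", "Communication"],
    ["Leadership"],
    ["Patient safety"],
]

# Tally how many times each curriculum area has been tagged.
coverage = Counter(tag for tags in tagged for tag in tags)
for area, count in coverage.most_common():
    print(f"{area}: {count}")
```

These counts are what a plotting library would then render as the bar chart of coverage.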
In this graph we have the same curriculum, but by applying a simple grading mapping we can assign a competency grade against the curriculum and its specific competencies. Consistency is now handled by the AI, and this could be taken further still by limiting the number of curriculum areas tagged per assessment. We have removed the need for the student to have a deep understanding of curriculum tagging and focused their assessments on specific areas of the curriculum, so that their supervisor can make a more informed judgement against the same curriculum.
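A grading mapping of this kind can be sketched by converting an ordinal grade scale to numbers and averaging per curriculum area. The three-point scale and the graded entries below are invented for illustration:

```python
# Hypothetical ordinal grade scale mapped to numbers for aggregation.
GRADE_SCALE = {"below expectations": 1, "meets expectations": 2, "above expectations": 3}

# (curriculum area, grade) pairs drawn from AI-mapped assessments.
graded = [
    ("Communication", "meets expectations"),
    ("Communication", "above expectations"),
    ("Patient safety", "below expectations"),
]

def competency_grades(entries):
    """Average the numeric grade for each curriculum area."""
    by_area = {}
    for area, grade in entries:
        by_area.setdefault(area, []).append(GRADE_SCALE[grade])
    return {area: sum(vals) / len(vals) for area, vals in by_area.items()}

print(competency_grades(graded))  # {'Communication': 2.5, 'Patient safety': 1.0}
```

A supervisor reading these per-area averages sees a competency profile rather than a raw assessment count.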
Feedback can then be drawn from this and given to the student based on their engagement with the curriculum and training programme, rather than on how many assessments they have completed or how much time they have spent in one post.
If we compare this with more traditional methods, such as time tracking in posts, assessment counts, and tagging against the curriculum to show coverage, we can instead use the curriculum and its competencies themselves to gauge a student's progress. We can use competencies to judge the student, and reflective feedback to understand their own thoughts about that progress.
Moreover, this creates an opportunity for standardisation: the AI reads these assessments at every point, a single consistent voice across regions and levels of expertise. One trainee's experience in a given region may differ markedly from another's, and one supervisor's expertise may differ from the next. The AI gives us an opportunity not only to standardise the mapping but, trained correctly, to facilitate more accurate written feedback as well.
Traditional methods, such as counting assessments or fixing how long someone must spend in a post, are often the product of managing a large student base, years of averaged assessment strategy, and simple logistics. Ensuring a student is competent in a specialty by giving them a six-month placement in it is a good way to ground them in that specialty, but it lacks individuality and flexibility.
Skill-SYNC AI: A Tailored Solution
Skill-SYNC AI is an AI-powered tool meticulously designed to address these inefficiencies by automating the mapping of educational activities to competencies. Leveraging advanced language models, Skill-SYNC AI analyses assessment inputs and aligns them accurately with curriculum frameworks, such as the GMC’s Generic Professional Capabilities (GPCs).
This automation streamlines administrative workloads and enhances the accuracy and effectiveness of competency assessments.
Below is an example capturing simple metrics and average grades at an early stage of training, measured against a more professional standard. We can see how much students are engaging, the tags against their framework, their assessments, and an overall average grade.
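Metrics of this kind reduce to a small per-student summary: an engagement count, a tag count, and an overall average grade. The assessment records below are hypothetical, shaped only to mirror the description above:

```python
# Hypothetical per-student assessment records with AI-assigned tags and
# numeric grades (e.g. on a 1-3 scale).
assessments = [
    {"type": "Mini-CEX", "tags": ["Communication"], "grade": 3},
    {"type": "Case-Based Discussion", "tags": ["Patient safety", "Communication"], "grade": 2},
]

# Roll the records up into simple engagement and performance metrics.
summary = {
    "assessment_count": len(assessments),
    "tag_count": sum(len(a["tags"]) for a in assessments),
    "average_grade": sum(a["grade"] for a in assessments) / len(assessments),
}
print(summary)  # {'assessment_count': 2, 'tag_count': 3, 'average_grade': 2.5}
```

It is this summary, rather than the raw assessments, that would feed the graphical breakdowns and the AI-generated feedback described next.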
This is then broken down into graphical formats to show coverage across the curriculum:
We can use this information to prompt the AI to give focused feedback to the end user, acting as a stand-in personal tutor or educational supervisor in reporting how well a student is doing. This can form the basis of feedback from individuals in these posts, again saving time in a very busy working environment.
Benefits and Call to Action
By quantifying competencies effectively and providing clear, actionable feedback based on robust data, Skill-SYNC AI allows educators to focus on qualitative teaching and mentoring, confident that administrative elements are handled efficiently by AI.
We are at a pivotal moment in educational technology, where AI can significantly elevate the standards of competency-based training. Skill-SYNC AI is seeking collaborators and early adopters from educational institutions, especially within the medical sector, to join in refining and expanding this innovative solution. Being an early adopter offers the opportunity to influence product development and gain early access to cutting-edge educational technology.
Conclusion: A Call for Collaboration
The journey to improve competency-based medical education is ongoing, and with AI, we have the tools to make substantial advancements. By reducing the administrative burden, increasing the precision of competency mapping, and contributing to the development of a more skilled healthcare workforce, Skill-SYNC AI promises to transform medical education and, ultimately, elevate patient care. Join us in this exciting endeavour to shape the future of medical training, ensuring that it is as effective, efficient, and patient-centred as possible.