Training and user procedures

During a PhD track, ePortfolio usage procedures and training for all stakeholders will be developed and validated.

This work package focuses on developing online usage procedures and training to enable ePortfolio users to perform their learning, assessment, and/or supervision role(s) in the workplace. The aim is to align and test generic procedures and agreements around ePortfolio design, involving various stakeholders (students, teachers, workplace supervisors and educational institutions).

Finished studies

  • Training to Support ePortfolio Users During Clinical Placements: A Scoping Review

Aim: Training is described in the literature as one of the critical factors for successful ePortfolio implementation. Yet, there is still much uncertainty about the design and outcomes of such ePortfolio training. Therefore, we conducted a scoping review to bring together research describing the design and outcomes of ePortfolio training.

Results: The results of this scoping review were published in Medical Science Educator. You can read the full article here:

  • Document analysis ePortfolio processes

Aim: ePortfolios are used in many different ways. Even within the healthcare programmes of a region as small as Flanders, colleges and universities deploy ePortfolios differently. The aim of this analysis is to map and compare ePortfolio processes across different programmes.

Results: Using a document analysis, the ePortfolio processes of different programmes were mapped and compared. The result is a research report. Read the executive summary of the research report here:

  • The path to a validated competency framework with an online Delphi survey

Aim: The researchers of work packages 4, 5 and 6 each conducted a Delphi study to validate the CanMEDS framework in their own context (specialist medicine, general medicine and general health professions). In doing so, they gained considerable experience in validating a competency framework, and these insights could also be of interest to other researchers and professionals. Therefore, as part of work package 2, an overarching study was conducted in which the researchers were interviewed about their experiences with running an online Delphi survey.

Results: These experiences were compiled in a research report that describes the advantages and challenges of this methodology. In addition, tips are provided on how to deal with these challenges. All insights were also visualised in an infographic. This can be an initial tool for others to determine whether an online Delphi survey is a suitable method for their development process. You can download it here:

Would you like to read the research report? Please send an email to

  • Automating the Identification of Feedback Quality Criteria and the CanMEDS Roles in Written Feedback Comments Using Natural Language Processing

Aim: Sofie investigated whether a large language model can be trained to identify quality criteria for feedback and the CanMEDS roles in feedback fragments. The results showed that such a model can perform this task relatively well. The trained model formed the basis for the feedback tool developed within work package 2.

Results: The findings were published in Perspectives on Medical Education. By clicking the button below, you can read the full article.

Ongoing studies

Based on the focus groups we conducted at the beginning of the project and on the existing literature, we found that providing quality written feedback in an ePortfolio is a challenge for many mentors. Moreover, students report that they find the feedback they receive vague and generic. Therefore, within this work package, we chose to focus the training we develop on feedback.

To train mentors in providing quality written feedback, we departed from the traditional approach of feedback training sessions or workshops and instead opted for a just-in-time approach integrated into the ePortfolio itself.

Within this work package, a tool was developed that uses artificial intelligence to support mentors and supervisors in writing feedback. In the ePortfolio itself, the tool is visible as a button that generates a report on the quality of the feedback written by the mentor. With a single press of the button, mentors can evaluate their feedback and receive tailored tips based on what they have written. In addition, the tool provides insight into which competencies are addressed in the feedback. This allows mentors to improve their feedback even before sending it to their students!
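To make the idea of such a feedback report concrete, here is a minimal, purely illustrative sketch in Python. The real Scaffold tool relies on a trained language model; in this sketch, simple keyword rules stand in for the model so the shape of the report can be shown. All criteria names, keyword lists, and role mappings below are assumptions for illustration only, not the project's actual criteria or implementation.

```python
# Hypothetical sketch of a just-in-time feedback report. Simple keyword
# rules stand in for the trained language model used by the real tool.
# All marker phrases and role keywords are illustrative assumptions.

# Illustrative quality criteria: phrases suggesting the feedback is
# specific (grounded in an observation) and actionable (gives a tip).
SPECIFIC_MARKERS = ("for example", "when you", "i noticed", "during")
ACTION_MARKERS = ("try to", "next time", "you could", "i suggest")

# Illustrative keyword map for a few CanMEDS roles.
CANMEDS_KEYWORDS = {
    "Communicator": ("explained", "listened", "communication"),
    "Collaborator": ("team", "colleague", "collaborated"),
    "Professional": ("punctual", "respectful", "responsibility"),
}

def feedback_report(comment: str) -> dict:
    """Return a small quality report for one written feedback comment."""
    text = comment.lower()
    return {
        "specific": any(m in text for m in SPECIFIC_MARKERS),
        "actionable": any(m in text for m in ACTION_MARKERS),
        "roles": sorted(
            role for role, keywords in CANMEDS_KEYWORDS.items()
            if any(k in text for k in keywords)
        ),
    }

if __name__ == "__main__":
    comment = ("I noticed you explained the procedure clearly to the "
               "patient. Next time, try to involve the team earlier.")
    print(feedback_report(comment))
```

In the actual tool, a trained model replaces the keyword rules, but the interaction is the same: the mentor's draft goes in, and a report on quality criteria and addressed competencies comes back before the feedback is sent.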


Conference contributions

Care4 conference, February 2022

Sofie presented her research on the quality of written feedback in ePortfolios. Read the abstract here:

AERA (American Educational Research Association) conference, April 2022

Sofie, Clara, Oona, Marieke and Mieke presented the Scaffold project at the annual AERA conference. Our researchers explained various aspects of the Scaffold project. Sofie provided the introduction and presented her review on training for ePortfolio users. You can read the abstract below:

EARLI SIG1&4 Joint Conference, June 2022

Sofie presented her study on the language of narrative comments in ePortfolios at the EARLI SIG 1&4 conference using a poster presentation. You can read the abstract below.

JURE conference, July 2022

Sofie presented a poster and contributed to a symposium on innovative research methods at the JURE conference. You can read the abstracts below:

AMEE conference, August 2022

Sofie presented her study on the quality of written feedback in ePortfolios. You can read the abstract here:

NVMO, May 2023

In a roundtable session, we presented the Scaffold project and discussed the challenges of collaborating across different disciplines and the future of ePortfolios. You can read the abstract below: