Standardization of OSCE Observers with Rubrics and Multi-video

https://doi.org/10.22146/jpki.25104

Ide Pustaka Setiawan(1*), Noviarina Kurniawati(2), Rr. Siti Rokhmah Projosasmito(3)

(1) Fakultas Kedokteran Universitas Gadjah Mada, Yogyakarta
(2) Fakultas Kedokteran Universitas Gadjah Mada, Yogyakarta
(3) Fakultas Kedokteran Universitas Gadjah Mada, Yogyakarta
(*) Corresponding Author

Abstract


Background: One of the factors influencing the validity and reliability of an assessment is the standardization of the observers who rate students' performance. A previous study by Setiawan (2011) found differences in the assessment standards used by general practitioners and specialist doctors when assessing students in an OSCE.7 These differences may disadvantage students and therefore need to be addressed. Several training methods have been developed to overcome this problem. This study aims to assess whether a rubric and multi-video can be used as a means of standardizing OSCE observers.

Method: This was an experimental action research study. The instruments used were a checklist, a rubric, and video recordings of students performing an OSCE (n=5), hereafter called the multi-video. The subjects were the OSCE observers at the Integrated Patient Management (IPM) station, who were divided into a control and a treatment group. The subjects assessed students' performance from the multi-video in two data collection sessions. In the first session, both the control and the treatment group used the checklist to assess the multi-video. In the second session, the control group proceeded as in the first session, while the treatment group used both the checklist and the rubric. The results of the two groups were compared using an independent-sample t-test.
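The abstract reports an independent-sample t-test comparing the scores given by the two observer groups. A minimal sketch of that comparison, using only the Python standard library; the scores below are hypothetical, purely to illustrate the test, and do not reproduce the study's actual data:

```python
import math
from statistics import mean, variance

def independent_t(a, b):
    """Pooled-variance independent-sample t statistic (equal variances assumed)."""
    na, nb = len(a), len(b)
    # Pooled sample variance across the two observer groups
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical checklist scores (0-100) that GP and SP observers might
# give the same five recorded performances; illustrative numbers only.
gp_scores = [72, 68, 75, 70, 66]
sp_scores = [61, 58, 65, 60, 55]

t = independent_t(gp_scores, sp_scores)
print(round(t, 2))  # here t exceeds the two-tailed 5% critical value (2.31, df = 8)
```

A |t| above the critical value for the relevant degrees of freedom corresponds to p<0.05, i.e. a significant difference between the two groups' assessment standards.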

Results: A total of 33 observers, consisting of 23 general practitioners (GPs) and 10 specialist doctors (SPs), participated in the first data collection session; 28 observers, consisting of 20 GPs and 8 SPs, participated in the second. The first session, which used only the checklist, showed a significant difference in the assessment standards of GPs and SPs (p<0.05), whereas the second session, in which the treatment group used the rubric as an additional instrument, showed no significant difference between GPs and SPs (p>0.05).

Conclusion: A rubric and multi-video can be used as a means of standardizing OSCE observers in assessing students' performance.



Keywords


Standardization, observer, OSCE, assessment



References

  1. Dent JA, Harden RM. New horizons in medical education. In: Dent JA, Harden RM, editors. A practical guide for medical teachers. 2nd edition. Dundee: Elsevier; 2005.
  2. Suryadi E. Pendidikan di laboratorium keterampilan medik. Yogyakarta: Bagian Pendidikan Kedokteran Fakultas Kedokteran Universitas Gadjah Mada; 2008.
  3. Marks M, Humphrey-Murto S. Performance assessment. In: Dent JA, Harden RM, editors. A practical guide for medical teachers. 2nd edition. Dundee: Elsevier; 2005.
  4. McAleer S. Choosing assessment instruments. In: Dent JA, Harden RM, editors. A practical guide for medical teachers. 2nd edition. Dundee: Elsevier; 2005.
  5. Leinster S. The undergraduate curriculum. In: Dent JA, Harden RM, editors. A practical guide for medical teachers. 2nd edition. Dundee: Elsevier; 2005.
  6. Wass V, Van der Vleuten C, Shatzer J, Jones R. Assessment of clinical competence. Medical Education Quartet-The Lancet. 2001;357:945-9.
  7. Setiawan IP. Discrepancy of OSCE’s assessors in assessing medical students’ clinical competence. Paper of The 18th WONCA Asia Pacific Regional Conference; 2011; Cebu. Cebu: The World Organization of National Colleges, Academies and Academic Associations of General Practitioners/ Family Physicians; 2011.
  8. Djarwoto B. Laporan evaluasi pelatihan observer OSCE. Yogyakarta: Fakultas Kedokteran Universitas Gadjah Mada; 2011.
  9. Setiawan IP. Instrument for evaluating clinical skills laboratory teacher's didactical performance [thesis]. Maastricht (Netherlands): Maastricht University; 2011.
  10. Crosson A, Boston M, Levison A, Matsumura LC, Resnick L, Wolf MK, Junker B. Beyond summative evaluation: the instructional quality assessment as a professional development tool. Department of Education University California; 2004.
  11. Leonhardt. Using rubric as an assessment tool in your classroom. Texas: San Antonio; 2005.
  12. Adamo G, Dent JA. Teaching in the clinical skills centre. In: Dent JA, Harden RM, editors. A practical guide for medical teachers. Dundee: Elsevier; 2005.
  13. Smith SR. Outcome-based curriculum. In: Dent JA, Harden RM, editors. A practical guide for medical teachers. Dundee: Elsevier; 2005.







Copyright (c) 2017 Ide Pustaka Setiawan, Noviarina Kurniawati, Rr. Siti Rokhmah Projosasmito

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
