Promoting professionalism in undergraduate medical education is an important goal of medical schools, yet burnout and decreased empathy have been observed in third-year students. Drexel University College of Medicine instituted a Professional Formation Curriculum to combat this trend. This paper investigates a new third-year course in this curriculum consisting of peer-supported small groups with specially trained faculty facilitators utilizing Google+ Hangout social networking technology. The year-long groups promote private peer support in which students post meaningful experiences or self-reflections centered on professional behavior. Via Blackboard, group participants discuss these narratives in a safe environment. Students’ empathy was assessed using the Jefferson Scale of Empathy, and their capacity for personal reflection was assessed using the Groningen Reflection Ability Scale. The results showed no decrease in empathy in these students and an increase in capacity for personal reflection at the conclusion of the course. –Lee Ann Schein, Ph.D.
Both readers and writers of medical education research should take note of Sullivan’s editorial in this month’s Journal of Graduate Medical Education, wherein she outlines common “spin” techniques used in reporting education research results and how to avoid them. We’re all aware of fishing expeditions in which investigators look at every possible association, and of the all-too-common practice of attributing a lack of difference between study groups to small sample size, but some may not have considered reporting Likert scale means to two decimal places as being similarly problematic. Many of the solutions she proposes are simple and straightforward, and generally involve careful planning with judicious use of a key ingredient often forgotten: common sense. — Sarang Kim, MD
Link To Article (not yet available in PubMed)
Gail M. Sullivan (2014) Is There a Role for Spin Doctors in Med Ed Research? Journal of Graduate Medical Education: September 2014, Vol. 6, No. 3, pp. 405-407.
Diagnostic mistakes in internal medicine are common, and this study probably underestimates them. Researchers in Toronto, many of them residents, tracked diagnoses made on medicine inpatients by the ER physician, the admitting resident, and the admitting attending. The “real” diagnosis was determined by chart review; the admitting attending’s diagnosis was accepted if no contrary information was found. Attendings were more accurate than residents (79% vs. 66%), but this may reflect the study methodology and the fact that more information may have accumulated between the resident’s and the attending’s evaluations. Attendings were strongly influenced by the resident’s diagnosis: when the resident was correct, attendings were correct in 96% of cases, but when the resident was incorrect, the attending made the correct diagnosis in only 44% of cases. — Laura Willett, MD
Michael D. Jain, George A. Tomlinson, Danica Lam, Jessica Liu, Deepti Damaraju, Allan S. Detsky, and Luke A. Devine (2014) Workplace-Based Assessment of Internal Medicine Resident Diagnostic Accuracy. Journal of Graduate Medical Education: September 2014, Vol. 6, No. 3, pp. 532-535.
In this lovely qualitative study, entering first-year medical students listened to 2-minute podcast clips of lectures previously given by well-rated (>4 on a 5-point Likert global scale) or poorly-rated (<3 on the same scale) faculty members. During the podcast clip, a photograph of an attractive or unattractive gender-matched person was projected onto a large screen. Students were then given 30 seconds to give a global rating on the same Likert scale and describe their impressions of the teacher. The descriptions were analyzed and broken down into two independent components: knowledge/intellect and “charisma”. “Charisma” included the sub-components caring, engaging, entertaining, confident, and organized.
Interestingly, student global ratings based on the 2-minute podcast clip were very highly correlated (r=0.78) to prior students’ end-of-course ratings. Knowledge/intellect scores did not influence the global rating (p=0.5), but both charisma and having an attractive picture projected had a high positive association (p<0.001). Average ratings over 4 were reserved for high-charisma presentations, while average ratings less than 3.5 were associated with low-charisma presentations with an unattractive picture on display. In addition to being humorous, this study brings into question our use of learner global evaluations for high-stakes decisions regarding faculty. — Laura Willett, MD
Rannelli L, Coderre S, Paget M, Woloschuk W, Wright B, McLaughlin K. How do medical students form impressions of the effectiveness of classroom teachers? Med Educ. 2014 Aug;48(8):831-7. doi: 10.1111/medu.12420.
Poor tolerance of ambiguity in physicians has been associated with higher healthcare costs from excessive test-ordering and with higher rates of burnout. In this AAMC survey of all 2013 matriculating allopathic medical students (74% participation rate, n=13,867), students were asked 7 Likert-style questions from a previously derived questionnaire assessing comfort with ambiguous situations. A sample question: “If I am uncertain about the responsibilities involved in a particular task, I get very anxious.” Predictors of a higher tolerance for ambiguity with at least a moderate effect size were higher age at matriculation (particularly age 26 and older) and lower scores on a widely used perceived stress scale. — Laura Willett, MD
Caulfield M, Andolsek K, Grbic D, Roskovensky L. Ambiguity Tolerance of Students Matriculating to U.S. Medical Schools. Acad Med. 2014 Sep 23.
Having students diagram basic science themes is a good way for them to integrate complex topics. First-year medical students used concept maps as part of their problem-based learning exercises. The authors investigated whether and how collaborative diagramming affects discussion and knowledge of basic science topics. Results from student questionnaires and facilitators’ focus groups showed that diagramming had a positive influence on interpersonal group interactions, on the focus and detail of the students’ discussions, and on the construction of coherent views of the topics. —Lee Ann Schein, Ph.D.
Bas De Leng, Hannie Gijlers. Collaborative diagramming during problem based learning in medical education: Do computerized diagrams support basic science knowledge construction? Medical Teacher: 1-7.
This study involved 11 internal medicine programs participating in the ACGME Educational Innovations Project Ambulatory Collaborative, representing both university and community-based programs utilizing a traditional weekly model, a block model with separate ambulatory rotations, or a combination of both. Patient satisfaction was found to be statistically significantly higher in the traditional weekly and block models compared to the combination model, though the absolute differences for most of the items were 0.1 or 0.2 points on a 1 to 6 Likert scale with overall very high means (e.g., 5.8 vs. 5.7 for the question on how often the doctor showed respect for what the patient had to say). The study also suggests patient satisfaction was correlated with better process outcomes, such as HbA1c <8 and LDL <100 for patients with diabetes. Limitations of this study include its non-randomized design with multiple potential confounding variables, including imbalances in patient- and provider-level differences among sites using different clinic models, as well as correlation without proof of causation in either the satisfaction findings or the improved process outcomes for diabetes care. While the study adds to a body of evidence on this topic that is often based on pre/post designs, it’s worth noting that most residency programs have adopted changes to the resident continuity clinic model to meet competing demands of residency training while improving resident satisfaction, rather than to improve patient satisfaction or outcomes. — Sarang Kim, MD
Link To Article (not yet available in PubMed)
Maureen D. Francis, Eric Warm, Katherine A. Julian, Michael Rosenblum, Kris Thomas, Sean Drake, Keri Lyn Gwisdalla, Michael Langan, Christopher Nabors, Anne Pereira, Amy Smith, David Sweet, Andrew Varney, and Mark L. Francis (2014) Determinants of Patient Satisfaction in Internal Medicine Resident Continuity Clinics: Findings of the Educational Innovations Project Ambulatory Collaborative. Journal of Graduate Medical Education: September 2014, Vol. 6, No. 3, pp. 470-477.
This study looks at the relationship between the performance of students in their last year of undergraduate medical school and in their first roles as junior doctors. Data (grade point average, last-year Emergency Medicine attachment exam, and 6th-year written examination) were collected from 200 students in their final (6th) year of their Australian medical education. In addition, results of each student’s performance in their initial clinical assignments post graduation were obtained. Workplace performance was measured by the Junior Doctor Assessment Tool, which evaluates newly graduated doctors’ clinical management skills, communication skills, and professional behavior throughout the first two postgraduate years. No single undergraduate assessment reliably predicted the performance of the students once they started clinical practice; taken in aggregate, however, these scores correlated with their achievement as measured by the Junior Doctor Assessment Tool. – Lee Ann Schein, Ph.D.
Carr SE, Celenza A, Puddey I, Lake F. Relationships between academic performance of medical students and their workplace performance as junior doctors. BMC Med Educ. 2014 Jul 30;14(1):157.
A comparative study assessed the communication skills of first- and second-year medical students, who were observed interacting with simulated patients in a clinical scenario. Their communication skills were assessed by the simulated patients, by communication skills faculty, and by healthcare faculty not trained in communication skills. Because of a curriculum change, one cohort (the second-year students) had no formal training in communication skills, while the first-year cohort had communication skills training as part of their medical school curriculum. As expected, all three groups of reviewers rated the communication skills of the first-year students, who had the formal training, higher than those of the second-year students. An interesting finding was that the scores given by the simulated patients did not differ significantly from those given by the communication skills faculty, whereas the scores given by the non-communication-skills faculty were significantly higher than those of the other two assessor groups for the same student interactions. Simulated patients, who worked with the students in these exercises, were more in sync with the faculty actively involved in teaching communication skills than with healthcare faculty who were not. This underscores the need to have appropriate assessors for student exercises. – Lee Ann Schein, Ph.D.
Liew, Siaw. Assessors for communication skills: SPs or healthcare professionals? Medical Teacher, Vol. 36, No. 7 (2014), pp. 626-631. ISSN: 1466-187X.
Educators at Rutgers Robert Wood Johnson Medical School used guided reviews of online e-journals as a tool to promote collaborative learning among first-year medical students. After reading medical journal articles posted online, students, either individually or in groups of four, answered online questions about the articles; correct answers required integrating the basic science knowledge they had received from class lectures. Posting answers individually or in groups encouraged peer teaching and collaboration. More than 90% of the students responded well both to factoid-based questions and to questions that required a higher level of reasoning, as judged by faculty reviewers. In addition, based on their evaluations, the students found the exercise useful as a tool for fostering critical analysis of medical journal articles as well as a good way to review the basic science material taught in class. – Lee Ann Schein, Ph.D.