Team-based learning (TBL) is a widely used instructional strategy in medical schools that involves advance preparation, readiness assurance (individual and group readiness assurance tests, IRATs/GRATs), and application of knowledge to problem-solving exercises termed Group Application (GApp) exercises. While graded IRATs and GRATs promote in-depth preparation, the foundation of TBL is the GApp exercise. Using a 22-item survey, the current study investigated how students’ preferences and perceptions are influenced by graded versus ungraded GApp exercises. The data indicated that the perceived effectiveness of GApp exercises does not depend on grade weight. IRAT grades, which reflect students’ advance preparation, also did not differ significantly when GApp exercises were ungraded. A majority of students preferred ungraded GApp exercises, reporting that they create a better learning environment with reduced stress and anxiety and improve group discussion. — Sangita Phadtare, Ph.D., Cooper Medical School of Rowan University
In this qualitative study using constructivist grounded theory analysis, investigators describe how faculty interpret narrative comments on residents’ in-training evaluations. De-identified narrative comments on 126 residents were distributed to 24 faculty, who were each asked to review 15-16 documents, sort them into four categories ranging from A for outstanding to D for unacceptable, and then rank-order the residents within each category. Semi-structured interviews were conducted to assess how faculty decided to categorize and rank-order residents. Results show that all faculty read between the lines to understand narrative comments. “Excellent” was often interpreted as “middle of the road,” and “good” was considered to actually mean “below level, needs work.” Faculty reported scanning for “red flags” to identify the most relevant cues among all the comments; examples of positive red flags included “chief resident material, future colleague, superstar,” and examples of negative red flags included “solid, good, improving, functioning at expected level.” While faculty also considered the consistency and specificity of comments when categorizing residents, it is interesting to note that faculty assume what is written is often not what is meant, and instinctively and actively search for hidden meaning. So why do we not just say what we mean? Our feelings about politeness may have something to do with it. The saying “if you can’t say something nice, don’t say nothing at all” (also known as Thumper’s rule, from the Disney movie Bambi) may impede our ability to describe and interpret trainee performance accurately. — Sarang Kim, MD
Medical Student’s Perspective: This article is valuable in exploring how faculty interpret narrative comments on resident evaluations. I was not surprised by the need for illustrative examples to make residents stand out, nor by many evaluations blending together. Scanning for positive and negative red flags in such letters seems a natural method of differentiation. I was surprised, however, to see generalized skepticism toward many commonly used descriptive words. It was also surprising that praise for personality traits was the most common type of commentary but also among the most disregarded. The most concerning finding was the differing opinions on the purpose of resident evaluations and the resulting variation in commentary content. Some believed the evaluations were for resident feedback, while others felt their purpose was for program directors to use in generating recommendation letters for fellowship or job applications. Since these goals can be misaligned, I believe the best standardized approach would be a separate forum for honest feedback documented during rotations, along with a summative commentary to highlight exemplary qualities or concerns. This way, the program director stays better abreast of resident development and can evaluate residents based on fuller commentary. — Victoria Behrend, MD Candidate 2015, Rutgers Robert Wood Johnson Medical School
Resident’s Perspective: As one progresses through medical training, emphasis shifts away from objective knowledge assessments and toward largely narrative-based assessments of clinical performance. Although these narrative-based assessments serve many purposes, one core objective is to provide residents constructive feedback to promote professional growth. Faculty physicians face a troubling dilemma when attempting to provide this feedback effectively. They must provide honest and constructive criticism about areas where individual residents can improve. However, they must do this in a way that preserves a working teacher-student relationship, protects the psyche of the resident, and does not compromise future fellowship or job opportunities for the resident. The “hidden code” referred to by the authors of this study allows faculty physicians to address many of these issues. Unfortunately, resident physicians have little experience deciphering the “hidden code.” Residents may believe they are meeting expectations and gain a false sense of security when in fact the faculty perceive them as struggling. This raises the question of whether the current use of narrative-based assessments, embedded with hidden codes, is ultimately doing residents, and possibly even their patients, a disservice. I would argue yes. — James Penn, MD (PGY 3), Rutgers Robert Wood Johnson Medical School, Internal Medicine Residency Program
Ginsburg S, Regehr G, Lingard L, Eva KW. Reading between the lines: faculty interpretations of narrative evaluation comments. Med Educ. 2015;49:296-306.
Yes, is the resounding answer from Dr. Guerrasio, who literally wrote the book on medical learner remediation. The authors describe their institution’s experience remediating learners identified as having deficiencies in clinical reasoning, most with co-existing deficits in other domains as well. Of 53 such learners, 96% passed a blinded reassessment and 91% continued successfully in their programs. The authors describe a time-intensive program in which learners create extensive tables for “all the main presentations for that specialty,” including illness scripts with semantic qualifiers, diagnostic testing, and treatment algorithms. The median number of faculty hours expended per learner was 18, with a range up to 100 hours, excluding time for “planning, assessment, or preparation.” — Laura Willett, MD
Baylor College of Medicine created a novel process to identify and evaluate professional behaviors of its undergraduate medical students and to implement policies to address breaches in professionalism. A committee of five faculty members experienced in ethics and/or behavioral sciences was formed to evaluate alleged professionalism incidents. An online, confidential reporting system alerts the committee to potential breaches. The committee then rates the alleged offense as mild, moderate, or major. A record is kept of mild acts and, unless a second report is received, no further action is taken. Major transgressions are reported directly to the Dean of Student Affairs. Breaches considered moderate result in a face-to-face meeting between the committee and the student, giving the student a chance to explain the alleged infraction. The committee also discusses action plans with the student to prevent recurrences. The meetings are intervention sessions using guided reflection, as opposed to disciplinary sessions. From 2008–2013, the committee received 79 reported concerns and conducted 20 student interviews for moderate offenses. Only one of these students incurred additional professionalism reports. This novel approach to professionalism breaches received positive feedback from both students and administration at Baylor. — Lee Ann Schein, Ph.D.
Gill AC, Nelson EA, Mian AI, Raphael JL, Rowley DR, McGuire AL. Responding to moderate breaches in professionalism: An intervention for medical students. Med Teach. 2015 Feb;37(2):136-9.
In the midst of much controversy about resident duty hour regulations, another study adds to the uncertainty about their impact on medical student education. In this study, investigators surveyed medical students and clerkship directors about the quality of teaching, evaluation, and patient care during the internal medicine clerkship or subinternship before and after the duty hour regulations took effect. Response rates were 48% to 64% for students and 82% for clerkship directors. Although students perceived few adverse effects of the 2011 duty hour regulations on their education, clerkship directors generally had negative perceptions. While it may be somewhat reassuring that students’ perceptions were not worse, the discrepancy between students’ and clerkship directors’ perceptions is interesting, and lends support to the notion that perceptions may not be useful in trying to measure the impact of duty hour regulations. — Sarang Kim, MD
Kogan J, Lapin J, Aagaard E, Boscardin C, Aiyer MK, Cayea D, Cifu A, Diemer G, Durning S, Elnicki M, Fazio SB, Khan AR, Lang VJ, Mintz M, Nixon LJ, Paauw D, Torre DM, Hauer KE. The effect of resident duty-hours restrictions on internal medicine clerkship experiences: surveys of medical students and clerkship directors. Teach Learn Med. 2015;27(1):37-50.
Interdisciplinary teamwork has been shown to lead to improved patient outcomes as well as greater job satisfaction for the healthcare workers involved. Consequently, there is increased interest in providing interprofessional education (IPE) for medical students. The University of Toronto initiated a “Transition to Clerkship” course for third-year medical students to prepare them for working with other healthcare professionals (HCPs) on the wards. The intent of this course is to improve interprofessional relationships and facilitate positive attitudinal changes toward HCPs. The students shadowed a variety of HCPs in the hospital and then provided feedback on their experience. A large majority of students responded that the experience improved their understanding of the roles and responsibilities of the professions they shadowed and left them better equipped to communicate with the various HCPs. The students felt that this experience was a valuable component of their education. — Lee Ann Schein, Ph.D.
Shafran DM, Richardson L, Bonta M. A novel interprofessional shadowing initiative for senior medical students. Med Teach. 2015 Jan;37(1):86-9.
Traditionally, lectures are the principal mode of delivering core content in an emergency medicine (EM) curriculum. The authors describe a variety of active learning approaches that can be incorporated into this curriculum to improve knowledge retention and create a deeper understanding of the material. Active learning approaches are proving to be very effective for teaching evidence-based medicine, communication skills, and self-directed learning, skills that are well aligned with the goals of EM residency programs. This is a very comprehensive compilation of active learning methods that can be used by medical educators. The authors have created structured, concise tables summarizing these approaches, which will allow educators to choose the most appropriate method based on their goals and available resources. The authors give a brief description (objectives, how to use, suggestions for modifications, examples, etc.) of each approach, such as pause procedures, the one-minute paper, the muddiest point, think-pair-share, case-based learning, concept maps, role-play, commitment activities, jigsaw, team-based learning, problem-based learning, and thinking hats. Although the authors describe these approaches for the emergency medicine curriculum, they can be used throughout all years of medical education. — Sangita Phadtare, Ph.D., Cooper Medical School of Rowan University
Wolff M, Wagner MJ, Poznanski S, Schiller J, Santen S. Not another boring lecture: engaging learners with active learning techniques. J Emerg Med. 2015 Jan;48(1):85-93.
Medical student diversity is an important aim, with socioeconomic diversity perhaps more difficult to measure and attain than other types of diversity. The authors suggest an easy-to-administer measure of parental education and occupation (EO) with 5 levels, ranging from EO-1 (parent has less than a bachelor’s degree) to EO-5 (parent has a doctoral/professional degree and an “executive, managerial, professional position”). There was a strong, graded correlation with other widely used measures of socioeconomic status, such as low family contribution to education. Given that 32.7% of EO-1 candidates had no other measures of disadvantaged status, and that census data show only around 30% of adults hold a bachelor’s degree, this reviewer felt there might be some utility in subdividing the EO-1 group into more and less disadvantaged subgroups. — Laura Willett, MD
Grbic D, Jones DJ, Case ST. The Role of Socioeconomic Status in Medical School Admissions: Validation of a Socioeconomic Indicator for Use in Medical School Admissions. Acad Med. 2015 Jan 27.
Replicating earlier experimental results, investigators found that examiners’ ratings of candidates in two very different high-stakes behavioral testing situations were affected by the performance of prior candidates. The first group of scores was from 2,272 takers of a UK 16-station OSCE required for any medical school graduate who has been out of medical school for more than 2 years and wishes to pursue subspecialty training. The second was the multiple mini-interview (MMI) scores of 3,016 applicants to the University of Alberta Medical School. Consistent negative correlations were found between the index score and those of preceding examinees. That is, a candidate was graded somewhat higher if he or she was preceded by lower-performing candidates, and vice versa. The effect accounted for 5% to 11% of total score variance. This appears to be a fairly robust effect. Now the difficulty will be in deciding how to correct or ameliorate it. — Laura Willett, MD
Yeates P, Moreau M, Eva K. Are examiners’ judgments in OSCE-style assessments influenced by contrast effects? Acad Med. 2015 Jan 27.
In this extensive single-center study, critical care attendings and fellows generated more than 10,000 assessments regarding the futility of care provided to their ICU patients. Attendings were much less likely than fellows (7% vs. 17%) to deliver an assessment that care was futile. Six months later, 61.5% of patients assessed by fellows as receiving futile care had died, as opposed to 84.6% of patients so assessed by attendings. Not only were attendings more accurate, they also took more time (4 days vs. 2 days of patient interaction) to come to this assessment. — Laura Willett, MD