Surveys: This is required reading for faculty planning to engage in survey research or to use a published survey. The authors analyzed publications in three major health education journals that described 37 separate self-administered surveys. The vast majority (94.6%) of surveys contained at least one violation of best practices in survey design. Even more concerning, the papers seldom reported a survey’s validity (35.1%) or reliability (21.6%). The authors point out that educators using a published survey should re-establish its validity and reliability in their own setting. — Laura Willett, MD
Artino AR Jr, Phillips AW, Utrankar A, Ta AQ, Durning SJ. “The Questions Shape the Answers”: Assessing the Quality of Published Survey Instruments in Health Professions Education Research. Acad Med. 2017 Oct.
Admissions: Five California public medical schools collaborated in this observational comparison of medical school admissions interview structure. Three of the schools utilized traditional interviews (TIs) of about 30 minutes each, and two utilized multiple mini-interviews (MMIs). For both types of interviews, older age and female gender were associated with higher interview scores, while ethnicity and MCAT score had no association with interview scores. Self-identified student financial disadvantage was associated with higher TI scores and lower MMI scores, both associations statistically significant. Interestingly, higher GPA had a negative association with MMI scores but no significant association with TI scores. Perhaps the traditional interviewers were more likely to be aware of the student’s disadvantaged status and academic rating, whereas the brevity and structure of the MMI would likely preclude this. Yet more data for admissions officers to figure into their models! — Laura Willett, MD
Henderson MC, Kelly CJ, Griffin E, Hall TR, Jerant A, Peterson EM, Rainwater JA, Sousa FJ, Wofsy D, Franks P. Medical School Applicant Characteristics Associated With Performance in Multiple Mini-Interviews Versus Traditional Interviews: A Multi-Institutional Study. Acad Med. 2017 Oct.
Feedback: In follow-up to an earlier scoping review on feedback, which concluded that we have mostly poor-quality data on the subject, the authors focused on 51 articles describing “the substance and setting of the feedback given” to health sciences learners, mostly medical students and residents. Five common themes were identified, all flagging major problems with feedback. First was leniency bias: “feedback providers were often reluctant to give any, or even minimally, constructive (i.e., negative) feedback.” Next, feedback was often of low quality, “limited in amount or too general and did not conform to published guidelines.” Faculty were often deficient in giving feedback, even after training. Multiple feedback tools were not optimally used; often “the tool simply was not used or… information on the completed feedback tool was missing.” Lastly, the gender of the feedback giver and recipient seemed to have an inordinate impact on the exchange. The authors make several suggestions for improving feedback, the most evidence-based of which is to use feedback tools with a track record of successful use in the literature. — Laura Willett, MD
Bing-You R, Varaklis K, Hayes V, Trowbridge R, Kemp H, McKelvy D. The Feedback Tango: An Integrative Review and Analysis of the Content of the Teacher-Learner Feedback Exchange. Acad Med. 2017 Oct.
Link To Article
Assessment: The Alliance for Academic Internal Medicine Undergraduate Medical Education (UME) Task Force proposes a model for competency-based assessment in IM clerkships, informed by results of a survey of clerkship directors who were asked to identify high-priority entrustable professional activities (EPAs) for students in the IM clerkship. Six EPAs were identified as important for assessment in the IM clerkship, related to obtaining an accurate history and physical, verbal and written communication, interpretation of diagnostic studies, and generation of differential diagnoses. Subdivisions under these EPAs were also suggested, with use of an entrustment/supervision scale anchored by “not allowed to practice,” “allowed under full supervision,” and “allowed with on-demand supervision.”
What the authors propose is a potentially useful model for competency-based assessment in the IM clerkship that would also align UME assessment with how GME assessment is currently done. However, their proposal is not quite ready for use in the current educational environment. A formidable challenge to implementing such an assessment is the larger paradigm shift required to replace clerkship grades with a dichotomous scale of “meets standards” or “does not meet standards.” How likely that is to happen in the next few years is hard to say, but the topic is being actively debated in the educational community. So stay tuned…
–Sarang Kim, MD
Fazio SB, Ledford CH, Aronowitz PB, Chheda SG, Choe JH, Call SA, Gitlin SD, Muntz M, Nixon LJ, Pereira AG, Ragsdale JW, Stewart EA, Hauer KE. Competency-Based Medical Education in the Internal Medicine Clerkship: A Report From the Alliance for Academic Internal Medicine Undergraduate Medical Education Task Force. Acad Med. 2017 Sep
Link to Article
USMLE: Does access to a commercial question bank improve Step 1 scores? Probably, and it is likely most helpful for students with lower-than-median MCAT scores, according to this non-randomized intervention study of pre-clinical medical students. All students in one class were provided free access to a commercial NBME-style question bank, and their access to questions was tracked. Higher utilization of test questions was associated with higher objective achievement, including USMLE Step 1 scores, for all students, but the gain was much larger among students with lower-than-average MCAT scores. For example, in going from 0 to nearly 1,500 questions accessed (the range seen in this group), students with the median MCAT score of 30 were projected to raise their Step 1 score by about 20 points. Students in the lowest MCAT group were projected to have a rise of about 40 points, while those in the highest group experienced a minimal rise. The authors suggest that this access may “benefit all students while conferring relatively greater benefit to students who may enter medical school with greater academic risk or less well-developed standardized test-taking skills.” Obvious confounders are student persistence and work ethic. It would be useful to find out exactly how the students are using these questions – e.g., mostly to “cram” for exams, or for paced re-exposure to previously-learned material. — Laura Willett, MD
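The pattern described above — larger Step 1 gains for lower-MCAT students at the same level of question-bank use — is what a negative MCAT-by-utilization interaction term in a linear model would produce. A minimal sketch follows; the coefficients and function names are invented for illustration and are not the study’s actual model or estimates:

```python
# Hypothetical linear model with an interaction term, for illustration only.
# Predicted Step 1 score = b0 + b1*mcat + b2*questions + b3*(mcat * questions)
# A negative b3 means each additional question helps lower-MCAT students more,
# matching the pattern the summary describes.
b0, b1, b2, b3 = 130.0, 3.0, 0.058, -0.0015

def projected_step1(mcat: float, questions: int) -> float:
    """Project a Step 1 score from MCAT score and question-bank questions used."""
    return b0 + b1 * mcat + b2 * questions + b3 * mcat * questions

# Projected gain from using ~1,500 questions vs. none, at two MCAT levels:
gain_low_mcat = projected_step1(22, 1500) - projected_step1(22, 0)
gain_med_mcat = projected_step1(30, 1500) - projected_step1(30, 0)
print(round(gain_low_mcat, 1), round(gain_med_mcat, 1))  # → 37.5 19.5
```

With these made-up coefficients, the lower-MCAT student gains roughly twice as many points from the same 1,500 questions, mirroring the roughly 40-point versus 20-point projections in the study.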
Baños JH, Pepin ME, Van Wagoner N. Class-Wide Access to a Commercial Step 1 Question Bank During Preclinical Organ-Based Modules: A Pilot Project. Acad Med. 2017 Aug 16.
Milestones: In this study involving 20% of pediatric residency training programs in the US, milestone evaluations of residents rated marginal or unsatisfactory (M/U) were compared to those of residents rated satisfactory. Overall, fewer than 2% of residents were rated less than satisfactory, with interns (first-year residents) and international medical graduates slightly more likely to be categorized in this way. On a 5-point milestone scale, M/U interns rated significantly lower in every subcompetency, with average ratings 0.60-0.97 below their “satisfactory” peers. Two subcompetencies discriminated very well between the groups of interns: organization/prioritization and transfers of care. With increasing resident seniority, the differences in average ratings between satisfactory and M/U residents varied much more across subcompetencies, with the largest differences found in professionalism, trustworthiness, and transfers of care. — Laura Willett, MD
Li ST, Tancredi DJ, Schwartz A, Guillot A, Burke A, Trimm RF, Guralnick S, Mahan JD, Gifford KA; Association of Pediatric Program Directors (APPD) Longitudinal Educational Assessment Research Network (LEARN) Validity of Resident Self-Assessment Group. Identifying Gaps in the Performance of Pediatric Trainees Who Receive Marginal/Unsatisfactory Ratings. Acad Med. 2017 Jun.
Link To Article
Publishing: This is required reading for faculty who want, or need, to publish medical education research articles. Researchers describe the pre-peer review editorial process at Academic Medicine, during which 65% of submitted manuscripts are rejected, either directly by the editor-in-chief or by associate editor review. Free-text comments from associate editors regarding 369 manuscripts selected for expedited rejection were analyzed qualitatively. Rejected articles averaged 3.11 reasons for rejection, with the reasons falling into 9 major themes. Themes represented in more than 30% of rejected manuscripts were: “ineffective study question or design” (92%); “suboptimal data collection process” (49%); “weak discussion and/or conclusions” (37%); “topic unimportant or irrelevant to the journal’s mission” (37%); and “weak data analysis and/or presentation of results” (33%). Many of these objections could likely be avoided by better planning during the study design phase. — Laura Willett, MD
Meyer HS, Durning SJ, Sklar D, Maggio LA. Making the First Cut: An Analysis of Academic Medicine Editors’ Reasons for Not Sending Manuscripts Out for External Peer Review. Acad Med. 2017 Aug.
Link To Article