Admissions: Maybe, according to these results. All 140 admissions committee members took a test for white-black implicit bias and then completed a survey about their experiences. The committee then discussed the aggregate results together with an implicit bias expert. Results were reported by gender and by faculty vs. student status, but not by under-represented minority (URM) status. No group of committee members reported explicit bias, but all groups (male and female, students and faculty) displayed moderate to substantial implicit bias on the test. Sixty-seven percent felt that the exercise was worthwhile. In the next admissions cycle, there were no statistically significant changes in the percentage of URM applicants interviewed (20% to 19%), the percentage of URM interviewees offered acceptance (32% to 29%), or the percentage of accepted URM students matriculating (43% to 54%). — Laura Willett, MD
Capers Q 4th, Clinchot D, McDougle L, Greenwald AG. Implicit Racial Bias in Medical School Admissions. Acad Med. 2016 Sep
Link To Article
Clinical and Translational Science: For the clinical educator, more questions than answers are raised by these survey data from mostly high-profile US institutions with Clinical and Translational Science Awards (CTSA). CTSA Consortium institutions were surveyed regarding the biostatistics, epidemiology, and research design (BERD) programs that support their clinical research teams. There was wide variability among the programs, but some trends were apparent. The median number of BERD FTEs was about 3.0, with a dominant representation of biostatisticians. The BERD units did a large amount of consulting with both junior and senior investigators, and contributed to multiple grant applications and manuscripts. “It is unknown how many of these consultations were for students, residents, or fellows; our experience is that access to CTSA-supported consulting by these groups varies across the CTSA consortium.” For learners expected to produce high-quality scholarly activity, access to BERD specialists is needed, and often not attainable without significant cost. — Laura Willett, MD
Rahbar MH, Dickerson AS, Ahn C, Carter RE, Hessabi M, Lindsell CJ, Nietert PJ, Oster RA, Pollock BH, Welty LJ. Characteristics of Biostatistics, Epidemiology, and Research Design Programs in Institutions With Clinical and Translational Science Awards. Acad Med. 2016 Aug
Link To Article
Milestones: Residency program directors will be interested in this description of end-of-year milestones data from 2,030 categorical pediatric residents in forty-seven representative 3-year categorical programs. Milestones are generally rated on a 1-5 scale for 21 different subcompetencies. In pediatrics, milestone ratings are anchored by behavior descriptions. (In some other specialties, ratings are explicitly benchmarked, with level 4 described as “ready for unsupervised practice.”) As anticipated, median subcompetency milestone ratings increased with year of training, from 2.5-3.0 for first-year residents to 4.0 (for all subcompetencies except quality improvement) for third-year residents. The gap in milestone ratings between the lowest 10% and highest 10% of residents also narrowed substantially with year of training. Overall, 21% of graduating residents achieved a score of at least 4 in all subcompetencies, and 79% achieved a score of at least 3 in all subcompetencies. — Laura Willett, MD
Li ST, Tancredi DJ, Schwartz A, Guillot AP, Burke AE, Trimm RF, Guralnick S, Mahan JD, Gifford KA; Association of Pediatric Program Directors (APPD) Longitudinal Educational Assessment Research Network (LEARN) Validity of Resident Self-Assessment Group. Competent for Unsupervised Practice: Use of Pediatric Residency Training Milestones to Assess Readiness. Acad Med. 2016 July
Link To Article
Primary Care: Education experts at Harvard Medical School, along with several medical students, propose a primary-care curriculum to coordinate activities across departments and develop a greater understanding of primary care among all students. Eleven major themes emerged in their discussions: longitudinal patient experiences across all 4 years; comfort with uncertainty and undifferentiated illness presentations; care management across organizations; communication skills; common acute care syndromes; common chronic illness issues; prevention; mental illness, substance use, and violence; quality improvement; interprofessional training; and population health issues. These are worthy and difficult-to-achieve goals. Curricular change alone is unlikely to increase the supply of U.S. primary-care physicians unless the compensation gap with subspecialists is narrowed. — Laura Willett, MD
Fazio SB, Demasi M, Farren E, Frankl S, Gottlieb B, Hoy J, Johnson A, Kasper J, Lee P, McCarthy C, Miller K, Morris J, O’Hare K, Rosales R, Simmons L, Smith B, Treadway K, Goodell K, Ogur B. Blueprint for an Undergraduate Primary Care Curriculum. Acad Med. 2016 Jul 12.
Link To Article
Workload: Was and colleagues, at Stanford Children’s Hospital, used a research database to determine the number of notes each intern wrote and the number of orders each placed for each of 6 core rotations. The authors proposed that comparing the number of notes written and orders entered by one intern to the average of their peers for each rotation would serve as a marker of workload intensity. Not surprisingly, they found statistically significant differences in the number of notes and orders across rotations (such as a general floor versus a subspecialty rotation). They did not find any correlation between an intern’s perceived workload (their “cloud”) and the actual intensity of work as measured by the number of orders and notes. Of interest in pediatrics, there was little seasonal variation in workload intensity between fall/winter and spring/summer.
This estimate of workload intensity does not capture the quality or complexity of the work or the time required to complete it, limiting its applicability to an individual intern. It may, however, offer program directors an objective method of describing how hard or busy one rotation is compared to another. In regard to an intern’s “cloud,” the more interesting comparison would have been between colleagues’ perceptions of an intern’s workload and that intern’s measured workload intensity. — Michael J. Kelly, MD
Was A, Blankenburg R, Park KT. Pediatric Resident Workload Intensity and Variability. Pediatrics. 2016 Jul;138(1).
Link To Article
Gender Bias: In 1995, a representative sample of medical school faculty showed that, adjusted for confounders, female physician faculty made about $12,000 less than their male counterparts, a difference that increased with longer employment. A follow-up survey 17 years after the first had a 48% response rate that did not differ by gender. The results are depressingly familiar. Women are compensated about $17,000 per year less than men, after adjustment for traditional factors such as academic rank, specialty, and clinical vs. teaching or research focus. Looking at compensation over time, this gap appeared to be due more to smaller starting salaries for women than to lower annual increases. A history of part-time employment or a leave of absence of more than 2 months was associated with almost $28,000 lower total compensation, even though the median leave was only 6 months and the median duration of part-time employment only 2.75 years. As a side issue, a focus on teaching was particularly injurious to compensation: for each 1% increase in time devoted to teaching, pay decreased by over $1,000 per year. Women also appeared to pay a higher social price in balancing the demands of work and home life. They were less likely than male faculty members to be married/partnered, to have children, or to attain full professor status.
In their discussion, the authors suggest that things are not getting better, citing recent data that female hospitalists “earn substantially less than their male colleagues, although working similar hours.” They end with a plea to academic institutions to address these inequities with robust policies and review. — Laura Willett, MD
Freund KM, Raj A, Kaplan SE, Terrin N, Breeze JL, Urech TH, Carr PL. Inequities in Academic Compensation by Gender: A Follow-up to the National Faculty Survey Cohort Study. Acad Med. 2016 Jun
Link To Article
Electronic Health Record: In this descriptive study of electronic health record (EHR) use by medicine interns at a US academic medical center, 32 interns were asked to write progress notes in a simulated version of an EHR (pre-populated with all the usual clinical data such as vital signs, laboratory results, prior discharge summaries, etc.). The progress notes were assessed for recognition of clinical issues as well as use of data importation tools (macros) and copy-paste. About half the notes had copy-paste elements (defined as reproduction of an entire section of the plan without any modification), 65% imported 3 days of data into a daily progress note, and 68% of notes failed to list active medications. Data not included in the laboratory macro (such as a TSH level or microbiology results) were more likely to be missed, with 55% failing to recognize that the patient’s organism was resistant to the prescribed antibiotic. Sadly, these findings may not be surprising to those who have looked at electronic progress notes recently — lengthy documents filled with information but devoid of meaning — but they do quantify and highlight some of the common errors that the EHR easily facilitates, or at least fails to reduce. — Sarang Kim, MD
Senior Resident Perspective: As electronic health records (EHRs) grow increasingly ubiquitous in our nation’s hospitals and outpatient offices, the question being asked with louder concern is: Have they become a disservice to residents and interns in training? A recent study of interns using a simulated EHR found that about half of notes had some copy-paste elements and 68% did not include the patient’s active medications. Without doubt, this practice is dangerous to patient care and damaging to resident education. Too often, however, such critiques come embedded with deeper assumptions: that EHRs themselves are to blame, and that such behavior is unique to the physicians of today. It would seem then that the glow of yesteryear casts dim shade on the truth, which is that poor documentation is nothing new. Surely we have not forgotten the spectacle of “chart-checking” rounds, with the entire team gathered squinting around a nearly illegible and clearly perfunctory hand-written note. And what of clinic charts? Too often they became bloated tomes collecting dust in a storeroom, years of information gathered on loose sheets, scribbled forms, even post-it notes. Today, with a few keystrokes, I can find a specific laboratory result run nearly a decade ago. I can easily cull recommendations from multiple specialists both on- and off-site, with remote access. The fact of the matter is that bad behavior is common now, just as it was common then. Change will come with educating residents on the dangers of copy-paste, most importantly to their patients, but also in a medicolegal setting. It will come with improving the EHR’s ability to recognize and prevent copy-pasting. And it will come with re-invigorating residents with a sense of pride in the importance of the routine work they do, including, yes, even that daily progress note. — Ahmed Khan, MD
March CA, Scholl G, Dversdal RK, Richards M, Wilson LM, Mohan V, Gold JA. Use of Electronic Health Record Simulation to Understand the Accuracy of Intern Progress Notes. J Grad Med Educ. 2016 May:237-239
Link To Article