Kathleen Carlson – Winner of the 2022 Jess Kraus Award

The Jess Kraus Award is given each year to the author of the best paper published in Injury Epidemiology, selected by the Editorial Board. Editor-in-Chief Professor Guohua Li chats with this year's recipient, Dr. Kathleen Carlson, about her award-winning study.

Editor-in-Chief, Guohua Li: Dr. Carlson, congratulations on winning the Jess Kraus Award in Injury Epidemiology! Would you please briefly introduce yourself and describe how you felt when you first heard the news?

Kathleen Carlson: Thank you so much! I'm an injury epidemiologist and health services researcher with the Oregon Health & Science University – Portland State University School of Public Health and the VA Portland Health Care System. I have studied occupational injury, children's injury, motor vehicle safety, traumatic brain injury, hearing injury, opioid overdose and other medication-related injuries, suicide prevention, and – relevant to the paper at hand – firearm injury. I am passionate about our field and feel a real kinship with all of you doing injury prevention research and practice. The news of this award was a big shock. It's an honor to be recognized by my esteemed colleagues and something I wasn't expecting at all. I share this honor with my team and partners at the VA and the Oregon Public Health Division who are also passionate and who made this analysis, and paper, come to fruition.

GL: Both you and Dr. Kraus trained at the University of Minnesota School of Public Health. What do you know about Dr. Kraus? Any interaction with him at all? 

KC: Dr. Jess Kraus's PhD predated mine by a few decades (1967 and 2006, respectively), but his work undoubtedly influenced my own training and subsequent research. Dr. Kraus spoke of being influenced by the founding "parents" of our field, including Susan Baker and William Haddon Jr. My amazing and beloved mentor and PhD advisor, Susan Gerberich, was also trained by Sue Baker and Bill Haddon, and Dr. Gerberich was sure to pass that foundational training on to her students, including me. I am always so moved by our field's (relatively young) history and the connections between all of our mentors and their histories.

GL: Your award-winning study is part of a large project about firearm-related injuries among veterans. Would you please tell us what the National Violent Death Reporting System (NVDRS) is and why this mortality surveillance system is important given the CDC’s well-established National Vital Statistics System? 

KC: The NVDRS is a phenomenal surveillance system that collects detailed data on the circumstances of fatal violent injuries (and all firearm injury deaths) across all 50 states, the District of Columbia, and Puerto Rico. The vital statistics system is fundamental to public health as well, but it cannot match the magnitude of information – and information that is specific to understanding violence – contained in the NVDRS. The NVDRS is an incredible success story, having grown in its 20 years from a handful of participating states to the surveillance system it is today. I have so much admiration and appreciation for those who conceived of the idea and for all who have played a part in bringing it to life.

GL: What motivated you to examine the accuracy of the behavioral health data in NVDRS? What did you find?

KC: I was at the American Public Health Association annual meeting a number of years ago, watching Dr. Alex Crosby present his work with the NVDRS in an Injury Control and Emergency Health Services (ICEHS) section oral session. Someone from the audience asked Dr. Crosby if we had any information about the validity of the mental health variables in the NVDRS, and he responded that such validation work was needed. My team had just finished linking our state's VDRS data to veterans' VA healthcare records for a completely different purpose (to examine health-related risk factors for veterans' fatal and nonfatal firearm injuries), and it occurred to me that our linked data could be used to help answer that question! So the analysis was inspired at that moment at an APHA session. What we found is that, among veteran decedents in the Oregon VDRS data, only half (or fewer!) of those who had been treated by the VA for behavioral or mental health disorders in the two years preceding their death were indicated in the VDRS data as having the respective condition. This suggested that, at least for veterans and at least in Oregon (but likely more widespread), our VDRS data are underreporting behavioral and mental health disorders.

GL: You used medical record data from the Department of Veterans Affairs (VA) healthcare system as the criterion standard. Could you briefly explain how the VA Healthcare System is operated and how good the criterion standard in your study is?

KC: The VA Healthcare System is the largest integrated healthcare system in the country. Researchers in the VA can access a centralized database of all VA healthcare across the country, going back decades. We required a veteran to have been diagnosed two or more times in the two years preceding their death to be considered as having the respective diagnosis. If a veteran met these criteria, then we have a good degree of certainty that they received care for the condition and, in turn, should have been reported as having this condition in their VDRS record. This is the metric we focused on most (i.e., the sensitivity of VDRS data to detect "true" cases of VA diagnoses). Many veterans use both VA and non-VA healthcare, so the absence of a VA diagnosis should not necessarily equate to the absence of a VDRS indicator for the respective condition; therefore, we focused less on specificity.
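To make the sensitivity framing concrete, here is a minimal sketch of how such a comparison could be computed from a linked, decedent-level table. It is only an illustration: the column names (va_depression_dx, vdrs_depression_flag) and the values are hypothetical and do not reflect the study's actual variables or data.

```python
# Purely illustrative sketch (not the study's actual code or variable names):
# estimate the sensitivity of a VDRS behavioral health indicator against a
# VA-diagnosis criterion standard, using a hypothetical linked table with
# one row per veteran decedent.

import pandas as pd

# Hypothetical linked records.
# va_depression_dx:     1 if the veteran had two or more VA depression
#                       diagnoses in the two years before death (criterion).
# vdrs_depression_flag: 1 if the VDRS record indicated a history of depression.
linked = pd.DataFrame({
    "va_depression_dx":     [1, 1, 1, 1, 0, 0, 1, 0],
    "vdrs_depression_flag": [1, 0, 1, 0, 0, 1, 0, 0],
})

# Sensitivity: among decedents meeting the VA criterion ("true" cases),
# the proportion whose VDRS record captured the condition.
true_cases = linked[linked["va_depression_dx"] == 1]
sensitivity = true_cases["vdrs_depression_flag"].mean()

print(f"Sensitivity of the VDRS depression indicator: {sensitivity:.2f}")
```

In this toy example the estimated sensitivity is 0.40, i.e., the VDRS indicator captures fewer than half of the criterion-standard cases, which mirrors the kind of underreporting the study quantifies.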

GL: Your study indicates that the NVDRS may miss more than half of the behavioral health diagnoses such as depression, PTSD, and anxiety. What is the implication of your finding for other researchers using the NVDRS data? 

KC: Many of us report the proportions of decedents in the NVDRS who had these diagnoses, and I think it's important for us all to consider the potential misclassification when we do so. Our study helps quantify just how much misclassification there may be, at least when it comes to these variables. The NVDRS is a powerful surveillance system but, just like all data systems, it has its limitations. In this case, NVDRS abstractors rely on the information they have access to in medical examiner or coroner death investigations, law enforcement reports, and toxicology reports, when available. Medical records are not necessarily accessed by these entities if they are not needed for their reports; therefore, through no fault of the NVDRS itself, we simply have a strong potential for actual diagnoses to be missed.

GL: The NVDRS has become a popular resource for injury and violence research in recent years. Do you have any recommendations to the CDC, state health departments and other stakeholders for improving the quality of this data system?

KC: So much has been learned, and continues to be learned, through the systematic collection and linking of data across states and territories for the NVDRS – the knowledge gained from this system has saved many lives. Is there additional information that I wish could be included (like healthcare records!) in the data sources that NVDRS abstractors get to use? Yes! Would there ever be an end to my wishlist of additional data? Probably not! Thus, many researchers (and NVDRS leadership as well!) are working creatively and linking to additional data sources, like we were able to do here with VA healthcare data. One recommendation I would make for anyone using NVDRS or state VDRS data for research is to work closely with the teams who oversee data collection and abstraction. It's so enlightening (critical, actually) to hear data abstractors describe their operationalization of the NVDRS coding manual and how they find and enter data. We were fortunate to partner with our state VDRS team on our paper and to hear, first-hand from Oregon abstractors, how the data elements we were using were actually collected.

GL: You have established yourself as a leader in the field of injury epidemiology and prevention. Could you share a few pearls of wisdom with our graduate students and young colleagues? 

KC: That is the nicest thing to hear! I of course still feel like a student, though – so much more to learn. My pearls would involve networking with, and learning from, the "elders" in our field – those who have been fighting the good injury and violence prevention fight for decades. Listen to what has been tried and what has or hasn't worked. Ask yourself if now is the time something may work, or if something old is new again, or if it's time to be totally innovative. Read David Hemenway's "While We Were Sleeping" – learn our field's success stories, and disseminate them far and wide to spread hope and to share the importance of injury prevention science. Read, listen, and learn from advocates and practitioners too (especially the communities most affected by injury and violence) – partnership and collaboration make this work possible, and these friendships and kinships also make the "heavy heart" days a little less heavy. Perhaps most relevantly, always understand the actual sources of your data, and question accuracy – quantify measurement error when you can! And listen closely to the Q&A sessions at conferences – you never know where, when, or by whom your ideas will be sparked!

GL: Thank you for taking the time to chat with me. I look forward to hosting you at the award ceremony in New York City this fall.

KC: Thank you, Dr. Li! I'm really looking forward to meeting or seeing everyone at the Columbia Center for Injury Science and Prevention! Thank you for this opportunity.
