How to make the best of imperfect tests?

Since the start of the pandemic, almost every morning has begun the same way for hundreds of community-based health workers living in Dhaka, Bangladesh: searching their communities for potential COVID-19 cases. These individuals are at the forefront of Dhaka’s COVID-19 response, tasked with providing care where it is needed whilst collecting data to monitor the epidemic. They are part of Bangladesh’s proud history of community health initiatives over the last 50 years, delivering and advocating for most major public health policies. The COVID-19 pandemic, however, has taken their role to a new level.

Surveillance is the first line of defence against COVID-19. In high-income countries at the start of the pandemic, debates raged about how much PCR testing to deploy and what role, if any, less accurate diagnostics should play. Low- and middle-income countries like Bangladesh faced the same problem with just a fraction of the infrastructure and resources. With fewer laboratories, mass PCR testing was impossible, but more affordable methods for diagnosing COVID-19 were seen as too imperfect to be useful.

Over the years, community-based health workers have had to diagnose diseases from symptoms alone (syndromic surveillance). But the many symptoms that COVID-19 can cause (e.g. fever, cough, headache) mean that people with other illnesses can be misdiagnosed as having COVID-19. As the pandemic progressed, an alternative diagnostic, the rapid antigen test, was developed; it detects proteins (antigens) from the virus itself rather than relying on symptoms. This test is extremely specific, so a positive rapid test result carries a low risk of misdiagnosis. However, these tests are less sensitive than PCR and have been shown to miss positive cases.

Both of these affordable tests, therefore, have strengths and weaknesses, but, crucially, they have different strengths and weaknesses. In our study, we show that the two approaches are complementary and that combining them substantially improves COVID-19 diagnosis. Our project trained the community health teams in Dhaka to use mobile phone apps to log each patient's reported symptoms and rapid antigen test result, and to carry out PCR tests on a sub-sample of patients.

Through the app, we had ready access to the data collected by the community health workers and, thanks to the PCR results, ground-truth information on who actually had COVID-19. We used these data to develop statistical models that predict each patient's COVID-19 status (i.e. their PCR result) from their symptoms and rapid antigen test result. We then needed to identify which models performed best and should be integrated into the app to give the community health workers an instant diagnosis.
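
To make this concrete, here is a minimal sketch of the kind of model involved: a logistic regression that predicts PCR status from a few symptom indicators plus the rapid antigen test result. The data, feature names, and test characteristics below are simulated assumptions for illustration, not the model or data from our study.

```python
# Minimal sketch (simulated data): combine symptom indicators and a rapid
# antigen test result to predict PCR status with a logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Toy cohort: "true" PCR status plus imperfect signals.
pcr = rng.binomial(1, 0.3, n)                         # ground-truth PCR result
fever = rng.binomial(1, np.where(pcr, 0.70, 0.30))    # symptoms overlap with other illnesses
cough = rng.binomial(1, np.where(pcr, 0.60, 0.25))
headache = rng.binomial(1, np.where(pcr, 0.40, 0.20))
rdt = rng.binomial(1, np.where(pcr, 0.65, 0.01))      # specific but not very sensitive

X = np.column_stack([fever, cough, headache, rdt])
model = LogisticRegression().fit(X, pcr)

# Predicted probability of PCR positivity for a new patient:
# fever and cough, no headache, rapid test negative.
p = model.predict_proba([[1, 1, 0, 0]])[0, 1]
print(f"Estimated probability of PCR positivity: {p:.2f}")
```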

The problem with choosing a single “best” diagnostic is that models can get things wrong in two ways: they can say that someone without COVID-19 does have COVID-19 (false positive), or they can say that someone who has COVID-19 does not have COVID-19 (false negative). A major benefit of using statistical models is that we can tune them to change the rates of false positives and false negatives.  Unfortunately, it is impossible to reduce one without increasing the other, so how do we decide which to reduce? The answer is, as is often the case in science, that it depends.
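
The toy sweep below illustrates this trade-off on simulated predictions: lowering the classification threshold cuts false negatives but inflates false positives, and raising it does the opposite. All numbers are made up purely for illustration.

```python
# Sketch of the false-positive / false-negative trade-off when sweeping a
# probabilistic model's classification threshold (simulated numbers).
import numpy as np

rng = np.random.default_rng(1)
pcr = rng.binomial(1, 0.3, 1000)
# Toy predicted probabilities: higher on average for true positives.
prob = np.clip(rng.normal(np.where(pcr, 0.65, 0.35), 0.15), 0, 1)

for threshold in (0.3, 0.5, 0.7):
    pred = prob >= threshold
    false_pos = int(np.sum(pred & (pcr == 0)))
    false_neg = int(np.sum(~pred & (pcr == 1)))
    print(f"threshold={threshold:.1f}  false positives={false_pos:3d}  false negatives={false_neg:3d}")
```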

One challenge of the pandemic is that the situation is constantly changing. Early on, when cases surged, restrictions were imposed. Since then, population immunity has changed, both through exposure to infection and the deployment of vaccines, all complicated by new variants and shifting control measures. The damage from missing a COVID-19 case or misdiagnosing someone with COVID-19 therefore depends on the situation. If cases are growing rapidly, every case leads to many more cases, so the cost of missing a case goes up. If cases are declining, perhaps thanks to a lockdown, then the negatives of imposing isolation on citizens unnecessarily may greatly outweigh the positives of preventing a small number of transmission events.

These shifting sands mean that there is not a single correct balance of missed diagnoses and misdiagnoses when it comes to COVID-19. As the costs change, models have to adapt and reflect that change. A huge advantage of mobile technology for data collection is the ability to update central repositories instantly. The same is true in reverse: updates made centrally can be pushed straight back out to the app. Behind the scenes, it is possible to adapt the diagnostic machinery to reflect the needs of the moment as determined by policy-makers. This allows us to fight the pandemic as it morphs from one stage to another while minimising the changes to the process for the community health workers on the front lines.
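
One way to picture that behind-the-scenes re-tuning (a hedged sketch, not our study's actual machinery) is to let policy-makers set the relative costs of a missed case versus an unnecessary isolation and then pick the decision threshold that minimises total cost on recent data. As the costs shift with the epidemic, so does the chosen threshold, with no change to what the health workers do in the field.

```python
# Hedged sketch: choose a decision threshold by minimising a policy-weighted
# cost of missed cases versus unnecessary isolations (simulated data).
import numpy as np

rng = np.random.default_rng(2)
pcr = rng.binomial(1, 0.3, 1000)
prob = np.clip(rng.normal(np.where(pcr, 0.65, 0.35), 0.15), 0, 1)

def best_threshold(cost_fn, cost_fp, thresholds=np.linspace(0.05, 0.95, 19)):
    """Return the threshold with the lowest total cost on the data above."""
    costs = []
    for t in thresholds:
        pred = prob >= t
        fn = np.sum(~pred & (pcr == 1))   # missed cases
        fp = np.sum(pred & (pcr == 0))    # unnecessary isolations
        costs.append(cost_fn * fn + cost_fp * fp)
    return float(thresholds[int(np.argmin(costs))])

# Rapid growth: missing a case is costlier, so a lower threshold is chosen.
print("growing epidemic:  ", best_threshold(cost_fn=5.0, cost_fp=1.0))
# Decline: unnecessary isolation weighs more, pushing the threshold up.
print("declining epidemic:", best_threshold(cost_fn=1.0, cost_fp=2.0))
```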

In our study, we explored how to implement this in practice by considering scenarios that represent different stages of the pandemic. We then asked policy-makers how they would like the diagnostic model to change in those situations. We found that in all the scenarios, combining the two testing methods (rapid antigen tests and syndromic surveillance) performed at least as well as either method on its own, and often better. The improvement was especially striking when cases were rising rapidly and policy-makers wanted to reduce missed diagnoses, a situation in which understanding case rates is crucial and these readily scalable tests can be deployed quickly.
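
For intuition, the toy comparison below (simulated data, not our study's results) fits symptoms-only, rapid-test-only, and combined classifiers and scores each by AUC; because the two signals err in different ways, the combined model tends to do at least as well as either alone.

```python
# Illustrative comparison of symptoms-only, rapid-test-only, and combined
# classifiers on simulated data, scored by AUC on a held-out split.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 4000
pcr = rng.binomial(1, 0.3, n)
symptoms = np.column_stack([
    rng.binomial(1, np.where(pcr, p_pos, p_neg))
    for p_pos, p_neg in [(0.70, 0.30), (0.60, 0.25), (0.40, 0.20)]
])
rdt = rng.binomial(1, np.where(pcr, 0.65, 0.01)).reshape(-1, 1)

for name, X in [("symptoms only", symptoms),
                ("rapid test only", rdt),
                ("combined", np.hstack([symptoms, rdt]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, pcr, random_state=0)
    auc = roc_auc_score(y_te, LogisticRegression().fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
    print(f"{name:16s} AUC = {auc:.2f}")
```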

The lessons here were learnt in the context of COVID-19 but have applications far beyond it. Across the world, there is an ongoing need to find ways of monitoring diseases without access to, or funding for, gold-standard tests. For many diseases, there are imperfect but affordable diagnostics, like rapid antigen tests and syndromic surveillance, and often a dedicated workforce supporting healthcare in communities. By harnessing the phenomenal efforts of community health workers, using modern digital technologies, and deploying state-of-the-art statistical models, we hope to enter a new era of disease surveillance.

The article resulting from this research can be found at https://www.nature.com/articles/s41467-022-30640-w.  

fergusjchadwick

Statistician, University of Glasgow