A new review out Wednesday paints a sorry picture of the state of antibody tests meant to find out whether you’ve ever had covid-19. It suggests that these tests vary wildly in accuracy from manufacturer to manufacturer, with tests that quickly return results at the doctor’s office faring so badly that they probably shouldn’t be used at all for now.
Antibody, or serological, tests are designed to look for the specific antibodies that our bodies make in response to infection from the coronavirus that causes covid-19. While these tests are not meant to diagnose an active infection, they should ideally tell you if you’ve ever had the virus in the past, even if you didn’t feel sick at the time. In reality, it’s been more complicated than that.
Covid-19 antibody tests first became available in the U.S. around March and April, though some countries had developed their own versions earlier. But many of the tests initially on the global market were cleared for use with little outside validation of their accuracy by health agencies in countries like the U.S. Eventually, the Food and Drug Administration placed stricter restrictions on the clearance or approval of these tests. It now maintains a list of tests that have been removed from the market, but the landscape of antibody tests still appears to be riddled with duds.
In this new review, published in the BMJ, researchers looked at 40 studies evaluating the accuracy of antibody tests for covid-19 developed across the world. These studies tried to measure the sensitivity (the higher the percentage, the less chance of a false negative) and specificity (the higher the percentage, the less chance of a false positive) of these tests. They also pooled together results, grouping them by the types of tests studied. The studies were conducted in China, the U.S., Italy, and Japan, among others.
Overall, the studies themselves weren’t necessarily high quality. Half hadn’t gone through peer review at the time, and nearly all of them were considered at high risk of bias in some way, both in the selection of patients chosen for the study and in how the results were interpreted.
The risk of false negatives varied widely between tests, with pooled sensitivity ranging from 66% to 97.8%. The risk of false positives was less of a worry, with pooled specificity ranging from 96.6% to 99.7%. Of the types of tests studied, it was rapid point-of-care tests that fared the worst overall.
In a scenario where 10% of people in a city had contracted the coronavirus, the researchers estimated, these rapid tests would return 31 false positives and 34 false negatives for every 1,000 people tested. This might not seem like much at first, but when you consider that some countries are hoping to use these tests on a massive scale as a way to declare people safe from the virus, via so-called “immunity passports,” it can add up.
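To see where those numbers come from, here is a rough sketch of the arithmetic, assuming the rapid tests’ pooled figures from the review (66% sensitivity, 96.6% specificity); the function name and structure are illustrative, not from the paper:

```python
# Sketch: expected testing errors at a given disease prevalence,
# using the review's pooled figures for rapid point-of-care tests
# (sensitivity ~66%, specificity ~96.6%) as assumed inputs.

def expected_errors(n_tested, prevalence, sensitivity, specificity):
    """Return expected (false_positives, false_negatives)."""
    infected = n_tested * prevalence            # truly infected people
    uninfected = n_tested - infected            # truly uninfected people
    false_negatives = infected * (1 - sensitivity)    # real cases the test misses
    false_positives = uninfected * (1 - specificity)  # healthy people wrongly flagged
    return false_positives, false_negatives

fp, fn = expected_errors(n_tested=1000, prevalence=0.10,
                         sensitivity=0.66, specificity=0.966)
print(round(fp), round(fn))  # → 31 34
```

Note how sensitive the false-positive count is to prevalence: at lower infection rates, the pool of uninfected people grows, so even a high specificity produces more wrong “immune” results relative to true positives.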
Indeed, the authors wrote that their findings should “give pause to governments that are contemplating the use of serological tests—in particular, point-of-care tests—to issue immunity ‘certificates’ or ‘passports.’”
There are other concerns about using antibody tests as a way to confirm immunity from the virus. Evidence is starting to suggest, for instance, that some types of antibodies may fade in covid-19 survivors within a few months, especially if they were asymptomatic (that doesn’t necessarily mean they’ll lose immunity, though). But in the short term, the authors write, there’s no justification for using these seriously flawed rapid tests. And more generally, there needs to be a concerted effort to better validate the accuracy of antibody tests before they reach the public.
The authors conclude that although the number of tests available so early in the pandemic is impressive, we need to have higher standards for how and when those tests are applied. “While the scientific community should be lauded for the pace at which novel serological tests have been developed, this review underscores the need for high quality clinical studies to evaluate these tools,” they write.