Updated on 30.10.2016.
You often come across new tests at work and need to decide whether they are really effective and beneficial. Test results are often reported using terminology that remains obscure unless you have a clear grasp of it, and without that knowledge you cannot judge the validity of the results. The same knowledge is also important for advanced-level professional examinations. This article discusses the terminology required to evaluate a test result.
How to evaluate a Test
While evaluating a test, you need to know what is meant by True Positive, True Negative, False Positive, False Negative, Sensitivity, Specificity, Positive and Negative Predictive Values, Accuracy, Relative Risk, Likelihood Ratio, Receiver-Operating Characteristic Curve, Bayes’ Theorem etc.
These are the different parameters to evaluate a test:
True Positive (TP):
The test result is positive in the presence of the condition (e.g. clinical abnormality).
True Negative (TN):
The test result is negative in the absence of the condition (e.g. clinical abnormality).
False Positive (FP):
The test result is positive in the absence of the condition (e.g. clinical abnormality).
False Negative (FN):
The test result is negative in the presence of the condition (e.g. clinical abnormality).
Sensitivity:
The proportion of all cases with the condition (e.g. clinical abnormality) that have an abnormal test result. It indicates how good the test is at detecting people who have the condition.
Sensitivity = TP / (TP + FN)
Specificity:
The proportion of cases without the condition (e.g. clinical abnormality) that have a normal test result. It indicates how good the test is at excluding people who do not have the condition.
Specificity = TN / (TN + FP)
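Sensitivity and specificity follow directly from the four counts of the 2×2 table. A minimal Python sketch (the counts below are hypothetical, chosen only for illustration):

```python
def sensitivity(tp, fn):
    # Proportion of cases WITH the condition that test positive: TP / (TP + FN)
    return tp / (tp + fn)

def specificity(tn, fp):
    # Proportion of cases WITHOUT the condition that test negative: TN / (TN + FP)
    return tn / (tn + fp)

# Hypothetical counts: 90 true positives, 10 false negatives,
# 80 true negatives, 20 false positives.
print(sensitivity(90, 10))  # 0.9
print(specificity(80, 20))  # 0.8
```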
Positive Predictive Value (PPV):
The proportion of cases with an abnormal test result that have the condition (e.g. clinical abnormality). It indicates the probability that the condition is present given a positive test.
Positive Predictive Value (PPV) = TP / (TP + FP)
Negative Predictive Value (NPV):
The proportion of cases with a normal test result that do not have the condition (e.g. clinical abnormality). It indicates the probability that the condition is absent given a negative test.
Negative Predictive Value (NPV) = TN / (TN + FN)
Accuracy:
The proportion of all tests with the correct result.
Accuracy = (TP + TN) / (TP + TN + FP + FN)
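The predictive values and accuracy can be computed from the same four counts. A sketch with hypothetical counts; note that, unlike sensitivity and specificity, PPV and NPV change with the prevalence of the condition in the sample:

```python
def ppv(tp, fp):
    # Probability the condition is present given a positive test: TP / (TP + FP)
    return tp / (tp + fp)

def npv(tn, fn):
    # Probability the condition is absent given a negative test: TN / (TN + FN)
    return tn / (tn + fn)

def accuracy(tp, tn, fp, fn):
    # Proportion of all tests with the correct result
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical counts: TP=90, FN=10, TN=80, FP=20
print(ppv(90, 20))               # ≈ 0.818
print(npv(80, 10))               # ≈ 0.889
print(accuracy(90, 80, 20, 10))  # 0.85
```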
Relative Risk (RR):
The risk of the condition (e.g. clinical abnormality) given a positive test result relative to the risk given a negative result. A relative risk of 1 means the test result does not distinguish the subject's risk from that of the population as a whole; above 1, the result indicates an increased risk; below 1, a reduced risk.
Relative Risk = [TP / (TP + FP)] / [FN / (FN + TN)]
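One common formulation of a test's relative risk is the risk of the condition given a positive result divided by the risk given a negative result. A sketch of that formulation (the counts are hypothetical):

```python
def relative_risk(tp, fp, fn, tn):
    risk_if_positive = tp / (tp + fp)  # risk of the condition given a positive test
    risk_if_negative = fn / (fn + tn)  # risk of the condition given a negative test
    return risk_if_positive / risk_if_negative

# Hypothetical counts: TP=90, FP=20, FN=10, TN=80
print(relative_risk(90, 20, 10, 80))  # ≈ 7.36: a positive test carries ~7x the risk
```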
Likelihood Ratio (LR):
The concept is similar to that of relative risk, and likelihood ratios from a series of tests can be combined. The positive likelihood ratio indicates how much more likely a positive test result is in the presence of the condition (e.g. clinical abnormality) than in its absence.
Likelihood Ratio = [TP / (TP + FN)] / [FP / (FP + TN)] = Sensitivity / (1 − Specificity)
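The positive likelihood ratio is sensitivity divided by (1 − specificity); the corresponding negative likelihood ratio, (1 − sensitivity) / specificity, is shown alongside for comparison (counts again hypothetical):

```python
def lr_positive(tp, fn, fp, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens / (1 - spec)  # how much a positive result raises the odds

def lr_negative(tp, fn, fp, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return (1 - sens) / spec  # how much a negative result lowers the odds

# Hypothetical counts: TP=90, FN=10, FP=20, TN=80
print(lr_positive(90, 10, 20, 80))  # ≈ 4.5
print(lr_negative(90, 10, 20, 80))  # ≈ 0.125
```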
Receiver-Operating Characteristic (ROC) Curve:
It is a graph of sensitivity (the true-positive rate) versus the false-positive rate (1 − specificity), plotted as the cut-off value of the test is varied.
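Each cut-off value of a test yields one (false-positive rate, sensitivity) point; sweeping the cut-off traces the ROC curve. A small sketch with made-up scores and labels:

```python
def roc_points(scores, labels):
    """Return (false-positive rate, sensitivity) pairs, one per cut-off."""
    pos = sum(labels)        # number of cases with the condition
    neg = len(labels) - pos  # number of cases without it
    points = []
    for cutoff in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 0)
        points.append((fp / neg, tp / pos))
    return points

# Made-up test scores and true condition status (1 = condition present)
print(roc_points([0.9, 0.8, 0.4, 0.3], [1, 1, 0, 0]))
# [(0.0, 0.5), (0.0, 1.0), (0.5, 1.0), (1.0, 1.0)]
```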
Bayes' Theorem:
It is a way to calculate the predictive value of a test result by combining the probability of a given abnormality before the test is done (the 'prior probability') with the test result to give the probability of the abnormality after the test has been done (the 'posterior probability').
Posterior odds = Prior odds × Likelihood Ratio
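In its odds form, Bayes' theorem updates the prior probability by multiplying the prior odds by the likelihood ratio. A sketch with an assumed prior probability of 0.1 and an assumed positive likelihood ratio of 4.5 (both values are illustrative, not from the article):

```python
def posterior_probability(prior_prob, likelihood_ratio):
    # Odds form of Bayes' theorem: posterior odds = prior odds x likelihood ratio
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Assumed values: prior probability 10%, positive likelihood ratio 4.5
print(posterior_probability(0.1, 4.5))  # ≈ 0.33: the test raises 10% to ~33%
```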
How to assess the validity of any study evaluating a test?
Check the following:
1. Was there an independent, 'blind' comparison with a reference ('gold') standard?
2. Was work-up bias avoided?
3. Was the subject (patient) sample representative of the spectrum of subjects (patients) to whom the test would be applied in practice?
4. Did the results of the test being evaluated influence the decision to perform the reference standard?
5. Were the methods for performing the test described in sufficient detail to permit replication?
6. Is the test good at detecting the condition and at excluding those without it (as indicated by Sensitivity, Specificity, PPV, NPV, LR etc.)?
7. Were confidence intervals for Sensitivity, Specificity, PPV, NPV, LR etc. provided?
8. Was a sensible "normal range" for the test provided (if applicable)?
9. Do the potential benefits of doing the test outweigh the potential risks?
10. Is the test relevant to your work?
11. Is the test financially affordable?
© Dr Sudipta Paul, themedideas.com, 2013