
Which risk prediction models for lung cancer come out on top?

M3 Global Newsdesk Jul 09, 2018

Several lung cancer risk prediction models were evaluated to identify those that most accurately predict which smokers are at the highest risk for lung cancer.

 


Results were published in the Annals of Internal Medicine.

The United States Preventive Services Task Force (USPSTF) recommends annual lung cancer screening for ever-smokers who meet simple age and smoking-history criteria. Growing evidence suggests that individualized risk calculators, which account for demographic, clinical, and smoking characteristics, could select individuals who would benefit from screening more effectively and efficiently than these simple criteria. However, numerous risk prediction models are used in clinical practice, and each selects a different population for screening.

Hormuzd A. Katki, PhD, from the National Cancer Institute, Bethesda, MD, and colleagues compared the screening populations selected by nine lung cancer risk models and examined their predictive performance in two cohorts to determine which models most accurately select high-risk individuals in the US.


Models evaluated included:

  • Bach model
  • Spitz model
  • Liverpool Lung Project (LLP) model
  • LLP Incidence Risk Model (LLPi)
  • Hoggart model
  • Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial Model 2012 (PLCOM2012)
  • Pittsburgh Predictor
  • Lung Cancer Risk Assessment Tool (LCRAT)
  • Lung Cancer Death Risk Assessment Tool (LCDRAT)


Models were included if they provided a cumulative risk estimate for primary lung cancer or lung cancer mortality for at least one time point, were valid for general Western populations, did not require biospecimens or CT screening results, and were internally or externally validated in a disease-free cohort of smokers.

The models selected screening populations by using data from the National Health Interview Survey (NHIS), a nationally representative survey that annually assesses the health of civilians in the US.

Data from two large cohorts were used to validate the risk models: 337,388 ever-smokers in the National Institutes of Health-AARP (NIH-AARP) Diet and Health Study, and 72,338 ever-smokers in the Cancer Prevention Study II (CPS-II) Nutrition Survey.

From the two cohorts, each model selected a population based on a 5-year lung cancer risk threshold of 2.0% or a 5-year lung cancer death risk threshold of 1.2%; these thresholds were chosen to select a number of ever-smokers similar to that selected by the USPSTF criteria.

Then, each model selected the 8.9 million US ever-smokers with the highest model-estimated risk, matching the size of the population eligible for screening under the USPSTF criteria.
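To make these two selection rules concrete, the sketch below applies both a fixed risk threshold and a fixed population size to hypothetical data. It is a minimal illustration only: the risk values, survey weights, and variable names are assumptions for demonstration and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: model-estimated 5-year lung cancer risk for a sample of
# ever-smokers, plus survey weights (as in NHIS) that scale each respondent
# up to the population. All values are illustrative, not from the study.
risk_5yr = rng.beta(1, 80, size=100_000)          # estimated 5-year risk per person
weights = rng.uniform(300, 1_500, size=100_000)   # persons represented per respondent

# Rule 1: risk-threshold selection (a 2.0% 5-year lung cancer risk cutoff).
eligible_threshold = risk_5yr >= 0.02
pop_threshold = weights[eligible_threshold].sum()

# Rule 2: fixed-size selection. Take ever-smokers in order of decreasing risk
# until the selected (weighted) population reaches a target size, e.g. 8.9 million.
target_pop = 8_900_000
order = np.argsort(-risk_5yr)                     # indices from highest to lowest risk
cum_pop = np.cumsum(weights[order])
eligible_fixed = np.zeros(risk_5yr.size, dtype=bool)
eligible_fixed[order[cum_pop <= target_pop]] = True

print(f"Threshold rule selects ~{pop_threshold / 1e6:.1f} million ever-smokers")
print(f"Fixed-size rule selects ~{weights[eligible_fixed].sum() / 1e6:.1f} million ever-smokers")
```

The two rules answer different policy questions: the threshold fixes the level of risk at which screening is offered, while the fixed-size rule fixes the screening capacity and lets the effective risk cutoff float.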

The investigators noted that in each of the research cohorts, minorities were underrepresented. The NIH-AARP and CPS-II cohorts had fewer current smokers (19.5% and 10.3%, respectively) than the US population (34.2%), but smokers in these cohorts smoked at greater intensity and had more pack-years of exposure.

Four models (the Bach model, PLCOM2012, LCRAT, and LCDRAT) were well-calibrated in both cohorts. However, among those meeting the USPSTF criteria, even the well-calibrated models overestimated risk in the highest quintile. The area under the receiver operating characteristic curve (AUC), used to assess discrimination, was also higher for the four well-calibrated models than for the other five models.

At a fixed risk threshold of 2.0% for lung cancer incidence or 1.2% for lung cancer death over five years, the four well-calibrated models performed similarly. No single model achieved both the highest sensitivity and the highest specificity in either the NIH-AARP or the CPS-II cohort.
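For readers less familiar with these metrics, the sketch below shows how calibration (the ratio of expected to observed events), AUC, and sensitivity/specificity at a fixed 2.0% threshold are typically computed. The data are simulated and the code is a generic illustration of the methods named above, not the investigators' analysis.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Simulated validation cohort: predicted 5-year risks and observed outcomes
# (1 = lung cancer within 5 years). Outcomes are drawn from the predicted
# risks, so this toy model is well calibrated by construction.
pred_risk = rng.beta(1, 80, size=50_000)
outcome = rng.binomial(1, pred_risk)

# Calibration: expected/observed (E/O) ratio, overall and within quintiles of
# predicted risk. A well-calibrated model has E/O close to 1 in every quintile.
overall_eo = pred_risk.mean() / outcome.mean()
quintile = np.digitize(pred_risk, np.quantile(pred_risk, [0.2, 0.4, 0.6, 0.8]))
eo_by_quintile = [pred_risk[quintile == q].mean() / max(outcome[quintile == q].mean(), 1e-9)
                  for q in range(5)]

# Discrimination: area under the ROC curve (AUC).
auc = roc_auc_score(outcome, pred_risk)

# Sensitivity and specificity of selection at a fixed 2.0% risk threshold.
selected = pred_risk >= 0.02
sensitivity = outcome[selected].sum() / outcome.sum()
specificity = ((~selected) & (outcome == 0)).sum() / (outcome == 0).sum()

print(f"E/O overall: {overall_eo:.2f}; by quintile: {np.round(eo_by_quintile, 2)}")
print(f"AUC: {auc:.3f}; sensitivity: {sensitivity:.2f}; specificity: {specificity:.2f}")
```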

The four well-calibrated models chose populations ranging from 7.6 million to 10.9 million US ever-smokers.

To account for possible miscalibration, the investigators compared the 8.9 million US ever-smokers at highest risk as selected by each model. Only 20% (1.8 million) of these ever-smokers were chosen by all nine models and the USPSTF criteria. The four well-calibrated models, however, reached consensus on 73% (6.5 million) of ever-smokers.
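This consensus comparison amounts to intersecting the sets of ever-smokers that each model flags. The sketch below illustrates the idea on simulated, correlated risk scores; the model labels, score distributions, and resulting overlap are invented for illustration and have no relation to the study's figures.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated setup: several models score the same ever-smokers, and each flags
# the n highest-risk people (a stand-in for the fixed 8.9 million selection).
# Scores share a common component, so the models are correlated but not identical.
n_people, n_selected = 100_000, 10_000
shared = rng.normal(size=n_people)
models = {name: shared + rng.normal(scale=noise, size=n_people)
          for name, noise in [("A", 0.3), ("B", 0.4), ("C", 0.5), ("D", 0.8)]}

# The set of people each model selects, then the people selected by every model.
selections = {name: set(np.argsort(-score)[:n_selected]) for name, score in models.items()}
consensus = set.intersection(*selections.values())

print(f"Chosen by all {len(selections)} models: {len(consensus) / n_selected:.0%} of each selection")
```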

The performance of the models in racial/ethnic and other subgroups indicated that the Bach model and the Pittsburgh Predictor underestimated risk in African Americans and overestimated risk in Hispanics. PLCOM2012 substantially underestimated risk in Hispanics. Furthermore, the LCRAT underestimated risk in the “Asian/other” subgroup.

The numbers of ever-smokers chosen in other subgroups, defined by age, smoking status (current vs former), family history of lung cancer, or chronic obstructive pulmonary disease (COPD) status, also varied among the models.

The four best-performing models, as measured by calibration and discrimination, showed close agreement on which ever-smokers to select for screening. Across all nine models, however, the number of ever-smokers selected ranged widely (from 7.6 million to 26 million), and there was no overall consensus on which ever-smokers to select; these discrepancies reflected the models' differing predictive performance.


The authors noted that the study had some limitations. Some of the models evaluated (the LLP, LLPi, Hoggart, and Spitz models) used case-control data or were based on European data. Case-control data can be subject to recall bias in smoking history and may not be representative of the general population.

Moreover, lung cancer risk may be generally higher in Europe, and Europe differs from the US in many aspects of tobacco exposure, such as cigarette composition, anti-tobacco policies, and cultural smoking habits, so models developed in one setting may predict risk poorly in the other.

Another drawback was that the research cohorts used for validation are not representative of the US population and underrepresent racial/ethnic minorities. In addition, both cohorts recruited participants in the 1990s and do not reflect current smoking exposure in the US population.


The investigators suggested that effectively and efficiently targeting lung cancer screening to persons at highest risk can further reduce lung cancer mortality.

Four models (the Bach model, PLCOM2012, LCRAT, and LCDRAT) performed best, as measured by calibration and discrimination, and showed close agreement on which ever-smokers to select for screening, concluded the authors.


They added that the models should be further refined to improve their performance in certain subpopulations so that guidelines can allow cost-effective, risk-based selection for lung cancer screening.

In an editorial accompanying the article, Martin C. Tammemägi, DVM, MSc, PhD, from Brock University, St. Catharines, Ontario, Canada, suggested that this study confirms that lung cancer screening is rapidly evolving and policymakers must be convinced to accept the use of models to identify screening-eligible persons.1


This story was contributed by Robyn Boyle and is part of our Global Content Initiative, in which we feature selected stories from our global network that we believe will be most useful and informative to our doctor members.
