Co-Learning from Chest CT and Clinical Demographics

  • High false positive rates in lung cancer screening with chest CT

To address such challenges, the proposed method co-learns from detailed clinical demographics and 3D CT images.

  • A 3D attention-based deep convolutional neural net (DCNN) is proposed to identify lung cancer from the chest CT scan without prior anatomical location of the suspicious nodule.
  • To improve non-invasive discrimination between benign and malignant nodules, a random forest classifier is applied to a dataset that integrates clinical information with imaging data.
  • The results show that the AUC obtained from clinical demographics alone was 0.635 while the attention network alone reached an accuracy of 0.687.
  • In contrast, when applying the proposed pipeline integrating clinical and imaging variables, an AUC of 0.787 was reached on the testing dataset.
    The proposed network both efficiently captures anatomical information for classification and generates attention maps that explain which features drive performance.
  • The pipeline co-learns from 3D CT image volumes and clinical demographics

    • The proposed method integrates deep convolutional neural networks (DCNN) and other statistical and machine learning techniques to characterize early-stage lung cancer and distinguish benign from malignant nodules
    • To visualize the behavior of the deep neural network, a 3D attention mechanism [5] was integrated into the DCNN. The proposed method achieved higher classification accuracy by co-learning from image features and clinical demographics than by learning from either source alone.
  • Preprocessing

    • Intensities of the 3D volume were resampled to 1×1×1 mm isotropic resolution, lung segmentations were computed from the original CT volume, non-lung regions were zero-padded to a Hounsfield unit value of 170, and intensities were normalized to a 0-1 scale. The preprocessed whole lung was used as the first input channel (the segmentation essentially extracts the lung region from the original CT slices);
    • A 3D U-Net nodule detection model pre-trained on the Kaggle Data Science Bowl 2017 is used to obtain the nodule mask (i.e., the nodule location), which serves as the second input channel;
    • The two channels are combined and co-rescaled into a 4D tensor of shape 2×128×128×128 (the first dimension is the number of channels), which is the final input to the attention-based CNN; a minimal preprocessing sketch follows this list. The learning target for the network is a binary label, with 1 indicating malignant and 0 indicating benign.
      • Notably, data augmentation is also performed at this stage.
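
A minimal sketch of this preprocessing flow, assuming a NumPy/SciPy workflow; the lung mask and nodule mask are taken as given here (in the paper they come from a lung segmentation step and the pre-trained 3D U-Net detector), and the function name `preprocess_ct` is illustrative:

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess_ct(volume_hu, spacing_mm, lung_mask, nodule_mask):
    """Build the 2x128x128x128 input tensor from a CT volume in Hounsfield units.

    volume_hu   : 3D np.ndarray, raw CT intensities (HU)
    spacing_mm  : (z, y, x) voxel spacing of the original scan
    lung_mask   : binary lung segmentation (assumed given)
    nodule_mask : binary nodule mask (assumed given, e.g. from the 3D U-Net detector)
    """
    # 1. Resample to 1x1x1 mm isotropic resolution.
    factors = np.asarray(spacing_mm, dtype=float) / 1.0
    vol = zoom(volume_hu.astype(np.float32), factors, order=1)
    lung = zoom(lung_mask.astype(np.float32), factors, order=0) > 0.5
    nod = zoom(nodule_mask.astype(np.float32), factors, order=0) > 0.5

    # 2. Pad non-lung regions to 170 HU, then normalize intensities to [0, 1].
    vol = np.where(lung, vol, 170.0)
    vol = np.clip((vol - vol.min()) / (vol.max() - vol.min() + 1e-8), 0.0, 1.0)

    # 3. Rescale both channels to 128^3 and stack into a 2x128x128x128 tensor.
    def to_cube(x, size=128):
        return zoom(x.astype(np.float32),
                    np.array([size, size, size]) / np.array(x.shape), order=1)

    return np.stack([to_cube(vol), to_cube(nod.astype(np.float32))], axis=0)
```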
  • Attention-based CNN

    • The attention mechanism lets the network focus on the most useful information much as humans do: when translating from one language to another, you first focus on a few keywords in order to grasp the overall idea.
    • Attention enhances the model by allowing it to "glance back": the output does not depend only on the last input state, but on a weighted combination of all input states (a toy sketch is given below).
    • In medical image analysis, this allows the model to attend to the specific, informative anatomical regions when making the prediction in the final output layer (a 3D module sketch appears after the figure below).
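
As a toy illustration of the "weighted combination of all input states" idea (not the paper's formulation), a minimal soft-attention sketch in PyTorch:

```python
import torch
import torch.nn.functional as F

def soft_attention(states, query):
    """Weight a sequence of input states by their relevance to a query.

    states : (seq_len, dim) tensor of input/encoder states
    query  : (dim,) tensor, e.g. the current decoder state
    """
    scores = states @ query                # one relevance score per state
    weights = F.softmax(scores, dim=0)     # normalize scores into attention weights
    context = weights @ states             # weighted combination of all states
    return context, weights
```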
Figure: attention-based CNN architecture
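
A minimal sketch of a voxel-wise 3D attention gate in PyTorch; this is a generic formulation for illustration, not necessarily the exact mechanism of reference [5], and the class name `Attention3D` is made up here:

```python
import torch
import torch.nn as nn

class Attention3D(nn.Module):
    """Generic 3D attention block: learns a voxel-wise gate over a feature map."""

    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv3d(channels, channels // 2, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // 2, 1, kernel_size=1),
            nn.Sigmoid(),               # attention map in [0, 1] per voxel
        )

    def forward(self, x):
        attn = self.gate(x)             # (B, 1, D, H, W) attention map
        return x * attn, attn           # gated features + map for visualization

# Example: gate a feature map produced by a 3D conv backbone
features = torch.randn(1, 32, 16, 16, 16)        # (batch, channels, D, H, W)
gated, attention_map = Attention3D(32)(features)
```

The returned attention map is the kind of output that can be rendered as the attention visualizations that explain which regions drive the classification.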
  • Random Forest Classifier with multi-source data
    • XGBoost is used as the implementation of the random forest classifier
    • The XGBoost random forest classifier is trained on both image-derived features and clinical features
    • This yields a final prediction that outperforms using either input source alone (see the sketch below)
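
A minimal sketch of this multi-source classification stage using xgboost's random-forest mode (`XGBRFClassifier`); the feature arrays and hyperparameters below are placeholders, not the paper's actual setup:

```python
import numpy as np
from xgboost import XGBRFClassifier

# Illustrative multi-source feature table: image features extracted by the
# attention-based CNN concatenated with clinical demographics (age, sex, ...).
image_features = np.random.rand(200, 64)        # placeholder CNN-derived features
clinical_features = np.random.rand(200, 5)      # placeholder demographics
X = np.hstack([image_features, clinical_features])
y = np.random.randint(0, 2, size=200)           # 1 = malignant, 0 = benign

clf = XGBRFClassifier(n_estimators=200, max_depth=4, random_state=0)
clf.fit(X, y)
malignancy_prob = clf.predict_proba(X)[:, 1]    # predicted probability of malignancy
```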
