Literature Reading 2

03-04

Date: 2022.12.11 -- 03

Title: Multimodal medical image fusion using convolutional neural network and extreme learning machine

Link: Frontiers | Multimodal medical image fusion using convolutional neural network and extreme learning machine (frontiersin.org)

[Image 1]

Framework

[Image 2]

[Image 3]

[Image 4]

Methodology

A novel fusion method for multimodal medical images that exploits a convolutional neural network (CNN) and an extreme learning machine (ELM), termed CELM, is proposed.

Results

[Image 5]

Contributions

• A novel method based on CNN and ELM is proposed to address the fusion of multimodal medical images.

• The traditional CNN model is integrated with ELM into a modified version called the convolutional extreme learning machine (CELM), which achieves not only much better performance but also much faster running speed.

• Experimental results demonstrate that the proposed method clearly outperforms current typical methods in both gray-image fusion and color-image fusion, which directly helps improve the precision of disease detection and diagnosis.

Conclusion

CELM combines the advantages of both CNN and ELM. Compared with other typical fusion methods, the proposed one shows clear advantages in both subjective visual quality and objective metric values.

Notes

Speed up the experiment with ELM.

ELM obtains the output weight matrix in closed form.
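This is ELM's key property: the hidden-layer weights are random and fixed, so only the output weight matrix is solved analytically via a pseudo-inverse, with no iterative training. A minimal sketch of plain ELM regression (not the paper's full CELM; the hidden size and activation are assumptions for illustration):

```python
import numpy as np

def elm_fit(X, Y, n_hidden=64, seed=0):
    """Train a basic extreme learning machine.

    Input weights W and biases b are random and never updated;
    only the output weight matrix beta is learned, in closed
    form via the Moore-Penrose pseudo-inverse.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # fixed random input weights
    b = rng.normal(size=n_hidden)                # fixed random biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                 # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Map inputs through the fixed hidden layer, then apply beta."""
    return np.tanh(X @ W + b) @ beta
```

Because the only "training" is one pseudo-inverse, fitting is far faster than backpropagation, which matches the note above about speeding up the experiments.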

Their writing is good, especially in the methods part.

[Image 6]

Date: 2022.12.12 -- 04

Title: A multistage multimodal deep learning model for disease severity assessment and early warnings of high-risk patients of COVID-19

Link: A multistage multimodal deep learning model for disease severity assessment and early warnings of high-risk patients of COVID-19 - PMC (nih.gov)

[Image 7]

Framework

[Image 8]

[Image 9]

Methodology

Sequential stage-wise learning, weight parameters, multistage multimodal deep learning model.

Results

[Image 10]

[Image 11]

[Image 12]

Contributions

  • A multistage multimodal deep learning (MMDL) model that (1) first assesses the patient's current condition (i.e., mild or severe symptoms), then (2) gives early warnings to patients with mild symptoms who are at high risk of developing severe illness.
  • Builds a sequential stage-wise learning architecture.
  • Designs a two-layer multimodal feature extractor. The 1st hierarchy performs intra-modal feature learning and extraction, while the 2nd hierarchy performs cross-modal feature fusion.
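The two-hierarchy extractor can be pictured as: one branch per modality (intra-modal), then a fusion layer over the concatenated branch outputs (cross-modal). A rough numpy sketch with made-up modality names and dimensions (labs and vitals, and all layer sizes, are assumptions, not the paper's actual design):

```python
import numpy as np

def intra_modal(x, W):
    """1st hierarchy: per-modality feature extraction (one dense ReLU layer here)."""
    return np.maximum(x @ W, 0.0)

def cross_modal(feats, W_fuse):
    """2nd hierarchy: concatenate per-modality features, then fuse with one layer."""
    z = np.concatenate(feats, axis=-1)
    return np.maximum(z @ W_fuse, 0.0)

# Hypothetical modalities: lab tests (10 features) and vital signs (6 features).
rng = np.random.default_rng(0)
W_lab = rng.normal(size=(10, 8))
W_vit = rng.normal(size=(6, 8))
W_fuse = rng.normal(size=(16, 12))   # 8 + 8 concatenated -> 12 fused features

labs = rng.normal(size=(4, 10))      # batch of 4 patients
vitals = rng.normal(size=(4, 6))
fused = cross_modal([intra_modal(labs, W_lab), intra_modal(vitals, W_vit)], W_fuse)
```

The design point is that each modality gets its own representation before any mixing, so the fusion layer operates on learned features rather than raw, incommensurable inputs.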
Conclusion

In this paper, we have conceived and implemented a multistage multimodal deep learning (MMDL) model to assess the disease severity and forecast the disease progression of patients with COVID-19. In summary, the novelty of MMDL lies in sequential stage-wise learning with multimodal inputs. MMDL shows the advantage of studying whole courses of the disease compared with single-stage learning.

Notes

Sequential stage-wise learning with weight parameters.

你可能感兴趣的:(文献整理,人工智能)