Paper Notes 1210

1. A novel method for lung masses detection and location based on deep learning_2017

This paper runs an experiment I have been wanting to do: using VGG16 and ResNet respectively as the feature-extraction backbone of Faster R-CNN, applied to the authors' own lung dataset.

ResNet works better, but even so the reported results are not great; the AP on the test set is only around 50%.

We find that the methodology using RESNET for feature extraction is more satisfying than VGG16, the Ap achieved 52.38% by comparing the test results.

The training-set results are close to 1.


[Figure: training-set results]


[Figure: PR curves, test-set results]
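The backbone comparison above is easy to reproduce with torchvision's Faster R-CNN. The sketch below is not from the paper; it assumes a torchvision version that still accepts `pretrained=True`, and `num_classes=2` (mass vs. background) is my placeholder.

```python
# Minimal sketch (not the paper's code): build Faster R-CNN with either a
# ResNet-50-FPN backbone or a VGG16 backbone for comparison on a lung dataset.
import torch
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone
from torchvision.models.detection.rpn import AnchorGenerator
from torchvision.ops import MultiScaleRoIAlign

def build_detector(backbone_name="resnet50", num_classes=2):
    if backbone_name == "resnet50":
        # ResNet-50 + FPN backbone, pretrained on ImageNet
        backbone = resnet_fpn_backbone("resnet50", pretrained=True)
        return FasterRCNN(backbone, num_classes=num_classes)
    # VGG16 backbone: keep only the conv features and declare their channel count
    vgg = torchvision.models.vgg16(pretrained=True).features
    vgg.out_channels = 512
    anchors = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                              aspect_ratios=((0.5, 1.0, 2.0),))
    roi_pool = MultiScaleRoIAlign(featmap_names=["0"], output_size=7,
                                  sampling_ratio=2)
    return FasterRCNN(vgg, num_classes=num_classes,
                      rpn_anchor_generator=anchors, box_roi_pool=roi_pool)

model = build_detector("resnet50")
model.eval()
with torch.no_grad():
    preds = model([torch.rand(3, 512, 512)])   # one dummy image
print(preds[0]["boxes"].shape)
```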

2. An evaluation of region based object detection strategies within X-ray baggage security imagery_2017

Baggage inspection for security screening, which has real-time requirements. The paper compares Faster R-CNN + VGG16 against R-FCN + ResNet-101. R-FCN is worth looking into next: dropping the fully connected layers after RoI pooling solves exactly my problem, because the parameters of almost the entire network can then be shared.

R-FCN is proposed by Dai et al. [21] by pointing out the main limitation of Faster RCNN that each region proposal within RoI pooling layer is computed hundreds of time due to the two subsequent fully connected layers, which is computationally expensive (Figure 2B). They propose a new approach removing fully connected layers after RoI pooling, and employing a new approach called “position sensitive score map” [21], which handles translation variance issue in detection task (Figure 2C). Since no fully connected subnetwork is used, the proposed model shares weights within almost entire network. This leads to much faster convergence both in training and test stages, while achieving similar results to Fast RCNN.
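The position-sensitive score map idea can be tried directly with torchvision.ops.ps_roi_pool. The sketch below is my own illustration, not the authors' code; the channel count, k = 3 bins, class count and box coordinates are made up.

```python
# Sketch of R-FCN-style position-sensitive score maps with ps_roi_pool.
import torch
from torchvision.ops import ps_roi_pool

k, num_classes = 3, 7                    # 3x3 spatial bins, 6 classes + background
feat = torch.randn(1, 256, 50, 50)       # backbone feature map (stride 16 assumed)

# A 1x1 conv produces k*k score maps per class: no fully connected layers,
# so these weights are shared across all region proposals.
score_conv = torch.nn.Conv2d(256, k * k * num_classes, kernel_size=1)
score_maps = score_conv(feat)

# One region proposal as (batch_idx, x1, y1, x2, y2) in input-image coordinates;
# spatial_scale maps it onto the stride-16 feature map.
rois = torch.tensor([[0.0, 64.0, 64.0, 320.0, 320.0]])
pooled = ps_roi_pool(score_maps, rois, output_size=(k, k), spatial_scale=1.0 / 16)
print(pooled.shape)                      # (num_rois, num_classes, k, k)

# Per-RoI class scores are just the average over the k*k position bins.
cls_scores = pooled.mean(dim=(2, 3))
print(cls_scores.shape)                  # (num_rois, num_classes)
```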


[Figure: results]

Faster RCNN (with VGG16) yields 88.3 mAP over a 6-class object detection problem whilst R-FCN (with ResNet-101) achieves 96.3 mAP for firearm detection requiring only 0.1 s per image.



3. Automatic detection of lung nodules: false positive reduction using convolution neural networks and handcrafted features_2017_medicalImage

A CNN first generates candidate regions; nine planes are then extracted from each candidate, a CNN extracts 864 features per plane, and these are combined with 88 handcrafted features and fed to an SVM. Many methods extract nine planes in this way; the authors use a simple network structure of their own design.
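A minimal sketch of how such a fusion step could look with scikit-learn. The 9 x 864 CNN features and 88 handcrafted features follow the note above, but the arrays and the RBF-SVM settings are placeholders rather than the paper's actual pipeline.

```python
# Fuse per-plane CNN features with handcrafted features and classify with an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

n_candidates = 200
cnn_feats = np.random.randn(n_candidates, 9, 864)   # 9 planes x 864 CNN features
hand_feats = np.random.randn(n_candidates, 88)      # 88 handcrafted features
labels = np.random.randint(0, 2, n_candidates)      # nodule vs. false positive

# Flatten the per-plane CNN features and concatenate the handcrafted ones.
X = np.concatenate([cnn_feats.reshape(n_candidates, -1), hand_feats], axis=1)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X, labels)
print(clf.predict_proba(X[:5]))          # false-positive-reduction scores
```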



4. Boundary Regularized Convolutional Neural Network for Layer Parsing of Breast Anatomy in Automated Whole Breast Ultrasound_2017_medicalimage

This paper performs layer parsing on 2-D slices of ABVS data, decomposing the breast into skin, fat, muscle, gland, and so on. There are 16 cases with roughly 3,100 2-D images.


[Figure: layer-parsing illustration]


[Figure: results]

The problem is treated as pixel-level classification.

The computational breast anatomy decomposition in the AWBUS images is formulated as a pixel classification problem with four classes of subcutaneous fat, breast parenchyma, pectolis muscle, and chest wall.

VGG16 is improved with deep supervision.

deep supervision with the boundary maps is also implemented by the same 5 auxiliary side networks shown in Fig. 2


[Figure: deeply supervised VGG]
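A rough sketch of deep supervision on a VGG16-style encoder, written from the description above. The five side branches, the number of classes (4 anatomy layers) and the use of plain segmentation labels instead of boundary maps are my simplifications, not the paper's exact network.

```python
# Deeply supervised VGG16 encoder: each stage gets a 1x1-conv side head whose
# upsampled prediction is supervised with the same per-pixel target.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class DeeplySupervisedVGG(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        feats = torchvision.models.vgg16(pretrained=False).features
        # split VGG16 conv layers into its 5 stages (each ends with a max-pool)
        self.stages = nn.ModuleList([feats[:5], feats[5:10], feats[10:17],
                                     feats[17:24], feats[24:31]])
        chans = [64, 128, 256, 512, 512]
        self.side_heads = nn.ModuleList(
            nn.Conv2d(c, num_classes, kernel_size=1) for c in chans)

    def forward(self, x):
        size = x.shape[-2:]
        side_outputs = []
        for stage, head in zip(self.stages, self.side_heads):
            x = stage(x)
            side_outputs.append(F.interpolate(head(x), size=size,
                                              mode="bilinear", align_corners=False))
        return side_outputs

model = DeeplySupervisedVGG(num_classes=4)
img = torch.rand(1, 3, 256, 256)
target = torch.randint(0, 4, (1, 256, 256))   # per-pixel anatomy labels
# deep supervision: sum the cross-entropy over every side output
loss = sum(F.cross_entropy(o, target) for o in model(img))
print(loss.item())
```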

5. Fully convolutional multi-scale residual DenseNets for cardiac segmentation and automated cardiac diagnosis using ensemble of classifiers_2019_Medical Image Analysis

A segmentation paper: it combines DenseNet and Inception ideas to tackle the large parameter count of FCNs, and shows that residual/dense connections are useful.
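Not from the paper, but a minimal dense-block sketch shows why dense connectivity keeps the parameter count down: each layer adds only `growth_rate` new feature maps and reuses all earlier ones.

```python
# A small dense block: layers concatenate their output onto the running feature map.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels, growth_rate=12, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False)))

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)   # concatenate, don't replace
        return x

block = DenseBlock(in_channels=32)
out = block(torch.rand(1, 32, 64, 64))
print(out.shape)                                   # (1, 32 + 4*12, 64, 64)
print(sum(p.numel() for p in block.parameters()))  # modest parameter count
```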

[Figure: cited reference]

This is the interface of a commercial CAD software system; the question is whether interpolation should be applied during training.

[Figure: CAD system interface]

6. The adaptive computer-aided diagnosis system based on tumor sizes for the classification of breast tumors detected at screening ultrasound_Ultrasonics_2017

Tumors are divided by size at a 1 cm boundary; classification accuracy improves after splitting by size, but isn't that to be expected? Classification uses morphological and texture features; after segmentation the tumor size is selected adaptively, and features are designed separately for tumors of different sizes. The main work is in feature selection, with 85 features chosen in total.
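A hedged sketch of the size-adaptive idea as I read it: split cases at the 1 cm boundary and train a separate classifier per size group. The random-forest classifier, feature arrays and threshold handling below are placeholders, not the paper's method.

```python
# Train one classifier per tumor-size group and dispatch by size at test time.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
sizes_cm = rng.uniform(0.3, 3.0, 300)        # maximum tumor diameter
features = rng.standard_normal((300, 85))    # 85 morphological + texture features
labels = rng.integers(0, 2, 300)             # benign vs. malignant

models = {}
for group, mask in {"small": sizes_cm < 1.0, "large": sizes_cm >= 1.0}.items():
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(features[mask], labels[mask])    # group-specific classifier
    models[group] = clf

def predict(size_cm, feat_vec):
    group = "small" if size_cm < 1.0 else "large"
    return models[group].predict_proba(feat_vec.reshape(1, -1))[0, 1]

print(predict(0.8, features[0]))
```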


[Figure: pipeline]



7. Using Multi-level Convolutional Neural Network for Classification of Lung Nodules on CT images_2018

The paper mainly tackles the large variation in nodule size with a multi-scale approach. It cites two classic multi-scale methods [12][13], from the same team that published a Pattern Recognition paper in 2017. The difference here is that at the output stage the features of the three networks are first flattened to one dimension and then fused into a single vector, with dropout applied; there seems to be no particular justification, just trying things out. Multi-scale is achieved with convolution kernels of different sizes.

three levels of CNNs where each level has the same structure and same numbers of feature maps in the second convolutional layers, but they have different convolutional kernels. This idea is aimed to extract multi-scale features of input effectively


[Figure: architecture]

if we use multi-scale convolution strategy in a single level CNN, for example, a CNN contains 3 convolutional layers with 3 kinds of convolutional kernel sizes, each convolutional layer only can extract one scale features.

We designed 3 kinds of convolutional kernels to improve the ability of extracting features, they are 3×3, 5×5 and 7×7, respectively.
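A minimal sketch of that multi-level design as I understand it (not the authors' code): three branches with identical structure but 3×3, 5×5 and 7×7 kernels, whose flattened features are concatenated, passed through dropout and classified. Channel counts and patch size are placeholders.

```python
# Three parallel CNN levels that differ only in kernel size, fused before the classifier.
import torch
import torch.nn as nn

def branch(kernel_size):
    pad = kernel_size // 2
    return nn.Sequential(
        nn.Conv2d(1, 16, kernel_size, padding=pad), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size, padding=pad), nn.ReLU(), nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(4), nn.Flatten())

class MultiLevelCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        # same structure, different convolutional kernels: 3x3, 5x5, 7x7
        self.levels = nn.ModuleList(branch(k) for k in (3, 5, 7))
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.Linear(3 * 32 * 4 * 4, num_classes)

    def forward(self, x):
        fused = torch.cat([level(x) for level in self.levels], dim=1)
        return self.fc(self.dropout(fused))

model = MultiLevelCNN()
logits = model(torch.rand(8, 1, 64, 64))   # a batch of nodule patches
print(logits.shape)                        # (8, 2)
```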


8. Multi-crop Convolutional Neural Networks for lung nodule malignancy suspiciousness classification _Pattern Recognition_2017

This is reference [13] of the previous paper.

We present a Multi-crop Convolutional Neural Network (MC-CNN) to automatically extract nodule salient information by employing a novel multi-crop pooling strategy which crops different regions from convolutional feature maps and then applies max-pooling different times.

Multi-scale feature extraction is achieved without using multiple networks.

We propose a multi-crop pooling operation which is a specialized pooling strategy for producing multi-scale features to surrogate the conventional max-pooling operation. Without using multiple networks to produce multi-scale features, the proposed approach applying on a single network is effective in computational complexity

The paper argues that conventional max-pooling only reduces feature dimensionality at a single scale, which hinders preserving information when object sizes vary over a large range.

Such a setting hinders it from capturing accurate object-specific information when the size of objects varied largely in the images.

The modification is not difficult: take the feature map R0 from the previous layer as input, crop the center of R0 to get R1, then crop the center of R1 to get R2; apply pooling twice to R0, once to R1, and not at all to R2, and concatenate the results to obtain the new multi-scale features.

[Figure: the new multi-crop pooling]
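The multi-crop pooling step is simple enough to sketch directly from the description above; this is my interpretation, not the MC-CNN code, and the crop factor of 1/2 per step is an assumption.

```python
# Multi-crop pooling: center crops R1, R2 of R0, pooled 2/1/0 times, then concatenated.
import torch
import torch.nn.functional as F

def center_crop(x, factor=2):
    """Crop the central 1/factor region of a (N, C, H, W) feature map."""
    h, w = x.shape[-2:]
    ch, cw = h // factor, w // factor
    top, left = (h - ch) // 2, (w - cw) // 2
    return x[..., top:top + ch, left:left + cw]

def multi_crop_pool(r0):
    r1 = center_crop(r0)                       # center crop of R0
    r2 = center_crop(r1)                       # center crop of R1
    f0 = F.max_pool2d(F.max_pool2d(r0, 2), 2)  # pool R0 twice
    f1 = F.max_pool2d(r1, 2)                   # pool R1 once
    f2 = r2                                    # R2 is left unpooled
    # all three now share the same spatial size and can be concatenated
    return torch.cat([f0, f1, f2], dim=1)

r0 = torch.rand(1, 64, 32, 32)
out = multi_crop_pool(r0)
print(out.shape)    # (1, 192, 8, 8): three branches at the same resolution
```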

The novelty feels a bit thin for a journal like Pattern Recognition; a 2015 paper from the same team is indeed much more complex, using three networks that adjust the feature scale through their inputs.


9. Visualizing and Comparing AlexNet and VGG using Deconvolutional Layers_2016

The goal of this paper is to probe the internal workings of CNNs by visualizing the different representations across layers and what information each layer retains, comparing the two networks, named as VGG-16 and AlexNet respectively, through visualization.

visualizing patches in the representation spaces constructed by different layers, and visualizing visual information kept in each layer.

One strength of CNNs is that receptive fields and weight sharing reduce the number of parameters that need to be trained. Weight sharing means the neurons within one feature map share their weights, i.e. all neurons in that feature map use the same kernel. The number of parameters is therefore independent of the number of neurons; it depends only on the kernel size and the number of feature maps. https://blog.csdn.net/whr_ws/article/details/82822680

The number of output pixels equals the number of neurons.

Example: input 7×7×3, two 3×3 kernels, output 5×5×2, so 50 neurons; through parameter sharing, neurons 1-25 correspond to the first 3×3 kernel and neurons 26-50 to the second.

In CNN, neurons are organized by layer, each neuron receives neuron activations from previous layer and weighted by weights in the connections. In fully-connected layer, each neuron is connected to all neurons in previous layers with its own weights. While in convolutional layer, neurons are further organized by feature map and only locally connected to neurons in previous layer. Moreover, all neurons in a feature map share the same filter (weights bank), so neurons in a feature map are favoring the same kind of pattern.
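The parameter-counting argument can be checked in a few lines of PyTorch (not part of the original note): with bias disabled, the two 3×3 kernels over a 3-channel input give 2×3×3×3 = 54 weights, no matter how many output neurons there are.

```python
# Verify weight sharing: output has 50 neurons, but the conv layer has only 54 weights.
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=2, kernel_size=3, bias=False)
out = conv(torch.rand(1, 3, 7, 7))

print(out.shape)           # torch.Size([1, 2, 5, 5]) -> 50 output neurons
print(out.numel())         # 50
print(sum(p.numel() for p in conv.parameters()))   # 2 * 3 * 3 * 3 = 54 weights

# A fully connected layer mapping the same input to the same output would need
# (7*7*3) * 50 = 7350 weights, with no sharing at all.
fc = nn.Linear(7 * 7 * 3, 50, bias=False)
print(sum(p.numel() for p in fc.parameters()))      # 7350
```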

[Figure: receptive fields of VGG16]

Conclusion: deeper networks capture more information.

Through comparison of CNNs with different depths, it shows that deeper CNN is better at extracting the discriminant information, which improves the prediction performance



10. Convolutional Neural Networks for Medical Image Analysis: Full Training or Fine Tuning?_TMI_2016

This paper mainly studies how well transfer learning works on medical images.

Can the use of pre-trained deep CNNs with sufficient fine-tuning eliminate the need for training a deep CNN from scratch?

It addresses the following questions:

1. We demonstrated how fine-tuning a pre-trained CNN in a layer-wise manner leads to incremental performance improvement. (The question of how to fine-tune; see the sketch after this list.)

2. We analyzed how the availability of training samples influences the choice between pre-trained CNNs and CNNs trained from scratch. (The question of how the amount of training data determines the choice between a fine-tuned model and one trained from scratch.)
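A minimal sketch of layer-wise fine-tuning in the spirit of point 1 (not the paper's exact protocol): start from an ImageNet-pretrained VGG16, replace the head, freeze all conv blocks, then unfreeze the deepest blocks one group at a time. The two-class head and the block boundaries are assumptions.

```python
# Layer-wise fine-tuning: freeze all conv blocks, then unfreeze the deepest ones.
import torch.nn as nn
import torchvision

def make_finetune_model(num_unfrozen_blocks, num_classes=2):
    model = torchvision.models.vgg16(pretrained=True)
    model.classifier[6] = nn.Linear(4096, num_classes)   # new task head
    # the five VGG16 conv blocks, as slices of model.features
    blocks = [model.features[0:5], model.features[5:10], model.features[10:17],
              model.features[17:24], model.features[24:31]]
    for p in model.features.parameters():
        p.requires_grad = False                           # freeze everything first
    for block in blocks[len(blocks) - num_unfrozen_blocks:]:
        for p in block.parameters():
            p.requires_grad = True                        # unfreeze the deepest blocks
    return model

# e.g. fine-tune only the last two conv blocks plus the new classifier head;
# only parameters with requires_grad=True would then be passed to the optimizer.
model = make_finetune_model(num_unfrozen_blocks=2)
print(sum(p.numel() for p in model.parameters() if p.requires_grad))
```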
