论文研究

10-8

  1. The impact of image processing algorithms on digital radiography of patients with metallic hip implants
    图像处理算法对金属髋关节植入患者数字影像的影响
    Keywords:
    Digital radiography数字化X线摄影;Image quality图像质量;Image artifacts图像伪影;Image processing algorithms图像处理算法
    Highlights:
    Image processing of digital radiographs aims to enhance the visibility of the anatomic areas of interest.数码x光照片的图像处理,目的是要提高解剖区域的能见度。
    Image processing algorithms may adversely affect image quality of digital radiographs and create artifacts.图像处理算法可能会对数字x光片的图像质量产生不利影响并产生伪影。
    Hip implants create local non-uniformities whose extent depends on the processing parameter settings.髋关节植入物会产生局部不均匀性,其程度取决于处理参数的设置。
    Occasional misrepresentation of imaged anatomy may also be observed in patients without implants.在未植入假体的患者中偶尔也可观察到对成像解剖结构的错误呈现。
    Methods:
    A quality control phantom was imaged using a digital radiographic unit and the standard examination protocol for Pelvis anteroposterior (AP) projection. The original image was reprocessed with all available selections of Diamond View, which is a processing algorithm for optimizing image quality of different anatomic regions. The same procedure was repeated for two other examination protocols, Femur AP and Hip AP, which differ in terms of harmonization kernel and gain, and look up table settings. The whole procedure was repeated with a Pb strip, 2 cm wide and 3 mm thick, positioned close to the right phantom edge, in order to simulate a metallic hip implant.
    使用数字X线摄影设备和骨盆前后位(AP)投照的标准检查方案对质量控制体模进行成像。利用Diamond View(一种用于优化不同解剖区域图像质量的处理算法)的所有可用选项对原始图像进行再处理。对另外两个检查方案(股骨AP和髋关节AP)重复同样的步骤,这两个方案在谐调核、增益以及查找表设置上有所不同。随后将一条宽2 cm、厚3 mm的铅(Pb)条放置在体模右缘附近以模拟金属髋关节植入物,并重复整个过程。
    Using ImageJ a number of regions of interest (ROIs) were positioned on the phantom images and the impact of processing parameters on certain image characteristics and image quality indices was evaluated.
    利用ImageJ在体模图像上放置多个感兴趣区域(ROI),评估处理参数对某些图像特性和图像质量指标的影响。
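As a rough illustration of the ROI-based evaluation described above (not the paper's actual ImageJ procedure), the sketch below measures mean and standard deviation inside rectangular ROIs and summarizes local non-uniformity; the function names and the simple max-minus-min index are assumptions:

```python
import numpy as np

def roi_stats(image, x, y, w, h):
    # Mean and standard deviation of pixel values inside a rectangular ROI,
    # analogous to ImageJ's "Measure" on a rectangular selection.
    roi = image[y:y + h, x:x + w]
    return float(roi.mean()), float(roi.std())

def uniformity_index(image, rois):
    # Spread of ROI means (max - min): a simple stand-in for the local
    # non-uniformity the paper evaluates around the simulated implant.
    means = [roi_stats(image, *r)[0] for r in rois]
    return max(means) - min(means)
```

On a perfectly uniform phantom image the index is 0; processing-induced halos around a Pb strip would raise it.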

  2. Objective evaluation on yarn hairiness detection based on multi-view imaging and processing method基于多视角成像与处理方法的纱线毛羽检测的客观评价
    Keywords:
    Yarn hairiness纱线毛羽;Multi-view image acquisition多视点图像采集;Image processing图像处理;Objective evaluation客观评价
    Highlights:
    A multi-view yarn image acquisition device was proposed.提出了一种多视角纱线图像采集装置。
    The reason for the poor repeatability of the yarn parameters at a single viewing angle is explained.解释了单一视角下纱线参数重复性差的原因。
    The multi-view imaging and processing method can obtain more comprehensive yarn hairiness parameter information.多视角成像与处理方法可以获得更全面的纱线毛羽参数信息。
    Abstract:
    In this paper, a multi-view yarn image acquisition device was proposed to collect yarn images from many different viewing angles instead of a single viewing angle, for the purpose of obtaining the expected accurate measurement.
    本文提出了一种多视角纱线图像采集装置,从多个不同视角而不是单一视角采集纱线图像,以获得期望的准确测量结果。
    One set of the proposed image processing algorithms, quite qualified for processing the multi-view yarn image sequences, was employed to obtain the shape of the yarn hairiness viewed from different angles. Both lengths and numbers of yarn hairiness from different viewing angles could be identified, and besides, the average value of these hairiness parameters could be calculated to determine the quality of yarn hairiness.
    提出的一套图像处理算法能够较好地处理多视角纱线图像序列,获得从不同角度观察到的纱线毛羽形状。从不同的观察角度识别出纱线毛羽的长度和数量,并计算出这些毛羽参数的平均值,从而确定纱线毛羽的质量。
    Our experimental results show that the multi-view imaging and processing method can avoid reporting only the maximum or minimum detection result and provides more comprehensive yarn hairiness parameter information. In addition, as guidance for subsequent processing of yarn products, the results obtained from the multi-view imaging and processing algorithm are reproducible and convenient for further study of yarn hairiness. Combined with existing image processing algorithms, the multi-view image acquisition device put forward in this paper can be adopted to form a complete yarn hairiness detection system, providing favorable theoretical support for the future development of digital yarn quality evaluation systems.
    实验结果表明,多视角成像与处理方法可以避免检测结果仅取最大值或最小值,获得更全面的纱线毛羽参数信息。此外,作为纱线产品后续加工的指导,多视角成像与处理算法得到的检测结果具有可重复性,便于进一步研究纱线毛羽。结合现有的图像处理算法,本文提出的多视角图像采集装置可用于构成完整的纱线毛羽检测系统,为数字化纱线质量评价系统的未来发展提供了良好的理论支持。
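The averaging step described in the abstract can be sketched as follows (a minimal illustration, not the authors' algorithm; the data layout — one array of detected hair lengths per viewing angle — is an assumption):

```python
import numpy as np

def hairiness_summary(per_view_lengths):
    # per_view_lengths: one array of detected hair lengths per viewing angle.
    # Averaging over views avoids reporting only the extreme (max or min)
    # value that a single viewing angle would give.
    counts = [len(v) for v in per_view_lengths]
    means = [float(np.mean(v)) if len(v) else 0.0 for v in per_view_lengths]
    return float(np.mean(means)), float(np.mean(counts))
```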

  3. Experimental study on the flame stability and color characterization of cylindrical premixed perforated burner of condensing boiler by image processing method采用图像处理方法对冷凝锅炉圆筒预混穿孔燃烧器火焰稳定性及颜色表征进行了实验研究
    Keywords:
    Cylindrical perforated burner圆柱形穿孔燃烧器;Stable blue flame稳定的蓝色火焰;Premixed flame预混合火焰;Image processing图像处理;RGB color spaceRGB颜色空间
    Highlights:
    A cylindrical premixed perforated burner is investigated by image processing methods.采用图像处理方法对一种圆柱形预混多孔燃烧器进行了研究。
    Digital images from a CCD camera and color processing techniques are used for finding flame characteristics.利用CCD相机获取的数字图像和颜色处理技术来确定火焰特性。
    The stable blue flame is found at equivalence ratios of 0.7–0.73.稳定的蓝色火焰出现在0.7–0.73的当量比范围内。
    The optimum condition of burner is defined as the intersection of two intensities’ component of R and B.将R和B两个强度分量的交点确定为燃烧器的最优工况。
    Abstract:
    In this paper, a cylindrical premixed perforated burner which is mostly used in condensing boilers is investigated by image processing methods. The burner is experimentally analyzed in its operating heating capacities (11.7–17.1 kW) and equivalence ratios (0.4–1.2). Flame properties were studied using digital images by CCD camera and color processing techniques. The method devised a procedure for finding a reliable relation between a digital image color and flame characteristics in the visible wavelength domain. It is observed that by decreasing the equivalence ratio from 1.2 to 0.4, the flame color changed from green with yellow and then blue. Besides, flame lift-off and blow off were also observed. Lower flammability limit is in the equivalence ratio of 0.44. The optimum conditions of the burner, which is defined by the stable blue flame, are found in the equivalence ratio of 0.7–0.73. Moreover, RGB analysis is used to find the stable operation of the burner. This stable optimum condition can be defined as the intersection of two intensities’ component of Red and Blue in image processing of the flame.
    本文采用图像处理方法对一种主要用于冷凝锅炉的圆柱形预混穿孔燃烧器进行了研究。在其工作供热能力(11.7–17.1 kW)和当量比(0.4–1.2)范围内对燃烧器进行了实验分析。利用CCD相机获取的数字图像和颜色处理技术研究了火焰特性。该方法设计了一种在可见光波段寻找数字图像颜色与火焰特性之间可靠关系的流程。观察到当量比从1.2降至0.4时,火焰颜色由绿中带黄逐渐变为蓝色;此外,还观察到火焰抬升和吹熄现象。贫燃极限对应的当量比为0.44。由稳定蓝色火焰界定的燃烧器最佳工况出现在0.7–0.73的当量比范围内。此外,利用RGB分析确定燃烧器的稳定运行状态;该稳定最佳工况可定义为火焰图像处理中红色(R)和蓝色(B)两个强度分量的交点。
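The "intersection of the R and B intensity components" used to define the optimum condition can be located numerically; the sketch below linearly interpolates the crossing point of the two mean-intensity curves (the interpolation scheme is an assumption, not taken from the paper):

```python
import numpy as np

def rb_intersection(phi, r_mean, b_mean):
    # phi: equivalence ratios (ascending); r_mean/b_mean: mean R and B
    # channel intensities of the flame images at each ratio.
    # Returns the ratio where the two curves cross, or None if they don't.
    d = np.asarray(r_mean, float) - np.asarray(b_mean, float)
    for i in range(len(d) - 1):
        if d[i] == 0:
            return float(phi[i])
        if d[i] * d[i + 1] < 0:  # sign change: linear interpolation
            t = d[i] / (d[i] - d[i + 1])
            return float(phi[i] + t * (phi[i + 1] - phi[i]))
    return None
```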

  4. Improved outdoor thermography and processing of infrared images for defect detection in PV modules改进的室外热成像和用于光伏组件缺陷检测的红外图像处理
    Keywords:
    Defect detection缺陷检测;Photovoltaic module光伏组件;Thermography热成像;Infrared images红外图像;Image processing图像处理;Current-voltage (IV) measurements电流-电压(IV)测量
    Highlights:
    An improved thermography scheme is presented for defect detection in PV modules.
    提出了一种用于光伏组件缺陷检测的改进热成像方案。
    Improved IR images are obtained providing more details about defects in PV modules.
    改进的红外图像提供了更多关于光伏组件缺陷的细节。
    Differences between indoor and outdoor thermography are highlighted.
    强调了室内和室外热成像的区别。
    Performance factor is estimated that represents quantitative impact of defects.
    估计了表征缺陷定量影响的性能因子。
    An image processing scheme for locating edges of severe & mild defects is presented.
    提出了一种用于严重和轻微缺陷边缘定位的图像处理方案。
    Abstract:
    Defect detection in photovoltaic (PV) modules and their impact assessment is important to enhance PV system performance and reliability. To identify and analyze the defects, an improved outdoor infrared (IR) thermography scheme is presented in this study. The indoor (dark) and outdoor (illuminated) IR experiments are carried out on normally operating and defective PV modules. The indoor and outdoor measurements for normally operating modules are similar. However, the measurements for defective modules show differences, i.e., the outdoor images show fewer defects, or none at all, in comparison to indoor images. Subsequent to this, outdoor imaging is carried out with our improved outdoor thermography scheme. This scheme is based on modulating the temperature of the PV module by altering the electrical behavior of a single cell. Therein, a PV cell is shaded in different fractions to attain different current conditions between open circuit and the maximum power point, which causes temperature changes in the series-connected cells, leading to different temperature conditions. The images obtained by this scheme provide clearer and more detailed information about defects, much like that given by indoor IR images. The severely and mildly defective regions show temperature differences of more than 30° and 20° respectively outdoors. The performance factor (PF) based on translated power output is also calculated for the two studied modules; it represents the quantitative impact of defects. The PF for PV modules 1 and 2 is reduced from 97% to 31% and from 96% to 88% respectively with the introduction of defects. The PF values correlate with the IR measurements of these modules. Furthermore, an image processing scheme comprising image filtering, color quantization and edge detection operations is presented that locates the edges of the severely and mildly defective regions in IR images.
    光伏(PV)组件的缺陷检测及其影响评估对提高光伏系统的性能和可靠性具有重要意义。为了识别和分析缺陷,本研究提出了一种改进的室外红外(IR)热成像方案。在正常运行和有缺陷的光伏组件上分别进行了室内(暗态)和室外(光照)红外实验。正常运行组件的室内外测量结果相似;而有缺陷组件的测量结果则存在差异,即与室内图像相比,室外图像显示的缺陷更少甚至完全不显示。随后,利用我们改进的室外热成像方案进行室外成像。该方案通过改变单个电池的电学行为来调制光伏组件的温度:以不同比例遮挡一个光伏电池,以获得开路与最大功率点之间的不同电流条件,从而引起串联电池的温度变化,形成不同的温度状态。该方案获得的图像提供了更清晰、更详细的缺陷信息,与室内红外图像非常接近。室外条件下,严重和轻度缺陷区域的温差分别超过30°和20°。还对所研究的两个组件计算了基于折算功率输出的性能因子(PF),以表征缺陷的定量影响:引入缺陷后,组件1和组件2的PF分别从97%降至31%、从96%降至88%。PF值与这些组件的红外测量结果相符。此外,还提出了一种由图像滤波、颜色量化和边缘检测操作组成的图像处理方案,用于定位红外图像中严重和轻度缺陷区域的边缘。
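The final image processing scheme (filtering, color quantization, edge detection) might look roughly like this on a single-channel thermal image; this is a schematic stand-in using uniform quantization and label-change edges, not the paper's implementation:

```python
import numpy as np

def quantize(image, levels):
    # Uniform quantization of a (non-constant) grayscale thermal image
    # into `levels` discrete temperature bands.
    lo, hi = float(image.min()), float(image.max())
    bins = ((image - lo) / (hi - lo) * levels).astype(int)
    return np.clip(bins, 0, levels - 1)

def band_edges(labels):
    # Mark pixels where the quantized band changes: the boundaries of
    # (here, hypothetical) defective regions in the IR image.
    e = np.zeros(labels.shape, dtype=bool)
    e[:, 1:] |= labels[:, 1:] != labels[:, :-1]
    e[1:, :] |= labels[1:, :] != labels[:-1, :]
    return e
```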

  5. The determination of age and gender by implementing new image processing methods and measurements to dental X-ray images通过对牙科x光图像实施新的图像处理方法和测量来确定年龄和性别
    Keywords:
    Age and gender estimation from teeth从牙齿估计年龄和性别;Morphological measurements形态学测量;Panoramic radiography全景X线摄影;Image processing techniques图像处理技术
    Highlights:
    Specific measurement calculations were made on dental x-ray images to determine age.
    对牙科x光图像进行了具体的测量计算,以确定年龄。
    Morphological measurements of the teeth were made with new ideas and original techniques.
    采用新思路和原创技术对牙齿进行了形态学测量。
    This study presents many new techniques for age and gender determination.
    本研究提出了许多用于确定年龄和性别的新技术。
    The application is made dynamic and the images in the database can be changed.
    应用程序是动态的,数据库中的图像可以改变。
    Abstract:
    All of the features used to identify and distinguish people from others constitute that person’s identity. For any reason, a person’s identity may need to be identified and distinguished from other people. Authorities provided the credentials of a living or dead person in such cases from the forensic institutions. The identification process must be done correctly. In this study, specific measurement calculations were made on dental x-ray images to determine age and gender. Age and gender information of the persons were systematically determined by working with panoramic dental x-ray images. Panoramic dental x-ray images were taken out of bounds, and a total of 1315 tooth images and 162 different tooth groups were used. These images have been subjected to 3 different preprocess operations. Each preprocessed image is recorded in different (M1, M2, M3) folders. Then, image processing techniques applied for the first time to the tooth images (Area, Perimeter, Center of gravity, Similarity ratio, Radius calculation) were applied. This information of the teeth is also kept in separate XML (XMLlist-1, 2, 3) files. The application was developed in C # programming language. The user loads the tooth image into the application. This image can be predicted by comparing it with the comparison group (area, etc.) after the desired preprocessing. The highest estimated age and gender estimates are 100% and 95%, respectively.
    用于识别并将一个人与他人区分开来的所有特征构成了这个人的身份。出于各种原因,可能需要确认一个人的身份并将其与他人区分开来。在这种情况下,司法鉴定机构会向有关部门提供生者或死者的身份信息。身份识别过程必须准确无误。本研究对牙科X光图像进行了特定的测量计算,以确定年龄和性别。通过处理全景牙科X光图像,系统地确定了受检者的年龄和性别信息。在不设限制的情况下获取了全景牙科X光图像,共使用了1315张牙齿图像和162个不同的牙齿组。这些图像经过了3种不同的预处理操作,每张预处理后的图像分别记录在不同的文件夹(M1、M2、M3)中。然后,首次将图像处理技术(面积、周长、重心、相似比、半径计算)应用于牙齿图像,牙齿的这些信息也分别保存在单独的XML文件(XMLlist-1、2、3)中。该应用程序使用C#编程语言开发。用户将牙齿图像加载到应用程序中,经过所需的预处理后,将其与比较组的特征(面积等)进行比较,即可做出预测。年龄和性别估计的最高准确率分别为100%和95%。
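A few of the listed morphological features (Area, Perimeter, Center of gravity) can be computed from a binary tooth mask as below; this is an illustrative sketch with a simple 4-connectivity perimeter definition, not the study's C# implementation:

```python
import numpy as np

def tooth_features(mask):
    # mask: boolean array, True on tooth pixels.
    ys, xs = np.nonzero(mask)
    area = int(len(xs))
    # Boundary pixels: foreground pixels with at least one of the four
    # direct neighbours in the background.
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int((mask & ~interior).sum())
    center = (float(ys.mean()), float(xs.mean()))  # center of gravity (row, col)
    return area, perimeter, center
```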

10-10

  1. Image analysis of liberation spectrum of coarse particles粗颗粒解离谱的图像分析
    Keywords:
    Image analysis图像分析;Liberation spectrum解离谱;Particle size distribution粒度分布;Stereological correction体视学校正
    Highlights:
    The liberation spectrum for coarse particles was computed with image analysis.
    用图像分析方法计算了粗颗粒的解离谱。
    The image analysis successfully predicted the liberation spectrum.
    图像分析成功地预测了解离谱。
    The image analysis estimated precisely the particle size distribution.
    图像分析准确地估计了颗粒的粒度分布。
    The proposed imaging method might be used for real high-grade coarse particles.
    所提出的成像方法或可用于真实的高品位粗颗粒。
    Abstract:
    Determination of liberation spectrums by using MLA and QEMSCAN techniques require polished sections and fine particles. These techniques cannot be performed in-situ and for coarse particles. Thus, the focus of this technical note is to investigate whether the image analysis method can be used for the determination of liberation spectrum for coarse particles. Two methods were used to determine the liberation spectrum. In the first method, the liberation spectrum was obtained using the small and large diameter of particles. In the second method, the liberation spectrum was determined using the small and large diameter of particles as well as the shape correction diameter. The results showed that the image analysis can be used to successfully determine the liberation spectrum. The composition of composite particles was significantly improved when the stereological correction was used i.e. the square root of the mean square error for the particle composition using the method 1 was 1.25% while that using the method 2 was 0.60%. The proposed method might be used for the determination of liberation spectrum of high-grade real coarse particles. However, this requires a significant amount of future work.
    使用MLA和QEMSCAN技术测定解离谱需要抛光切片和细颗粒,无法在原位或对粗颗粒进行。因此,本技术简报的重点是研究图像分析方法能否用于测定粗颗粒的解离谱。采用两种方法测定解离谱:第一种方法利用颗粒的短径和长径获得解离谱;第二种方法在短径和长径之外还使用了形状校正直径。结果表明,图像分析可以成功地测定解离谱。采用体视学校正后,复合颗粒的成分测定得到显著改善,即方法1的颗粒成分均方误差的平方根为1.25%,而方法2为0.60%。所提出的方法或可用于测定真实高品位粗颗粒的解离谱,但这还需要大量的后续工作。
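Method 1 and the error comparison can be caricatured in a few lines; the geometric-mean equivalent diameter below is one common 2-D image-analysis convention and only a stand-in for the paper's exact correction:

```python
import numpy as np

def equivalent_diameter(d_small, d_large):
    # Geometric mean of the short and long particle axes: a common 2-D
    # image-analysis estimate of particle size (assumed convention).
    return np.sqrt(np.asarray(d_small, float) * np.asarray(d_large, float))

def rmse(predicted, reference):
    # "Square root of the mean square error" used to compare the two methods.
    p, r = np.asarray(predicted, float), np.asarray(reference, float)
    return float(np.sqrt(np.mean((p - r) ** 2)))
```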
  2. Shape Characterization of Subvisible Particles Using Dynamic Imaging Analysis利用动态成像分析来表征亚可见粒子的形状
    Keywords:
    particle size颗粒大小;protein aggregation蛋白质聚集;protein formulation(s)蛋白质制剂;image analysis图像分析;bioanalysis生物分析
    Abstract:
    Protein aggregates and subvisible particles (SbvP), inherently present in all marketed protein drug products, have received increasing attention by health authorities. Dynamic imaging analysis was introduced to visualize SbvP and facilitate understanding of their origin. The educational United States Pharmacopeia chapter <1787> emphasizes that dynamic imaging analysis could be used for morphology measurements in the size range of 4-100 μm. However, adequate morphology characterization, as suggested in the United States Pharmacopeia <1787> proposed size range, remains challenging as nonspherical size standards are not commercially available. In this study, a homogenous and well-defined nonspherical particle standard was fabricated and used to investigate the capabilities of 2 dynamic imaging analysis systems (microflow imaging (MFI) and FlowCAM) to characterize SbvP shape in the size range of 2-10 μm. The actual aspect ratio of the SbvP was measured by scanning electron microscopy and compared to the results obtained by dynamic imaging analysis. The test procedure was used to assess the accuracy in determining the shape characteristics of the nonspherical particles. In general, dynamic imaging analysis showed decreasing accuracy in morphology characterization for 5 μm and 2 μm particles. The test procedure was also capable to compare and evaluate differences between the 2 dynamic imaging methods. The present study should help to define ranges of operation for dynamic imaging analysis systems.
    蛋白质聚集体和亚可见颗粒(SbvP)固有地存在于所有已上市的蛋白药物产品中,日益受到卫生监管机构的关注。动态成像分析被引入用于可视化SbvP并帮助理解其来源。美国药典(USP)的教育性章节<1787>强调,动态成像分析可用于4–100 μm尺寸范围内的形态学测量。然而,由于非球形尺寸标准品尚无市售,要在USP <1787>建议的尺寸范围内进行充分的形态表征仍具有挑战性。本研究制作了一种均匀且定义明确的非球形颗粒标准品,用于考察两种动态成像分析系统(微流成像(MFI)和FlowCAM)在2–10 μm尺寸范围内表征SbvP形状的能力。通过扫描电镜测量了SbvP的实际长宽比,并与动态成像分析的结果进行比较。该测试流程用于评估测定非球形颗粒形状特征的准确性。总体而言,动态成像分析对5 μm和2 μm颗粒的形态表征准确性依次下降。该测试流程还能够比较和评估两种动态成像方法之间的差异。本研究应有助于确定动态成像分析系统的适用范围。
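The shape descriptor at issue, the aspect ratio, can be approximated from a segmented particle mask; the bounding-box definition below is a simplification (imaging systems typically fit an ellipse), so treat it as an illustrative sketch:

```python
import numpy as np

def aspect_ratio(mask):
    # Bounding-box aspect ratio in (0, 1]: minor extent over major extent
    # of the foreground (particle) pixels.
    ys, xs = np.nonzero(mask)
    h = int(ys.max() - ys.min() + 1)
    w = int(xs.max() - xs.min() + 1)
    return min(h, w) / max(h, w)
```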
  3. Experimental analysis of operational data for roundabouts through advanced image processing利用先进的图像处理技术对环形路运行数据进行实验分析
    Keywords:
    Transportation运输;Roundabout环形交叉口;Image analysis/processing图像分析/处理;Vehicle tracking and classification车辆跟踪与分类;Operating speed运行速度;Trajectory轨迹
    Highlights:
    An investigation carried out to survey vehicle movements at roundabouts was presented.
    对环形交叉口的车辆运动情况进行了调查。
    O/D matrix, classification, trajectories tracking, speed and acceleration from video images analysis.
    通过视频图像分析得到O/D矩阵、车辆分类、轨迹跟踪以及速度和加速度。
    A number of camera set-up configurations were adopted.
    采用了多种相机安装配置。
    Performance of installation set-ups with different vehicle tracking strategies has been evaluated.
    对采用不同车辆跟踪策略的安装装置的性能进行了评价。
    Abstract:
    Roundabout is still the focus of several investigations due to the relevant number of variables affecting their operational performances (i.e., capacity, safety, emissions). To develop reliable models, investigations should be supported by devices and related sensors to extract variables of interest (i.e., flow, speed, gap, lag, follow-up time, vehicle classification and trajectory). Notwithstanding that several sensors and technologies are currently used for data collection, most of them present limitations. The paper presents the investigation carried out to survey vehicle movements at roundabouts as a comprehensive video image analysis system is able to derive the origin/destination (O/D) matrix, compile a vehicle classification, track individual vehicle trajectories together with corresponding speeds and accelerations along paths. To this end, the authors collected video-sequences that were analysed with a piece of software developed for that task. To minimize the problems due to perspective distortion, environmental effects, and obstructions, a number of camera set-up configurations were adopted with equipment being placed on central or external poles, and on permanent fixtures such as raised working platforms outside the confines of the intersection area. Performance of those installation set-ups with different vehicle tracking strategies has been evaluated. Particularly, speed has been successfully related to trajectory tortuosity, the result of which emphasizes the tremendous potential of image analysis and opens up to further studies on the evaluation of the operational effects of roundabout geometrics.
    由于影响其运行性能(即通行能力、安全性、排放)的变量众多,环形交叉口仍是许多研究的重点。为了建立可靠的模型,研究应借助设备和相关传感器来提取感兴趣的变量(即流量、速度、间隙、滞后、跟驰时间、车辆分类和轨迹)。尽管目前已有多种传感器和技术用于数据采集,但大多数都存在局限性。本文介绍了针对环形交叉口车辆运动开展的调查,所采用的综合视频图像分析系统能够推导出起讫点(O/D)矩阵、完成车辆分类,并跟踪单个车辆的轨迹及沿路径的相应速度和加速度。为此,作者采集了视频序列,并使用为该任务开发的软件进行分析。为尽量减少透视畸变、环境影响和遮挡造成的问题,采用了多种相机安装配置,将设备安置在中央或外侧立杆上,以及交叉口范围之外的固定设施(如升高的工作平台)上。对采用不同车辆跟踪策略的安装配置的性能进行了评价。特别地,速度已被成功地与轨迹弯曲度相关联,这一结果凸显了图像分析的巨大潜力,并为进一步研究环形交叉口几何设计的运行效果打开了大门。
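Two of the extracted quantities — speed along a tracked trajectory, and the trajectory tortuosity that speed was related to — reduce to simple geometry. A minimal sketch (the frame-to-frame definitions are assumptions, not the authors' software):

```python
import numpy as np

def speeds(track, dt):
    # Frame-to-frame speeds from a sequence of (x, y) positions,
    # sampled every dt seconds.
    p = np.asarray(track, float)
    return np.linalg.norm(np.diff(p, axis=0), axis=1) / dt

def tortuosity(track):
    # Path length over straight-line (chord) distance; 1.0 for a straight path.
    p = np.asarray(track, float)
    path = float(np.linalg.norm(np.diff(p, axis=0), axis=1).sum())
    chord = float(np.linalg.norm(p[-1] - p[0]))
    return path / chord
```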
  4. Self-supervised learning for medical image analysis using image context restoration 基于图像上下文复原的医学图像分析的自监督学习
    Keywords:
    Self-supervised learning自监督学习;Context restoration上下文复原;Medical image analysis医学图像分析
    Highlights:
    A novel self-supervised learning strategy called context restoration.
    提出了一种新的自监督学习策略——上下文复原。
    It improves the subsequent learning performance.
    提高了后续的学习效果。
    Its implementation is simple and straightforward.
    它的实现简单而直接。
    It is useful for different types of subsequent tasks, including classification, detection, and segmentation.
    它适用于不同类型的后续任务,包括分类、检测和分割。
    Abstract:
    Machine learning, particularly deep learning has boosted medical image analysis over the past years. Training a good model based on deep learning requires large amount of labelled data. However, it is often difficult to obtain a sufficient number of labelled images for training. In many scenarios the dataset in question consists of more unlabelled images than labelled ones. Therefore, boosting the performance of machine learning models by using unlabelled as well as labelled data is an important but challenging problem. Self-supervised learning presents one possible solution to this problem. However, existing self-supervised learning strategies applicable to medical images cannot result in significant performance improvement. Therefore, they often lead to only marginal improvements. In this paper, we propose a novel self-supervised learning strategy based on context restoration in order to better exploit unlabelled images. The context restoration strategy has three major features: 1) it learns semantic image features; 2) these image features are useful for different types of subsequent image analysis tasks; and 3) its implementation is simple. We validate the context restoration strategy in three common problems in medical imaging: classification, localization, and segmentation. For classification, we apply and test it to scan plane detection in fetal 2D ultrasound images; to localise abdominal organs in CT images; and to segment brain tumours in multi-modal MR images. In all three cases, self-supervised learning based on context restoration learns useful semantic features and lead to improved machine learning models for the above tasks.
    机器学习,尤其是深度学习,在过去几年里推动了医学图像分析的发展。训练一个好的深度学习模型需要大量的标注数据,然而通常很难获得足够数量的标注图像用于训练。在许多场景中,数据集中未标注图像的数量多于已标注图像。因此,同时利用未标注和已标注数据来提升机器学习模型的性能是一个重要而具有挑战性的问题。自监督学习是解决这一问题的一种可能途径,但现有适用于医学图像的自监督学习策略无法带来显著的性能提升,往往只有微小的改进。本文提出了一种基于上下文复原的新型自监督学习策略,以更好地利用未标注图像。上下文复原策略有三大特点:1)学习语义图像特征;2)这些图像特征适用于不同类型的后续图像分析任务;3)实现简单。我们在医学影像的三类常见问题(分类、定位和分割)中验证了上下文复原策略:在分类任务中应用并测试于胎儿二维超声图像的扫描切面检测;在CT图像中定位腹部器官;以及在多模态MR图像中分割脑肿瘤。在这三种情况下,基于上下文复原的自监督学习都学到了有用的语义特征,并改进了上述任务的机器学习模型。
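The context-restoration pretext task can be sketched as a patch-swapping corruption: intensity statistics are preserved while spatial context is broken, and a network would then be trained to map the corrupted image back to the original. Patch size and swap count below are arbitrary illustrative choices:

```python
import numpy as np

def corrupt_context(image, rng, patch=8, swaps=5):
    # Build one self-supervised training pair (corrupted, target) by
    # swapping randomly chosen patches of a 2-D image.
    out = image.copy()
    h, w = out.shape
    for _ in range(swaps):
        y1, y2 = rng.integers(0, h - patch, 2)
        x1, x2 = rng.integers(0, w - patch, 2)
        a = out[y1:y1 + patch, x1:x1 + patch].copy()
        b = out[y2:y2 + patch, x2:x2 + patch].copy()
        out[y1:y1 + patch, x1:x1 + patch] = b
        out[y2:y2 + patch, x2:x2 + patch] = a
    return out, image
```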
  5. Image analysis of Spirodela polyrhiza for the semiquantitative detection of copper用于铜半定量检测的紫萍(Spirodela polyrhiza)图像分析
    Keywords:
    Spirodela polyrhiza紫萍;Copper铜;Image analysis图像分析
    Abstract:
    Digital image analysis is a processing technique that allows users to extract quantifiable data from digital images. In this study, digital camera photography was used in the determination of leaf chlorophyll content. By analyzing the degree of color change, image analysis served as a method for fast, inexpensive and non-destructive measurement of overall plant health. This study applied image analysis methods on Spirodela polyrhiza plantlets which were exposed to copper, to determine if the rate and degree of leaf color change is proportional to the concentration of copper present in the growth medium. Within 1 day, chlorophyll concentrations of plantlets grown in 2.5 mg/L and 5 mg/L Cu(II)SO4 were 0.52 and 0.47 mg/g compared to a control of 0.64 mg/g. Additionally, higher copper concentrations in the growing medium resulted in higher measured mean colour distance, ΔE*ab. Plantlets grown in 2.5 mg/L and 5 mg/L Cu(II)SO4 solutions showed a ΔE*ab divergence of 0.2 and 0.25 from the control. It was concluded that the leaf color change can be used as a measure of copper concentration within the range of 1.25 mg/L and 5 mg/L. Lower concentrations of copper did not produce a consistent measurable effect on the plantlets, while higher concentrations exceeded the uptake ability of the plant and could not be accurately distinguished from one another.
    数字图像分析是一种允许用户从数字图像中提取可量化数据的处理技术。本研究使用数码相机摄影来测定叶片叶绿素含量。通过分析颜色变化的程度,图像分析可作为一种快速、廉价且无损地衡量植物整体健康状况的方法。本研究将图像分析方法应用于暴露于铜的紫萍(Spirodela polyrhiza)幼株,以确定叶片颜色变化的速率和程度是否与生长培养基中铜的浓度成正比。1天内,在2.5 mg/L和5 mg/L Cu(II)SO4中生长的幼株叶绿素浓度分别为0.52和0.47 mg/g,而对照为0.64 mg/g。此外,生长培养基中铜浓度越高,测得的平均色差ΔE*ab越大:在2.5 mg/L和5 mg/L Cu(II)SO4溶液中生长的幼株,其ΔE*ab与对照相比分别相差0.2和0.25。结论是,在1.25 mg/L至5 mg/L的范围内,叶色变化可用作铜浓度的度量;更低的铜浓度对幼株未产生一致的可测量影响,而更高的浓度超出了植物的吸收能力,彼此之间无法准确区分。
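The reported colour-distance metric is the CIE76 ΔE*ab, the Euclidean distance between two CIELAB colours; for reference:

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    # CIE76 colour difference: Euclidean distance in CIELAB space
    # between (L*, a*, b*) triples.
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))
```

For example, two colours with equal L* that differ by 3 in a* and 4 in b* are ΔE*ab = 5 apart.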

10-14

  1. Quality and content analysis of fundus images using deep learning基于深度学习的眼底图像质量与内容分析
    Keywords:
    Retinal image quality analysis视网膜图像质量分析;Fundus images眼底图像;Deep learning深度学习;Transfer learning迁移学习
    Highlights:
    Pre-trained deep convolutional neural networks (DCNN) using transfer learning detects low quality and outlier images.
    使用迁移学习的预训练深度卷积神经网络(DCNN)可检测低质量图像和离群图像。
    Unsupervised level two classification helps in robust detection of medically suitable retinal image (MSRI).
    无监督的二级分类有助于医学上合适的视网膜图像(MSRI)的稳健检测。
    Transfer learning using fine-tuned DCNN pre-trained on millions of images, negotiates large labelled dataset requirement.
    通过微调在数百万张图像上预训练的DCNN进行迁移学习,从而规避了对大规模标注数据集的需求。
    Overall sensitivity, specificity, positive predictive value, negative predictive value and accuracy achieved is above 90%.
    总体敏感性、特异性、阳性预测值、阴性预测值及准确率均达到90%以上。
    7007 images from seven different public databases are used for validation.
    来自7个不同公共数据库的7007张图像被用于验证。
    Abstract:
    Automatic retinal image analysis has remained an important topic of research in the last ten years. Various algorithms and methods have been developed for analysing retinal images. The majority of these methods use public retinal image databases for performance evaluation without first examining the retinal image quality. Therefore, the performance metrics reported by these methods are inconsistent. In this article, we propose a deep learning-based approach to assess the quality of input retinal images. The method begins with a deep learning-based classification that identifies the image quality in terms of sharpness, illumination and homogeneity, followed by an unsupervised second stage that evaluates the field definition and content in the image. Using the inter-database cross-validation technique, our proposed method achieved overall sensitivity, specificity, positive predictive value, negative predictive value and accuracy of above 90% when tested on 7007 images collected from seven different public databases, including our own developed database—the UoA-DR database. Therefore, our proposed method is generalised and robust, making it more suitable than alternative methods for adoption in clinical practice.
    近十年来,自动视网膜图像分析一直是一个重要的研究课题,已有各种算法和方法被开发用于分析视网膜图像。这些方法大多使用公共视网膜图像数据库进行性能评估,而没有首先检查视网膜图像质量,因此它们报告的性能指标并不一致。在本文中,我们提出了一种基于深度学习的方法来评估输入视网膜图像的质量。该方法首先通过基于深度学习的分类,从清晰度、光照和均匀性三方面判断图像质量,随后通过无监督的第二阶段评估图像的视野范围和内容。使用数据库间交叉验证技术,在从七个不同公共数据库(包括我们自行开发的UoA-DR数据库)收集的7007幅图像上测试时,我们提出的方法的总体敏感性、特异性、阳性预测值、阴性预测值和准确率均达到90%以上。因此,我们提出的方法具有通用性和稳健性,比其他方法更适合在临床实践中采用。
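The five reported metrics all follow from a binary confusion matrix; for reference, the standard definitions (not code from the paper):

```python
def classification_metrics(tp, fp, tn, fn):
    # Standard binary-classification metrics from confusion-matrix counts.
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```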

10-16

  1. Artery–vein segmentation in fundus images using a fully convolutional network利用全卷积网络对眼底图像的动静脉分割
    Keywords:
    Fundus image眼底图像;Fully convolutional network全卷积网络;Artery–vein segmentation动静脉分割
    Highlights:
    A novel application of fundus image segmentation based on deep learning that achieves A/V discrimination in an automated setting.
    一个基于深度学习的眼底图像分割新应用,在自动化流程中实现了动静脉(A/V)判别。
    Ablation study leading to insights for semantic segmentation applied to fundus images.
    消融研究为语义分割在眼底图像中的应用提供了新的思路。
    Detailed benchmarking with previous work.
    与以前的工作进行详细的基准测试。
    Development of A/V ground truth for High Resolution Fundus Image Database. A/V annotations and evaluation code are available at https://github.com/rubenhx/av-segmentation.
    为高分辨率眼底图像数据库(HRF)开发了A/V金标准标注。A/V标注和评估代码可在 https://github.com/rubenhx/av-segmentation 获取。
    Abstract:
    Epidemiological studies demonstrate that dimensions of retinal vessels change with ocular diseases, coronary heart disease and stroke. Different metrics have been described to quantify these changes in fundus images, with arteriolar and venular calibers among the most widely used. The analysis often includes a manual procedure during which a trained grader differentiates between arterioles and venules. This step can be time-consuming and can introduce variability, especially when large volumes of images need to be analyzed. In light of the recent successes of fully convolutional networks (FCNs) applied to biomedical image segmentation, we assess its potential in the context of retinal artery–vein (A/V) discrimination. To the best of our knowledge, a deep learning (DL) architecture for simultaneous vessel extraction and A/V discrimination has not been previously employed. With the aim of improving the automation of vessel analysis, a novel application of the U-Net semantic segmentation architecture (based on FCNs) on the discrimination of arteries and veins in fundus images is presented. By utilizing DL, results are obtained that exceed accuracies reported in the literature. Our model was trained and tested on the public DRIVE and HRF datasets. For DRIVE, measuring performance on vessels wider than two pixels, the FCN achieved accuracies of 94.42% and 94.11% on arteries and veins, respectively. This represents a decrease in error of 25% over the previous state of the art reported by Xu et al. (2017). Additionally, we introduce the HRF A/V ground truth, on which our model achieves 96.98% accuracy on all discovered centerline pixels. HRF A/V ground truth validated by an ophthalmologist, predicted A/V annotations and evaluation code are available at https://github.com/rubenhx/av-segmentation.
    流行病学研究表明,视网膜血管的管径会随眼部疾病、冠心病和中风而改变。已有多种指标被用来量化眼底图像中的这些变化,其中小动脉和小静脉管径是最常用的。相关分析通常包含一个人工步骤,由经过培训的分级员区分小动脉和小静脉。这一步骤可能很耗时,并且会引入变异性,尤其是在需要分析大量图像时。鉴于全卷积网络(FCN)近来在生物医学图像分割中的成功应用,我们评估了其在视网膜动静脉(A/V)判别中的潜力。据我们所知,此前尚未有同时进行血管提取和A/V判别的深度学习(DL)架构。为了提高血管分析的自动化程度,本文提出了将U-Net语义分割架构(基于FCN)应用于眼底图像动静脉判别的新应用。利用DL获得的结果超过了文献报道的准确率。我们的模型在公共的DRIVE和HRF数据集上进行了训练和测试。在DRIVE数据集上,对宽度超过两个像素的血管进行评估,FCN对动脉和静脉的准确率分别达到94.42%和94.11%,相比Xu等人(2017)报告的此前最先进水平误差降低了25%。此外,我们引入了HRF A/V金标准标注,我们的模型在所有检出的中心线像素上达到96.98%的准确率。经眼科医生验证的HRF A/V金标准、预测的A/V标注及评估代码可在 https://github.com/rubenhx/av-segmentation 获得。

10-17

Joint segmentation and classification of retinal arteries/veins from fundus images眼底图像视网膜动脉/静脉的联合分割与分类
Keywords:
CNN卷积神经网络;Artery and vein classification动脉和静脉分类;Vessel segmentation血管分割;Fundus images眼底图像;Retina视网膜
Highlights:
A fast deep-learning method that simultaneously segments and classifies vessels into arteries and veins is proposed.
提出了一种同时进行血管分割并将其分类为动脉和静脉的快速深度学习方法。
An efficient graph-based method is used to propagate the CNN’s labeling through the vascular tree.
采用一种有效的基于图的方法,将CNN的标记通过血管树进行传播。
Our method outperforms the leading previous works on a public dataset for A/V classification and is by far the fastest.
我们的方法在A/V分类的公共数据集上优于此前领先的工作,而且是迄今为止最快的。
The proposed global arterio-venous ratio (AVR) calculated on the whole fundus image using our automatic A/V segmentation method can better track vessel changes associated to diabetic retinopathy than the standard local AVR.
采用自动A/V分割方法计算的全眼底动静脉比(AVR)比标准的局部AVR更能跟踪糖尿病视网膜病变相关血管的变化。
Abstract:
Objective
Automatic artery/vein (A/V) segmentation from fundus images is required to track blood vessel changes occurring with many pathologies including retinopathy and cardiovascular pathologies. One of the clinical measures that quantifies vessel changes is the arterio-venous ratio (AVR) which represents the ratio between artery and vein diameters. This measure significantly depends on the accuracy of vessel segmentation and classification into arteries and veins. This paper proposes a fast, novel method for semantic A/V segmentation combining deep learning and graph propagation.

Methods
A convolutional neural network (CNN) is proposed to jointly segment and classify vessels into arteries and veins. The initial CNN labeling is propagated through a graph representation of the retinal vasculature, whose nodes are defined as the vessel branches and edges are weighted by the cost of linking pairs of branches. To efficiently propagate the labels, the graph is simplified into its minimum spanning tree.

Results
The method achieves an accuracy of 94.8% for vessels segmentation. The A/V classification achieves a specificity of 92.9% with a sensitivity of 93.7% on the CT-DRIVE database compared to the state-of-the-art-specificity and sensitivity, both of 91.7%.

Conclusion
The results show that our method outperforms the leading previous works on a public dataset for A/V classification and is by far the fastest.

Significance
The proposed global AVR calculated on the whole fundus image using our automatic A/V segmentation method can better track vessel changes associated to diabetic retinopathy than the standard local AVR calculated only around the optic disc.

目的
需要从眼底图像中自动进行动脉/静脉(A / V)分割,以跟踪多种疾病(包括视网膜病变和心血管疾病)发生的血管变化。量化血管变化的临床措施之一是动静脉比率(AVR),它代表动脉和静脉直径之间的比率。该措施在很大程度上取决于血管分割和分类为动脉和静脉的准确性。本文提出了一种结合深度学习和图传播的快速,新颖的语义A / V分割方法。
方法
提出了一种卷积神经网络(CNN),用于联合地对血管进行分割并将其分类为动脉和静脉。最初的CNN标签通过视网膜血管系统的图表示进行传播,图的节点定义为血管分支,边的权重为连接成对分支的代价。为了高效地传播标签,将图简化为其最小生成树。
结果
该方法的血管分割准确率达到94.8%。在CT-DRIVE数据库上,A/V分类的特异性达到92.9%,灵敏度为93.7%,而此前最先进方法的特异性和灵敏度均为91.7%。
结论
结果表明,我们的方法在A/V分类的公共数据集上优于此前领先的工作,并且是迄今为止最快的方法。
意义
与仅在视盘周围计算的标准局部AVR相比,使用我们的自动A/V分割方法在整个眼底图像上计算的全局AVR能够更好地跟踪与糖尿病视网膜病变相关的血管变化。
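The AVR at the heart of the Significance section is just the ratio of mean artery to mean vein calibre; a global version averages over all measured vessel segments of the fundus image. A minimal sketch (the per-segment data layout is an assumption):

```python
import numpy as np

def global_avr(artery_diameters, vein_diameters):
    # Global arterio-venous ratio: mean artery calibre / mean vein calibre,
    # taken over every measured vessel segment in the whole fundus image
    # rather than only around the optic disc.
    return float(np.mean(artery_diameters) / np.mean(vein_diameters))
```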
