Nature Machine Intelligence: Volume 1, Issue 10, October 2019 — paper roundup

Deep learning optoacoustic tomography with sparse data

 

The rapidly evolving field of optoacoustic (photoacoustic) imaging and tomography is driven by a constant need for better imaging performance in terms of resolution, speed, sensitivity, depth and contrast. In practice, data acquisition strategies commonly involve sub-optimal sampling of the tomographic data, resulting in inevitable performance trade-offs and diminished image quality. We propose a new framework for efficient recovery of image quality from sparse optoacoustic data based on a deep convolutional neural network and demonstrate its performance with whole body mouse imaging in vivo. To generate accurate high-resolution reference images for optimal training, a full-view tomographic scanner capable of attaining superior cross-sectional image quality from living mice was devised. When provided with images reconstructed from substantially undersampled data or limited-view scans, the trained network was capable of enhancing the visibility of arbitrarily oriented structures and restoring the expected image quality. Notably, the network also eliminated some reconstruction artefacts present in reference images rendered from densely sampled data. No comparable gains were achieved when the training was performed with synthetic or phantom data, underlining the importance of training with high-quality in vivo images acquired by full-view scanners. The new method can benefit numerous optoacoustic imaging applications by mitigating common image artefacts, enhancing anatomical contrast and image quantification capacities, accelerating data acquisition and image reconstruction approaches, while also facilitating the development of practical and affordable imaging systems. The suggested approach operates solely on image-domain data and thus can be seamlessly applied to artefactual images reconstructed with other modalities.
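The core trade-off here is between the number of acquired tomographic projections and image quality. The toy NumPy sketch below (illustrative only, not the authors' pipeline) simulates sparse acquisition by keeping every k-th projection angle of a sinogram and fills the gaps with a crude nearest-angle baseline; in the paper, a trained CNN instead restores the image reconstructed from the sparse data, operating purely in the image domain.

```python
import numpy as np

def subsample_sinogram(sinogram, factor):
    """Simulate sparse tomographic acquisition by keeping every
    `factor`-th projection angle (rows of the sinogram)."""
    return sinogram[::factor]

def nearest_angle_upsample(sparse, factor, n_angles):
    """Crude baseline: repeat each retained projection to fill the
    missing angles. The trained network would replace this step by
    restoring quality in the reconstructed image instead."""
    idx = np.minimum(np.arange(n_angles) // factor, len(sparse) - 1)
    return sparse[idx]

# Toy sinogram: 128 projection angles x 64 detector elements
rng = np.random.default_rng(0)
full = rng.standard_normal((128, 64))
sparse = subsample_sinogram(full, 4)            # 32 of 128 angles retained
baseline = nearest_angle_upsample(sparse, 4, 128)
print(sparse.shape, baseline.shape)             # (32, 64) (128, 64)
```

The factor-of-4 subsampling and the nearest-angle fill are placeholders chosen for clarity; the paper's undersampling schemes and network architecture differ.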

 

Unsupervised data to content transformation with histogram-matching cycle-consistent generative adversarial networks

 

The segmentation of images is a common task in a broad range of research fields. To tackle increasingly complex images, artificial intelligence-based approaches have emerged to overcome the shortcomings of traditional feature detection methods. Owing to the fact that most artificial intelligence research is made publicly accessible and programming the required algorithms is now possible in many popular languages, the use of such approaches is becoming widespread. However, these methods often require data labelled by the researcher to provide a training target for the algorithms to converge to the desired result. This labelling is a limiting factor in many cases and can become prohibitively time consuming. Inspired by the ability of cycle-consistent generative adversarial networks to perform style transfer, we outline a method whereby a computer-generated set of images is used to segment the true images. We benchmark our unsupervised approach against a state-of-the-art supervised cell-counting network on the VGG Cells dataset and show that it is not only competitive but also able to precisely locate individual cells. We demonstrate the power of this method by segmenting bright-field images of cell cultures, images from a live/dead assay of C. elegans, and X-ray computed tomography of metallic nanowire meshes.
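The "histogram-matching" ingredient of the method refers to aligning the intensity distribution of one image with that of another. A minimal NumPy sketch of classical histogram matching via sorted-value (empirical CDF) alignment is shown below; this is the standard technique, not the authors' implementation, and the toy images are invented for illustration.

```python
import numpy as np

def match_histogram(source, reference):
    """Map the intensity distribution of `source` onto that of
    `reference` by aligning their sorted values (empirical CDFs)."""
    src = source.ravel()
    src_order = np.argsort(src)                  # ranks of source pixels
    ref_sorted = np.sort(reference.ravel())
    # Resample the reference quantile function to the source pixel count
    q_src = np.linspace(0.0, 1.0, src.size)
    q_ref = np.linspace(0.0, 1.0, ref_sorted.size)
    ref_vals = np.interp(q_src, q_ref, ref_sorted)
    matched = np.empty(src.size, dtype=float)
    matched[src_order] = ref_vals                # k-th smallest source -> k-th smallest reference
    return matched.reshape(source.shape)

rng = np.random.default_rng(0)
source = rng.uniform(0.0, 1.0, (32, 32))         # e.g. a synthetic label-like image
reference = rng.uniform(10.0, 20.0, (32, 32))    # e.g. a real microscopy image
matched = match_histogram(source, reference)
print(matched.min() >= 10.0, matched.max() <= 20.0)  # True True
```

After matching, the output takes on the reference's intensity statistics while preserving the spatial ordering of the source, which is the role histogram matching plays alongside the cycle-consistency losses.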

 

Original paper

 

A fast neural network approach for direct covariant forces prediction in complex multi-element extended systems

 

A neural network force field (NNFF) is a method for performing regression on atomic structure–force relationships, bypassing the expensive quantum mechanics calculations that prevent the execution of long ab initio quality molecular dynamics (MD) simulations. However, most NNFF methods for complex multi-element atomic systems indirectly predict atomic force vectors by exploiting only rotation-invariant features of the atomic structure together with spatial derivatives of the network features, which are computationally expensive. Here, we show a staggered NNFF architecture that exploits both rotation-invariant and rotation-covariant features to directly predict atomic force vectors without using spatial derivatives, and we demonstrate 2.2× NNFF–MD acceleration over a state-of-the-art C++ engine using a Python engine. This fast architecture enables us to develop NNFFs for complex ternary- and quaternary-element extended systems composed of long polymer chains, amorphous oxide and surface chemical reactions. The rotation-invariant–covariant architecture described here can also directly predict complex covariant vector outputs from local environments in other domains beyond computational materials science.
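The key structural idea is rotation covariance: if the local atomic environment is rotated, the predicted force vector must rotate with it. A toy sketch of one way to achieve this (not the paper's architecture) is to predict the force as a sum of neighbor unit vectors weighted by rotation-invariant coefficients; here a hand-picked decaying weight stands in for the network's learned invariant outputs.

```python
import numpy as np

def covariant_force(neighbors, coeff_fn):
    """Predict a force on a central atom at the origin as a sum of
    neighbor unit vectors weighted by rotation-invariant coefficients
    (functions of neighbor distances only). Rotating the neighborhood
    then rotates the predicted force by construction."""
    d = np.linalg.norm(neighbors, axis=1, keepdims=True)
    return np.sum(coeff_fn(d) * neighbors / d, axis=0)

# Stand-in invariant weight (a learned network would supply this)
w = lambda d: np.exp(-d)

rng = np.random.default_rng(1)
nbrs = rng.standard_normal((5, 3))          # 5 neighbor displacement vectors

# Random orthogonal matrix via QR decomposition
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
f_rotated_after = Q @ covariant_force(nbrs, w)
f_rotated_before = covariant_force(nbrs @ Q.T, w)
print(np.allclose(f_rotated_after, f_rotated_before))  # True: covariant
```

Because distances are invariant under rotation, rotating every neighbor simply rotates each term of the sum, which is why no spatial derivatives of the network output are needed to obtain a well-behaved force vector.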

Paper link

 

Clinically applicable deep learning framework for organs at risk delineation in CT images

Radiation therapy is one of the most widely used therapies for cancer treatment. A critical step in radiation therapy planning is to accurately delineate all organs at risk (OARs) to minimize potential adverse effects to healthy surrounding organs. However, manually delineating OARs based on computed tomography images is time-consuming and error-prone. Here, we present a deep learning model to automatically delineate OARs in head and neck, trained on a dataset of 215 computed tomography scans with 28 OARs manually delineated by experienced radiation oncologists. On a hold-out dataset of 100 computed tomography scans, our model achieves an average Dice similarity coefficient of 78.34% across the 28 OARs, significantly outperforming human experts and the previous state-of-the-art method by 10.05% and 5.18%, respectively. Our model takes only a few seconds to delineate an entire scan, compared to over half an hour by human experts. These findings demonstrate the potential for deep learning to improve the quality and reduce the treatment planning time of radiation therapy.
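The evaluation metric quoted above, the Dice similarity coefficient, measures overlap between a predicted mask and the expert delineation as 2|A ∩ B| / (|A| + |B|). A minimal NumPy sketch (the metric is standard; the example masks are invented):

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|). Ranges from 0 (no overlap)
    to 1 (identical masks)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

# Toy 2x3 masks: intersection has 2 pixels, each mask has 3
a = np.array([[1, 1, 0],
              [0, 1, 0]])
b = np.array([[1, 0, 0],
              [0, 1, 1]])
print(round(dice(a, b), 4))  # 2*2 / (3 + 3) -> 0.6667
```

A per-organ Dice is computed for each of the 28 OARs and averaged to give the 78.34% figure reported in the abstract.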

Code link

 

 
