TEDxASU: How Does Self-taught Artificial Intelligence Understand Medical Images?

Author: Zongwei Zhou | 周纵苇
Weibo: @MrGiovanni
Email: [email protected]

How do your studies/endeavors impact your field?


My research focuses on developing novel computational methodologies to minimize annotation effort for computer-aided diagnosis, therapy, and surgery. Interest in applying convolutional neural networks (CNNs) to biomedical image analysis is intense and widespread, but their success is impeded by the lack of large annotated datasets in biomedical imaging. Annotating biomedical images is not only tedious and time-consuming but also demands costly, specialty-oriented knowledge and skills that are not easily accessible. Therefore, we seek to answer a critical question: how can the cost of annotation be dramatically reduced when applying CNNs to medical imaging? I believe that developing novel learning algorithms is essential to this quest.

To dramatically reduce annotation cost, one of our studies presents a novel method called AIFT (active, incremental fine-tuning) that naturally integrates active learning and transfer learning into a single framework. By repeatedly recommending the most informative and representative samples for experts to label, this work has shown that the cost of expert annotation can be cut by at least half. Owing to its precise diagnostic performance at a significantly reduced budget, this active learning framework has led to one US patent, with two more pending.
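To make the loop concrete, here is a minimal sketch of one AIFT-style round in PyTorch: score unlabeled samples by prediction entropy, send the most uncertain ones to an expert, and incrementally fine-tune on the new labels. All names (`aift_round`, `expert_label`, etc.) are hypothetical placeholders, and the published method uses richer informativeness and representativeness criteria than plain entropy.

```python
import torch
import torch.nn.functional as F

def entropy(probs):
    """Prediction entropy; higher means the model is less certain."""
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)

def aift_round(model, unlabeled, expert_label, optimizer, budget=16, steps=100):
    """One active-learning round: query, annotate, incrementally fine-tune.
    `expert_label` is a hypothetical callback that returns labels for the
    recommended sample indices."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(unlabeled), dim=1)
    # Recommend the `budget` most uncertain samples for expert annotation.
    picked = entropy(probs).topk(budget).indices
    x, y = unlabeled[picked], expert_label(picked)
    model.train()
    for _ in range(steps):  # incremental fine-tuning on the new labels only
        optimizer.zero_grad()
        F.cross_entropy(model(x), y).backward()
        optimizer.step()
    return picked
```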

To further reduce annotation effort across varying visual tasks in medical imaging, our recent work has built a set of models, called Models Genesis, so named because they learn representations directly from a large number of unlabeled images and then generate powerful target models through transfer learning. We envision that Models Genesis may serve as a primary source of transfer learning for 3D medical imaging applications, in particular those with limited annotated data. In recognition of my contributions, I received a Young Scientist Award at MICCAI 2019, one of the two most prestigious conferences in medical image analysis.
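As a rough illustration of the idea, the sketch below trains a toy 3D encoder-decoder to restore a deliberately corrupted sub-volume, a restoration-style pretext task in the spirit of Models Genesis. The corruption here (random voxel shuffling) merely stands in for the richer transformations used in practice, and every name and shape is illustrative rather than the released code.

```python
import torch
import torch.nn as nn

def distort(x):
    """Corrupt ~10% of voxels by swapping in shuffled values; a stand-in
    for the richer image transformations used in practice."""
    shuffled = x.flatten()[torch.randperm(x.numel())].view_as(x)
    mask = torch.rand_like(x) < 0.1
    return torch.where(mask, shuffled, x)

model = nn.Sequential(                         # toy 3D encoder-decoder
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
volume = torch.rand(8, 1, 32, 32, 32)          # stand-in unlabeled CT sub-volumes
for _ in range(10):
    opt.zero_grad()
    # Self-supervision: recover the original volume from its corrupted copy.
    loss = nn.functional.mse_loss(model(distort(volume)), volume)
    loss.backward()
    opt.step()
# The learned weights can then initialize target models for downstream
# tasks such as disease detection or segmentation.
```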

In as few words as possible, what is the essence of the idea you are trying to share?


Topic: How does self-taught artificial intelligence understand medical images?
Introduction: Vision, one of the oldest perceptual systems, emerged in animals about 540 million years ago. Today, computer vision systems manage not only to recognize objects in natural images but also to detect and analyze diseases in medical images, ultimately supporting radiologists and pathologists in diagnosing a wide variety of conditions and leading to sound clinical decision-making. How? Our team provides a generic algorithm that can endow computers with a common visual representation learned from hundreds of thousands of available medical images. This common representation yields remarkable progress on more advanced medical vision applications such as disease detection, identification, and segmentation.

What does FLUX mean to you?


The trends in artificial intelligence (AI) are bridging the state of medical image analysis between today and tomorrow. Having worked on medical imaging for nearly five years, I have witnessed, been influenced by, and, meanwhile, produced FLUX in medical imaging. The application of AI in radiology to process and analyze medical images is becoming speedier, more efficient, and more precise. To me, the changes in this field over the past few years are unprecedented.

First, the marriage between AI and medical imaging has, by far, been the hottest topic at nearly all medical conferences for radiology, cardiology, and several other subspecialties over the past two years. Take MICCAI, one of the most prestigious conferences in medical image analysis: a total of 538 papers were accepted this year from a record of more than 1,800 submissions, an increase of 63% over last year, and there were over 2,300 registered attendees this year, double the number from 2017.

Second, computer-aided diagnosis has become more accurate and applicable to a variety of medical imaging tasks, resulting in eye-catching media headlines such as “The AI Doctor Will See You Now,” “Your Future Doctor May Not Be Human,” and “This AI Just Beat Human Doctors on a Clinical Exam.” Ten years ago, machine learning algorithms could achieve merely 70-80% accuracy; now, thanks to the innovative analytic strategies known as deep learning, accuracy on most disease-recognition tasks has been boosted to the 90% level, making them applicable in clinical practice. In recent years, a steep increase in FDA approvals has revealed the versatility of AI in medical applications.

This accelerating prosperity of AI in both academia and industry encourages me, as an AI researcher, to explore the frontier of AI in healthcare applications. My own research aims to advance application-oriented intelligence toward general intelligence, capable of working across diseases, organs, and, most importantly, modalities. Such an overwhelming FLUX of AI in medical imaging will therefore serve as both a spur and an opportunity for me.

How will your talk relate to our theme of FLUX?


Artificial intelligence (AI) technologies have the potential to transform medical image analysis by deriving new and important insights from the vast amount of image data generated in the delivery of health care every day. However, their success is impeded by the lack of large annotated datasets, because a high-performing AI system is extremely data-hungry. Annotating biomedical images is not only tedious and time-consuming but also demands costly, specialty-oriented knowledge and skills, which has severely obstructed the development of AI in medical imaging.

In my talk, I will discuss the next generation of AI systems, Models Genesis, which learn directly from medical images without requiring nearly as much annotated data. They are faster, more flexible, and, like humans, more innately intelligent: the computers navigate medical images to understand characteristic organ texture, layout, and anatomical structure.

The idea of self-taught Models Genesis is not only highly innovative methodologically but also expected to exert substantial clinical impact. For instance, four years ago we offered the world's leading AI solution for detecting pulmonary embolism (PE) in CT scans; PE is one of the leading causes of hospital deaths that are preventable with early diagnosis and treatment. Today, building on that state-of-the-art PE detection system, we have further increased diagnostic accuracy by 8% using self-taught Models Genesis trained on hundreds of thousands of patient CT scans. Beyond pulmonary embolism, we have demonstrated that Models Genesis benefits many other medical applications, including multiple types of disease localization, identification, and segmentation, tested on CT, MRI, ultrasound, and X-ray images.
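For readers curious about what that transfer step looks like in code, below is a minimal sketch: initialize a target model with self-supervised (Genesis-style) pretrained weights, attach a task-specific head, and fine-tune on a small labeled set such as PE-versus-no-PE sub-volumes. The checkpoint path, shapes, and names are hypothetical placeholders, not the released Models Genesis API.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(                      # toy 3D encoder backbone
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
)
# Hypothetical checkpoint from the self-supervised pretraining stage:
# encoder.load_state_dict(torch.load("pretrained_genesis_encoder.pt"))
model = nn.Sequential(encoder, nn.Linear(16, 2))   # add a PE/no-PE head
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

scans = torch.rand(8, 1, 32, 32, 32)          # stand-in labeled CT sub-volumes
labels = torch.randint(0, 2, (8,))             # PE present / absent
for _ in range(10):                            # fine-tune on the small labeled set
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(scans), labels)
    loss.backward()
    opt.step()
```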

In TED's fashion, to benefit the research and industry communities, we have made the development of Models Genesis open science, releasing Models Genesis to the public for free and inviting researchers around the world to contribute to this effort. We believe this self-taught AI, learning common knowledge from a tremendous number of patient medical images, will lead to a remarkable cut in annotation effort across diverse medical imaging applications, offering a noticeable impact amid such a FLUX of AI.

Why is a TEDx talk the best format to showcase your idea?


TEDxASU audiences come from a wide variety of fields. While most people are not very familiar with the concept of artificial intelligence (AI), their lives are already being transformed by this technology in every walk of life, from face recognition and news recommendation to, in the near future, disease diagnosis. I would like to bring the recent progress of AI in medical image analysis to the TEDxASU community, illustrated by our state-of-the-art pulmonary embolism detection system. My hope for this brief talk is to present self-taught AI as a “gift” to an audience of varied backgrounds, opinions, and interests; the broad, multi-domain TEDxASU audience is a perfect fit for such a talk.

Through this stage, I would also like to spotlight our team (JLiang Lab) in the Department of Biomedical Informatics at ASU. To help achieve the ASU Charter and Goals, namely to “establish, with Mayo Clinic, innovative health solutions pathways capable of ... enhancing treatment for 2 million patients,” Prof. Jianming Liang has established strong collaborations with Mayo Clinic across multiple departments and divisions. Cooperating with Mayo Clinic, the No. 1 hospital in the nation, our team is one of the leading groups in medical imaging, especially in pulmonary embolism detection. As a medical AI researcher, I am fully convinced that a joint research agenda will give hospitals and academia the ability to develop the high-tech healthcare of tomorrow.

We are continuing to seek worldwide collaboration, which encourages me to stand on the TEDxASU stage and share our self-taught medical AI technology, Models Genesis. In fact, we envision that Models Genesis may serve as a primary source of transfer learning for 3D medical imaging applications, in particular those with limited annotated data. In the spirit of open science, we will release Models Genesis to the public for free and invite researchers around the world to contribute to this effort. We hope that our collective efforts will lead to the Holy Grail of Models Genesis: effectiveness across diseases, organs, and modalities.

What experience do you have with public speaking? Can you provide link(s) to recording(s) of previous talks?


https://www.youtube.com/watch?v=PPKGCvBbj_k&t=206s
Title: “Oral Presentation in MICCAI 2019”
This is a recording of my talk at the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2019), where I was a finalist for the Best Presentation Award.
