Categories
IPMI 2021 poster

Deep MCEM for Weakly-Supervised Learning to Jointly Segment and Recognize Objects using Very Few Expert Segmentations

Authors: Akshay V Gaikwad (Indian Institute of Technology (IIT), Bombay)*; Suyash P. Awate (Indian Institute of Technology (IIT) Bombay)

Poster

Abstract: Typical methods for semantic image segmentation rely on large training sets comprising pixel-level segmentations and pixel-level classifications. In medical applications, a large number of training images with per-pixel segmentations are difficult to obtain. In addition, many applications involve images or image tiles containing a single object/region of interest, where the image/tile-level information about object/region class is readily available. We propose a novel deep-neural-network (DNN) framework for joint segmentation and recognition of objects relying on weakly-supervised learning from training sets having very few expert segmentations, but with object-class labels available for all images/tiles. For weakly-supervised learning, we propose a variational-learning framework relying on Monte Carlo expectation maximization (MCEM), inferring a posterior distribution on the missing segmentations. We design an effective Metropolis-Hastings posterior sampler coupled with suitable sample reparametrizations to enable end-to-end backpropagation. Our end-to-end learning DNN first produces probabilistic segmentations of objects, and then their probabilistic classifications. Results on two publicly available real-world datasets show the benefits of our strategies of (i) joint object segmentation and recognition as well as (ii) weakly-supervised MCEM-based learning.
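
To make the MCEM structure described in the abstract concrete, the sketch below shows a generic Monte Carlo EM loop on a toy latent-variable problem: a Metropolis-Hastings E-step that samples the missing labels and a closed-form M-step that maximizes the sample-averaged complete-data log-likelihood. The toy Gaussian-mixture model and all names are hypothetical; this is an illustration of the MCEM pattern, not the authors' DNN-based implementation.

```python
# Minimal, hypothetical MCEM sketch on a toy Gaussian-mixture problem with
# missing component labels; it mirrors the E-step/M-step structure described
# in the abstract, not the authors' DNN-based method.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two 1-D Gaussian components, labels unobserved.
true_means = np.array([-2.0, 3.0])
labels = rng.integers(0, 2, size=200)
x = rng.normal(true_means[labels], 1.0)

def log_lik(x, z, means):
    # Complete-data log-likelihood (unit variance, equal priors).
    return -0.5 * np.sum((x - means[z]) ** 2)

means = np.array([0.0, 1.0])          # initial parameter estimate
z = rng.integers(0, 2, size=x.size)   # initial latent labels

for it in range(50):
    # E-step: Metropolis-Hastings sampling of the missing labels.
    samples = []
    for _ in range(20):
        i = rng.integers(x.size)
        z_prop = z.copy()
        z_prop[i] = 1 - z_prop[i]     # flip one label as the (symmetric) proposal
        log_alpha = log_lik(x, z_prop, means) - log_lik(x, z, means)
        if np.log(rng.random()) < log_alpha:
            z = z_prop
        samples.append(z.copy())
    # M-step: maximize the Monte Carlo average of the complete-data
    # log-likelihood; for Gaussian means this is a per-component average.
    Z = np.stack(samples)             # (num_samples, num_points)
    for k in range(2):
        mask = (Z == k)
        means[k] = (mask * x).sum() / np.maximum(mask.sum(), 1)

print("estimated means:", np.round(np.sort(means), 2))
```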

Categories
IPMI 2021 oral

Knowledge Transfer for Few-shot Segmentation of Novel White Matter Tracts

Authors: Qi Lu (Beijing Institute of Technology); Chuyang Ye (Beijing Institute of Technology)*

Poster

Abstract: Convolutional neural networks (CNNs) have achieved state-of-the-art performance for white matter (WM) tract segmentation based on diffusion magnetic resonance imaging (dMRI). These CNNs require a large number of manual delineations of the WM tracts of interest for training, which are generally labor-intensive and costly. The expensive manual delineation can be a particular disadvantage when novel WM tracts, i.e., tracts that have not been included in existing manual delineations, are to be analyzed. To accurately segment novel WM tracts, it is desirable to transfer the knowledge learned about existing WM tracts, so that even with only a few delineations of the novel WM tracts, CNNs can learn adequately for the segmentation. In this paper, we explore the transfer of such knowledge to the segmentation of novel WM tracts in the few-shot setting. Although a classic fine-tuning strategy can be used for this purpose, the information in the last task-specific layer for segmenting existing WM tracts is completely discarded. We hypothesize that the weights of this last layer can bear valuable information for segmenting the novel WM tracts and thus completely discarding the information is not optimal. In particular, we assume that the novel WM tracts can correlate with existing WM tracts and that the segmentation of novel WM tracts can be predicted from the logits of existing WM tracts. In this way, a better initialization of the last layer than random initialization can be achieved for fine-tuning. Further, we show that a more adaptive use of the knowledge in the last layer for segmenting existing WM tracts can be conveniently achieved by simply inserting a warmup stage before classic fine-tuning. The proposed method was evaluated on a publicly available dMRI dataset, where we demonstrate the benefit of our method for few-shot segmentation of novel WM tracts.
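
A minimal PyTorch sketch of the general idea described above: predict novel-tract logits from existing-tract logits via a 1x1x1 convolution (rather than a randomly initialized last layer), warm up only that mapping, then fine-tune the whole network. Layer sizes, module names, and the dummy data are hypothetical stand-ins, not the authors' code.

```python
# Hypothetical sketch: novel-tract segmentation predicted from existing-tract
# logits, with a warmup stage before classic fine-tuning.
import torch
import torch.nn as nn

class TractSegNet(nn.Module):
    def __init__(self, in_ch=1, feat_ch=32, n_existing=10, n_novel=3):
        super().__init__()
        self.backbone = nn.Sequential(          # stand-in for a CNN backbone
            nn.Conv3d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(feat_ch, feat_ch, 3, padding=1), nn.ReLU())
        self.existing_head = nn.Conv3d(feat_ch, n_existing, 1)  # "pretrained" head
        # Novel tracts predicted from existing-tract logits (1x1x1 conv),
        # i.e., a learned combination instead of random initialization.
        self.novel_from_existing = nn.Conv3d(n_existing, n_novel, 1)

    def forward(self, x):
        feats = self.backbone(x)
        existing_logits = self.existing_head(feats)
        return self.novel_from_existing(existing_logits)

model = TractSegNet()
loss_fn = nn.BCEWithLogitsLoss()
x = torch.randn(1, 1, 32, 32, 32)                       # dummy dMRI-derived input
y = (torch.rand(1, 3, 32, 32, 32) > 0.5).float()        # dummy novel-tract labels

# Warmup stage: train only the logit-mapping layer.
opt = torch.optim.Adam(model.novel_from_existing.parameters(), lr=1e-3)
for _ in range(10):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# Classic fine-tuning stage: update all parameters.
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(10):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
```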

Categories
IPMI 2021 poster

A Higher Order Manifold-valued Convolutional Neural Network with Applications in Diffusion MRI Processing

Authors: Jose Bouza (University of Florida)*; Chun-Hao Yang (University of Florida); David Vaillancourt (University of Florida); Baba C Vemuri (University of Florida)

Poster

Abstract: In this paper, we present a novel generalization of the Volterra Series, which can be viewed as a higher-order convolution, to manifold-valued functions. A special case of the manifold-valued Volterra Series (MVVS) gives us a natural extension of the ordinary convolution to manifold-valued functions that we call the manifold-valued convolution (MVC). We prove that these generalizations preserve the equivariance properties of the Euclidean Volterra Series and the traditional convolution operator. We present novel deep network architectures using the MVVS and the MVC operations, which are then validated via two experiments. These include (i) movement disorder classification from diffusion magnetic resonance images (dMRI), and (ii) fODF reconstruction from compressed-sensed dMRI. In both experiments, the proposed networks outperform the state of the art.
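
For context, a truncated classical (Euclidean) Volterra series, of which the MVVS is the manifold-valued generalization and whose first-order term reduces to the ordinary convolution, can be written as:

```latex
% Classical second-order Volterra series (Euclidean background, not the MVVS):
y(t) = h_0
     + \int h_1(\tau_1)\, x(t-\tau_1)\, d\tau_1
     + \iint h_2(\tau_1,\tau_2)\, x(t-\tau_1)\, x(t-\tau_2)\, d\tau_1\, d\tau_2
     + \cdots
```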

Categories
IPMI 2021 oral

Geodesic Tubes for Uncertainty Quantification in Diffusion MRI

Authors: Rick Sengers (Eindhoven University of Technology)*; Luc Florack (Eindhoven University of Technology); Andrea Fuster (Eindhoven University of Technology)

Poster

Abstract: Based on diffusion tensor imaging (DTI), one can construct a Riemannian manifold in which the dual metric is proportional to the DTI tensor. Geodesic tractography then amounts to solving a coupled system of nonlinear differential equations, either as an initial value problem (given seed location and initial direction) or as a boundary value problem (given seed and target location). We propose to furnish the tractography framework with an uncertainty quantification paradigm that captures the behaviour of geodesics under small perturbations in (both types of) boundary conditions. For any given geodesic this yields a coupled system of linear differential equations, for which we derive an exact solution. This solution can be used to construct a geodesic tube, a volumetric region around the fiducial geodesic that captures the behaviour of perturbed geodesics in the vicinity of the original one.
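
For reference, the coupled nonlinear system referred to above is the standard geodesic equation; in local coordinates (with Einstein summation and Christoffel symbols of the metric g whose dual is proportional to the DTI tensor) it reads:

```latex
% Geodesic equations in local coordinates; g is the Riemannian metric with
% dual metric g^{-1} proportional to the DTI tensor.
\ddot{x}^{k}(t) + \Gamma^{k}_{ij}\bigl(x(t)\bigr)\,\dot{x}^{i}(t)\,\dot{x}^{j}(t) = 0,
\qquad
\Gamma^{k}_{ij} = \tfrac{1}{2}\, g^{k\ell}
\left( \partial_i g_{j\ell} + \partial_j g_{i\ell} - \partial_\ell g_{ij} \right).
```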

Categories
IPMI 2021 poster

Enabling Data Diversity: Efficient Automatic Augmentation via Regularized Adversarial Training

Authors: Yunhe Gao (Rutgers University); Zhiqiang Tang (Rutgers); Dimitris N. Metaxas (Rutgers); Mu Zhou (Rutgers University)

Poster

Abstract: Data augmentation has proved extremely useful by increasing training data variance to alleviate overfitting and improve deep neural networks’ generalization performance. In medical image analysis, a well-designed augmentation policy usually requires much expert knowledge and is difficult to generalize to multiple tasks due to the vast discrepancies among pixel intensities, image appearances, and object shapes in different medical tasks. To automate medical data augmentation, we propose a regularized adversarial training framework via two min-max objectives and three differentiable augmentation models covering affine transformation, deformation, and appearance changes. Our method is more automatic and efficient than previous automatic augmentation methods, which still rely on pre-defined operations with human-specified ranges and costly bi-level optimization. Extensive experiments demonstrate that our approach, with less training overhead, achieves superior performance over state-of-the-art auto-augmentation methods on both tasks of 2D skin cancer classification and 3D organs-at-risk segmentation.
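
The sketch below illustrates the regularized adversarial (min-max) idea with the simplest possible differentiable augmentation model, a learnable global gain and bias; the models, regularizer weight, and data are hypothetical stand-ins and not the authors' affine/deformation/appearance augmenters.

```python
# Hypothetical sketch of regularized adversarial augmentation: an augmenter
# (here a learnable appearance change) tries to increase the task loss while a
# regularizer keeps it close to the identity; the classifier trains on the
# augmented images.
import torch
import torch.nn as nn

classifier = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
gain = nn.Parameter(torch.ones(1))
bias = nn.Parameter(torch.zeros(1))

opt_cls = torch.optim.Adam(classifier.parameters(), lr=1e-3)
opt_aug = torch.optim.Adam([gain, bias], lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
reg_weight = 1.0                                  # regularization strength

x = torch.rand(16, 1, 28, 28)                     # dummy image batch
y = torch.randint(0, 10, (16,))                   # dummy labels

for step in range(100):
    # Augmenter step: maximize task loss, regularized toward the identity.
    x_aug = gain * x + bias                       # differentiable augmentation
    task_loss = loss_fn(classifier(x_aug), y)
    reg = (gain - 1.0).pow(2).sum() + bias.pow(2).sum()
    aug_loss = -task_loss + reg_weight * reg
    opt_aug.zero_grad()
    aug_loss.backward()
    opt_aug.step()

    # Classifier step: minimize task loss on freshly augmented images.
    x_aug = (gain * x + bias).detach()
    cls_loss = loss_fn(classifier(x_aug), y)
    opt_cls.zero_grad()
    cls_loss.backward()
    opt_cls.step()
```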

Categories
IPMI 2021 poster

Learning image quality assessment by reinforcing task amenable data selection

Authors: Shaheer Ullah Saeed (University College London); Yunguan Fu (University College London); Zachary M C Baum (University College London); Qianye Yang (University College London); Mirabela Rusu (Stanford University); Richard Fan (Stanford University); Geoffrey A Sonn (Stanford University); Dean Barratt (University College London); Yipeng Hu (University College London)

Poster

Abstract: In this paper, we consider a type of image quality assessment (IQA) as a task-specific measurement, which can be used to select images that are more amenable to a given target task, such as image classification or segmentation. We propose to train simultaneously two neural networks for image selection and a target task using reinforcement learning. A controller network learns an image selection policy by maximising an accumulated reward based on the target task performance on the controller-selected validation set, whilst the target task predictor is optimised using the training set. The trained controller is therefore able to reject images that lead to poor accuracy in the target task. In this work, we show that controller-predicted IQA can be significantly different from task-specific quality labels manually defined by humans. Furthermore, we demonstrate that it is possible to learn effective IQA without a “clean” validation set, thereby avoiding the requirement for human labels of task amenability. Using 6712 labelled and segmented clinical ultrasound images from 259 patients, experimental results on holdout data show that the proposed IQA achieved a mean classification accuracy of 0.94±0.01 and a mean segmentation Dice of 0.89±0.02, by discarding 5% and 15% of the acquired images, respectively. Significantly improved performance was observed for both tested tasks, compared with the respective 0.90±0.01 and 0.82±0.02 from networks without considering task amenability. This enables IQA feedback during real-time ultrasound acquisition among many other medical imaging applications.
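
A minimal REINFORCE-style sketch of the controller idea: a small network scores each image, images are kept with the predicted probability, and the controller is rewarded by how well a simple task predictor performs on the controller-selected subset. All networks, rewards, and data here are hypothetical placeholders, not the authors' setup.

```python
# Hypothetical sketch: reinforcement-learned image selection for a target task.
import torch
import torch.nn as nn

controller = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16, 1))
predictor = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16, 2))
opt_ctrl = torch.optim.Adam(controller.parameters(), lr=1e-3)
opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss(reduction="none")

x = torch.rand(64, 1, 16, 16)                 # dummy ultrasound-like images
y = torch.randint(0, 2, (64,))                # dummy task labels
baseline = 0.0

for step in range(200):
    # Controller samples a keep/reject decision per image.
    keep_prob = torch.sigmoid(controller(x)).squeeze(1)
    keep = torch.bernoulli(keep_prob.detach())

    # Train the task predictor on the kept images only.
    per_image_loss = ce(predictor(x), y)
    pred_loss = (keep * per_image_loss).sum() / keep.sum().clamp(min=1)
    opt_pred.zero_grad()
    pred_loss.backward()
    opt_pred.step()

    # Reward: accuracy on the kept images (a stand-in for performance on a
    # controller-selected validation set).
    with torch.no_grad():
        acc = ((predictor(x).argmax(1) == y).float() * keep).sum() / keep.sum().clamp(min=1)
    reward = acc.item()
    baseline = 0.9 * baseline + 0.1 * reward  # moving-average baseline

    # REINFORCE update of the controller.
    log_prob = keep * torch.log(keep_prob + 1e-8) + (1 - keep) * torch.log(1 - keep_prob + 1e-8)
    ctrl_loss = -(reward - baseline) * log_prob.sum()
    opt_ctrl.zero_grad()
    ctrl_loss.backward()
    opt_ctrl.step()
```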

Categories
IPMI 2021 poster

Semi-Supervised Screening of COVID-19 from Positive and Unlabeled Data with Constraint Non-Negative Risk Estimator

Authors: Zhongyi Han (Shandong University)*; Rundong He (Shandong University); Tianyang Li (Shandong University of Traditional Chinese Medicine); Benzheng Wei (Shandong University of Traditional Chinese Medicine); Jian Wang (ShanDong JiaoTong University); Yilong Yin (Shandong University)

Poster

Abstract: With the COVID-19 pandemic bringing about a severe global crisis, our health systems are under tremendous pressure. Automated screening plays a critical role in the fight against this pandemic, and much of the previous work has been very successful in designing effective screening models. However, these models lose effectiveness in the semi-supervised learning setting with only positive and unlabeled (PU) data, which is easy to collect clinically. In this paper, we report our attempt towards achieving semi-supervised screening of COVID-19 from PU data. We propose a new PU learning method called Constraint Non-Negative Positive Unlabeled Learning (cnPU). It introduces the constraint non-negative risk estimator, which is more robust against overfitting than previous PU learning methods when given limited positive data. It also embodies a new and efficient optimization algorithm that allows the model to learn well on positive data and avoid overfitting on unlabeled data. To the best of our knowledge, this is the first work that realizes PU learning of COVID-19. A series of empirical studies show that our algorithm remarkably outperforms the state of the art on real datasets of two medical imaging modalities, including X-ray and computed tomography. These advantages make our algorithm a robust and useful computer-assisted tool in the semi-supervised screening of COVID-19.
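
As background, the standard non-negative PU risk estimator that the proposed constraint variant builds on (and which the abstract refers to as "previous PU learning methods") clips the estimated negative risk on the unlabeled data at zero; the notation below is illustrative and does not show the paper's cnPU estimator itself:

```latex
% Standard non-negative PU risk estimator (background, not the proposed
% constraint variant): \pi_p is the positive class prior, \ell a loss,
% x_i^p the n_p labeled positives, and x_j^u the n_u unlabeled samples.
\hat{R}_{\mathrm{pu}}(f)
  = \pi_p\,\hat{R}_p^{+}(f)
  + \max\!\left\{0,\;\hat{R}_u^{-}(f) - \pi_p\,\hat{R}_p^{-}(f)\right\},
\qquad
\hat{R}_p^{\pm}(f) = \frac{1}{n_p}\sum_{i=1}^{n_p} \ell\bigl(f(x_i^{p}), \pm 1\bigr),
\qquad
\hat{R}_u^{-}(f) = \frac{1}{n_u}\sum_{j=1}^{n_u} \ell\bigl(f(x_j^{u}), -1\bigr).
```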

Categories
IPMI 2021 poster

Deep Label Fusion: A 3D End-to-End Hybrid Multi-Atlas Segmentation and Deep Learning Pipeline

Authors: Long Xie (University of Pennsylvania); Laura Wisse (Lund University); Jiancong Wang (University of Pennsylvania); Sadhana Ravikumar (University of Pennsylvania); Trevor Glenn (University of Pennsylvania); Anica Luther (Lund University); Sydney Lim (University of Pennsylvania); David Wolk (University of Pennsylvania); Paul Yushkevich (UPENN)

Poster

Abstract: Deep learning (DL) is the state-of-the-art methodology in various medical image segmentation tasks. However, it requires relatively large amounts of manually labeled training data, which may be infeasible to generate in some applications. In addition, DL methods have relatively poor generalizability to out-of-sample data. Multi-atlas segmentation (MAS), on the other hand, has promising performance using limited amounts of training data and good generalizability. A hybrid method that integrates the high accuracy of DL and the good generalizability of MAS is highly desirable and could play an important role in segmentation problems where manually labeled data is hard to generate. Most of the prior work focuses on improving single components of MAS using DL rather than directly optimizing the final segmentation accuracy via an end-to-end pipeline. Only one study explored this idea in binary segmentation of 2D images, but it remains unknown whether it generalizes well to multi-class 3D segmentation problems. In this study, we propose a 3D end-to-end hybrid pipeline, named deep label fusion (DLF), that takes advantage of the strengths of MAS and DL. Experimental results demonstrate that DLF yields significant improvements over conventional label fusion methods as well as U-Net, a direct DL approach, in the context of segmenting medial temporal lobe subregions using 3T T1-weighted and T2-weighted structural MRI. Further, when applied to an unseen similar dataset acquired at 7T, DLF maintains its superior performance, which demonstrates its good generalizability.
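
For readers unfamiliar with the MAS component, the sketch below shows conventional similarity-weighted voting label fusion, the kind of hand-crafted fusion step that DLF replaces with a learned, end-to-end counterpart. The data, weighting kernel, and parameters are hypothetical, and the atlases are assumed to be already registered to the target.

```python
# Minimal, hypothetical NumPy sketch of weighted-voting label fusion.
import numpy as np

rng = np.random.default_rng(0)
n_atlases, shape, n_labels = 5, (16, 16, 16), 4

target = rng.random(shape)                                    # target image
atlas_imgs = rng.random((n_atlases,) + shape)                 # warped atlas intensities
atlas_segs = rng.integers(0, n_labels, (n_atlases,) + shape)  # warped atlas labels

# Similarity-based weights: higher weight where an atlas's warped intensity is
# closer to the target; here a simple voxel-wise Gaussian kernel.
sigma = 0.5
weights = np.exp(-((atlas_imgs - target) ** 2) / (2 * sigma ** 2))
weights /= weights.sum(axis=0, keepdims=True)

# Weighted voting: accumulate each atlas's weight onto its proposed label.
votes = np.zeros((n_labels,) + shape)
for a in range(n_atlases):
    for lbl in range(n_labels):
        votes[lbl] += weights[a] * (atlas_segs[a] == lbl)

fused_seg = votes.argmax(axis=0)                              # consensus segmentation
print(fused_seg.shape, np.bincount(fused_seg.ravel(), minlength=n_labels))
```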

Categories
IPMI 2021 oral

Multiple-shooting adjoint method for whole-brain dynamic causal modeling

Authors: Juntang Zhuang (Yale University); Nicha Dvornek (Yale University); Sekhar Tatikonda (Yale); Xenophon Papademetris (Yale University); Pamela Ventola (Yale University); James S Duncan (Yale University)

Poster

Abstract: Dynamic causal modeling (DCM) is a Bayesian framework to infer directed connections between compartments, and has been used to describe the interactions between underlying neural populations based on functional neuroimaging data. DCM is typically analyzed with the expectation-maximization (EM) algorithm. However, because the inversion of a large-scale continuous system is difficult when noisy observations are present, DCM by EM is typically limited to a small number of compartments (<10). Another drawback of the current method is its complexity: when the forward model changes, the posterior mean changes, and the optimization algorithm needs to be re-derived. In this project, we propose the Multiple-Shooting Adjoint (MSA) method to address these limitations. MSA uses the multiple-shooting method for parameter estimation in ordinary differential equations (ODEs) under noisy observations, and is suitable for large-scale systems such as whole-brain analysis in functional MRI (fMRI). Furthermore, MSA uses the adjoint method for accurate gradient estimation in the ODE; since the adjoint method is generic, MSA is a generic method for both linear and non-linear systems, and does not require re-derivation of the algorithm as in EM. We validate MSA in extensive experiments: 1) in toy examples with both linear and non-linear models, we show that MSA achieves better accuracy in parameter value estimation than EM and can be successfully applied to large systems with up to 100 compartments; and 2) using real fMRI data, we apply MSA to the estimation of the whole-brain effective connectome and show improved classification of autism spectrum disorder (ASD) vs. control compared to using the functional connectome. The package is provided at https://jzkay12.github.io/TorchDiffEqPack.
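
For readers unfamiliar with multiple shooting, the generic formulation (independent of this paper's DCM forward model, with illustrative notation) splits the time axis into windows, integrates the ODE from a guessed initial state in each window, fits the observations, and enforces continuity at the window boundaries:

```latex
% Generic multiple-shooting formulation: \dot{x} = f(x,\theta) is the ODE,
% s_i the guessed state at the start of window [t_i, t_{i+1}], y_j the noisy
% observations at times t_j, and h an observation model.
\min_{\theta,\;s_0,\dots,s_{N-1}}\;
  \sum_{i=0}^{N-1}\;\sum_{t_j \in [t_i,\, t_{i+1}]}
  \bigl\| y_j - h\bigl(x(t_j;\, s_i, \theta)\bigr) \bigr\|^2
\qquad \text{s.t.} \qquad
x(t_{i+1};\, s_i, \theta) = s_{i+1}, \quad i = 0,\dots,N-2.
```

Each window's trajectory depends only on its own initial state and the shared parameters, which keeps long, noisy trajectories well conditioned and lets gradients with respect to the parameters be computed by the adjoint method per window.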