Information Processing in Medical Imaging 2021 (IPMI 2021): http://ipmi2021.org

A Probabilistic Framework for Modeling the Variability Across Federated Datasets of Heterogeneous Multi-View Observations
http://ipmi2021.org/papers/195/ (Tue, 22 Jun 2021)
Authors: Irene Balelli (INRIA)*; Santiago Silva (INRIA); Marco Lorenzi (INRIA)

Poster

Abstract: We propose a novel federated learning paradigm to model data variability among heterogeneous clients in multi-centric studies. Our method is expressed through a hierarchical Bayesian latent variable model, where client-specific parameters are assumed to be realizations of a global distribution at the master level, which is in turn estimated to account for data bias and variability across clients. We show that our framework can be effectively optimized through expectation maximization over the latent master distribution and the client-specific parameters. We tested our method on the analysis of multi-modal medical imaging data and clinical scores from distributed clinical datasets of patients affected by Alzheimer’s disease. We demonstrate that our method is robust whether data are distributed in an iid or non-iid manner: it allows quantifying the variability of data, views, and centers, while guaranteeing high-quality data reconstruction compared to state-of-the-art autoencoding models and federated learning schemes.
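The two-level estimation idea can be illustrated with a toy Gaussian-Gaussian model: client parameters are drawn from a master distribution, and an EM-style round alternates a client-level posterior step with a master-level update. This is a hedged sketch with made-up function names and closed-form updates for a simple conjugate case, not the paper's actual model:

```python
def em_round(client_means, client_sizes, obs_var, master_mean, master_var):
    """One illustrative EM-style round for a two-level Gaussian model.

    E-step: each client's parameter is estimated as the posterior mean
    of its local data under the current master (global) distribution.
    M-step: the master mean is re-estimated from the client estimates.
    """
    posteriors = []
    for ybar, n in zip(client_means, client_sizes):
        # Precision-weighted combination of local evidence and master prior.
        precision = n / obs_var + 1.0 / master_var
        post_mean = (n * ybar / obs_var + master_mean / master_var) / precision
        posteriors.append(post_mean)
    new_master_mean = sum(posteriors) / len(posteriors)
    return posteriors, new_master_mean
```

Each round shrinks client estimates toward the master mean in proportion to how little local data a client holds, which is the mechanism by which the master level absorbs cross-client variability.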

Quantile Regression for Uncertainty Estimation in VAEs with Applications to Brain Lesion Detection
http://ipmi2021.org/papers/183/ (Tue, 22 Jun 2021)
Authors: Haleh Akrami (Signal and Image Processing Institute at University of Southern California)*; Anand Joshi (University of Southern California); Sergul Aydore (Amazon Web Services); Richard Leahy (Signal and Image Processing Institute at University of Southern California)

Poster

Abstract: The Variational AutoEncoder (VAE) has become one of the most popular models for anomaly detection in applications such as lesion detection in medical images. The VAE is a generative graphical model that is used to learn the data distribution from samples and then generate new samples from this distribution. By training on normal samples, the VAE can be used to detect inputs that deviate from this learned distribution. The VAE models the output as a conditionally independent Gaussian characterized by means and variances for each output dimension. VAEs can therefore use reconstruction probability instead of reconstruction error for anomaly detection. Unfortunately, joint optimization of both mean and variance in the VAE leads to the well-known problem of shrinkage or underestimation of variance. We describe an alternative VAE model, Quantile-Regression VAE (QR-VAE), that avoids this variance shrinkage problem by estimating conditional quantiles for the given input image.
Using the estimated quantiles, we compute the conditional mean and variance for input images under the Gaussian model. We then compute reconstruction probability using this model as a principled approach to outlier or anomaly detection.
We also show how our approach can be used for heterogeneous thresholding of brain images to detect lesions.
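The quantile-to-Gaussian step above can be made concrete: for a Gaussian, the median equals the mean, and choosing the upper quantile at level Φ(1) ≈ 0.8413 makes the standard deviation exactly the gap between the two quantiles. A minimal sketch under these assumptions (the function names are ours, not the paper's):

```python
import math

def gaussian_from_quantiles(q_median, q_upper):
    """Recover (mu, sigma) of a Gaussian from two estimated quantiles.

    For a Gaussian, q_p = mu + sigma * Phi^{-1}(p).  The median gives mu
    directly, and if the upper quantile is taken at p = Phi(1) ~= 0.8413,
    then Phi^{-1}(p) = 1 and sigma is simply q_upper - q_median.
    """
    mu = q_median
    sigma = q_upper - q_median
    return mu, sigma

def reconstruction_log_prob(x, mu, sigma):
    """Log-likelihood of an observed value under the recovered Gaussian,
    usable as a per-pixel anomaly score (low log-probability = outlier)."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)
```

Because the two quantiles are estimated by separate regression outputs rather than by maximizing a Gaussian likelihood, the variance is not driven toward zero during training, which is the point of the QR-VAE construction.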

Distributional Gaussian Process Layers for Outlier Detection in Image Segmentation
http://ipmi2021.org/papers/196/ (Tue, 22 Jun 2021)
Authors: Sebastian G Popescu (Imperial College London)*; David Sharp (Imperial College London); James Cole (University College London); Konstantinos Kamnitsas (Imperial College London); Ben Glocker (Imperial College London)

Poster

Abstract: We propose a parameter-efficient Bayesian layer for hierarchical convolutional Gaussian Processes that incorporates Gaussian Processes operating in Wasserstein-2 space to reliably propagate uncertainty. This directly replaces convolving Gaussian Processes with a distance-preserving affine operator on distributions. Our experiments on brain tissue segmentation show that the resulting architecture approaches the performance of well-established deterministic segmentation algorithms (U-Net), which had not been achieved with previous hierarchical Gaussian Processes. Moreover, by applying the same segmentation model to out-of-distribution data (i.e., images with pathology such as brain tumors), we show that our uncertainty estimates result in out-of-distribution detection that outperforms the capabilities of previous Bayesian networks and reconstruction-based approaches that learn normative distributions.
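One closed form that makes Gaussian computations in Wasserstein-2 space tractable: for univariate Gaussians, the W2 distance depends only on the means and standard deviations. A short sketch of that metric (this illustrates the geometry, not the paper's layer):

```python
def w2_gaussians(mu1, sigma1, mu2, sigma2):
    """Wasserstein-2 distance between two univariate Gaussians.

    For Gaussians, W2 has the closed form
        sqrt((mu1 - mu2)^2 + (sigma1 - sigma2)^2),
    i.e. Euclidean distance in (mean, std-dev) coordinates, which is why
    affine operators on these coordinates can be distance-preserving.
    """
    return ((mu1 - mu2) ** 2 + (sigma1 - sigma2) ** 2) ** 0.5
```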

Variational Knowledge Distillation for Disease Classification in Chest X-Rays
http://ipmi2021.org/papers/176/ (Tue, 22 Jun 2021)
Authors: Tom J van Sonsbeek (University of Amsterdam)*; Xiantong Zhen (University of Amsterdam); Marcel Worring (University of Amsterdam); Ling Shao (Inception Institute of Artificial Intelligence)

Poster

Abstract: Disease classification relying solely on imaging data attracts great interest in medical image analysis. Current models could be further improved, however, by also employing Electronic Health Records (EHRs), which contain rich information on patients and findings from clinicians. It is challenging to incorporate this information into disease classification due to the high reliance on clinician input in EHRs, limiting the possibility for automated diagnosis. In this paper, we propose variational knowledge distillation (VKD), a new probabilistic inference framework for disease classification based on X-rays that leverages knowledge from EHRs. Specifically, we introduce a conditional latent variable model, where we infer the latent representation of the X-ray image with a variational posterior conditioned on the associated EHR text. By doing so, the model acquires the ability to extract the visual features relevant to the disease during learning, and can therefore classify unseen patients more accurately at inference time based solely on their X-ray scans. We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs. The results show that the proposed variational knowledge distillation can consistently improve the performance of medical image classification and significantly surpasses current methods.

Estimation of Causal Effects in the Presence of Unobserved Confounding in the Alzheimer’s Continuum
http://ipmi2021.org/papers/197/ (Tue, 22 Jun 2021)
Authors: Sebastian Pölsterl (Ludwig-Maximilians Universität)*; Christian Wachinger (LMU Munich)

Poster

Abstract: Studying the relationship between neuroanatomy and cognitive decline due to Alzheimer’s has been a major research focus in the last decade. However, to infer cause-effect relationships rather than simple associations from observational data, we need to (i) express the causal relationships leading to cognitive decline in a graphical model, and (ii) ensure the causal effect of interest is identifiable from the collected data. We derive a causal graph from the current clinical knowledge on cause and effect in the Alzheimer’s disease continuum, and show that identifiability of the causal effect requires all confounders to be known and measured. However, in complex neuroimaging studies, we neither know all potential confounders nor do we have data on them. To alleviate this requirement, we leverage the dependencies among multiple causes by deriving a substitute confounder via a probabilistic latent factor model. In our theoretical analysis, we prove that using the substitute confounder enables identifiability of the causal effect of neuroanatomy on cognition. We quantitatively evaluate the effectiveness of our approach on semi-synthetic data, where we know the true causal effects, and illustrate its use on real data on the Alzheimer’s disease continuum, where it reveals important causes that otherwise would have been missed.

Geodesic B-Score for Improved Assessment of Knee Osteoarthritis
http://ipmi2021.org/papers/174/ (Tue, 22 Jun 2021)
Authors: Felix Ambellan (Zuse Institute Berlin)*; Stefan Zachow (Zuse Institute Berlin); Christoph von Tycowicz (Zuse Institute Berlin)

Poster

Abstract: Three-dimensional medical imaging enables detailed understanding of osteoarthritis structural status. However, there remains a great need for automatic, and thus reader-independent, measures that provide reliable assessment of subject-specific clinical outcomes. To this end, we derive a consistent generalization of the recently proposed B-score to Riemannian shape spaces. We further present an algorithmic treatment yielding simple yet efficient computations, allowing for analysis of large shape populations with several thousand samples. Our intrinsic formulation exhibits improved discrimination ability over its Euclidean counterpart, which we demonstrate in terms of predictive validity for assessing the risk of total knee replacement. This result highlights the potential of the geodesic B-score to enable improved personalized assessment and stratification for interventions.

Multimodal Self-Supervised Learning for Medical Image Analysis
http://ipmi2021.org/papers/201/ (Tue, 22 Jun 2021)
Authors: Aiham Taleb (Digital Health Center, Hasso-Plattner-Institute)*; Christoph Lippert (Hasso Plattner Institute for Digital Engineering, Universität Potsdam); Tassilo Klein (SAP); Moin Nabi (SAP)

Poster

Abstract:

Self-supervised learning approaches leverage unlabeled samples to acquire generic knowledge about different concepts, hence allowing for annotation-efficient downstream task learning.
In this paper, we propose a novel self-supervised method that leverages multiple imaging modalities.
We introduce the multimodal puzzle task, which facilitates representation learning from multiple image modalities. The learned modality-agnostic representations are obtained by confusing image modalities at the data level.
Using the Sinkhorn operator, we formulate the puzzle-solving optimization as permutation-matrix inference instead of classification, which allows multimodal puzzles of varying complexity to be solved efficiently.
In addition, we propose to utilize generative techniques for multimodal data augmentation during self-supervised pretraining, rather than in downstream tasks directly. This aims to circumvent quality issues associated with synthetic images, while improving data efficiency and the representations learned by self-supervised methods.
Our experimental results show that solving our multimodal puzzles yields better semantic representations compared to treating each modality independently. Our results also highlight the benefits of exploiting synthetic images for self-supervised pretraining.
We showcase our approach on three segmentation tasks, where we outperform many existing solutions and achieve results competitive with the state of the art.
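The Sinkhorn operator used for permutation-matrix inference can be sketched as iterated row and column normalization of a positive matrix, which converges to a doubly-stochastic matrix, a differentiable relaxation of a permutation. A toy pure-Python version (real implementations operate on logits inside a differentiable framework):

```python
def sinkhorn(matrix, n_iters=50):
    """Iteratively normalize rows and columns of a positive square matrix.

    By Sinkhorn's theorem, alternating row/column normalization converges
    to a doubly-stochastic matrix; its largest entries indicate the
    (soft) permutation matching puzzle pieces to positions.
    """
    m = [row[:] for row in matrix]
    n = len(m)
    for _ in range(n_iters):
        # Row normalization: each row sums to 1.
        for i in range(n):
            s = sum(m[i])
            m[i] = [v / s for v in m[i]]
        # Column normalization: each column sums to 1.
        for j in range(n):
            s = sum(m[i][j] for i in range(n))
            for i in range(n):
                m[i][j] /= s
    return m
```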

A^3DSegNet: Anatomy-Aware Artifact Disentanglement and Segmentation Network for Unpaired Segmentation, Artifact Reduction, and Modality Translation
http://ipmi2021.org/papers/204/ (Tue, 22 Jun 2021)
Authors: Yuanyuan Lyu (Z2Sky Technologies Inc.)*; Haofu Liao (University of Rochester); Heqin Zhu (University of Science and Technology of China); S. Kevin Zhou (CAS)

Poster

Abstract:

Spinal surgery planning necessitates automatic segmentation of vertebrae in cone-beam computed tomography (CBCT), an intraoperative imaging modality that is widely used in interventions. However, CBCT images are low-quality and artifact-laden due to noise, poor tissue contrast, and the presence of metallic objects, making vertebra segmentation, even manual segmentation, a demanding task. In contrast, there exists a wealth of artifact-free, high-quality CT images with vertebra annotations. This motivates us to build a CBCT vertebra segmentation model using unpaired CT images with annotations. To overcome the domain and artifact gaps between CBCT and CT, it is necessary to address the three heterogeneous tasks of vertebra segmentation, artifact reduction, and modality translation all together. To this end, we propose a novel anatomy-aware artifact disentanglement and segmentation network (A^3DSegNet) that intensively leverages knowledge sharing among these three tasks to promote learning. Specifically, it takes a random pair of CBCT and CT images as input and manipulates the synthesis and segmentation via different decoding combinations from the disentangled latent layers. Then, by enforcing various forms of consistency among the synthesized images and among segmented vertebrae, learning is achieved without paired (i.e., anatomically identical) data. Finally, we stack 2D slices together and build 3D networks on top to obtain the final 3D segmentation result. Extensive experiments on a large number of clinical CBCT (21,364) and CT (17,089) images show that the proposed A^3DSegNet performs significantly better than state-of-the-art competing methods trained independently for each task and, remarkably, achieves an average Dice coefficient of 0.926 for unpaired 3D CBCT vertebra segmentation.

HyperMorph: Amortized Hyperparameter Learning for Image Registration
http://ipmi2021.org/papers/170/ (Tue, 22 Jun 2021)
Authors: Andrew Hoopes (MGH)*; Malte Hoffmann (Harvard Medical School); Bruce Fischl (Massachusetts General Hospital / Harvard Medical School); John Guttag (MIT); Adrian V Dalca (MIT)

Poster

Abstract: We present HyperMorph, a learning-based strategy for deformable image registration that removes the need to tune important registration hyperparameters during training. Classical registration methods solve an optimization problem to find a set of spatial correspondences between two images, while learning-based methods leverage a training dataset to learn a function that generates these correspondences. The quality of the results for both types of techniques depends greatly on the choice of hyperparameters. Unfortunately, hyperparameter tuning is time-consuming and typically involves training many separate models with various hyperparameter values, potentially leading to suboptimal results. To address this inefficiency, we introduce amortized hyperparameter learning for image registration, a novel strategy to learn the effects of hyperparameters on deformation fields. The proposed framework learns a hypernetwork that takes in an input hyperparameter and modulates a registration network to produce the optimal deformation field for that hyperparameter value. In effect, this strategy trains a single, rich model that enables rapid, fine-grained discovery of hyperparameter values from a continuous interval at test-time. We demonstrate that this approach can be used to optimize multiple hyperparameters considerably faster than existing search strategies, leading to a reduced computational and human burden as well as increased flexibility. We also show several important benefits, including increased robustness to initialization and the ability to rapidly identify optimal hyperparameter values specific to a registration task, dataset, or even a single anatomical region, all without retraining the HyperMorph model. Our code is publicly available at http://voxelmorph.mit.edu.
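The hypernetwork idea above can be sketched in a few lines: a small function maps the hyperparameter value to the weights of the main network, so a single trained model covers the whole hyperparameter range. This is a toy illustration with made-up shapes and names, not the HyperMorph architecture:

```python
import math

def hypernetwork(lam, hyper_w, hyper_b):
    """Map a registration hyperparameter lam to the weights of a toy
    main network (one tanh unit per generated weight).  hyper_w and
    hyper_b are the hypernetwork's own trainable parameters."""
    return [math.tanh(lam * w + b) for w, b in zip(hyper_w, hyper_b)]

def main_network(x, weights):
    """Toy 'registration network': a linear map whose weights are
    generated from lam rather than stored, so varying lam at test time
    requires no retraining."""
    return sum(wi * xi for wi, xi in zip(weights, x))
```

At test time one simply sweeps lam, calls the hypernetwork, and evaluates the main network, which is what enables the rapid, fine-grained hyperparameter discovery described above.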

Information-based Disentangled Representation Learning for Unsupervised MR Harmonization
http://ipmi2021.org/papers/168/ (Tue, 22 Jun 2021)
Authors: Lianrui Zuo (Johns Hopkins University)*; Blake E Dewey (Johns Hopkins University); Aaron Carass (Johns Hopkins University, USA); Yihao Liu (Johns Hopkins University); Yufan He (Johns Hopkins University); Peter Calabresi (Johns Hopkins University); Jerry L Prince (Johns Hopkins University)

Poster

Abstract: Accuracy and consistency are two key factors in computer-assisted magnetic resonance (MR) image analysis. However, contrast variation from site to site caused by lack of standardization in MR acquisition impedes consistent measurements. In recent years, image harmonization approaches have been proposed to compensate for contrast variation in MR images. Current harmonization approaches either require cross-site traveling subjects for supervised training or rely heavily on site-specific harmonization models to achieve harmonization accuracy. These requirements potentially limit the application of current harmonization methods in large-scale multi-site studies. In this work, we propose an unsupervised MR harmonization framework, CALAMITI (Contrast Anatomy Learning and Analysis for MR Intensity Translation and Integration), based on information bottleneck theory. CALAMITI learns a disentangled latent space using a unified structure for multi-site harmonization without the need for traveling subjects. Our model can also adapt to harmonize MR images from a new site by fine-tuning solely on images from that site. Both qualitative and quantitative results show that the proposed method achieves superior performance compared with other unsupervised harmonization approaches.
