Categories
IPMI 2021 poster

A Probabilistic Framework for Modeling the Variability Across Federated Datasets of Heterogeneous Multi-View Observations

Authors: Irene Balelli (INRIA)*; Santiago Silva (INRIA); Marco Lorenzi (INRIA)

Poster

Abstract: We propose a novel federated learning paradigm to model data variability among heterogeneous clients in multi-centric studies. Our method is expressed through a hierarchical Bayesian latent variable model, where client-specific parameters are assumed to be realizations of a global distribution at the master level, which is in turn estimated to account for data bias and variability across clients. We show that our framework can be effectively optimized through expectation maximization over the latent master distribution and the clients’ parameters. We tested our method on the analysis of multi-modal medical imaging data and clinical scores from distributed clinical datasets of patients affected by Alzheimer’s disease. We demonstrate that our method is robust when data is distributed in either an iid or a non-iid manner: it allows quantifying the variability of data, views and centers, while guaranteeing high-quality data reconstruction compared to state-of-the-art autoencoding models and federated learning schemes.
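
As a rough illustration of the two-level generative structure described above (generic notation, not the paper’s own), client-specific parameters are drawn from a master-level distribution, and EM alternates between the client posteriors and the master update:

    \theta_c \sim p(\theta \mid \tilde{\theta}), \qquad x_{c,i} \sim p(x \mid \theta_c), \qquad c = 1, \dots, C,
    \tilde{\theta}^{(t+1)} = \arg\max_{\tilde{\theta}} \sum_{c=1}^{C} \mathbb{E}_{p(\theta_c \mid x_c, \tilde{\theta}^{(t)})} \big[ \log p(x_c, \theta_c \mid \tilde{\theta}) \big].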

Categories
IPMI 2021 oral

Distributional Gaussian Process Layers for Outlier Detection in Image Segmentation

Authors: Sebastian G Popescu (Imperial College London)*; David Sharp (Imperial College London); James Cole (University College London); Konstantinos Kamnitsas (Imperial College London); Ben Glocker (Imperial College London)

Poster

Abstract: We propose a parameter-efficient Bayesian layer for hierarchical convolutional Gaussian Processes that incorporates Gaussian Processes operating in Wasserstein-2 space to reliably propagate uncertainty. This directly replaces convolving Gaussian Processes with a distance-preserving affine operator on distributions. Our experiments on brain tissue segmentation show that the resulting architecture approaches the performance of well-established deterministic segmentation algorithms (U-Net), which has never been achieved with previous hierarchical Gaussian Processes. Moreover, by applying the same segmentation model to out-of-distribution data (i.e., images with pathology such as brain tumors), we show that our uncertainty estimates result in out-of-distribution detection that outperforms previous Bayesian networks and reconstruction-based approaches that learn normative distributions.
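
For reference, the Wasserstein-2 distance between two Gaussian measures, the quantity underlying Gaussian Processes operating in Wasserstein-2 space, has the closed form

    W_2^2\big( \mathcal{N}(m_1, \Sigma_1), \mathcal{N}(m_2, \Sigma_2) \big) = \| m_1 - m_2 \|_2^2 + \operatorname{tr}\big( \Sigma_1 + \Sigma_2 - 2 ( \Sigma_2^{1/2} \Sigma_1 \Sigma_2^{1/2} )^{1/2} \big).

(This is the standard Bures-Wasserstein formula, quoted here as background rather than taken from the paper.)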

Categories
IPMI 2021 oral

Estimation of Causal Effects in the Presence of Unobserved Confounding in the Alzheimer’s Continuum

Authors: Sebastian Pölsterl (Ludwig-Maximilians Universität)*; Christian Wachinger (LMU Munich)

Poster

Abstract: Studying the relationship between neuroanatomy and cognitive decline due to Alzheimer’s has been a major research focus in the last decade. However, to infer cause-effect relationships rather than simple associations from observational data, we need to (i) express the causal relationships leading to cognitive decline in a graphical model, and (ii) ensure the causal effect of interest is identifiable from the collected data. We derive a causal graph from the current clinical knowledge on cause and effect in the Alzheimer’s disease continuum, and show that identifiability of the causal effect requires all confounders to be known and measured. However, in complex neuroimaging studies, we neither know all potential confounders nor do we have data on them. To alleviate this requirement, we leverage the dependencies among multiple causes by deriving a substitute confounder via a probabilistic latent factor model. In our theoretical analysis, we prove that using the substitute confounder enables identifiability of the causal effect of neuroanatomy on cognition. We quantitatively evaluate the effectiveness of our approach on semi-synthetic data, where we know the true causal effects, and illustrate its use on real data on the Alzheimer’s disease continuum, where it reveals important causes that otherwise would have been missed.
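
A minimal sketch of the general substitute-confounder recipe (fit a latent factor model over the observed causes, then add the inferred factors to the outcome regression); the scikit-learn components and variable names are illustrative assumptions, not the authors’ implementation:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))   # stand-in for neuroanatomical measures (the causes)
    y = rng.normal(size=200)         # stand-in for a cognition score (the outcome)

    # 1. Probabilistic latent factor model over the causes; the inferred
    #    factors play the role of the substitute confounder.
    Z = FactorAnalysis(n_components=3, random_state=0).fit_transform(X)

    # 2. Outcome regression on causes plus substitute confounder; the
    #    coefficients of X are the adjusted effect estimates.
    model = LinearRegression().fit(np.hstack([X, Z]), y)
    effects = model.coef_[:X.shape[1]]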

Categories
IPMI 2021 poster

Multimodal Self-Supervised Learning for Medical Image Analysis

Authors: Aiham Taleb (Digital Health Center, Hasso Plattner Institute)*; Christoph Lippert (Hasso Plattner Institute for Digital Engineering, Universität Potsdam); Tassilo Klein (SAP); Moin Nabi (SAP)

Poster

Abstract: Self-supervised learning approaches leverage unlabeled samples to acquire generic knowledge about different concepts, hence allowing for annotation-efficient downstream task learning. In this paper, we propose a novel self-supervised method that leverages multiple imaging modalities. We introduce the multimodal puzzle task, which facilitates representation learning from multiple image modalities; the learned modality-agnostic representations are obtained by confusing image modalities at the data level. Together with the Sinkhorn operator, with which we formulate puzzle solving as permutation matrix inference instead of classification, this allows for efficient solving of multimodal puzzles with varying levels of complexity. In addition, we propose to utilize generation techniques for multimodal data augmentation during self-supervised pretraining, rather than in the downstream tasks directly. This aims to circumvent quality issues associated with synthetic images, while improving data efficiency and the representations learned by self-supervised methods. Our experimental results show that solving our multimodal puzzles yields better semantic representations than treating each modality independently, and highlight the benefits of exploiting synthetic images for self-supervised pretraining. We showcase our approach on three segmentation tasks, where we outperform many existing solutions and achieve results competitive with the state of the art.
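
A minimal sketch of the Sinkhorn operator referenced above, which turns a matrix of piece-to-position scores into an approximately doubly-stochastic soft permutation (a standalone NumPy version for illustration, not the paper’s training code):

    import numpy as np

    def _logsumexp(a, axis):
        m = a.max(axis=axis, keepdims=True)
        return m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))

    def sinkhorn(log_scores, n_iters=20):
        # Alternate row and column normalisation in log space for stability.
        log_p = np.array(log_scores, dtype=float)
        for _ in range(n_iters):
            log_p = log_p - _logsumexp(log_p, axis=1)  # rows sum to 1
            log_p = log_p - _logsumexp(log_p, axis=0)  # columns sum to 1
        return np.exp(log_p)

    scores = np.random.randn(4, 4)   # scores[i, j]: how well puzzle piece i fits slot j
    P = sinkhorn(scores)             # soft permutation matrix, rows and columns ~ sum to 1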

Categories
IPMI 2021 poster

A^3DSegNet: Anatomy-aware artifact disentanglement and segmentation network for unpaired segmentation, artifact reduction, and modality translation

Authors: Yuanyuan Lyu (Z2Sky Technologies Inc.)*; Haofu Liao (University of Rochester); Heqin Zhu (University of Science and Technology of China); S. Kevin Zhou (CAS)

Poster

Abstract: Spinal surgery planning necessitates automatic segmentation of vertebrae in cone-beam computed tomography (CBCT), an intraoperative imaging modality widely used in intervention. However, CBCT images are low-quality and artifact-laden due to noise, poor tissue contrast, and the presence of metallic objects, making vertebra segmentation, even when done manually, a demanding task. In contrast, there exists a wealth of artifact-free, high-quality CT images with vertebra annotations. This motivates us to build a CBCT vertebra segmentation model using unpaired CT images with annotations. To overcome the domain and artifact gaps between CBCT and CT, we must address the three heterogeneous tasks of vertebra segmentation, artifact reduction, and modality translation all together. To this end, we propose a novel anatomy-aware artifact disentanglement and segmentation network (A^3DSegNet) that intensively leverages knowledge sharing across these three tasks to promote learning. Specifically, it takes a random pair of CBCT and CT images as input and manipulates the synthesis and segmentation via different decoding combinations from the disentangled latent layers. Then, by proposing various forms of consistency among the synthesized images and among the segmented vertebrae, the learning is achieved without paired (i.e., anatomically identical) data. Finally, we stack 2D slices together and build 3D networks on top to obtain the final 3D segmentation result. Extensive experiments on a large number of clinical CBCT (21,364) and CT (17,089) images show that the proposed A^3DSegNet performs significantly better than state-of-the-art competing methods trained independently for each task and, remarkably, it achieves an average Dice coefficient of 0.926 for unpaired 3D CBCT vertebra segmentation.
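
For reference, the average Dice coefficient reported above is the standard overlap measure between a predicted and a reference segmentation; a minimal binary-mask version (a generic sketch, not the authors’ evaluation code):

    import numpy as np

    def dice(pred, target, eps=1e-7):
        # Dice coefficient between two binary masks (arrays of 0/1).
        pred, target = pred.astype(bool), target.astype(bool)
        inter = np.logical_and(pred, target).sum()
        return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)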

Categories
IPMI 2021 oral

Going Beyond Saliency Maps: Training Deep Models to Interpret Deep Models

Authors: Zixuan Liu (Stanford University); Ehsan Adeli (Stanford University); Kilian Pohl (Stanford University); Qingyu Zhao (Stanford University)*

Poster

Abstract: Interpretability is a critical factor in applying complex deep learning models to advance the understanding of brain disorders in neuroimaging studies. To interpret the decision process of a trained classifier, existing techniques typically rely on saliency maps to quantify the voxel-wise or feature-level importance for classification through partial derivatives. Despite providing some level of localization, these maps are not human-understandable from the neuroscience perspective as they often do not indicate the specific type of morphological change linked to the brain disorder. Inspired by the image-to-image translation scheme, we propose to train simulator networks to inject (or remove) patterns of the disease into a given MRI based on a warping operation, such that the classifier increases (or decreases) its confidence in labeling the simulated MRI as diseased. To increase the robustness of training, we propose to couple the two simulators into a unified model based on conditional convolution. We applied our approach to interpreting classifiers trained on a synthetic dataset and two neuroimaging datasets to visualize the effect of Alzheimer’s disease and alcohol dependence. Compared to the saliency maps generated by baseline approaches, our simulations and visualizations based on the Jacobian determinants of the warping field reveal meaningful and understandable patterns related to the diseases.
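
As an illustration of the visualization step mentioned above, the Jacobian determinant of a warping field can be approximated with finite differences; values above 1 indicate local expansion and values below 1 local shrinkage. The following 2D sketch assumes a displacement-field convention of our choosing and is not the authors’ code:

    import numpy as np

    def jacobian_determinant_2d(disp):
        # disp: displacement field of shape (H, W, 2), ordered (u_y, u_x).
        gy = np.gradient(disp[..., 0])  # [d(u_y)/dy, d(u_y)/dx]
        gx = np.gradient(disp[..., 1])  # [d(u_x)/dy, d(u_x)/dx]
        # Jacobian of the deformation phi(p) = p + u(p): J = I + grad(u).
        j11, j12 = 1.0 + gy[0], gy[1]
        j21, j22 = gx[0], 1.0 + gx[1]
        return j11 * j22 - j12 * j21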

Categories
IPMI 2021 poster

Mixture modeling for identifying subtypes in disease course mapping

Authors: Pierre-Emmanuel Poulet (Inria)*; Stanley Durrleman (INRIA)

Poster

Abstract: Disease modeling techniques summarize the possible trajectories of progression from multimodal and longitudinal data. These techniques often assume that individuals form a homogeneous cluster, thus ignoring possible disease subtypes within the population. We extend a non-linear mixed-effect model used for disease course mapping with a mixture framework. We jointly estimate model parameters and subtypes with a tempered version of a stochastic approximation of the Expectation Maximisation algorithm. We show that our model recovers the ground truth parameters from synthetically generated data, in contrast to the naive solution consisting of post hoc clustering of individual parameters from a one-class model. Application to Alzheimer’s disease data allows the unsupervised identification of disease subtypes associated with regional atrophy and cognitive decline.
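
As a simplified, non-stochastic analogue of the tempering used above (our generic formulation, not the exact tempered stochastic approximation EM of the paper), the E-step responsibilities of a mixture model can be flattened by a temperature T >= 1 that is annealed towards 1 during the iterations:

    \gamma_{ik}^{(T)} = \frac{\big( \pi_k \, p(x_i \mid \theta_k) \big)^{1/T}}{\sum_{j} \big( \pi_j \, p(x_i \mid \theta_j) \big)^{1/T}}.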

Categories
IPMI 2021 poster

Equivariant Spherical Deconvolution: Learning Sparse Orientation Distribution Functions from Spherical Data

Authors: Axel Elaldi (New York University)*; Neel Dey (New York University); Heejong Kim (New York University); Guido Gerig (NYU)

Poster

Abstract: We present a rotation-equivariant self-supervised learning framework for the sparse deconvolution of non-negative scalar fields on the unit sphere. Spherical signals with multiple peaks naturally arise in Diffusion MRI (dMRI), where each voxel consists of one or more signal sources corresponding to anisotropic tissue structure such as white matter. Due to spatial and spectral partial voluming, clinically feasible dMRI struggles to resolve crossing-fiber white matter configurations, leading to extensive development in spherical deconvolution methodology to recover underlying fiber directions. However, these methods are typically linear and struggle with small crossing-angles and partial volume fraction estimation. In this work, we improve on current methodologies by nonlinearly estimating fiber structures via self-supervised spherical convolutional networks with guaranteed equivariance to spherical rotation. We perform validation via extensive single- and multi-shell synthetic benchmarks demonstrating competitive performance against common baselines. We further show improved downstream performance on fiber tractography measures on the Tractometer benchmark dataset. Finally, we show downstream improvements in terms of tractography and partial volume estimation on a multi-shell dataset of human subjects.
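
For context, spherical deconvolution models the diffusion signal on the sphere as the spherical convolution of a non-negative fiber orientation distribution function f with an axially symmetric single-fiber response R; recovering f amounts to inverting (generic notation, stated as background):

    S(\mathbf{g}) = \int_{\mathbb{S}^2} R(\mathbf{g} \cdot \mathbf{u}) \, f(\mathbf{u}) \, \mathrm{d}\mathbf{u}, \qquad f \ge 0.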

Categories
IPMI 2021 poster

Unsupervised Learning of Local Discriminative Representation for Medical Images

Authors: Huai Chen (Shanghai Jiao Tong University); Jieyu Li (Shanghai Jiao Tong University); Renzhen Wang (Xi’an Jiaotong University); Yi-Jie Huang (Shanghai Jiao Tong University); Fanrui Meng (Shanghai Jiaotong University); Deyu Meng (Xi’an Jiaotong University); Qing Peng (Department of Ophthalmology, Shanghai Tenth People’s Hospital, Tongji University, Shanghai); Lisheng Wang (Shanghai Jiao Tong University)*

Poster

Abstract: Local discriminative representation is needed in many medical image analysis tasks such as identifying subtypes of lesions or segmenting detailed components of anatomical structures. However, the commonly applied supervised representation learning methods require a large amount of annotated data, and unsupervised discriminative representation learning distinguishes different images by learning a global feature, both of which are not suitable for localized medical image analysis tasks. In order to avoid the limitations of these two methods, we introduce local discrimination into unsupervised representation learning in this work. The model contains two branches: one is an embedding branch which learns an embedding function to disperse dissimilar pixels over a low-dimensional hypersphere; and the other is a clustering branch which learns a clustering function to classify similar pixels into the same cluster. These two branches are trained simultaneously in a mutually beneficial manner, and the learnt local discriminative representations can reliably measure the similarity of local image regions. These representations can be transferred to enhance various downstream tasks. Meanwhile, they can also be applied to cluster anatomical structures from unlabeled medical images under the guidance of topological priors from simulation or other structures with similar topological characteristics. The effectiveness and usefulness of the proposed method are demonstrated by enhancing various downstream tasks and clustering anatomical structures in retinal images and chest X-ray images.
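
A minimal sketch of the two ingredients named above, pixel embeddings normalized onto a hypersphere and soft assignments to cluster prototypes by cosine similarity (a generic NumPy illustration; the shapes, temperature, and names are assumptions, not the authors’ model):

    import numpy as np

    def normalize(v, axis=-1, eps=1e-8):
        return v / (np.linalg.norm(v, axis=axis, keepdims=True) + eps)

    emb = normalize(np.random.randn(1000, 32))   # pixel embeddings on the unit hypersphere
    protos = normalize(np.random.randn(8, 32))   # cluster prototypes, also unit norm

    sim = emb @ protos.T                         # cosine similarities in [-1, 1]
    tau = 0.1                                    # temperature
    assign = np.exp(sim / tau)
    assign /= assign.sum(axis=1, keepdims=True)  # soft cluster assignment per pixel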

Categories
IPMI 2021 oral

Structural Connectome Atlas Construction in the Space of Riemannian Metrics

Authors: Kris Campbell (University of Utah)*; Haocheng Dai (University of Utah); Zhe Su (University of California, Los Angeles); Martin Bauer (Florida State University); Tom Fletcher (University of Virginia); Sarang Joshi (University of Utah, USA)

Poster

Abstract: The structural connectome is often represented by fiber bundles generated from various types of tractography. We propose a method of analyzing connectomes by representing them as a Riemannian metric, thereby viewing them as points in an infinite-dimensional manifold. After equipping this space with a natural metric structure, the Ebin metric, we apply object-oriented statistical analysis to define an atlas as the Fréchet mean of a population of Riemannian metrics. We demonstrate connectome registration and atlas formation using connectomes derived from diffusion tensors estimated from a subset of subjects from the Human Connectome Project.
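
For reference, the Fréchet mean that defines the atlas is the minimizer of the sum of squared geodesic distances, here taken with respect to the Ebin metric on the space of Riemannian metrics:

    \bar{g} = \operatorname*{arg\,min}_{g} \sum_{i=1}^{N} d_{\mathrm{Ebin}}(g, g_i)^2.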