## A Probabilistic Framework for Modeling the Variability Across Federated Datasets of Heterogeneous Multi-View Observations

Authors: Irene Balelli (INRIA)*; Santiago Silva (INRIA); Marco Lorenzi (INRIA)

Poster

Abstract: We propose a novel federated learning paradigm to model data variability among heterogeneous clients in multi-centric studies. Our method is expressed through a hierarchical Bayesian latent variable model, in which client-specific parameters are assumed to be realizations of a global distribution at the master level, which is in turn estimated to account for data bias and variability across clients. We show that our framework can be effectively optimized through expectation maximization over the latent master distribution and the clients’ parameters. We tested our method on the analysis of multi-modal medical imaging data and clinical scores from distributed clinical datasets of patients affected by Alzheimer’s disease. We demonstrate that our method is robust when data is distributed in either iid or non-iid manners: it quantifies the variability of data, views, and centers, while guaranteeing high-quality data reconstruction compared to state-of-the-art autoencoding models and federated learning schemes.

## Quantile Regression for Uncertainty Estimation in VAEs with Applications to Brain Lesion Detection

Authors: Haleh Akrami (Signal and Image Processing Institute at University of Southern California)*; Anand Joshi (University of Southern California); Sergul Aydore (Amazon Web Services); Richard Leahy (Signal and Image Processing Institute at University of Southern California)

Poster

Abstract: The Variational AutoEncoder (VAE) has become one of the most popular models for anomaly detection in applications such as lesion detection in medical images. The VAE is a generative graphical model that is used to learn the data distribution from samples and then generate new samples from this distribution. By training on normal samples, the VAE can be used to detect inputs that deviate from this learned distribution. The VAE models the output as a conditionally independent Gaussian characterized by means and variances for each output dimension. VAEs can therefore use reconstruction probability instead of reconstruction error for anomaly detection. Unfortunately, joint optimization of both mean and variance in the VAE leads to the well-known problem of shrinkage or underestimation of variance. We describe an alternative VAE model, Quantile-Regression VAE (QR-VAE), that avoids this variance shrinkage problem by estimating conditional quantiles for the given input image.
Using the estimated quantiles, we compute the conditional mean and variance for input images under the Gaussian model. We then compute reconstruction probability using this model as a principled approach to outlier or anomaly detection.
We also show how our approach can be used for heterogeneous thresholding for lesion detection in brain images.
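As a minimal illustration of the quantile-to-Gaussian step described above, the sketch below recovers a per-pixel mean and standard deviation from two estimated conditional quantiles and scores an input by its log-likelihood. The quantile levels (0.15 and 0.85) and the function names are illustrative assumptions, not the paper’s exact choices.

```python
# Sketch: recover Gaussian parameters from two conditional quantiles.
# For a Gaussian, q_tau = mu + sigma * Phi^{-1}(tau), so two quantile
# estimates determine mu and sigma. Quantile levels are assumptions.
import numpy as np
from scipy.stats import norm

def gaussian_from_quantiles(q_lo, q_hi, tau_lo=0.15, tau_hi=0.85):
    """Solve q_tau = mu + sigma * Phi^{-1}(tau) for mu and sigma."""
    z_lo, z_hi = norm.ppf(tau_lo), norm.ppf(tau_hi)
    sigma = (q_hi - q_lo) / (z_hi - z_lo)
    mu = q_lo - sigma * z_lo
    return mu, sigma

def reconstruction_log_prob(x, q_lo, q_hi):
    """Log-likelihood under the recovered Gaussian, usable as an
    anomaly score (low log-probability suggests an outlier)."""
    mu, sigma = gaussian_from_quantiles(q_lo, q_hi)
    return norm.logpdf(x, loc=mu, scale=sigma)
```

Because the variance is read off from the spread between quantiles rather than optimized jointly with the mean, this conversion sidesteps the variance-shrinkage issue the abstract describes.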

## Geodesic B-Score for Improved Assessment of Knee Osteoarthritis

Authors: Felix Ambellan (Zuse Institute Berlin)*; Stefan Zachow (Zuse Institute Berlin); Christoph von Tycowicz (Zuse Institute Berlin)

Poster

Abstract: Three-dimensional medical imaging enables detailed understanding of osteoarthritis structural status. However, there remains a vast need for automatic, and thus reader-independent, measures that provide reliable assessment of subject-specific clinical outcomes. To this end, we derive a consistent generalization of the recently proposed B-score to Riemannian shape spaces. We further present an algorithmic treatment yielding simple, yet efficient computations allowing for analysis of large shape populations with several thousand samples. Our intrinsic formulation exhibits improved discrimination ability over its Euclidean counterpart, which we demonstrate for predictive validity in assessing risks of total knee replacement. This result highlights the potential of the geodesic B-score to enable improved personalized assessment and stratification for interventions.

## Multimodal Self-Supervised Learning for Medical Image Analysis

Authors: Aiham Taleb (Digital Health Center, Hasso Plattner Institute)*; Christoph Lippert (Hasso Plattner Institute for Digital Engineering, Universität Potsdam); Tassilo Klein (SAP); Moin Nabi (SAP)

Poster

Abstract:

Self-supervised learning approaches leverage unlabeled samples to acquire generic knowledge about different concepts, hence allowing for annotation-efficient downstream task learning.
In this paper, we propose a novel self-supervised method that leverages multiple imaging modalities.
We introduce the multimodal puzzle task, which facilitates representation learning from multiple image modalities. The learned modality-agnostic representations are obtained by confusing image modalities at the data level.
Using the Sinkhorn operator, we formulate puzzle solving as permutation-matrix inference rather than classification, allowing efficient solving of multimodal puzzles with varying levels of complexity.
In addition, we propose to exploit generation techniques for multimodal data augmentation during self-supervised pretraining, rather than in downstream tasks directly. This aims to circumvent quality issues associated with synthetic images, while improving data efficiency and the representations learned by self-supervised methods.
Our experimental results show that solving our multimodal puzzles yields better semantic representations than treating each modality independently. Our results also highlight the benefits of exploiting synthetic images for self-supervised pretraining.
We showcase our approach on three segmentation tasks, where it outperforms many existing solutions and achieves results competitive with the state of the art.
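To make the permutation-inference step concrete, here is a minimal NumPy sketch of the Sinkhorn operator: repeated row and column normalisation in log space turns a score matrix into an approximately doubly-stochastic matrix, a differentiable relaxation of a permutation. The temperature and iteration count are illustrative assumptions, not the paper’s settings.

```python
# Sketch of the Sinkhorn operator for soft permutation inference.
# Alternating row/column normalisation of exp(scores / T) converges
# toward a doubly-stochastic matrix; a low temperature pushes it
# toward a hard permutation.
import numpy as np

def sinkhorn(scores, temperature=0.1, n_iters=20):
    """Map an n x n score matrix to a near-doubly-stochastic matrix."""
    log_p = scores / temperature
    for _ in range(n_iters):
        # Row normalisation in log space (numerically stable softmax).
        log_p = log_p - np.logaddexp.reduce(log_p, axis=1, keepdims=True)
        # Column normalisation in log space.
        log_p = log_p - np.logaddexp.reduce(log_p, axis=0, keepdims=True)
    return np.exp(log_p)
```

In a puzzle-solving setup, `scores[i, j]` would be a network’s affinity between patch `i` and position `j`; the Sinkhorn output then serves as a soft assignment that can be trained end-to-end.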

## A^3DSegNet: Anatomy-aware artifact disentanglement and segmentation network for unpaired segmentation, artifact reduction, and modality translation

Authors: Yuanyuan Lyu (Z2Sky Technologies Inc.)*; Haofu Liao (University of Rochester); Heqin Zhu (University of Science and Technology of China); S. Kevin Zhou (CAS)

Poster

Abstract:

Spinal surgery planning necessitates automatic segmentation of vertebrae in cone-beam computed tomography (CBCT), an intraoperative imaging modality widely used in intervention. However, CBCT images are low-quality and artifact-laden due to noise, poor tissue contrast, and the presence of metallic objects, making vertebra segmentation, even when done manually, a demanding task. In contrast, there exists a wealth of artifact-free, high-quality CT images with vertebra annotations. This motivates us to build a CBCT vertebra segmentation model using unpaired CT images with annotations. To overcome the *domain and artifact gaps* between CBCT and CT, we must address the *three heterogeneous tasks* of vertebra segmentation, artifact reduction, and modality translation all together. To this end, we propose a novel *anatomy-aware artifact disentanglement and segmentation network* (**A^3DSegNet**) that intensively leverages knowledge sharing among these three tasks to promote learning. Specifically, it takes a random pair of CBCT and CT images as input and manipulates the synthesis and segmentation via different decoding combinations from the disentangled latent layers. Then, by imposing various forms of consistency among the synthesized images and among the segmented vertebrae, learning is achieved without paired (i.e., anatomically identical) data. Finally, we stack 2D slices together and build 3D networks on top to obtain the final 3D segmentation result. Extensive experiments on a large number of clinical CBCT (21,364) and CT (17,089) images show that the proposed **A^3DSegNet** performs significantly better than state-of-the-art competing methods trained independently for each task and, remarkably, achieves an average Dice coefficient of 0.926 for unpaired 3D CBCT vertebra segmentation.

## Information-based Disentangled Representation Learning for Unsupervised MR Harmonization

Authors: Lianrui Zuo (Johns Hopkins University)*; Blake E Dewey (Johns Hopkins University); Aaron Carass (Johns Hopkins University, USA); Yihao Liu (Johns Hopkins University); Yufan He (Johns Hopkins University); Peter Calabresi (Johns Hopkins University); Jerry L Prince (Johns Hopkins University)

Poster

Abstract: Accuracy and consistency are two key factors in computer-assisted magnetic resonance (MR) image analysis. However, contrast variation from site to site, caused by a lack of standardization in MR acquisition, impedes consistent measurements. In recent years, image harmonization approaches have been proposed to compensate for contrast variation in MR images. Current harmonization approaches either require cross-site traveling subjects for supervised training or rely heavily on site-specific harmonization models to encourage harmonization accuracy. These requirements potentially limit the application of current harmonization methods in large-scale multi-site studies. In this work, we propose an unsupervised MR harmonization framework, CALAMITI (Contrast Anatomy Learning and Analysis for MR Intensity Translation and Integration), based on information bottleneck theory. CALAMITI learns a disentangled latent space using a unified structure for multi-site harmonization without the need for traveling subjects. Our model can also adapt itself to harmonize MR images from a new site by fine-tuning solely on images from the new site. Both qualitative and quantitative results show that the proposed method achieves superior performance compared with other unsupervised harmonization approaches.

## Mixture modeling for identifying subtypes in disease course mapping

Authors: Pierre-Emmanuel Poulet (Inria)*; Stanley Durrleman (INRIA)

Poster

Abstract: Disease modeling techniques summarize the possible trajectories of progression from multimodal and longitudinal data. These techniques often assume that individuals form a homogeneous cluster, thus ignoring possible disease subtypes within the population. We extend a non-linear mixed-effect model used for disease course mapping with a mixture framework. We jointly estimate model parameters and subtypes with a tempered version of the stochastic approximation Expectation Maximisation algorithm. We show that our model recovers the ground-truth parameters from synthetically generated data, in contrast to the naive solution consisting of post hoc clustering of individual parameters from a one-class model. Application to Alzheimer’s disease data allows the unsupervised identification of disease subtypes associated with regional atrophy and cognitive decline.
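The tempering idea can be illustrated with a toy mixture: class posteriors are raised to the power 1/T and renormalised, so a high temperature flattens the assignments early in optimization and helps escape poor local optima. The 1-D Gaussian mixture below and this exact form of tempering are assumptions for illustration, not the paper’s model.

```python
# Illustrative tempered E-step for a toy 1-D Gaussian mixture,
# in the spirit of the tempered stochastic EM described above.
import numpy as np
from scipy.stats import norm

def tempered_responsibilities(x, weights, means, stds, temperature):
    """Posteriors proportional to (pi_k * p(x | theta_k))**(1/T),
    renormalised per sample; T > 1 flattens the assignments."""
    # Shape (n_samples, n_components) via broadcasting.
    log_joint = np.log(weights) + norm.logpdf(x[:, None], means, stds)
    log_joint = log_joint / temperature
    log_joint -= log_joint.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(log_joint)
    return p / p.sum(axis=1, keepdims=True)
```

Annealing the temperature toward 1 over iterations recovers standard EM responsibilities at convergence while keeping early assignments soft.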

## Equivariant Spherical Deconvolution: Learning Sparse Orientation Distribution Functions from Spherical Data

Authors: Axel Elaldi (New York University)*; Neel Dey (New York University); Heejong Kim (New York University); Guido Gerig (NYU)

Poster

Abstract: We present a rotation-equivariant self-supervised learning framework for the sparse deconvolution of non-negative scalar fields on the unit sphere. Spherical signals with multiple peaks naturally arise in Diffusion MRI (dMRI), where each voxel consists of one or more signal sources corresponding to anisotropic tissue structure such as white matter. Due to spatial and spectral partial voluming, clinically feasible dMRI struggles to resolve crossing-fiber white matter configurations, leading to extensive development in spherical deconvolution methodology to recover underlying fiber directions. However, these methods are typically linear and struggle with small crossing angles and partial volume fraction estimation. In this work, we improve on current methodologies by nonlinearly estimating fiber structures via self-supervised spherical convolutional networks with guaranteed equivariance to spherical rotation. We perform validation via extensive single- and multi-shell synthetic benchmarks demonstrating competitive performance against common baselines. We further show improved downstream performance on fiber tractography measures on the Tractometer benchmark dataset. Finally, we show downstream improvements in terms of tractography and partial volume estimation on a multi-shell dataset of human subjects.

## Unsupervised Learning of Local Discriminative Representation for Medical Images

Authors: Huai Chen (Shanghai Jiao Tong University); Jieyu Li (Shanghai Jiao Tong University); Renzhen Wang (Xi’an Jiaotong University); Yi-Jie Huang (Shanghai Jiao Tong University); Fanrui Meng (Shanghai Jiaotong University); Deyu Meng (Xi’an Jiaotong University); Qing Peng (Department of Ophthalmology, Shanghai Tenth People’s Hospital, Tongji University, Shanghai); Lisheng Wang (Shanghai Jiao Tong University)*

Poster

Abstract: Local discriminative representation is needed in many medical image analysis tasks, such as identifying subtypes of lesions or segmenting detailed components of anatomical structures. However, the commonly applied supervised representation learning methods require a large amount of annotated data, and unsupervised discriminative representation learning distinguishes different images by learning a global feature; neither is suitable for localized medical image analysis tasks. To avoid the limitations of these two methods, we introduce local discrimination into unsupervised representation learning in this work. The model contains two branches: an embedding branch, which learns an embedding function to disperse dissimilar pixels over a low-dimensional hypersphere, and a clustering branch, which learns a clustering function to classify similar pixels into the same cluster. These two branches are trained simultaneously in a mutually beneficial manner, and the learnt local discriminative representations can effectively measure the similarity of local image regions. These representations can be transferred to enhance various downstream tasks. They can also be applied to cluster anatomical structures from unlabeled medical images under the guidance of topological priors, derived from simulation or from other structures with similar topological characteristics. The effectiveness and usefulness of the proposed method are demonstrated by enhancing various downstream tasks and clustering anatomical structures in retinal images and chest X-ray images.

## TopoTxR: A Topological Biomarker for Predicting Treatment Response in Breast Cancer

Authors: Fan Wang (Stony Brook University)*; Saarthak Kapse (Indian Institute of Technology Bombay); Steven H. Liu (Stony Brook University); Prateek Prasanna (Stony Brook University); Chao Chen (Stony Brook University)

Poster

Abstract: Characterization of breast parenchyma on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a challenging task owing to the complexity of underlying tissue structures. Current quantitative approaches, including radiomics and deep learning models, do not explicitly capture the complex and subtle parenchymal structures, such as fibroglandular tissue. In this paper, we propose a novel method to direct a neural network’s attention to a dedicated set of voxels surrounding geometrically relevant tissue structures. By extracting multi-dimensional topological structures with high saliency, we build a topology-derived biomarker, TopoTxR. We demonstrate the efficacy of TopoTxR in predicting response to neoadjuvant chemotherapy in breast cancer. Our qualitative and quantitative results suggest differential topological behavior, on treatment-naïve imaging, in patients who respond favorably to therapy versus those who do not.