Authors: Junbo Ma (University of North Carolina at Chapel Hill); Oleh Krupa (University of North Carolina at Chapel Hill); Rose Glass (University of North Carolina at Chapel Hill); Carolyn McCormick (University of North Carolina at Chapel Hill); David Borland (Renaissance Computing Institute); Minjeong Kim (University of North Carolina at Greensboro); Jason Stein (University of North Carolina at Chapel Hill); Guorong Wu (University of North Carolina at Chapel Hill)*
Abstract: Tissue clearing and light-sheet microscopy technologies offer new opportunities to quantify three-dimensional (3D) neural structure at cellular or even sub-cellular resolution. Although many efforts have been made to recognize nuclei in 3D using deep learning techniques, current state-of-the-art approaches often work in a two-step manner: first segmenting nucleus regions within each 2D optical slice, then assembling those regions into 3D nucleus instances. Due to the poor inter-slice resolution of many volumetric microscopy images and the lack of contextual information across image slices, these two-step approaches yield less accurate instance segmentation results. To address these limitations, we propose NIS-Net, a novel neural network for 3D nucleus instance segmentation (NIS) that jointly segments and assembles the 3D instances of nuclei. Specifically, a pretext task is designed to predict the image appearance of the to-be-processed slice from the context learned on already-processed slices, and this well-characterized contextual information is leveraged to guide the assembly of 3D nucleus instances. Since NIS-Net progressively identifies nucleus instances by sliding over the entire image stack, our method is capable of segmenting nucleus instances for a whole mouse brain. Experimental results show that the proposed NIS-Net achieves higher accuracy and produces more plausible nucleus instances than competing methods.