A general algorithm for consensus 3D cell segmentation from 2D segmented stacks.
Felix Yuran Zhou, Clarence Yapp, Zhiguo Shang, Stephan Daetwyler, Zach Marin, Md Torikul Islam, Benjamin A Nanes, Edward Jenkins, Gabriel M Gihana, Bo-Jui Chang, Andrew Weems, Michael L Dustin, Sean J Morrison, Reto Paul Fiolka, Kevin M Dean, Andrew Jamieson, Peter Karl Sorger, Gaudenz Karl Danuser
Published in: bioRxiv : the preprint server for biology (2024)
Cell segmentation is a fundamental task: only by segmenting can we define the quantitative spatial unit for collecting measurements to draw biological conclusions. Deep learning has revolutionized 2D cell segmentation, enabling generalized solutions across cell types and imaging modalities, driven by the ease of scaling up image acquisition, annotation, and computation. However, 3D cell segmentation, which requires dense annotation of 2D slices, still poses significant challenges. Labelling every cell in every 2D slice is prohibitive. Moreover, slice-wise annotation is ambiguous, necessitating cross-referencing with other orthoviews. Lastly, there is limited ability to unambiguously record and visualize thousands of annotated cells. Here we develop a theory and toolbox, u-Segment3D, for 2D-to-3D segmentation, compatible with any 2D segmentation method. Given optimal 2D segmentations, u-Segment3D generates the optimal 3D segmentation without data training, as demonstrated on 11 real datasets comprising >70,000 cells and spanning single cells, cell aggregates, and tissue.
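To illustrate the 2D-to-3D aggregation problem the abstract describes, here is a deliberately naive baseline: stitching per-slice 2D label masks along a single axis by linking labels in adjacent slices whose intersection-over-union (IoU) exceeds a threshold. This is not u-Segment3D's consensus algorithm (which aggregates segmentations without training, including across orthoviews); the function name and threshold below are illustrative assumptions, and the sketch only shows why per-slice 2D labels must be reconciled to obtain coherent 3D cells.

```python
import numpy as np

def stitch_2d_labels_to_3d(slices, iou_threshold=0.25):
    """Naively stitch a list of 2D label images into a 3D labeling.

    A 2D label is assigned the 3D id of the best-overlapping label in
    the previous slice when their IoU >= iou_threshold; otherwise it
    starts a new 3D cell. Hypothetical baseline, not u-Segment3D.
    """
    volume = np.zeros((len(slices),) + slices[0].shape, dtype=np.int32)
    next_id = 1
    for z, sl in enumerate(slices):
        for lbl in np.unique(sl):
            if lbl == 0:  # background
                continue
            mask = sl == lbl
            best_iou, best_id = 0.0, None
            if z > 0:
                prev = volume[z - 1]
                # candidate 3D ids overlapping this 2D mask
                for cand in np.unique(prev[mask]):
                    if cand == 0:
                        continue
                    cand_mask = prev == cand
                    inter = np.logical_and(mask, cand_mask).sum()
                    union = np.logical_or(mask, cand_mask).sum()
                    iou = inter / union
                    if iou > best_iou:
                        best_iou, best_id = iou, cand
            if best_id is not None and best_iou >= iou_threshold:
                volume[z][mask] = best_id  # continue existing 3D cell
            else:
                volume[z][mask] = next_id  # start a new 3D cell
                next_id += 1
    return volume
```

Such greedy single-axis stitching fails for cells that shear between slices or touch densely, which is one motivation for consensus approaches that combine segmentations from multiple orthogonal views.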