The recognition of different cell compartments, types of cells, and their interactions is a critical aspect of quantitative cell biology. However, automating this task has proven to be non-trivial and requires solving multi-class image segmentation problems that are challenging owing to the high similarity of objects from different classes and to irregularly shaped structures. To alleviate this, graphical models are useful owing to their ability to incorporate prior knowledge and model inter-class dependencies. Directed acyclic graphs, such as trees, have been widely used to model top-down statistical dependencies as a prior for improved image segmentation. However, with trees only a limited number of inter-class constraints can be captured. To overcome this limitation, we propose polytree graphical models that capture label proximity relations more naturally than tree-based approaches. A novel recursive mechanism based on two-pass message passing was developed to efficiently calculate closed-form posteriors of graph nodes on polytrees. The algorithm is evaluated on simulated data and on two publicly available fluorescence microscopy datasets, outperforming directed trees and three state-of-the-art convolutional neural networks, namely SegNet, DeepLab and PSPNet. Polytrees are shown to outperform directed trees in predicting segmentation error by highlighting areas in the segmented image that do not comply with prior knowledge. This paves the way to measuring uncertainty in the resulting segmentation and to guiding subsequent segmentation refinement.
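To give a concrete sense of the two-pass message-passing idea, the sketch below implements generic sum-product inference on a tree-structured pairwise model in Python: one upward (leaves-to-root) pass and one downward (root-to-leaves) pass yield exact per-node posteriors. This is only an illustrative sketch with a hypothetical function `two_pass_marginals`; it is not the paper's polytree formulation, which additionally handles factors over multiple parents and learned deep features.

```python
import numpy as np

# Minimal sketch of two-pass sum-product message passing on a tree-structured
# pairwise model. Illustrative only: the paper's polytree algorithm is more
# general (multi-parent factors), but the two-pass scheduling idea is the same.

def two_pass_marginals(edges, unary, pairwise, root=0):
    """edges: list of undirected tree edges (i, j); unary[i]: (K,) evidence
    potential for node i; pairwise[(i, j)]: (K, K) table indexed [x_i, x_j]."""
    adj = {v: [] for v in unary}
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)

    # Pre-order traversal from the root; records each node's parent.
    order, parent, stack = [], {root: None}, [root]
    while stack:
        v = stack.pop()
        order.append(v)
        for u in adj[v]:
            if u != parent[v]:
                parent[u] = v
                stack.append(u)

    def psi(i, j):
        # Pairwise table oriented as (states of i, states of j).
        return pairwise[(i, j)] if (i, j) in pairwise else pairwise[(j, i)].T

    msg = {}  # msg[(i, j)]: normalized message from node i to node j

    # Upward pass: every non-root node sends a message to its parent.
    for v in reversed(order):
        if parent[v] is None:
            continue
        belief = unary[v].copy()
        for u in adj[v]:
            if u != parent[v]:
                belief *= msg[(u, v)]
        m = psi(v, parent[v]).T @ belief
        msg[(v, parent[v])] = m / m.sum()

    # Downward pass: each node sends messages to its children.
    for v in order:
        for u in adj[v]:
            if u == parent[v]:
                continue
            belief = unary[v].copy()
            for w in adj[v]:
                if w != u:
                    belief *= msg[(w, v)]
            m = psi(v, u).T @ belief
            msg[(v, u)] = m / m.sum()

    # Posterior at each node: its evidence times all incoming messages.
    marginals = {}
    for v in order:
        b = unary[v].copy()
        for u in adj[v]:
            b *= msg[(u, v)]
        marginals[v] = b / b.sum()
    return marginals


if __name__ == "__main__":
    # Tiny example: binary chain 0 -- 1 -- 2 with smoothness-favoring edges.
    unary = {0: np.array([0.9, 0.1]),
             1: np.array([0.5, 0.5]),
             2: np.array([0.2, 0.8])}
    pairwise = {(0, 1): np.array([[0.8, 0.2], [0.2, 0.8]]),
                (1, 2): np.array([[0.8, 0.2], [0.2, 0.8]])}
    print(two_pass_marginals([(0, 1), (1, 2)], unary, pairwise))
```

In this sketch, posteriors are obtained in closed form after exactly two passes because the graph has no cycles; the same scheduling principle underlies exact inference on polytrees.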

doi.org/10.1109/tip.2019.2895455, hdl.handle.net/1765/117226
IEEE Transactions on Image Processing
Department of Radiology

Fehri, H., Gooya, A., Lu, Y. J., Meijering, E., Johnston, S. A., & Frangi, A. (2019). Bayesian Polytrees With Learned Deep Features for Multi-Class Cell Segmentation. IEEE Transactions on Image Processing, 28(7), 3246–3260. doi:10.1109/tip.2019.2895455