# Introducing the Boundary-Aware Loss for Deep Image Segmentation

### From LRDE


## Abstract

Most contemporary supervised image segmentation methods do not preserve the initial topology of the given input (such as the closedness of the contours). When the binary prediction is compared to the ground truth, one can generally observe that edge points have been inserted or removed. This can be critical when accurate localization of multiple interconnected objects is required. In this paper, we present a new loss function, called the Boundary-Aware loss (BALoss), based on the Minimum Barrier Distance (MBD) cut algorithm. It is able to locate what we call the *leakage pixels* and to encode the boundary information coming from the given ground truth. Thanks to this adapted loss, we are able to significantly refine the quality of the predicted boundaries during the learning procedure. Furthermore, our loss function is differentiable and can be applied to any kind of neural network used in image processing. We apply this loss function to the standard U-Net and DC U-Net on Electron Microscopy datasets, which are well known to be challenging due to their high noise level and to the close or even connected objects covering the image space. Our segmentation performance, in terms of Variation of Information (VOI) and Adapted Rand Index (ARI), is very promising and leads to ≈15% better VOI scores and ≈5% better ARI scores than the state of the art. The code of the Boundary-Aware loss is freely available at https://github.com/onvungocminh/MBD_BAL
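For readers unfamiliar with the Minimum Barrier Distance on which BALoss is built: the barrier of a path is the difference between the maximum and minimum intensity values along it, and the MBD of a pixel is the smallest barrier over all paths connecting it to a seed. The sketch below is an illustrative Dijkstra-style approximation of the MBD transform on a 4-connected grayscale grid (a common approximation of the exact MBD), not the authors' MBD-cut implementation; the function name and interface are hypothetical.

```python
import heapq
import numpy as np

def mbd_transform(image, seed_mask):
    """Approximate Minimum Barrier Distance from a set of seed pixels.

    Barrier of a path = max(values) - min(values) along it; the MBD of a
    pixel is the smallest barrier over all 4-connected paths from a seed.
    This propagation keeps the (path max, path min) pair per pixel and
    relaxes neighbors in best-first order, as in Dijkstra's algorithm.
    """
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = []
    # Seeds start with a single-pixel path, whose barrier is zero.
    for y, x in zip(*np.nonzero(seed_mask)):
        dist[y, x] = 0.0
        heapq.heappush(heap, (0.0, image[y, x], image[y, x], y, x))
    while heap:
        d, hi, lo, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue  # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                # Extend the path by one pixel and update its barrier.
                nhi = max(hi, image[ny, nx])
                nlo = min(lo, image[ny, nx])
                nd = nhi - nlo
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    heapq.heappush(heap, (nd, nhi, nlo, ny, nx))
    return dist
```

On a membrane-like image, pixels separated from the seed by a bright ridge receive a large MBD, which is what lets an MBD cut single out boundary (leakage) locations.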

## Bibtex (lrde.bib)

```bibtex
@InProceedings{movn.21.bmvc,
  author    = {Minh \^On V\~{u} Ng\d{o}c and Yizi Chen and Nicolas Boutry and
               Joseph Chazalon and Edwin Carlinet and Jonathan Fabrizio and
               Cl\'ement Mallet and Thierry G\'eraud},
  title     = {Introducing the Boundary-Aware Loss for Deep Image
               Segmentation},
  booktitle = {Proceedings of the 32nd British Machine Vision Conference
               (BMVC)},
  year      = {2021},
  abstract  = {Most contemporary supervised image segmentation methods do
               not preserve the initial topology of the given input (such
               as the closedness of the contours). When the binary
               prediction is compared to the ground truth, one can
               generally observe that edge points have been inserted or
               removed. This can be critical when accurate localization of
               multiple interconnected objects is required. In this paper,
               we present a new loss function, called the Boundary-Aware
               loss (BALoss), based on the Minimum Barrier Distance (MBD)
               cut algorithm. It is able to locate what we call the {\it
               leakage pixels} and to encode the boundary information
               coming from the given ground truth. Thanks to this adapted
               loss, we are able to significantly refine the quality of
               the predicted boundaries during the learning procedure.
               Furthermore, our loss function is differentiable and can be
               applied to any kind of neural network used in image
               processing. We apply this loss function to the standard
               U-Net and DC U-Net on Electron Microscopy datasets, which
               are well known to be challenging due to their high noise
               level and to the close or even connected objects covering
               the image space. Our segmentation performance, in terms of
               Variation of Information (VOI) and Adapted Rand Index
               (ARI), is very promising and leads to $\approx{}15\%$
               better VOI scores and $\approx{}5\%$ better ARI scores than
               the state of the art. The code of the Boundary-Aware loss
               is freely available at
               \url{https://github.com/onvungocminh/MBD_BAL}},
  note      = {https://www.bmvc2021-virtualconference.com/assets/papers/1546.pdf}
}
```