What is a good evaluation protocol for text localization systems? Concerns, arguments, comparisons and solutions
- Authors: Stefania Calarasanu, Jonathan Fabrizio, Séverine Dubuisson
- Journal: Image and Vision Computing
- Type: article
- Projects: Olena
- Date: 2016-02-01
Abstract
A trustworthy protocol is essential to evaluate a text detection algorithm in order to, first, measure its efficiency and adjust its parameters and, second, to compare its performance with that of other algorithms. However, current protocols do not give precise enough evaluations because they use coarse evaluation metrics and deal with inconsistent matchings between the output of detection algorithms and the ground truth, both often limited to rectangular shapes. In this paper, we propose a new evaluation protocol, named EvaLTex, that solves some of the current problems associated with classical metrics and matching strategies. Our system deals with different kinds of annotations and detection shapes. It also considers different kinds of granularity between detections and ground truth objects and hence provides more realistic and accurate evaluation measures. We use this protocol to evaluate text detection algorithms and highlight some key examples that show that the provided scores are more relevant than those of currently used evaluation protocols.
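The abstract describes EvaLTex only at a high level; the exact metrics and matching rules are defined in the paper itself. As a rough illustration of the kind of region-overlap evaluation it discusses, the sketch below computes an area-based coverage (recall-like) and accuracy (precision-like) score between arbitrary ground-truth and detection polygons. This is an assumed, simplified stand-in rather than the EvaLTex protocol: the function name `coverage_and_accuracy`, the use of the `shapely` library, and the pooling of all regions into two unions are illustrative choices, and the real protocol additionally handles matching granularity (one-to-many and many-to-one detections) that this sketch ignores.

```python
# Minimal sketch (NOT the paper's EvaLTex formulas): area-based coverage and
# accuracy between ground-truth and detection polygons, in the spirit of
# region-overlap evaluation of text localization. Names are illustrative.
from shapely.geometry import Polygon
from shapely.ops import unary_union


def coverage_and_accuracy(gt_polys, det_polys):
    """Return (coverage, accuracy) for two lists of shapely Polygons.

    coverage ~ recall: fraction of ground-truth area covered by detections.
    accuracy ~ precision: fraction of detected area lying on ground truth.
    """
    gt_union = unary_union(gt_polys)
    det_union = unary_union(det_polys)
    inter_area = gt_union.intersection(det_union).area
    coverage = inter_area / gt_union.area if gt_union.area > 0 else 0.0
    accuracy = inter_area / det_union.area if det_union.area > 0 else 0.0
    return coverage, accuracy


if __name__ == "__main__":
    # One ground-truth text line, detected as two slightly offset fragments.
    gt = [Polygon([(0, 0), (10, 0), (10, 2), (0, 2)])]
    det = [Polygon([(1, 0), (6, 0), (6, 2), (1, 2)]),
           Polygon([(6, 0), (11, 0), (11, 2), (6, 2)])]
    cov, acc = coverage_and_accuracy(gt, det)
    print(f"coverage={cov:.2f} accuracy={acc:.2f}")
```

Because the regions are handled as general polygons rather than axis-aligned rectangles, the same computation applies to the non-rectangular annotations and detection shapes that the paper argues an evaluation protocol should support.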
Documents
Bibtex (lrde.bib)
@Article{calarasanu.16.ivc,
  author   = {Stefania Calarasanu and Jonathan Fabrizio and S\'everine Dubuisson},
  title    = {What is a good evaluation protocol for text localization systems? Concerns, arguments, comparisons and solutions},
  journal  = {Image and Vision Computing},
  year     = 2016,
  volume   = 46,
  month    = feb,
  pages    = {1--17},
  abstract = {A trustworthy protocol is essential to evaluate a text detection algorithm in order to, first measure its efficiency and adjust its parameters and, second to compare its performances with those of other algorithms. However, current protocols do not give precise enough evaluations because they use coarse evaluation metrics, and deal with inconsistent matchings between the output of detection algorithms and the ground truth, both often limited to rectangular shapes. In this paper, we propose a new evaluation protocol, named EvaLTex, that solves some of the current problems associated with classical metrics and matching strategies. Our system deals with different kinds of annotations and detection shapes. It also considers different kinds of granularity between detections and ground truth objects and hence provides more realistic and accurate evaluation measures. We use this protocol to evaluate text detection algorithms and highlight some key examples that show that the provided scores are more relevant than those of currently used evaluation protocols.},
  doi      = {10.1016/j.imavis.2015.12.001}
}