What is a good evaluation protocol for text localization systems? Concerns, arguments, comparisons and solutions

From LRDE


Abstract

A trustworthy protocol is essential to evaluate a text detection algorithm in order to, first measure its efficiency and adjust its parameters and, second to compare its performances with those of other algorithms. However, current protocols do not give precise enough evaluations because they use coarse evaluation metrics, and deal with inconsistent matchings between the output of detection algorithms and the ground truth, both often limited to rectangular shapes. In this paper, we propose a new evaluation protocol, named EvaLTex, that solves some of the current problems associated with classical metrics and matching strategies. Our system deals with different kinds of annotations and detection shapes. It also considers different kinds of granularity between detections and ground truth objects and hence provides more realistic and accurate evaluation measures. We use this protocol to evaluate text detection algorithms and highlight some key examples that show that the provided scores are more relevant than those of currently used evaluation protocols.
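As a concrete point of reference (this is not the paper's EvaLTex protocol, but a minimal sketch of the classical rectangle-based evaluation the abstract critiques), detections are typically matched one-to-one to ground-truth boxes by area overlap above a fixed threshold, from which precision and recall are derived. All function names here are illustrative:

```python
def area(box):
    # box = (x1, y1, x2, y2); degenerate boxes have zero area
    return max(0, box[2] - box[0]) * max(0, box[3] - box[1])

def intersection_over_union(a, b):
    # Overlap ratio between two axis-aligned rectangles
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = area((ix1, iy1, ix2, iy2))
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def evaluate(detections, ground_truth, iou_threshold=0.5):
    # Greedy one-to-one matching: each ground-truth box claims at most one
    # unmatched detection whose IoU meets the threshold. The binary
    # match/no-match decision is exactly the kind of coarse metric that
    # finer-grained protocols aim to improve on (e.g. one detection
    # covering several words, or a word split across detections, scores 0).
    matched = set()
    hits = 0
    for gt in ground_truth:
        best, best_iou = None, iou_threshold
        for i, det in enumerate(detections):
            if i in matched:
                continue
            iou = intersection_over_union(gt, det)
            if iou >= best_iou:
                best, best_iou = i, iou
        if best is not None:
            matched.add(best)
            hits += 1
    precision = hits / len(detections) if detections else 0.0
    recall = hits / len(ground_truth) if ground_truth else 0.0
    return precision, recall
```

For example, a detection shifted by one pixel still matches its ground-truth word (IoU ≈ 0.9), while a spurious detection elsewhere lowers precision, so `evaluate([(1, 0, 10, 10), (50, 50, 60, 60)], [(0, 0, 10, 10), (20, 0, 30, 10)])` yields precision 0.5 and recall 0.5.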

Documents

Bibtex (lrde.bib)

@Article{	  calarasanu.16.ivc,
  author	= {Stefania Calarasanu and Jonathan Fabrizio and S\'everine
		  Dubuisson},
  title		= {What is a good evaluation protocol for text localization
		  systems? Concerns, arguments, comparisons and solutions},
  journal	= {Image and Vision Computing},
  year		= 2016,
  volume	= 46,
  month		= feb,
  pages		= {1--17},
  abstract	= {A trustworthy protocol is essential to evaluate a text
		  detection algorithm in order to, first measure its
		  efficiency and adjust its parameters and, second to compare
		  its performances with those of other algorithms. However,
		  current protocols do not give precise enough evaluations
		  because they use coarse evaluation metrics, and deal with
		  inconsistent matchings between the output of detection
		  algorithms and the ground truth, both often limited to
		  rectangular shapes. In this paper, we propose a new
		  evaluation protocol, named EvaLTex, that solves some of the
		  current problems associated with classical metrics and
		  matching strategies. Our system deals with different kinds
		  of annotations and detection shapes. It also considers
		  different kinds of granularity between detections and
		  ground truth objects and hence provides more realistic and
		  accurate evaluation measures. We use this protocol to
		  evaluate text detection algorithms and highlight some key
		  examples that show that the provided scores are more
		  relevant than those of currently used evaluation protocols.
		  }
}