Difference between revisions of "Publications/calarasanu.16.ivc"


Revision as of 18:55, 4 January 2018

Abstract

A trustworthy protocol is essential to evaluate a text detection algorithm in order to, first, measure its efficiency and adjust its parameters and, second, compare its performance with that of other algorithms. However, current protocols do not give precise enough evaluations because they use coarse evaluation metrics and deal with inconsistent matchings between the output of detection algorithms and the ground truth, both often limited to rectangular shapes. In this paper, we propose a new evaluation protocol, named EvaLTex, that solves some of the current problems associated with classical metrics and matching strategies. Our system deals with different kinds of annotations and detection shapes. It also considers different kinds of granularity between detections and ground-truth objects and hence provides more realistic and accurate evaluation measures. We use this protocol to evaluate text detection algorithms and highlight some key examples showing that the provided scores are more relevant than those of currently used evaluation protocols.

Documents

Bibtex (lrde.bib)

@Article{	  calarasanu.16.ivc,
  author	= {Stefania Calarasanu and Jonathan Fabrizio and S\'everine
		  Dubuisson},
  title		= {What is a good evaluation protocol for text localization
		  systems? Concerns, arguments, comparisons and solutions},
  journal	= {Image and Vision Computing},
  year		= 2016,
  volume	= 46,
  month		= feb,
  pages		= {1--17},
  abstract	= {A trustworthy protocol is essential to evaluate a text
		  detection algorithm in order to, first, measure its
		  efficiency and adjust its parameters and, second, to
		  compare its performance with that of other algorithms.
		  However, current protocols do not give precise enough
		  evaluations
		  because they use coarse evaluation metrics, and deal with
		  inconsistent matchings between the output of detection
		  algorithms and the ground truth, both often limited to
		  rectangular shapes. In this paper, we propose a new
		  evaluation protocol, named EvaLTex, that solves some of the
		  current problems associated with classical metrics and
		  matching strategies. Our system deals with different kinds
		  of annotations and detection shapes. It also considers
		  different kinds of granularity between detections and
		  ground truth objects and hence provides more realistic and
		  accurate evaluation measures. We use this protocol to
		  evaluate text detection algorithms and highlight some key
		  examples that show that the provided scores are more
		  relevant than those of currently used evaluation protocols.
		  }
}