Evaltex

From LRDE
Revision as of 16:47, 8 January 2016

EvaLTex (Evaluating Text Localization) is an evaluation tool used to measure the performance of text detection algorithms. It takes as input text detection results that can be represented either by coordinates or by masks and outputs performance scores.

XML format

Input

The framework takes as input XML files containing the coordinates of the bounding boxes surrounding the text objects and compares them to another XML file representing the ground truth (GT). The GT XML format differs slightly from the result format. Its attributes are:

  • name : the image name
  • size : image size
  • region : 1st level components
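
A minimal GT file following these attributes might look as follows. This is only a sketch: the exact tag names, nesting, and coordinate attributes are assumptions, since the page does not show a complete example.

```xml
<image name="img_001.jpg" size="640x480">
  <!-- 1st level components: one region per text object -->
  <region x="52" y="30" width="118" height="24"/>
  <region x="60" y="80" width="140" height="26"/>
</image>
```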

Mask representation

In addition to bounding boxes, text objects can also be represented using masks. EvaLTex takes as input binary images (white corresponds to text). TODO add images
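
With mask inputs, the matched area between a GT object and a detection is simply their pixel overlap. A minimal sketch, assuming masks are given as nested lists of 0/1 values where 1 (white) marks text (the `matched_area` helper is illustrative, not part of EvaLTex):

```python
def matched_area(gt_mask, det_mask):
    """Count pixels that are text (1) in both the GT and the detection mask."""
    return sum(
        1
        for gt_row, det_row in zip(gt_mask, det_mask)
        for g, d in zip(gt_row, det_row)
        if g and d
    )

# Tiny 4x4 example: the two masks overlap on 2 pixels.
gt  = [[0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
det = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
print(matched_area(gt, det))  # → 2
```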

Output

The output consists of a local evaluation (one XML file per image), as well as a global evaluation (one XML file for the whole database).

TODO add local XML file and global XML file

Performance measurements

Local evaluation

For each matched GT object, we assign two quality measures: Coverage (Cov) and Accuracy (Acc):

  • Cov computes the rate of the matched area with respect to the GT object area
  • Acc computes the rate of the matched area with respect to the detection area
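
For axis-aligned bounding boxes, these two rates can be sketched as follows. This is an illustrative implementation, not EvaLTex's actual matching code; boxes are assumed to be `(x1, y1, x2, y2)` tuples:

```python
def rect_area(x1, y1, x2, y2):
    """Area of an axis-aligned rectangle; zero if the rectangle is empty."""
    return max(0, x2 - x1) * max(0, y2 - y1)

def cov_acc(gt, det):
    """Cov = matched area / GT area; Acc = matched area / detection area."""
    # Intersection rectangle of the GT box and the detection box.
    ix1, iy1 = max(gt[0], det[0]), max(gt[1], det[1])
    ix2, iy2 = min(gt[2], det[2]), min(gt[3], det[3])
    matched = rect_area(ix1, iy1, ix2, iy2)
    return matched / rect_area(*gt), matched / rect_area(*det)

# A GT box and a detection, each of area 100, overlapping on 50 pixels.
cov, acc = cov_acc((0, 0, 10, 10), (5, 0, 15, 10))
print(cov, acc)  # → 0.5 0.5
```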
Recall

The Recall computes the amount of detected text. We provide three Recall (R) measures: a global R_G, a quantitative R_quant that measures the number of detected objects (regardless of the matched area), and a qualitative R_qual that corresponds to the rate of the detected text area with respect to the number of true positives (TP). With G the total number of GT objects:

  • R_G = ΣCov / G
  • R_quant = TP / G
  • R_qual = ΣCov / TP
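
Assuming the definitions R_G = ΣCov / G, R_quant = TP / G, and R_qual = ΣCov / TP (with G the total number of GT objects, as given in the page history), the three measures can be sketched as:

```python
def recall_measures(covs, tp, g):
    """covs: per-object Coverage rates of the matched GT objects;
    tp: number of true positives; g: total number of GT objects."""
    total_cov = sum(covs)
    return {
        "R_G": total_cov / g,      # global: matched area over all GT objects
        "R_quant": tp / g,         # quantitative: detected objects, area ignored
        "R_qual": total_cov / tp,  # qualitative: area quality of what was detected
    }

# 3 of 4 GT objects detected, with coverages 1.0, 0.8 and 0.6.
print(recall_measures([1.0, 0.8, 0.6], tp=3, g=4))
```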

Precision

The Precision computes the rate of detections that have a match in the GT.

  • P_G = Global Precision
  • P_quant = Quantitative Precision
  • P_qual = Qualitative Precision
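
The page does not state the Precision formulas. By symmetry with the Recall measures, a plausible sketch — an assumption, with D the total number of detections and Acc the per-detection Accuracy — is:

```python
def precision_measures(accs, tp, d):
    """accs: per-detection Accuracy rates; tp: true positives; d: detections.

    NOTE: these formulas mirror the Recall definitions and are an
    assumption; the page does not state them explicitly.
    """
    total_acc = sum(accs)
    return {
        "P_G": total_acc / d,      # global Precision
        "P_quant": tp / d,         # quantitative Precision
        "P_qual": total_acc / tp,  # qualitative Precision
    }

# 2 of 4 detections match the GT, with accuracies 1.0 and 0.5.
print(precision_measures([1.0, 0.5], tp=2, d=4))
```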

How to compute the measurements

Parameters to run the tool