Publications/chen.21.icdar


{{Publication
| published = true
| date = 2021-05-17
| title = Vectorization of Historical Maps Using Deep Edge Filtering and Closed Shape Extraction
| authors = Yizi Chen, Edwin Carlinet, Joseph Chazalon, Clément Mallet, Bertrand Duménieu, Julien Perret
| lrdeprojects = Olena
| lrdekeywords = Image
| lrdenewsdate = 2021-05-17
| note = To appear
| type = inproceedings

Abstract

Maps have been a unique source of knowledge for centuries. Such historical documents provide invaluable information for analyzing the complex spatial transformation of landscapes over important time frames. This is particularly true for urban areas that encompass multiple interleaved research domains (social sciences, economy, etc.). The large amount and significant diversity of map sources call for automatic image processing techniques in order to extract the relevant objects in a vectorial shape. The complexity of maps (text, noise, digitization artifacts, etc.) has hindered the development of versatile and efficient raster-to-vector approaches for decades. We propose a learnable, reproducible, and reusable solution for the automatic transformation of raster maps into vector objects (building blocks, streets, rivers). It is built upon the complementary strengths of mathematical morphology and convolutional neural networks through efficient edge filtering. Moreover, we modify ConnNet and combine it with the deep edge filtering architecture to make use of pixel connectivity information and build an end-to-end system without requiring any post-processing. In this paper, we focus on a comprehensive benchmark of various architectures on multiple datasets, coupled with a novel vectorization step. Our experimental results on a new public dataset using the COCO Panoptic metric are very encouraging and are confirmed by a qualitative analysis of the success and failure cases of our approach. Code, dataset, results and extra illustrations are freely available at https://github.com/soduco/ICDAR-2021-Vectorization.
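The closed-shape extraction idea mentioned in the abstract can be pictured with a small, self-contained sketch. The Python snippet below is illustrative only and is not the authors' implementation from the repository above; the function name closed_shapes_from_edges, the 0.5 threshold, and the synthetic input are assumptions. It shows one common morphological way, a marker-based watershed from scikit-image, to turn an edge probability map produced by a deep edge filter into labelled closed shapes.

# Minimal sketch of closed-shape extraction from an edge probability map.
# Not the paper's code; names and the 0.5 threshold are illustrative assumptions.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def closed_shapes_from_edges(edge_prob, edge_threshold=0.5):
    """Label the closed regions delimited by strong edges in an edge probability map."""
    # Pixels with a low edge probability are considered to lie inside a shape.
    interior = edge_prob < edge_threshold
    # One watershed marker per connected interior component.
    markers, _ = ndi.label(interior)
    # Flood the edge map: each basin, bounded by high edge probabilities,
    # becomes one candidate closed shape (to be polygonized afterwards).
    return watershed(edge_prob, markers)

# Toy usage: a 2x2 grid of cells separated by strong edges yields four shapes.
edge_map = np.zeros((64, 64))
edge_map[32, :] = 1.0   # horizontal edge line
edge_map[:, 32] = 1.0   # vertical edge line
print(np.unique(closed_shapes_from_edges(edge_map)))  # -> [1 2 3 4]

In the paper's pipeline the labelled regions would then be polygonized into vector objects; the watershed here merely illustrates the morphological closed-shape step described in the abstract.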

Documents

Bibtex (lrde.bib)

@InProceedings{	  chen.21.icdar,
  title		= {Vectorization of Historical Maps Using Deep Edge Filtering
		  and Closed Shape Extraction},
  author	= { Yizi Chen and Edwin Carlinet and Joseph Chazalon and
		  Cl\'ement Mallet and Bertrand Dum\'enieu and Julien Perret},
  booktitle	= {Proceedings of the 16th International Conference on
		  Document Analysis and Recognition (ICDAR'21)},
  year		= {2021},
  month		= sep,
  pages		= {},
  address	= {Lausanne, Switzerland},
  abstract	= { Maps have been a unique source of knowledge for
		  centuries. Such historical documents provide invaluable
		  information for analyzing the complex spatial
		  transformation of landscapes over important time frames.
		  This is particularly true for urban areas that encompass
		  multiple interleaved research domains (social sciences,
		  economy, etc.). The large amount and significant diversity
		  of map sources call for automatic image processing
		  techniques in order to extract the relevant objects under a
		  vectorial shape. The complexity of maps (text, noise,
		  digitization artifacts, etc.) has hindered the capacity of
		  proposing a versatile and efficient raster-to-vector
		  approaches for decades. We propose a learnable,
		  reproducible, and reusable solution for the automatic
		  transformation of raster maps into vector objects (building
		  blocks, streets, rivers). It is built upon the
		  complementary strength of mathematical morphology and
		  convolutional neural networks through efficient edge
		  filtering. Moreover, we modify ConnNet and combine it
		  with the deep edge filtering architecture to make use of
		  pixel connectivity information and build an end-to-end
		  system
		  without requiring any post-processing techniques. In this
		  paper, we focus on the comprehensive benchmark on various
		  architectures on multiple datasets coupled with a novel
		  vectorization step. Our experimental results on a new
		  public dataset using COCO Panoptic metric exhibit very
		  encouraging results confirmed by a qualitative analysis of
		  the success and failure cases of our approach. Code,
		  dataset, results and extra illustrations are freely
		  available at
		  \url{https://github.com/soduco/ICDAR-2021-Vectorization}. },
  note		= {To appear}
}