Publications/chen.23.phd

From LRDE

 
| lrdenewsdate = 2023-03-22

| lrdeprojects = Olena, SoDUCo
| abstract = Maps have been a unique source of knowledge for centuries. Such historical documents provide invaluable information for analyzing complex spatial transformations over important time frames. This is particularly true for urban areas that encompass multiple interleaved research domains: humanities, social sciences, etc. The large amount and significant diversity of map sources call for automatic image processing techniques in order to extract the relevant objects as vector features. The complexity of maps (text, noise, digitization artifacts, etc.) has hindered the design of versatile and efficient raster-to-vector approaches for decades. In this thesis, we propose a learnable, reproducible, and reusable solution for the automatic transformation of raster maps into vector objects (building blocks, streets, rivers), focusing on the extraction of closed shapes. Our approach is built upon the complementary strengths of convolutional neural networks, which excel at filtering edges while presenting poor topological properties for their outputs, and mathematical morphology, which offers solid guarantees regarding closed shape extraction while being very sensitive to noise. In order to improve the robustness of deep edge filters to noise, we review several topology-preserving loss functions and propose new ones that improve the topological properties of the results. We also introduce a new contrast convolution (CConv) layer to investigate how architectural changes can impact such properties. Finally, we investigate the different approaches which can be used to implement each stage, and how to combine them in the most efficient way.
Thanks to a shape extraction pipeline, we propose a new alignment procedure for historical map images, and start to leverage the redundancies contained in map sheets with similar contents to propagate annotations, improve vectorization quality, and eventually detect evolution patterns for later analysis or to automatically assess vectorization quality. To evaluate the performance of all methods mentioned above, we released a new dataset of annotated historical map images. It is the first public and open dataset targeting the task of historical map vectorization. We hope that thanks to our publications, public and open releases of datasets, code, and results, our work will benefit a wide range of historical map-related applications.
 
| id = chen.23.phd

| identifier = doi:FIXME

Revision as of 16:12, 9 May 2023

Abstract

Maps have been a unique source of knowledge for centuries. Such historical documents provide invaluable information for analyzing complex spatial transformations over important time frames. This is particularly true for urban areas that encompass multiple interleaved research domains: humanities, social sciences, etc. The large amount and significant diversity of map sources call for automatic image processing techniques in order to extract the relevant objects as vector features. The complexity of maps (text, noise, digitization artifacts, etc.) has hindered the design of versatile and efficient raster-to-vector approaches for decades. In this thesis, we propose a learnable, reproducible, and reusable solution for the automatic transformation of raster maps into vector objects (building blocks, streets, rivers), focusing on the extraction of closed shapes. Our approach is built upon the complementary strengths of convolutional neural networks, which excel at filtering edges while presenting poor topological properties for their outputs, and mathematical morphology, which offers solid guarantees regarding closed shape extraction while being very sensitive to noise. In order to improve the robustness of deep edge filters to noise, we review several topology-preserving loss functions and propose new ones that improve the topological properties of the results. We also introduce a new contrast convolution (CConv) layer to investigate how architectural changes can impact such properties. Finally, we investigate the different approaches which can be used to implement each stage, and how to combine them in the most efficient way.
Thanks to a shape extraction pipeline, we propose a new alignment procedure for historical map images, and start to leverage the redundancies contained in map sheets with similar contents to propagate annotations, improve vectorization quality, and eventually detect evolution patterns for later analysis or to automatically assess vectorization quality. To evaluate the performance of all methods mentioned above, we released a new dataset of annotated historical map images. It is the first public and open dataset targeting the task of historical map vectorization. We hope that thanks to our publications, public and open releases of datasets, code, and results, our work will benefit a wide range of historical map-related applications.


Bibtex (lrde.bib)

@PhDThesis{	  chen.23.phd,
  author	= {Yizi Chen},
  title		= {Modern vectorization and alignment of historical maps: An
		  application to Paris atlas (1789-1950)},
  school	= {Gustave Eiffel University},
  year		= {2023},
  type		= {phdthesis},
  address	= {Saint-Mand{\'e}, France},
  month		= mar,
  doi		= {FIXME},
  abstract	= {Maps have been a unique source of knowledge for centuries.
		  Such historical documents provide invaluable information
		  for analyzing complex spatial transformations over
		  important time frames. This is particularly true for urban
		  areas that encompass multiple interleaved research domains:
		  humanities, social sciences, etc. The large amount and
		  significant diversity of map sources call for automatic
		  image processing techniques in order to extract the
		  relevant objects as vector features. The complexity of maps
		  (text, noise, digitization artifacts, etc.) has hindered
		  the design of versatile and efficient
		  raster-to-vector approaches for decades. In this thesis, we
		  propose a learnable, reproducible, and reusable solution
		  for the automatic transformation of raster maps into vector
		  objects (building blocks, streets, rivers), focusing on the
		  extraction of closed shapes. Our approach is built upon the
		  complementary strengths of convolutional neural networks
		  which excel at filtering edges while presenting poor
		  topological properties for their outputs, and mathematical
		  morphology, which offers solid guarantees regarding closed
		  shape extraction while being very sensitive to noise. In
		  order to improve the robustness of deep edge filters to
		  noise, we review several topology-preserving loss
		  functions and propose new ones that improve
		  the topological properties of the results. We also
		  introduce a new contrast convolution (CConv) layer to
		  investigate how architectural changes can impact such
		  properties. Finally, we investigate the different
		  approaches which can be used to implement each stage, and
		  how to combine them in the most efficient way. Thanks to a
		  shape extraction pipeline, we propose a new alignment
		  procedure for historical map images, and start to leverage
		  the redundancies contained in map sheets with similar
		  contents to propagate annotations, improve vectorization
		  quality, and eventually detect evolution patterns for later
		  analysis or to automatically assess vectorization quality.
		  To evaluate the performance of all methods mentioned above,
		  we released a new dataset of annotated historical map
		  images. It is the first public and open dataset targeting
		  the task of historical map vectorization. We hope that
		  thanks to our publications, public and open releases of
		  datasets, code, and results, our work will benefit a wide
		  range of historical map-related applications.}
}