LRDE
From LRDE (MediaWiki 1.35.3), Fri, 24 Sep 2021 13:18:34 GMT
Seminar/2021-10-06
https://www.lrde.epita.fr/wiki/Seminar/2021-10-06
<div class="mw-parser-output"><p><a class="mw-selflink selflink">Seminar/2021-10-06</a>
</p></div><div class="mw-parser-output"><h3><span id="Mercredi_6_octobre_2021,_11h_-_12h,_Https://meet.jit.si/SeminaireLRDE"></span><span class="mw-headline" id="Mercredi_6_octobre_2021.2C_11h_-_12h.2C_Https:.2F.2Fmeet.jit.si.2FSeminaireLRDE"><a class="mw-selflink selflink">Wednesday, October 6, 2021, 11 am – 12 pm, https://meet.jit.si/SeminaireLRDE</a></span></h3>
<p><br />
</p>
<h4><span class="mw-headline" id="Scaling_Optimal_Transport_for_High_Dimensional_Learning">Scaling Optimal Transport for High Dimensional Learning</span></h4>
<p><i>Gabriel Peyré, CNRS and École Normale Supérieure</i>
<br />
<br />
</p><p>Optimal transport (OT) has recently gained a lot of interest in machine learning. It is a natural tool for comparing probability distributions in a geometrically faithful way. It finds applications in both supervised learning (using geometric loss functions) and unsupervised learning (to perform generative model fitting). OT is however plagued by the curse of dimensionality, since it may require a number of samples that grows exponentially with the dimension. In this talk, I will review entropic regularization methods, which define geometric loss functions approximating OT with a better sample complexity.
<br />
<br />
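The entropic regularization mentioned in the abstract is typically computed with Sinkhorn's algorithm. The following is a minimal, pure-Python sketch for small discrete histograms; the toy cost matrix and parameter values are illustrative, not taken from the talk:

```python
import math

def sinkhorn(a, b, C, eps=0.5, iters=1000):
    """Entropic-regularized OT between histograms a and b with cost C.
    Alternately rescales the Gibbs kernel K = exp(-C/eps) so that the
    transport plan's marginals match a and b."""
    n, m = len(a), len(b)
    K = [[math.exp(-C[i][j] / eps) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    # Transport plan P = diag(u) K diag(v); the entropic OT cost is <P, C>.
    P = [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]
    cost = sum(P[i][j] * C[i][j] for i in range(n) for j in range(m))
    return P, cost

# Two histograms on the points {0, 1, 2} with squared-distance cost.
a = [0.5, 0.3, 0.2]
b = [0.2, 0.3, 0.5]
C = [[(i - j) ** 2 for j in range(3)] for i in range(3)]
P, cost = sinkhorn(a, b, C)
# The rows of P sum to a and its columns to b (up to convergence error).
```

Shrinking eps recovers the unregularized OT cost, while a larger eps yields a smoother loss with the better sample complexity discussed in the talk.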
</p><p><small>Gabriel Peyré is a CNRS senior researcher and professor at École Normale Supérieure, Paris. He works at the interface between applied mathematics, imaging and machine learning. He has obtained two ERC grants (a Starting Grant in 2010 and a Consolidator Grant in 2017), the Blaise Pascal Prize of the French Academy of Sciences in 2017, the Magenes Prize of the Italian Mathematical Union in 2019 and the silver medal of the CNRS in 2021. He was an invited speaker at the European Congress of Mathematics in 2020. He is the deputy director of the Prairie Institute for artificial intelligence, the director of the ENS center for data science and the former director of the GdR CNRS MIA. He is the head of the Paris unit of ELLIS (European Lab for Learning &amp; Intelligent Systems). He is engaged in reproducible research and code education.</small>
<br />
<br />
<a rel="nofollow" class="external text" href="https://optimaltransport.github.io/">https://optimaltransport.github.io/</a>, <a rel="nofollow" class="external text" href="http://www.numerical-tours.com/">http://www.numerical-tours.com/</a>, <a rel="nofollow" class="external text" href="https://ellis-paris.github.io/">https://ellis-paris.github.io/</a>
</p></div>
Mon, 06 Sep 2021 16:17:53 GMT (Bot)
Towards better Heuristics for solving Bounded Model Checking Problems
https://www.lrde.epita.fr/wiki/Publications/kheireddine.21.cp
<div class="mw-parser-output"><p><a class="mw-selflink selflink">Towards better Heuristics for solving Bounded Model Checking Problems</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd>Anissa Kheireddine, <a href="/wiki/User:Renault" title="User:Renault">Étienne Renault</a>, Souheib Baarrir</dd>
<dt>Where</dt>
<dd>Proceedings of the 27th International Conference on Principles and Practice of Constraint Programming (CP)</dd>
<dt>Type</dt>
<dd>inproceedings</dd>
<dt>Projects</dt>
<dd><a href="/wiki/Spot" title="Spot">Spot</a></dd>
<dt>Date</dt>
<dd>2021-08-31</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>This paper presents a new way to improve the performance of SAT-based bounded model checking by exploiting relevant information identified through the characteristics of the original problem. This led us to design a new way of building interesting heuristics based on the structure of the underlying problem. The proposed methodology is generic and can be applied to any SAT problem. This paper compares the state-of-the-art approach with two new heuristics, structure-based and linear-programming-based, and shows promising results.
</p>
</p>
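For context on the problem being optimized: bounded model checking looks for a counterexample of length at most k, and SAT-based tools encode the k-step unrolling I(s0) ∧ T(s0,s1) ∧ … ∧ bad(si) as a propositional formula. The search itself can be sketched with an explicit-state stand-in on a toy system (all names here are illustrative, not from the paper):

```python
def bmc(init, step, bad, k):
    """Search for a state violating the property within k steps.
    A SAT-based checker explores the same bounded unrolling
    symbolically; here we enumerate reachable states explicitly."""
    frontier, seen = {init}, {init}
    for depth in range(k + 1):
        if any(bad(s) for s in frontier):
            return depth            # counterexample found at this depth
        nxt = {t for s in frontier for t in step(s)} - seen
        seen |= nxt
        frontier = nxt
    return None                     # property holds up to the bound

# Toy system: a counter modulo 8; the "bad" states are those equal to 5.
reached = bmc(init=0, step=lambda s: {(s + 1) % 8},
              bad=lambda s: s == 5, k=10)
# reached == 5: the bad state first appears after five transitions.
```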
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="http://www.lrde.epita.fr/dload/papers/kheireddine.21.cp.pdf">Paper</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@InProceedings{ kheireddine.21.cp,
author = {Anissa Kheireddine and \'Etienne Renault and Souheib
Baarrir},
title = {Towards better Heuristics for solving Bounded Model
Checking Problems},
booktitle = {Proceedings of the 27th International Conference on
Principles and Practice of Constraint Programming (CP)},
year = {2021},
month = oct,
abstract = {This paper presents a new way to improve the performance
of the SAT-based bounded model checking problem by
exploiting relevant information identified through the
characteristics of the original problem. This led us to
design a new way of building interesting heuristics based
on the structure of the underlying problem. The proposed
methodology is generic and can be applied for any SAT
problem. This paper compares the state-of-the-art approach
with two new heuristics: Structure-based and Linear
Programming heuristics and show promising results.}
}</pre></small><p>
</p></div>
Wed, 08 Sep 2021 08:56:47 GMT (Bot)
VerSe: A Vertebrae Labelling and Segmentation Benchmark for Multi-detector CT Images
https://www.lrde.epita.fr/wiki/Publications/sekuboyina.21.media
<div class="mw-parser-output"><p><a class="mw-selflink selflink">VerSe: A Vertebrae Labelling and Segmentation Benchmark for Multi-detector CT Images</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd>Anjany Sekuboyina, Malek E Husseini, Amirhossein Bayat, Maximilian Löffler, Hans Liebl, Hongwei Li, Giles Tetteh, Jan Kukačka, Christian Payer, Darko Stern, Martin Urschler, Maodong Chen, Dalong Cheng, Nikolas Lessmann, Yujin Hu, Tianfu Wang, Dong Yang, Daguang Xu, Felix Ambellan, Tamaz Amiranashvili, Moritz Ehlke, Hans Lamecker, Sebastian Lehnert, Marilia Lirio, Nicolás Pérez de Olaguer, Heiko Ramm, Manish Sahu, Alexander Tack, Stefan Zachow, Tao Jiang, Xinjun Ma, Christoph Angerman, Xin Wang, Kevin Brown, Matthias Wolf, Alexandre Kirszenberg, <a href="/wiki/User:Elodie" title="User:Elodie">Élodie Puybareau</a>, Di Chen, Yiwei Bai, Brandon H Rapazzo, Timyoas Yeah, Amber Zhang, Shangliang Xu, Feng Houa, Zhiqiang He, Chan Zeng, Zheng Xiangshang, Xu Liming, Tucker J Netherton, Raymond P Mumme, Laurence E Court, Zixun Huang, Chenhang He, Li-Wen Wang, Sai Ho Ling, <a href="/wiki/User:Dhuynh" title="User:Dhuynh">Lê Duy Huỳnh</a>, <a href="/wiki/User:Nboutry" title="User:Nboutry">Nicolas Boutry</a>, Roman Jakubicek, Jiri Chmelik, Supriti Mulay, Mohanasankar Sivaprakasam, Johannes C Paetzold, Suprosanna Shit, Ivan Ezhov, Benedikt Wiestler, Ben Glocker, Alexander Valentinitsch, Markus Rempfler, Björn H Menze, Jan S Kirschke</dd>
<dt>Journal</dt>
<dd>Medical Image Analysis</dd>
<dt>Type</dt>
<dd>article</dd>
<dt>Projects</dt>
<dd><a href="/wiki/Olena" title="Olena">Olena</a></dd>
<dt>Date</dt>
<dd>2021-07-22</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>Vertebral labelling and segmentation are two fundamental tasks in an automated spine processing pipeline. Reliable and accurate processing of spine images is expected to benefit clinical decision support systems for diagnosis, surgery planning, and population-based analysis of spine and bone health. However, designing automated algorithms for spine processing is challenging predominantly due to considerable variations in anatomy and acquisition protocols and due to a severe shortage of publicly available data. Addressing these limitations, the Large Scale Vertebrae Segmentation Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020, with a call for algorithms tackling the labelling and segmentation of vertebrae. Two datasets containing a total of 374 multi-detector CT scans from 355 patients were prepared and 4505 vertebrae have individually been annotated at voxel level by a human-machine hybrid algorithm (<a rel="nofollow" class="external free" href="https://osf.io/nqjyw/">https://osf.io/nqjyw/</a>, <a rel="nofollow" class="external free" href="https://osf.io/t98fz/">https://osf.io/t98fz/</a>). A total of 25 algorithms were benchmarked on these datasets. In this work, we present the results of this evaluation and further investigate the performance variation at the vertebra level, scan level, and different fields of view. We also evaluate the generalisability of the approaches to an implicit domain shift in data by evaluating the top-performing algorithms of one challenge iteration on data from the other iteration. The principal takeaway from VerSe: the performance of an algorithm in labelling and segmenting a spine scan hinges on its ability to correctly identify vertebrae in cases of rare anatomical variations. The VerSe content and code can be accessed at: <a rel="nofollow" class="external free" href="https://github.com/anjany/verse">https://github.com/anjany/verse</a>.
</p>
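For context on how voxel-level annotations are scored: segmentation benchmarks of this kind commonly report the Dice coefficient per structure. A minimal sketch follows, using coordinate sets as a stand-in for voxel masks (this is an illustration, not the challenge's evaluation code):

```python
def dice(pred, truth):
    """Dice coefficient between two masks given as sets of voxel
    coordinates: 2|A ∩ B| / (|A| + |B|)."""
    if not pred and not truth:
        return 1.0  # both empty: perfect agreement by convention
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# Two 3-voxel masks sharing 2 voxels: Dice = 2*2 / (3+3) = 2/3.
a = {(0, 0, 0), (0, 0, 1), (0, 1, 0)}
b = {(0, 0, 0), (0, 0, 1), (1, 1, 1)}
score = dice(a, b)
```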
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="http://www.lrde.epita.fr/dload/papers/sekuboyina.21.media.pdf">Paper</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@Article{ sekuboyina.21.media,
author = {Anjany Sekuboyina and Malek E. Husseini and Amirhossein
Bayat and Maximilian L\"offler and Hans Liebl and Hongwei
Li and Giles Tetteh and Jan Kuka\v{c}ka and Christian Payer
and Darko Stern and Martin Urschler and Maodong Chen and
Dalong Cheng and Nikolas Lessmann and Yujin Hu and Tianfu
Wang and Dong Yang and Daguang Xu and Felix Ambellan
and Tamaz Amiranashvili and Moritz Ehlke and Hans Lamecker
and Sebastian Lehnert and Marilia Lirio and Nicol\'as
{P\'erez de Olaguer} and Heiko Ramm and Manish Sahu and
Alexander Tack and Stefan Zachow and Tao Jiang and Xinjun
Ma and Christoph Angerman and Xin Wang and Kevin Brown and
Matthias Wolf and Alexandre Kirszenberg and \'Elodie
Puybareau and Di Chen and Yiwei Bai and Brandon H. Rapazzo
and Timyoas Yeah and Amber Zhang and Shangliang Xu and Feng
Houa and Zhiqiang He and Chan Zeng and Zheng Xiangshang and
Xu Liming and Tucker J. Netherton and Raymond P. Mumme and
Laurence E. Court and Zixun Huang and Chenhang He and
Li-Wen Wang and Sai Ho Ling and L\^e Duy Hu\`ynh and
Nicolas Boutry and Roman Jakubicek and Jiri Chmelik and
Supriti Mulay and Mohanasankar Sivaprakasam and Johannes C.
Paetzold and Suprosanna Shit and Ivan Ezhov and Benedikt
Wiestler and Ben Glocker and Alexander Valentinitsch and
Markus Rempfler and Bj\"orn H. Menze and Jan S. Kirschke},
title = {{VerSe}: {A} Vertebrae Labelling and Segmentation
Benchmark for Multi-detector {CT} Images},
journal = {Medical Image Analysis},
number = {102166},
year = {2021},
month = jul,
doi = {10.1016/j.media.2021.102166},
abstract = {Vertebral labelling and segmentation are two fundamental
tasks in an automated spine processing pipeline. Reliable
and accurate processing of spine images is expected to
benefit clinical decision support systems for diagnosis,
surgery planning, and population-based analysis of spine
and bone health. However, designing automated algorithms
for spine processing is challenging predominantly due to
considerable variations in anatomy and acquisition
protocols and due to a severe shortage of publicly
available data. Addressing these limitations, the Large
Scale Vertebrae Segmentation Challenge (VerSe) was
organised in conjunction with the International Conference
on Medical Image Computing and Computer Assisted
Intervention (MICCAI) in 2019 and 2020, with a call for
algorithms tackling the labelling and segmentation of
vertebrae. Two datasets containing a total of 374
multi-detector CT scans from 355 patients were prepared and
4505 vertebrae have individually been annotated at voxel
level by a human-machine hybrid algorithm
(\url{https://osf.io/nqjyw/}, \url{https://osf.io/t98fz/}).
A total of 25 algorithms were benchmarked on these
datasets. In this work, we present the results of this
evaluation and further investigate the performance
variation at the vertebra level, scan level, and different
fields of view. We also evaluate the generalisability of
the approaches to an implicit domain shift in data by
evaluating the top-performing algorithms of one challenge
iteration on data from the other iteration. The principal
takeaway from VerSe: the performance of an algorithm in
labelling and segmenting a spine scan hinges on its ability
to correctly identify vertebrae in cases of rare anatomical
variations. The VerSe content and code can be accessed at:
\url{https://github.com/anjany/verse}.}
}</pre></small><p>
</p></div>
Wed, 08 Sep 2021 08:58:09 GMT (Bot)
NewsEntry (2021/07/05)
https://www.lrde.epita.fr/wiki/NewsEntry_(2021/07/05)
<div class="mw-parser-output"><p><a class="mw-selflink selflink">NewsEntry (2021/07/05)</a>
</p></div><div class="mw-parser-output"><table class="wikitable">
<tbody><tr>
<th>Title
</th>
<td>LRDE researcher Guillaume Tochon participates in the LEMONADE project, selected by the ANR for a JCJC 2021 grant
</td></tr>
<tr>
<th>Sub-Title
</th>
<td>
<p>The LEMONADE project (LEarning and MOdeliNg spectrAl Dynamics of satellite image time sEries) has been selected by the French National Research Agency as a research project coordinated by young researchers (JCJC). The project’s principal investigator is Lucas Drumetz (<a rel="nofollow" class="external text" href="https://www.imt-atlantique.fr">IMT-Atlantique, Lab-STICC</a>), with Mauro Dalla Mura (<a rel="nofollow" class="external text" href="http://www.gipsa-lab.fr">Grenoble-INP, GIPSA-Lab</a>) and Guillaume Tochon from LRDE as partners. The project will start in October 2021. The goal of this project is to learn and model, with deep neural network approaches, the spectral dynamics of satellite image time series.
</p>
</td></tr>
<tr>
<th>Date
</th>
<td>
<p>2021/07/05
</p>
</td></tr></tbody></table>
</div>
Wed, 28 Jul 2021 12:53:00 GMT (Daniela)
Go2Pins: A Framework for the LTL Verification of Go Programs
https://www.lrde.epita.fr/wiki/Publications/kirszenberg.21.spin
<div class="mw-parser-output"><p><a class="mw-selflink selflink">Go2Pins: A Framework for the LTL Verification of Go Programs</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd>Alexandre Kirszenberg, Antoine Martin, Hugo Moreau, <a href="/wiki/User:Renault" title="User:Renault">Etienne Renault</a></dd>
<dt>Where</dt>
<dd>Proceedings of the 27th International SPIN Symposium on Model Checking of Software (SPIN'21)</dd>
<dt>Place</dt>
<dd>Aarhus, Denmark (online)</dd>
<dt>Type</dt>
<dd>inproceedings</dd>
<dt>Publisher</dt>
<dd><a href="/index.php?title=Springer,_Cham&action=edit&redlink=1" class="new" title="Springer, Cham (page does not exist)">Springer, Cham</a></dd>
<dt>Keywords</dt>
<dd>Spot</dd>
<dt>Date</dt>
<dd>2021-06-08</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>We introduce Go2Pins, a tool that takes a program written in Go and links it with two model-checkers: LTSMin [19] and Spot [7]. Go2Pins is an effort to promote the integration of both formal verification and testing inside industrial-size projects. With this goal in mind, we introduce black-box transitions, an efficient and scalable technique for handling the Go runtime. This approach, inspired by hardware verification techniques, allows easy, automatic and efficient abstractions. Go2Pins also handles basic concurrent programs through the use of a dedicated scheduler. In this paper we demonstrate the usage of Go2Pins over benchmarks inspired by industrial problems and a set of LTL formulae. Even if Go2Pins is still at the early stages of development, our results are promising and show the benefits of using black-box transitions.
</p>
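As a reminder of what the "LTL verification" in the title checks: an LTL formula constrains how atomic propositions evolve along executions. The toy evaluator below handles a small fragment over a finite trace; real checkers such as Spot work with automata over infinite words, so this is only an illustration with made-up names:

```python
def holds(formula, trace, i=0):
    """Evaluate a small LTL fragment over a finite trace (a list of sets
    of atomic propositions), using finite-trace semantics where F and G
    quantify over the remaining positions. Formulas are nested tuples."""
    op = formula[0]
    if op == "ap":                      # atomic proposition
        return formula[1] in trace[i]
    if op == "not":
        return not holds(formula[1], trace, i)
    if op == "and":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "X":                       # next
        return i + 1 < len(trace) and holds(formula[1], trace, i + 1)
    if op == "F":                       # eventually
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "G":                       # globally
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    raise ValueError(op)

trace = [{"req"}, {"req"}, {"ack"}]
# "Every req is eventually followed by ack": G not(req and not F ack).
prop = ("G", ("not", ("and", ("ap", "req"), ("not", ("F", ("ap", "ack"))))))
```

On the trace above `holds(prop, trace)` is true, since the `ack` at the last position satisfies both pending requests.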
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="http://www.lrde.epita.fr/dload/papers/kirszenberg.21.spin.pdf">Paper</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@InProceedings{ kirszenberg.21.spin,
author = {Alexandre Kirszenberg and Antoine Martin and Hugo Moreau
and Etienne Renault},
title = {{Go2Pins}: {A} Framework for the {LTL} Verification of
{Go} Programs},
booktitle = {Proceedings of the 27th International SPIN Symposium on
Model Checking of Software (SPIN'21)},
year = {2021},
series = {Lecture Notes in Computer Science},
volume = {12864},
month = may,
address = {Aarhus, Denmark (online)},
publisher = {Springer, Cham},
pages = {140--156},
abstract = {We introduce Go2Pins, a tool that takes a program written
in Go and links it with two model-checkers: LTSMin [19] and
Spot [7]. Go2Pins is an effort to promote the integration
of both formal verification and testing inside
industrial-size projects. With this goal in mind, we
introduce black-box transitions, an efficient and scalable
technique for handling the Go runtime. This approach,
inspired by hardware verification techniques, allows
easy, automatic and efficient abstractions. Go2Pins also
handles basic concurrent programs through the use of a
dedicated scheduler. In this paper we demonstrate the usage
of Go2Pins over benchmarks inspired by industrial problems
and a set of LTL formulae. Even if Go2Pins is still at the
early stages of development, our results are promising and
show the benefits of using black-box transitions.},
doi = {10.1007/978-3-030-84629-9_8}
}</pre></small><p>
</p></div>
Mon, 06 Sep 2021 13:06:46 GMT (Bot)
Vectorization of Historical Maps Using Deep Edge Filtering and Closed Shape Extraction
https://www.lrde.epita.fr/wiki/Publications/chen.21.icdar
<div class="mw-parser-output"><p><a class="mw-selflink selflink">Vectorization of Historical Maps Using Deep Edge Filtering and Closed Shape Extraction</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd>Yizi Chen, <a href="/wiki/User:Carlinet" title="User:Carlinet">Edwin Carlinet</a>, <a href="/wiki/User:Chazalon" title="User:Chazalon">Joseph Chazalon</a>, Clément Mallet, Bertrand Duménieu, Julien Perret</dd>
<dt>Where</dt>
<dd>Proceedings of the 16th International Conference on Document Analysis and Recognition (ICDAR'21)</dd>
<dt>Place</dt>
<dd>Lausanne, Switzerland</dd>
<dt>Type</dt>
<dd>inproceedings</dd>
<dt>Publisher</dt>
<dd><a href="/index.php?title=Springer,_Cham&action=edit&redlink=1" class="new" title="Springer, Cham (page does not exist)">Springer, Cham</a></dd>
<dt>Projects</dt>
<dd><a href="/wiki/Olena" title="Olena">Olena</a></dd>
<dt>Keywords</dt>
<dd>Image</dd>
<dt>Date</dt>
<dd>2021-05-17</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>Maps have been a unique source of knowledge for centuries. Such historical documents provide invaluable information for analyzing the complex spatial transformation of landscapes over important time frames. This is particularly true for urban areas that encompass multiple interleaved research domains (social sciences, economy, etc.). The large amount and significant diversity of map sources call for automatic image processing techniques in order to extract the relevant objects under a vectorial shape. The complexity of maps (text, noise, digitization artifacts, etc.) has hindered the capacity of proposing a versatile and efficient raster-to-vector approach for decades. We propose a learnable, reproducible, and reusable solution for the automatic transformation of raster maps into vector objects (building blocks, streets, rivers). It is built upon the complementary strengths of mathematical morphology and convolutional neural networks through efficient edge filtering. Moreover, we modify ConnNet and combine it with a deep edge filtering architecture to make use of pixel connectivity information and build an end-to-end system without requiring any post-processing techniques. In this paper, we focus on a comprehensive benchmark of various architectures on multiple datasets coupled with a novel vectorization step. Our experimental results on a new public dataset using the COCO Panoptic metric exhibit very encouraging results, confirmed by a qualitative analysis of the success and failure cases of our approach. Code, dataset, results and extra illustrations are freely available at <a rel="nofollow" class="external free" href="https://github.com/soduco/ICDAR-2021-Vectorization">https://github.com/soduco/ICDAR-2021-Vectorization</a>.
</p>
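To illustrate the "closed shape extraction" idea in the title: once an edge filter has produced a binary edge map, enclosed regions can be recovered by flood-filling the non-edge pixels and keeping the components that never reach the image border. This toy stand-in (not the paper's pipeline) sketches that step:

```python
from collections import deque

def closed_shapes(edges):
    """Flood-fill non-edge pixels (4-connectivity) of a binary edge map
    and return the components fully enclosed by edge pixels, i.e. those
    that never touch the image border."""
    h, w = len(edges), len(edges[0])
    seen = [[False] * w for _ in range(h)]
    shapes = []
    for y in range(h):
        for x in range(w):
            if edges[y][x] or seen[y][x]:
                continue
            comp, touches_border, queue = [], False, deque([(y, x)])
            seen[y][x] = True
            while queue:
                cy, cx = queue.popleft()
                comp.append((cy, cx))
                if cy in (0, h - 1) or cx in (0, w - 1):
                    touches_border = True
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if 0 <= ny < h and 0 <= nx < w \
                            and not edges[ny][nx] and not seen[ny][nx]:
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if not touches_border:
                shapes.append(comp)
    return shapes

# A 5x5 edge map drawing one closed square: exactly one enclosed region,
# the single interior pixel (2, 2).
E = [[0, 0, 0, 0, 0],
     [0, 1, 1, 1, 0],
     [0, 1, 0, 1, 0],
     [0, 1, 1, 1, 0],
     [0, 0, 0, 0, 0]]
shapes = closed_shapes(E)
```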
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="http://www.lrde.epita.fr/dload/papers/chen.21.icdar.pdf">Paper</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@InProceedings{ chen.21.icdar,
title = {Vectorization of Historical Maps Using Deep Edge Filtering
and Closed Shape Extraction},
author = {Yizi Chen and Edwin Carlinet and Joseph Chazalon and
Cl\'ement Mallet and Bertrand Dum\'enieu and Julien Perret},
booktitle = {Proceedings of the 16th International Conference on
Document Analysis and Recognition (ICDAR'21)},
year = {2021},
month = sep,
pages = {510--525},
series = {Lecture Notes in Computer Science},
publisher = {Springer, Cham},
volume = {12824},
address = {Lausanne, Switzerland},
abstract = {Maps have been a unique source of knowledge for centuries.
Such historical documents provide invaluable information
for analyzing the complex spatial transformation of
landscapes over important time frames. This is particularly
true for urban areas that encompass multiple interleaved
research domains (social sciences, economy, etc.). The
large amount and significant diversity of map sources call
for automatic image processing techniques in order to
extract the relevant objects under a vectorial shape. The
complexity of maps (text, noise, digitization artifacts,
etc.) has hindered the capacity of proposing a versatile
and efficient raster-to-vector approaches for decades. We
propose a learnable, reproducible, and reusable solution
for the automatic transformation of raster maps into vector
objects (building blocks, streets, rivers). It is built
upon the complementary strength of mathematical morphology
and convolutional neural networks through efficient edge
filtering. Evenmore, we modify ConnNet and combine with
deep edge filtering architecture to make use of pixel
connectivity information and built an end-to-end system
without requiring any post-processing techniques. In this
paper, we focus on the comprehensive benchmark on various
architectures on multiple datasets coupled with a novel
vectorization step. Our experimental results on a new
public dataset using COCO Panoptic metric exhibit very
encouraging results confirmed by a qualitative analysis of
the success and failure cases of our approach. Code,
dataset, results and extra illustrations are freely
available at
\url{https://github.com/soduco/ICDAR-2021-Vectorization}. },
doi = {10.1007/978-3-030-86337-1_34}
}</pre></small><p>
</p></div>
Wed, 08 Sep 2021 08:55:11 GMT (Bot)
ICDAR 2021 Competition on Historical Map Segmentation
https://www.lrde.epita.fr/wiki/Publications/chazalon.21.icdar.2
<div class="mw-parser-output"><p><a class="mw-selflink selflink">ICDAR 2021 Competition on Historical Map Segmentation</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd><a href="/wiki/User:Chazalon" title="User:Chazalon">Joseph Chazalon</a>, <a href="/wiki/User:Carlinet" title="User:Carlinet">Edwin Carlinet</a>, Yizi Chen, Julien Perret, Bertrand Duménieu, Clément Mallet, <a href="/wiki/User:Theo" title="User:Theo">Thierry Géraud</a>, Vincent Nguyen, Nam Nguyen, Josef Baloun, Ladislav Lenc, Pavel Král</dd>
<dt>Where</dt>
<dd>Proceedings of the 16th International Conference on Document Analysis and Recognition (ICDAR'21)</dd>
<dt>Place</dt>
<dd>Lausanne, Switzerland</dd>
<dt>Type</dt>
<dd>inproceedings</dd>
<dt>Publisher</dt>
<dd><a href="/index.php?title=Springer,_Cham&action=edit&redlink=1" class="new" title="Springer, Cham (page does not exist)">Springer, Cham</a></dd>
<dt>Projects</dt>
<dd><a href="/wiki/Olena" title="Olena">Olena</a></dd>
<dt>Keywords</dt>
<dd>Image</dd>
<dt>Date</dt>
<dd>2021-05-17</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>This paper presents the final results of the ICDAR 2021 Competition on Historical Map Segmentation (MapSeg), encouraging research on a series of historical atlases of Paris, France, drawn at 1/5000 scale between 1894 and 1937. The competition featured three tasks, awarded separately. Task 1 consists in detecting building blocks and was won by the L3IRIS team using a DenseNet-121 network trained in a weakly supervised fashion. This task is evaluated on 3 large images containing hundreds of shapes to detect. Task 2 consists in segmenting map content from the larger map sheet, and was won by the UWB team using a U-Net-like FCN combined with a binarization method to increase detection edge accuracy. Task 3 consists in locating intersection points of geo-referencing lines, and was also won by the UWB team, who used a dedicated pipeline combining binarization, line detection with Hough transform, candidate filtering, and template matching for intersection refinement. Tasks 2 and 3 are evaluated on 95 map sheets with complex content. Dataset, evaluation tools and results are available under permissive licensing at <a rel="nofollow" class="external free" href="https://icdar21-mapseg.github.io/">https://icdar21-mapseg.github.io/</a>.
</p>
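For Task 3, once geo-referencing lines are detected in Hough normal form, candidate intersection points follow from plain 2D geometry. A sketch of that geometric step only (not the UWB pipeline):

```python
import math

def intersect(l1, l2):
    """Intersection of two lines in Hough normal form (rho, theta),
    i.e. x*cos(theta) + y*sin(theta) = rho.
    Returns None for (near-)parallel lines."""
    (r1, t1), (r2, t2) = l1, l2
    a1, b1 = math.cos(t1), math.sin(t1)
    a2, b2 = math.cos(t2), math.sin(t2)
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None
    x = (r1 * b2 - r2 * b1) / det
    y = (a1 * r2 - a2 * r1) / det
    return x, y

# A vertical line x = 2 (theta = 0) and a horizontal line y = 3
# (theta = pi/2) meet at (2, 3).
pt = intersect((2.0, 0.0), (3.0, math.pi / 2))
```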
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="http://www.lrde.epita.fr/dload/papers/chazalon.21.icdar.2.pdf">Paper</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@InProceedings{ chazalon.21.icdar.2,
title = {{ICDAR} 2021 Competition on Historical Map Segmentation},
author = {Joseph Chazalon and Edwin Carlinet and Yizi Chen and
Julien Perret and Bertrand Dum\'enieu and Cl\'ement Mallet
and Thierry G\'eraud and Vincent Nguyen and Nam Nguyen and
Josef Baloun and Ladislav Lenc and Pavel Kr\'al},
booktitle = {Proceedings of the 16th International Conference on
Document Analysis and Recognition (ICDAR'21)},
year = {2021},
month = sep,
pages = {693--707},
series = {Lecture Notes in Computer Science},
publisher = {Springer, Cham},
volume = {12824},
address = {Lausanne, Switzerland},
abstract = {This paper presents the final results of the ICDAR 2021
Competition on Historical Map Segmentation (MapSeg),
encouraging research on a series of historical atlases of
Paris, France, drawn at 1/5000 scale between 1894 and 1937.
The competition featured three tasks, awarded separately.
Task~1 consists in detecting building blocks and was won by
the L3IRIS team using a DenseNet-121 network trained in a
weakly supervised fashion. This task is evaluated on 3
large images containing hundreds of shapes to detect.
Task~2 consists in segmenting map content from the larger
map sheet, and was won by the UWB team using a U-Net-like
FCN combined with a binarization method to increase
detection edge accuracy. Task~3 consists in locating
intersection points of geo-referencing lines, and was also
won by the UWB team who used a dedicated pipeline combining
binarization, line detection with Hough transform,
candidate filtering, and template matching for intersection
refinement. Tasks~2 and~3 are evaluated on 95 map sheets
with complex content. Dataset, evaluation tools and results
are available under permissive licensing at
\url{https://icdar21-mapseg.github.io/}.},
doi = {10.1007/978-3-030-86337-1_46}
}</pre></small><p>
</p></div>
Wed, 08 Sep 2021 08:55:01 GMT (Bot)
Revisiting the Coco Panoptic Metric to Enable Visual and Qualitative Analysis of Historical Map Instance Segmentation
https://www.lrde.epita.fr/wiki/Publications/chazalon.21.icdar.1
<div class="mw-parser-output"><p><a class="mw-selflink selflink">Revisiting the Coco Panoptic Metric to Enable Visual and Qualitative Analysis of Historical Map Instance Segmentation</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd><a href="/wiki/User:Chazalon" title="User:Chazalon">Joseph Chazalon</a>, <a href="/wiki/User:Carlinet" title="User:Carlinet">Edwin Carlinet</a></dd>
<dt>Where</dt>
<dd>Proceedings of the 16th International Conference on Document Analysis and Recognition (ICDAR'21)</dd>
<dt>Place</dt>
<dd>Lausanne, Switzerland</dd>
<dt>Type</dt>
<dd>inproceedings</dd>
<dt>Publisher</dt>
<dd><a href="/index.php?title=Springer,_Cham&action=edit&redlink=1" class="new" title="Springer, Cham (page does not exist)">Springer, Cham</a></dd>
<dt>Projects</dt>
<dd><a href="/wiki/Olena" title="Olena">Olena</a></dd>
<dt>Keywords</dt>
<dd>Image</dd>
<dt>Date</dt>
<dd>2021-05-17</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>Segmentation is an important task. It is so important that there exist tens of metrics trying to score and rank segmentation systems. It is so important that each topic has its own metric because its problem is too specific. Does it? What are the fundamental differences with the ZoneMap metric used for page segmentation, the COCO Panoptic metric used in computer vision, and the metrics used to rank hierarchical segmentations? In this paper, while assessing segmentation accuracy for historical maps, we explain, compare and demystify some of the most used segmentation evaluation protocols. In particular, we focus on an alternative view of the COCO Panoptic metric as a classification evaluation; we show its soundness and propose extensions with more “shape-oriented” metrics. Beyond a quantitative metric, this paper also aims at providing qualitative measures through precision-recall maps that enable visualizing the successes and the failures of a segmentation method.
</p>
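<p>As a rough illustration of the classification view discussed above (this sketch is ours, not code from the paper): segment pairs matched with IoU &gt; 0.5 count as true positives, unmatched predictions as false positives, unmatched ground-truth segments as false negatives, and the Panoptic Quality factors into a mean-matched-IoU term times an F1-style recognition term.</p>

```python
def panoptic_quality(matched_ious, num_pred, num_gt):
    """Panoptic Quality (PQ) seen as a classification evaluation.

    matched_ious: IoU of every prediction/ground-truth pair matched
    with IoU > 0.5 (each such pair is a true positive).
    """
    tp = len(matched_ious)
    fp = num_pred - tp                            # unmatched predictions
    fn = num_gt - tp                              # unmatched ground-truth segments
    sq = sum(matched_ious) / tp if tp else 0.0    # segmentation quality
    rq = tp / (tp + 0.5 * fp + 0.5 * fn) if (tp + fp + fn) else 1.0
    return sq * rq                                # PQ = SQ * RQ
```

<p>For instance, two matches of IoU 0.8 and 0.6 among three predictions and three ground-truth segments give SQ = 0.7 and RQ = 2/3.</p>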
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/chazalon.21.icdar.1.pdf">Paper</a></li></ul>
<p><br />
</p>
<ul><li><a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/chazalon.21.icdar.1.poster.pdf">Poster</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@InProceedings{ chazalon.21.icdar.1,
title = {Revisiting the {C}oco Panoptic Metric to Enable Visual and
Qualitative Analysis of Historical Map Instance
Segmentation},
author = {Joseph Chazalon and Edwin Carlinet},
booktitle = {Proceedings of the 16th International Conference on
Document Analysis and Recognition (ICDAR'21)},
year = {2021},
month = sep,
series = {Lecture Notes in Computer Science},
publisher = {Springer, Cham},
volume = {12824},
pages = {367--382},
address = {Lausanne, Switzerland},
abstract = {Segmentation is an important task. It is so important that
there exist tens of metrics trying to score and rank
segmentation systems. It is so important that each topic
has its own metric because their problem is too specific.
Does it? What are the fundamental differences with the
ZoneMap metric used for page segmentation, the COCO
Panoptic metric used in computer vision and metrics used to
rank hierarchical segmentations? In this paper, while
assessing segmentation accuracy for historical maps, we
explain, compare and demystify some the most used
segmentation evaluation protocols. In particular, we focus
on an alternative view of the COCO Panoptic metric as a
classification evaluation; we show its soundness and
propose extensions with more ``shape-oriented'' metrics.
Beyond a quantitative metric, this paper aims also at
providing qualitative measures through
\emph{precision-recall maps} that enable visualizing the
success and the failures of a segmentation method.},
doi = {10.1007/978-3-030-86337-1_25}
}</pre></small><small></small><p><small></small>
</p></div>Wed, 08 Sep 2021 08:54:57 GMTBotSeminar/2021-05-12
https://www.lrde.epita.fr/wiki/Seminar/2021-05-12
https://www.lrde.epita.fr/wiki/Seminar/2021-05-12<div class="mw-parser-output"><p><a class="mw-selflink selflink">Seminar/2021-05-12</a>
</p></div><div class="mw-parser-output"><h3><span id="Mercredi_12_mai_2021,_11h_-_12h,_Https://meet.jit.si/SeminaireLRDE"></span><span class="mw-headline" id="Mercredi_12_mai_2021.2C_11h_-_12h.2C_Https:.2F.2Fmeet.jit.si.2FSeminaireLRDE"><a class="mw-selflink selflink"> Mercredi 12 mai 2021, 11h - 12h, Https://meet.jit.si/SeminaireLRDE</a></span></h3>
<p><br />
</p>
<h4><span class="mw-headline" id="An_Introduction_to_Topological_Data_Analysis_with_the_Topology_ToolKit">An Introduction to Topological Data Analysis with the Topology ToolKit</span></h4>
<p><i>Julien Tierny, Sorbonne Université</i>
<br />
<br />
</p><p>Topological Data Analysis (TDA) is a recent area of computer science that focuses on discovering intrinsic structures hidden in data. Based on solid mathematical tools such as Morse theory and Persistent Homology, TDA enables the robust extraction of the main features of a data set into stable, concise, and multi-scale descriptors that facilitate data analysis and visualization. In this talk, I will give an intuitive overview of the main tools used in TDA (persistence diagrams, Reeb graphs, Morse-Smale complexes, etc.) with applications to concrete use cases in computational fluid dynamics, medical imaging, quantum chemistry, and climate modeling. This talk will be illustrated with results produced with the "Topology ToolKit" (TTK), an open-source library (BSD license) that we develop with collaborators to showcase our research. Tutorials for reproducing these experiments are available on the TTK website.
<br />
<br />
</p><p><small>Julien Tierny received his Ph.D. degree in Computer Science from the University of Lille in 2008 and
the Habilitation degree (HDR) from Sorbonne University in 2016. Currently a CNRS permanent
research scientist affiliated with Sorbonne University, his research expertise lies in topological methods
for data analysis and visualization. An author and award winner in this area, he regularly
serves as an international program committee member for the top venues in data visualization (IEEE VIS,
EuroVis, etc.) and is an associate editor for IEEE Transactions on Visualization and Computer Graphics.
Julien Tierny is also founder and lead developer of the Topology ToolKit (TTK), an open source library for
topological data analysis.</small>
<br />
<br />
<a rel="nofollow" class="external text" href="https://topology-tool-kit.github.io/">https://topology-tool-kit.github.io/</a>
</p></div>Wed, 28 Apr 2021 11:35:37 GMTBotLearning Sentinel-2 Spectral Dynamics for Long-Run Predictions Using Residual Neural Networks
https://www.lrde.epita.fr/wiki/Publications/estopinan.21.eusipco
https://www.lrde.epita.fr/wiki/Publications/estopinan.21.eusipco<div class="mw-parser-output"><p><a class="mw-selflink selflink">Learning Sentinel-2 Spectral Dynamics for Long-Run Predictions Using Residual Neural Networks</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd>Joaquim Estopinan, <a href="/wiki/User:Gtochon" title="User:Gtochon">Guillaume Tochon</a>, Lucas Drumetz</dd>
<dt>Where</dt>
<dd>Proceedings of the 29th European Signal Processing Conference (EUSIPCO)</dd>
<dt>Place</dt>
<dd>Dublin, Ireland</dd>
<dt>Type</dt>
<dd>inproceedings</dd>
<dt>Projects</dt>
<dd><a href="/wiki/Olena" title="Olena">Olena</a></dd>
<dt>Keywords</dt>
<dd>Image</dd>
<dt>Date</dt>
<dd>2021-05-04</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>Making the most of multispectral image time-series is a promising but still relatively under-explored research direction because of the complexity of jointly analyzing spatial, spectral and temporal information. Capturing and characterizing temporal dynamics is one of the important and challenging issues. Our new method paves the way to capture real data dynamics and should eventually benefit applications like unmixing or classification. Dealing with time-series dynamics classically requires the knowledge of a dynamical model and an observation model. The former may be incorrect or computationally hard to handle, thus motivating data-driven strategies aiming at learning dynamics directly from data. In this paper, we adapt neural network architectures to learn periodic dynamics of both simulated and real multispectral time-series. We emphasize the necessity of choosing the right state variable to capture periodic dynamics and show that our models can reproduce the average seasonal dynamics of vegetation using only one year of training data.
</p><p><br />
</p>
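<p>The residual formulation the paper builds on has the network predict the increment between consecutive acquisitions rather than the next state itself, so long-run predictions are produced by iterating the one-step update. A minimal sketch of this rollout mechanism (helper names are ours; <code>f</code> stands for any trained increment model):</p>

```python
import numpy as np

def residual_step(f):
    """Turn a learned increment model f into the update x_{t+1} = x_t + f(x_t)."""
    return lambda x: x + f(x)

def rollout(step, x0, n_steps):
    """Iterate the one-step model to obtain a long-run trajectory."""
    states = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        states.append(step(states[-1]))
    return np.stack(states)
```

<p>With a toy increment <code>f(x) = 0.1 * x</code>, two rollout steps from 1.0 produce the trajectory 1.0, 1.1, 1.21.</p>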
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@InProceedings{ estopinan.21.eusipco,
author = {Joaquim Estopinan and Guillaume Tochon and Lucas Drumetz},
title = {Learning {Sentinel-2} Spectral Dynamics for Long-Run
Predictions Using Residual Neural Networks},
booktitle = {Proceedings of the 29th European Signal Processing
Conference (EUSIPCO)},
year = 2021,
address = {Dublin, Ireland},
month = aug,
abstract = {Making the most of multispectral image time-series is a
promising but still relatively under-explored research
direction because of the complexity of jointly analyzing
spatial, spectral and temporal information. Capturing and
characterizing temporal dynamics is one of the important
and challenging issues. Our new method paves the way to
capture real data dynamics and should eventually benefit
applications like unmixing or classification. Dealing with
time-series dynamics classically requires the knowledge of
a dynamical model and an observation model. The former may
be incorrect or computationally hard to handle, thus
motivating data-driven strategies aiming at learning
dynamics directly from data. In this paper, we adapt neural
network architectures to learn periodic dynamics of both
simulated and real multispectral time-series. We emphasize
the necessity of choosing the right state variable to
capture periodic dynamics and show that our models can
reproduce the average seasonal dynamics of vegetation using
only one year of training data.}
}</pre></small><small></small><p><small></small>
</p></div>Thu, 27 May 2021 15:00:43 GMTBotA Corpus Processing and Analysis Pipeline for Quickref
https://www.lrde.epita.fr/wiki/Publications/hacquard.21.els
https://www.lrde.epita.fr/wiki/Publications/hacquard.21.els<div class="mw-parser-output"><p><a class="mw-selflink selflink">A Corpus Processing and Analysis Pipeline for Quickref</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd>Antoine Hacquard, <a href="/wiki/User:Didier" title="User:Didier">Didier Verna</a></dd>
<dt>Where</dt>
<dd>Proceedings of the 14th European Lisp Symposium (ELS)</dd>
<dt>Place</dt>
<dd>Online</dd>
<dt>Type</dt>
<dd>inproceedings</dd>
<dt>Date</dt>
<dd>2021-05-01</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>Quicklisp is a library manager working with your existing Common Lisp implementation to download and install around 2000 libraries, from a central archive. Quickref, an application itself written in Common Lisp, generates, automatically and by introspection, a technical documentation for every library in Quicklisp, and produces a website for this documentation. In this paper, we present a corpus processing and analysis pipeline for Quickref. This pipeline consists of a set of natural language processing blocks allowing us to analyze Quicklisp libraries, based on natural language contents sources such as README files, docstrings, or symbol names. The ultimate purpose of this pipeline is the generation of a keyword index for Quickref, although other applications such as word clouds or topic analysis are also envisioned.
</p><p><br />
</p>
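<p>To make the keyword-index goal concrete, here is a toy sketch (entirely ours, not the paper's pipeline) that ranks the terms of each README-like document by TF-IDF over the corpus:</p>

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def tfidf_keywords(docs, top_k=3):
    """Return, for each document, its top_k terms ranked by TF-IDF."""
    tokenized = [tokenize(d) for d in docs]
    n = len(docs)
    # document frequency: in how many documents each term appears
    df = Counter(t for toks in tokenized for t in set(toks))
    keywords = []
    for toks in tokenized:
        tf = Counter(toks)
        scores = {t: tf[t] * math.log(n / df[t]) for t in tf}
        ranked = sorted(scores, key=lambda t: (-scores[t], t))
        keywords.append(ranked[:top_k])
    return keywords
```

<p>Terms occurring in every document score zero, while terms specific to one library bubble to the top of its index entry.</p>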
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@InProceedings{ hacquard.21.els,
author = {Antoine Hacquard and Didier Verna},
title = {A Corpus Processing and Analysis Pipeline for {Q}uickref},
booktitle = {Proceedings of the 14th European Lisp Symposium (ELS)},
year = 2021,
pages = {27--35},
month = may,
address = {Online},
isbn = 9782955747452,
doi = {10.5281/zenodo.4714443},
abstract = {Quicklisp is a library manager working with your existing
Common Lisp implementation to download and install around
2000 libraries, from a central archive. Quickref, an
application itself written in Common Lisp, generates,
automatically and by introspection, a technical
documentation for every library in Quicklisp, and produces
a website for this documentation. In this paper, we present
a corpus processing and analysis pipeline for Quickref.
This pipeline consists of a set of natural language
processing blocks allowing us to analyze Quicklisp
libraries, based on natural language contents sources such
as README files, docstrings, or symbol names. The ultimate
purpose of this pipeline is the generation of a keyword
index for Quickref, although other applications such as
word clouds or topic analysis are also envisioned.}
}</pre></small><small></small><p><small></small>
</p></div>Wed, 08 Sep 2021 08:56:33 GMTBotA Portable, Simple, Embeddable Type System
https://www.lrde.epita.fr/wiki/Publications/newton.21.els
https://www.lrde.epita.fr/wiki/Publications/newton.21.els<div class="mw-parser-output"><p><a class="mw-selflink selflink">A Portable, Simple, Embeddable Type System</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd><a href="/wiki/User:Jnewton" title="User:Jnewton">Jim Newton</a>, <a href="/wiki/User:Adrien" title="User:Adrien">Adrien Pommellet</a></dd>
<dt>Where</dt>
<dd>Proceedings of the 14th European Lisp Symposium (ELS)</dd>
<dt>Place</dt>
<dd>Online</dd>
<dt>Type</dt>
<dd>inproceedings</dd>
<dt>Projects</dt>
<dd><a href="/wiki/Spot" title="Spot">Spot</a></dd>
<dt>Keywords</dt>
<dd>infinite alphabets, type systems, Common Lisp, Clojure, Scala</dd>
<dt>Date</dt>
<dd>2021-04-26</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>We present a simple type system inspired by that of Common Lisp. The type system is intended to be embedded into a host language and accepts certain fundamental types from that language as axiomatically given. The type calculus provided in the type system is capable of expressing union, intersection, and complement types, as well as membership, subtype, disjoint, and habitation (non-emptiness) checks. We present a theoretical foundation and two sample implementations, one in Clojure and one in Scala.
</p>
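<p>A tiny sketch of the kind of type calculus the abstract describes, embedded here in Python rather than Clojure or Scala (all class names are ours); it covers membership checking for union, intersection, and complement types over axiomatically given host types:</p>

```python
from dataclasses import dataclass

class Type:
    def member(self, v):
        """Membership check: is the value v of this type?"""
        raise NotImplementedError

@dataclass(frozen=True)
class Atom(Type):
    """A host-language type accepted axiomatically (e.g. int, str)."""
    host: type
    def member(self, v):
        return isinstance(v, self.host)

@dataclass(frozen=True)
class Union(Type):
    a: Type
    b: Type
    def member(self, v):
        return self.a.member(v) or self.b.member(v)

@dataclass(frozen=True)
class Intersection(Type):
    a: Type
    b: Type
    def member(self, v):
        return self.a.member(v) and self.b.member(v)

@dataclass(frozen=True)
class Complement(Type):
    of: Type
    def member(self, v):
        return not self.of.member(v)
```

<p>Subtype, disjointness, and habitation decisions over such a calculus require more machinery than value membership; that is what the paper develops.</p>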
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="http://www.lrde.epita.fr/dload/papers/newton.21.els.pdf">Paper</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@InProceedings{ newton.21.els,
author = {Jim Newton and Adrien Pommellet},
title = {A Portable, Simple, Embeddable Type System},
booktitle = {Proceedings of the 14th European Lisp Symposium (ELS)},
year = 2021,
lrdestatus = {accepted},
address = {Online},
month = may,
abstract = { We present a simple type system inspired by that of
Common Lisp. The type system is intended to be embedded
into a host language and accepts certain fundamental types
from that language as axiomatically given. The type
calculus provided in the type system is capable of
expressing union, intersection, and complement types, as
well as membership, subtype, disjoint, and habitation
(non-emptiness) checks. We present a theoretical foundation
and two sample implementations, one in Clojure and one in
Scala.},
doi = {10.5281/zenodo.4709777}
}</pre></small><small></small><p><small></small>
</p></div>Wed, 08 Sep 2021 08:57:40 GMTBotSeminar/2021-03-31
https://www.lrde.epita.fr/wiki/Seminar/2021-03-31
https://www.lrde.epita.fr/wiki/Seminar/2021-03-31<div class="mw-parser-output"><p><a class="mw-selflink selflink">Seminar/2021-03-31</a>
</p></div><div class="mw-parser-output"><h3><span id="Mercredi_31_mars_2021,_11h_-_12h,_Https://meet.jit.si/SeminaireLRDE_\&_Amphi_4"></span><span class="mw-headline" id="Mercredi_31_mars_2021.2C_11h_-_12h.2C_Https:.2F.2Fmeet.jit.si.2FSeminaireLRDE_.5C.26_Amphi_4"><a class="mw-selflink selflink"> Mercredi 31 mars 2021, 11h - 12h, Https://meet.jit.si/SeminaireLRDE \& Amphi 4</a></span></h3>
<p><br />
</p>
<h4><span class="mw-headline" id="Contributions_to_Boolean_satisfiability_solving_and_its_application_to_the_analysis_of_discrete_systems">Contributions to Boolean satisfiability solving and its application to the analysis of discrete systems</span></h4>
<p><i>Souheib Baarir, Université Paris VI</i>
<br />
<br />
</p><p>Despite its NP-completeness, propositional Boolean satisfiability (SAT) covers a broad spectrum of applications. Nowadays, it is an active research area finding its applications in many contexts like planning decision, cryptology, computational biology, hardware and software analysis. Hence, the development of approaches allowing to handle increasingly challenging SAT problems has become a major focus: during the past eight years, SAT solving has been the main subject of my research work. This talk presents some of the main results we obtained in the field.
<br />
<br />
</p><p><small>Souheib Baarir received his Ph.D. in Computer Science from Université Paris VI in 2007 and obtained his HDR (habilitation) from Sorbonne Université in 2019. His research lies within formal methods for the verification of concurrent systems. In particular, he is interested in methods that optimize verification by exploiting parallelism and/or the symmetry properties arising in such systems.</small>
<br />
<br />
<a rel="nofollow" class="external text" href="https://www.lip6.fr/actualite/personnes-fiche.php?ident=P617">https://www.lip6.fr/actualite/personnes-fiche.php?ident=P617</a>
</p></div>Mon, 15 Mar 2021 11:28:02 GMTBotNewsEntry (2021/03/05)
https://www.lrde.epita.fr/wiki/NewsEntry_(2021/03/05)
https://www.lrde.epita.fr/wiki/NewsEntry_(2021/03/05)<div class="mw-parser-output"><p><a class="mw-selflink selflink">NewsEntry (2021/03/05)</a>
</p></div><div class="mw-parser-output"><table class="wikitable">
<tbody><tr>
<th>Title
</th>
<td>Seminar on « Mathematical morphology, AI and astrometry » held at EPITA
</td></tr>
<tr>
<th>Sub-Title
</th>
<td>
<p>E. Puybareau and G. Tochon from LRDE invite the <a rel="nofollow" class="external text" href="https://www.imcce.fr/recherche/equipes/pegase/">Pegase team from IMCCE</a> to present the respective themes of the two communities (image processing and AI for the former, astronomy for the latter) and to discuss their possible interactions.
</p>
</td></tr>
<tr>
<th>Date
</th>
<td>
<p>2021/03/05
</p>
</td></tr></tbody></table>
</div>Mon, 08 Mar 2021 11:34:02 GMTDanielaStability of the Tree of Shapes to Additive Noise
https://www.lrde.epita.fr/wiki/Publications/boutry.21.dgmm.3
https://www.lrde.epita.fr/wiki/Publications/boutry.21.dgmm.3<div class="mw-parser-output"><p><a class="mw-selflink selflink">Stability of the Tree of Shapes to Additive Noise</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd><a href="/wiki/User:Nboutry" title="User:Nboutry">Nicolas Boutry</a>, <a href="/wiki/User:Gtochon" title="User:Gtochon">Guillaume Tochon</a></dd>
<dt>Where</dt>
<dd>Proceedings of the IAPR International Conference on Discrete Geometry and Mathematical Morphology (DGMM)</dd>
<dt>Place</dt>
<dd>Uppsala, Sweden</dd>
<dt>Type</dt>
<dd>inproceedings</dd>
<dt>Publisher</dt>
<dd><a href="/index.php?title=Springer&action=edit&redlink=1" class="new" title="Springer (page does not exist)">Springer</a></dd>
<dt>Projects</dt>
<dd><a href="/wiki/Olena" title="Olena">Olena</a></dd>
<dt>Date</dt>
<dd>2021-03-02</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>The tree of shapes (ToS) is a famous self-dual hierarchical structure in mathematical morphology, which represents the inclusion relationship of the shapes (i.e. the interior of the level lines with holes filled) in a grayscale image. The ToS has already found numerous applications in image processing tasks, such as grain filtering, contour extraction, image simplification, and so on. Its structure consistency is bound to the cleanliness of the level lines, which are themselves deeply affected by the presence of noise within the image. However, according to our knowledge, no one has measured before how resistant to (additive) noise this hierarchical structure is. In this paper, we propose and compare several measures to evaluate the stability of the ToS structure to noise.
</p><p><br />
</p>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@InProceedings{ boutry.21.dgmm.3,
author = {Nicolas Boutry and Guillaume Tochon},
title = {Stability of the Tree of Shapes to Additive Noise},
booktitle = {Proceedings of the IAPR International Conference on
Discrete Geometry and Mathematical Morphology (DGMM)},
year = 2021,
month = may,
address = {Uppsala, Sweden},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
volume = {12708},
pages = {365--377},
abstract = {The tree of shapes (ToS) is a famous self-dual
hierarchical structure in mathematical morphology, which
represents the inclusion relationship of the shapes
(\textit{i.e.} the interior of the level lines with holes
filled) in a grayscale image. The ToS has already found
numerous applications in image processing tasks, such as
grain filtering, contour extraction, image simplification,
and so on. Its structure consistency is bound to the
cleanliness of the level lines, which are themselves deeply
affected by the presence of noise within the image.
However, according to our knowledge, no one has measured
before how resistant to (additive) noise this hierarchical
structure is. In this paper, we propose and compare several
measures to evaluate the stability of the ToS structure to
noise.},
doi = {10.1007/978-3-030-76657-3_26}
}</pre></small><small></small><p><small></small>
</p></div>Wed, 08 Sep 2021 08:54:26 GMTBotA New Matching Algorithm between Trees of Shapes and its Application to Brain Tumor Segmentation
https://www.lrde.epita.fr/wiki/Publications/boutry.21.dgmm.2
https://www.lrde.epita.fr/wiki/Publications/boutry.21.dgmm.2<div class="mw-parser-output"><p><a class="mw-selflink selflink">A New Matching Algorithm between Trees of Shapes and its Application to Brain Tumor Segmentation</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd><a href="/wiki/User:Nboutry" title="User:Nboutry">Nicolas Boutry</a>, <a href="/wiki/User:Theo" title="User:Theo">Thierry Géraud</a></dd>
<dt>Where</dt>
<dd>Proceedings of the IAPR International Conference on Discrete Geometry and Mathematical Morphology (DGMM)</dd>
<dt>Place</dt>
<dd>Uppsala, Sweden</dd>
<dt>Type</dt>
<dd>inproceedings</dd>
<dt>Publisher</dt>
<dd><a href="/index.php?title=Springer&action=edit&redlink=1" class="new" title="Springer (page does not exist)">Springer</a></dd>
<dt>Projects</dt>
<dd><a href="/wiki/Olena" title="Olena">Olena</a></dd>
<dt>Date</dt>
<dd>2021-03-02</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>Many approaches exist to compute the distance between two trees in pattern recognition. These trees can be structures with or without values on their nodes or edges. However, none of these distances take into account the shapes possibly associated with the nodes of the tree. For this reason, we propose in this paper a new distance between two trees of shapes based on the Hausdorff distance. This distance allows us to make inexact tree matching and to compute what we call residual trees, representing where two trees differ. We will also see that, thanks to these residual trees, we can obtain good results in brain tumor segmentation. This method does not only provide a segmentation but also the tree of shapes corresponding to the segmentation and its depth map.
</p>
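<p>The Hausdorff construction underlying the proposed tree distance can be sketched on finite sets (this generic helper is ours; the paper lifts the idea to trees of shapes):</p>

```python
def hausdorff(xs, ys, dist):
    """Symmetric Hausdorff distance between two finite, non-empty sets."""
    d_xy = max(min(dist(x, y) for y in ys) for x in xs)  # xs -> ys
    d_yx = max(min(dist(x, y) for x in xs) for y in ys)  # ys -> xs
    return max(d_xy, d_yx)
```

<p>With xs = {0, 3}, ys = {1} and absolute difference as dist, every point of ys lies within 1 of xs, but the point 3 of xs is at distance 2 from ys, so the Hausdorff distance is 2.</p>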
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="http://www.lrde.epita.fr/dload/papers/boutry.21.dgmm.2.pdf">Paper</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@InProceedings{ boutry.21.dgmm.2,
author = {Nicolas Boutry and Thierry G\'eraud},
title = {A New Matching Algorithm between Trees of Shapes and its
Application to Brain Tumor Segmentation},
booktitle = {Proceedings of the IAPR International Conference on
Discrete Geometry and Mathematical Morphology (DGMM)},
year = 2021,
month = may,
pages = {67--78},
address = {Uppsala, Sweden},
series = {Lecture Notes in Computer Science},
volume = {12708},
publisher = {Springer},
abstract = {Many approaches exist to compute the distance between two
trees in pattern recognition. These trees can be structures
with or without values on their nodes or edges. However,
none of these distances take into account the shapes
possibly associated to the nodes of the tree. For this
reason, we propose in this paper a new distance between two
trees of shapes based on the Hausdorff distance. This
distance allows us to make inexact tree matching and to
compute what we call residual trees, representing where two
trees differ. We will also see that thanks to these
residual trees, we can obtain good results in matter of
brain tumor segmentation. This segmentation does not
provide only a segmentation but also the tree of shapes
corresponding to the segmentation and its depth map.},
doi = {10.1007/978-3-030-76657-3_4}
}</pre></small><small></small><p><small></small>
</p></div>Wed, 08 Sep 2021 08:54:22 GMTBotAn Equivalence Relation between Morphological Dynamics and Persistent Homology in n-D
https://www.lrde.epita.fr/wiki/Publications/boutry.21.dgmm.1
https://www.lrde.epita.fr/wiki/Publications/boutry.21.dgmm.1<div class="mw-parser-output"><p><a class="mw-selflink selflink">An Equivalence Relation between Morphological Dynamics and Persistent Homology in n-D</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd><a href="/wiki/User:Nboutry" title="User:Nboutry">Nicolas Boutry</a>, <a href="/wiki/User:Theo" title="User:Theo">Thierry Géraud</a>, Laurent Najman</dd>
<dt>Where</dt>
<dd>Proceedings of the IAPR International Conference on Discrete Geometry and Mathematical Morphology (DGMM)</dd>
<dt>Place</dt>
<dd>Uppsala, Sweden</dd>
<dt>Type</dt>
<dd>inproceedings</dd>
<dt>Publisher</dt>
<dd><a href="/index.php?title=Springer&action=edit&redlink=1" class="new" title="Springer (page does not exist)">Springer</a></dd>
<dt>Projects</dt>
<dd><a href="/wiki/Olena" title="Olena">Olena</a></dd>
<dt>Date</dt>
<dd>2021-03-02</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>In Mathematical Morphology (MM), dynamics are used to compute markers to proceed for example to watershed-based image decomposition. At the same time, persistence is a concept coming from Persistent Homology (PH) and Morse Theory (MT) and represents the stability of the extrema of a Morse function. Since these concepts are similar on Morse functions, we studied their relationship and we found, and proved, that they are equal on 1D Morse functions. Here, we propose to extend this proof to <i>n</i>-D, <i>n</i> ≥ 2, showing that this equality can be applied to <i>n</i>-D images and not only to 1D functions. This is a step further to show how much MM and MT are related.
</p>
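<p>The equality can be checked directly in 1-D with the sketch below (ours, not the paper's code): 0-dimensional persistence of the minima, computed with the usual elder rule, coincides with their morphological dynamics, i.e. the height one must climb to reach a strictly lower minimum.</p>

```python
def minima_persistence(values):
    """Persistence (= dynamics) of the minima of a 1-D sampled function.

    Samples are flooded in increasing order; when two components merge,
    the one with the shallower minimum dies (elder rule). Non-minimum
    samples get persistence 0; the global minimum never dies.
    """
    n = len(values)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    comp_min = {}                 # component root -> index of its minimum
    pers = {}
    seen = [False] * n
    for i in sorted(range(n), key=lambda k: values[k]):
        seen[i] = True
        comp_min[i] = i
        for j in (i - 1, i + 1):
            if 0 <= j < n and seen[j]:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                mi, mj = comp_min.pop(ri), comp_min.pop(rj)
                keep, die = (mi, mj) if values[mi] <= values[mj] else (mj, mi)
                pers[die] = values[i] - values[die]   # dies at the merge level
                parent[ri] = rj
                comp_min[find(rj)] = keep
    # the surviving component holds the global minimum
    pers[next(iter(comp_min.values()))] = float("inf")
    return pers
```

<p>For the sampled function 1, 3, 0, 2, 4, the minimum of value 1 must climb to 3 before reaching the lower minimum 0, so both its dynamics and its persistence equal 2.</p>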
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="http://www.lrde.epita.fr/dload/papers/boutry.21.dgmm.1.pdf">Paper</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@InProceedings{ boutry.21.dgmm.1,
author = {Nicolas Boutry and Thierry G\'eraud and Laurent Najman},
title = {An Equivalence Relation between Morphological Dynamics and
Persistent Homology in {$n$-D}},
booktitle = {Proceedings of the IAPR International Conference on
Discrete Geometry and Mathematical Morphology (DGMM)},
year = 2021,
month = may,
address = {Uppsala, Sweden},
series = {Lecture Notes in Computer Science},
volume = {12708},
publisher = {Springer},
pages = {525--537},
abstract = {In Mathematical Morphology (MM), dynamics are used to
compute markers to proceed for example to watershed-based
image decomposition. At the same time, persistence is a
concept coming from Persistent Homology (PH) and Morse
Theory (MT) and represents the stability of the extrema of
a Morse function. Since these concepts are similar on Morse
functions, we studied their relationship and we found, and
proved, that they are equal on 1D Morse functions. Here, we
propose to extend this proof to $n$-D, $n \geq 2$, showing
that this equality can be applied to $n$-D images and not
only to 1D functions. This is a step further to show how
much MM and MT are related.},
doi = {10.1007/978-3-030-76657-3_38}
}</pre></small><small></small><p><small></small>
</p></div>Wed, 08 Sep 2021 08:54:17 GMTBotDeep Learning for Detection and Segmentation of Artefact and Disease Instances in Gastrointestinal Endoscopy
https://www.lrde.epita.fr/wiki/Publications/boutry.21.media
https://www.lrde.epita.fr/wiki/Publications/boutry.21.media<div class="mw-parser-output"><p><a class="mw-selflink selflink">Deep Learning for Detection and Segmentation of Artefact and Disease Instances in Gastrointestinal Endoscopy</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd>Sharib Ali, Mariia Dmitrieva, Noha Ghatwary, Sophia Bano, Gorkem Polat, Alptekin Temizel, Adrian Krenzer, Amar Hekalo, Yun Bo Guo, Bogdan Matuszewski, Mourad Gridach, Irina Voiculescu, Vishnusai Yoganand, Arnav Chavan, Aryan Raj, Nhan T Nguyen, Dat Q Tran, Le Duy Huynh, <a href="/wiki/User:Nboutry" title="User:Nboutry">Nicolas Boutry</a>, Shahadate Rezvy, Haijian Chen, Yoon Ho Choi, Anand Subramanian, Velmurugan Balasubramanian, Xiaohong W Gao, Hongyu Hu, Yusheng Liao, Danail Stoyanov, Christian Daul, Stefano Realdon, Renato Cannizzaro, Dominique Lamarque, Terry Tran-Nguyen, Adam Bailey, Barbara Braden, James East, Jens Rittscher</dd>
<dt>Journal</dt>
<dd>Medical Image Analysis</dd>
<dt>Type</dt>
<dd>article</dd>
<dt>Projects</dt>
<dd><a href="/wiki/Olena" title="Olena">Olena</a></dd>
<dt>Date</dt>
<dd>2021-02-24</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>The Endoscopy Computer Vision Challenge (EndoCV) is a crowd-sourcing initiative to address eminent problems in developing reliable computer aided detection and diagnosis endoscopy systems and suggest a pathway for clinical translation of technologies. Whilst endoscopy is a widely used diagnostic and treatment tool for hollow-organs, there are several core challenges often faced by endoscopists, mainly: 1) presence of multi-class artefacts that hinder their visual interpretation, and 2) difficulty in identifying subtle precancerous precursors and cancer abnormalities. Artefacts often affect the robustness of deep learning methods applied to the gastrointestinal tract organs as they can be confused with tissue of interest. EndoCV2020 challenges are designed to address research questions in these remits. In this paper, we present a summary of methods developed by the top 17 teams and provide an objective comparison of state-of-the-art methods and methods designed by the participants for two sub-challenges: i) artefact detection and segmentation (EAD2020), and ii) disease detection and segmentation (EDD2020). Multi-center, multi-organ, multi-class, and multi-modal clinical endoscopy datasets were compiled for both EAD2020 and EDD2020 sub-challenges. The out-of-sample generalization ability of detection algorithms was also evaluated. Whilst most teams focused on accuracy improvements, only a few methods hold credibility for clinical usability. The best performing teams provided solutions to tackle class imbalance, and variabilities in size, origin, modality and occurrences by exploring data augmentation, data fusion, and optimal class thresholding techniques.
</p>
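<p>The abstract's closing sentence mentions optimal class thresholding among the winning strategies. As a hedged illustration only (this is not code from the paper, and <code>best_threshold</code> is a hypothetical helper), one common form of that idea is to sweep a grid of confidence cut-offs on held-out validation data and keep the one that maximizes F1:</p>

```python
import numpy as np

def best_threshold(scores, labels, grid=np.linspace(0.05, 0.95, 19)):
    """Pick the score cut-off that maximizes F1 on validation data.
    `scores` are per-pixel (or per-detection) confidences in [0, 1],
    `labels` the matching boolean ground truth."""
    def f1(t):
        pred = scores >= t
        tp = np.logical_and(pred, labels).sum()
        denom = pred.sum() + labels.sum()
        return 2 * tp / denom if denom else 1.0
    return max(grid, key=f1)

scores = np.array([0.1, 0.4, 0.35, 0.8, 0.9])
labels = np.array([False, False, True, True, True])
# Any cut-off that keeps the three true positives while dropping the
# 0.1 false positive maximizes F1 here.
assert 0.1 < best_threshold(scores, labels) <= 0.35
```

<p>In a multi-class setting the same sweep is typically run once per class, since class imbalance pushes the optimal cut-off away from the default 0.5.</p>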
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="http://www.lrde.epita.fr/dload/papers/boutry.21.media.pdf">Paper</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@Article{ boutry.21.media,
author = {Sharib Ali and Mariia Dmitrieva and Noha Ghatwary and
Sophia Bano and Gorkem Polat and Alptekin Temizel and
Adrian Krenzer and Amar Hekalo and Yun Bo Guo and Bogdan
Matuszewski and Mourad Gridach and Irina Voiculescu and
Vishnusai Yoganand and Arnav Chavan and Aryan Raj and Nhan
T. Nguyen and Dat Q. Tran and Le Duy Huynh and Nicolas
Boutry and Shahadate Rezvy and Haijian Chen and Yoon Ho
Choi and Anand Subramanian and Velmurugan Balasubramanian
and Xiaohong W. Gao and Hongyu Hu and Yusheng Liao and
Danail Stoyanov and Christian Daul and Stefano Realdon and
Renato Cannizzaro and Dominique Lamarque and Terry
Tran-Nguyen and Adam Bailey and Barbara Braden and James
East and Jens Rittscher},
title = {Deep Learning for Detection and Segmentation of Artefact
and Disease Instances in Gastrointestinal Endoscopy},
journal = {Medical Image Analysis},
number = {102002},
year = {2021},
month = may,
doi = {10.1016/j.media.2021.102002},
abstract = {The Endoscopy Computer Vision Challenge (EndoCV) is a
crowd-sourcing initiative to address eminent problems in
developing reliable computer aided detection and diagnosis
endoscopy systems and suggest a pathway for clinical
translation of technologies. Whilst endoscopy is a widely
used diagnostic and treatment tool for hollow-organs, there
are several core challenges often faced by endoscopists,
mainly: 1) presence of multi-class artefacts that hinder
their visual interpretation, and 2) difficulty in
identifying subtle precancerous precursors and cancer
abnormalities. Artefacts often affect the robustness of
deep learning methods applied to the gastrointestinal tract
organs as they can be confused with tissue of interest.
EndoCV2020 challenges are designed to address research
questions in these remits. In this paper, we present a
summary of methods developed by the top 17 teams and
provide an objective comparison of state-of-the-art methods
and methods designed by the participants for two
sub-challenges: i) artefact detection and segmentation
(EAD2020), and ii) disease detection and segmentation
(EDD2020). Multi-center, multi-organ, multi-class, and
multi-modal clinical endoscopy datasets were compiled for
both EAD2020 and EDD2020 sub-challenges. The out-of-sample
generalization ability of detection algorithms was also
evaluated. Whilst most teams focused on accuracy
improvements, only a few methods hold credibility for
clinical usability. The best performing teams provided
solutions to tackle class imbalance, and variabilities in
size, origin, modality and occurrences by exploring data
augmentation, data fusion, and optimal class thresholding
techniques.}
}</pre></small><small></small><p><small></small>
</p><p><br />
</p><p><br />
</p><p><br />
</p><p><br />
</p></div>Wed, 08 Sep 2021 08:54:30 GMT | Bot | On Some Associations Between Mathematical Morphology and Artificial Intelligence
https://www.lrde.epita.fr/wiki/Publications/bloch.21.dgmm
https://www.lrde.epita.fr/wiki/Publications/bloch.21.dgmm<div class="mw-parser-output"><p><a class="mw-selflink selflink">On Some Associations Between Mathematical Morphology and Artificial Intelligence</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd>Isabelle Bloch, Samy Blusseau, Ramón Pino Pérez, <a href="/wiki/User:Elodie" title="User:Elodie">Élodie Puybareau</a>, <a href="/wiki/User:Gtochon" title="User:Gtochon">Guillaume Tochon</a></dd>
<dt>Where</dt>
<dd>Proceedings of the IAPR International Conference on Discrete Geometry and Mathematical Morphology (DGMM)</dd>
<dt>Place</dt>
<dd>Uppsala, Sweden</dd>
<dt>Type</dt>
<dd>inproceedings</dd>
<dt>Publisher</dt>
<dd><a href="/index.php?title=Springer&action=edit&redlink=1" class="new" title="Springer (page does not exist)">Springer</a></dd>
<dt>Projects</dt>
<dd><a href="/wiki/Olena" title="Olena">Olena</a></dd>
<dt>Keywords</dt>
<dd>Image</dd>
<dt>Date</dt>
<dd>2021-02-16</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>This paper aims at providing an overview of the use of mathematical morphology, in its algebraic setting, in several fields of artificial intelligence (AI). Three domains of AI will be covered. In the first domain, mathematical morphology operators will be expressed in some logics (propositional, modal, description logics) to answer typical questions in knowledge representation and reasoning, such as revision, fusion, explanatory relations, satisfying usual postulates. In the second domain, spatial reasoning will benefit from spatial relations modeled using fuzzy sets and morphological operators, with applications in model-based image understanding. In the third domain, interactions between mathematical morphology and deep learning will be detailed. Morphological neural networks were introduced as an alternative to classical architectures, yielding a new geometry in decision surfaces. Deep networks were also trained to learn morphological operators and pipelines, and morphological algorithms were used as companion tools to machine learning, for pre/post processing or even regularization purposes. These ideas have known a large resurgence in the last few years and new ones are emerging.
</p><p><br />
</p>
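<p>All three domains surveyed above rest on the two basic algebraic operators, dilation and erosion. As a minimal sketch independent of the paper (<code>dilate</code> and <code>erode</code> are our own illustrative names), here is flat grayscale dilation and erosion on 1-D signals in NumPy, with the standard extensivity of closing and anti-extensivity of opening used as sanity checks:</p>

```python
import numpy as np

def dilate(f, se):
    """Flat grayscale dilation of a 1-D signal f by a structuring
    element of half-width se: max over a sliding window."""
    pad = np.pad(f, se, mode="edge")
    return np.array([pad[i:i + 2 * se + 1].max() for i in range(len(f))])

def erode(f, se):
    """Flat grayscale erosion: min over the same sliding window."""
    pad = np.pad(f, se, mode="edge")
    return np.array([pad[i:i + 2 * se + 1].min() for i in range(len(f))])

f = np.array([0, 1, 3, 2, 0, 4, 1])
assert (erode(dilate(f, 1), 1) >= f).all()   # closing is extensive
assert (dilate(erode(f, 1), 1) <= f).all()   # opening is anti-extensive
```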
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@InProceedings{ bloch.21.dgmm,
doi = {10.1007/978-3-030-76657-3_33},
author = {Isabelle Bloch and Samy Blusseau and Ram\'on {Pino
P\'erez} and \'Elodie Puybareau and Guillaume Tochon},
editor = {Lindblad, Joakim and Malmberg, Filip and Sladoje,
Nata{\v{s}}a},
title = {On Some Associations Between Mathematical Morphology and
Artificial Intelligence},
booktitle = {Proceedings of the IAPR International Conference on
Discrete Geometry and Mathematical Morphology (DGMM)},
year = {2021},
address = {Uppsala, Sweden},
series = {Lecture Notes in Computer Science},
volume = {12708},
publisher = {Springer},
pages = {457--469},
month = may,
abstract = {This paper aims at providing an overview of the use of
mathematical morphology, in its algebraic setting, in
several fields of artificial intelligence (AI). Three
domains of AI will be covered. In the first domain,
mathematical morphology operators will be expressed in some
logics (propositional, modal, description logics) to answer
typical questions in knowledge representation and
reasoning, such as revision, fusion, explanatory relations,
satisfying usual postulates. In the second domain, spatial
reasoning will benefit from spatial relations modeled using
fuzzy sets and morphological operators, with applications
in model-based image understanding. In the third domain,
interactions between mathematical morphology and deep
learning will be detailed. Morphological neural networks
were introduced as an alternative to classical
architectures, yielding a new geometry in decision
surfaces. Deep networks were also trained to learn
morphological operators and pipelines, and morphological
algorithms were used as companion tools to machine
learning, for pre/post processing or even regularization
purposes. These ideas have known a large resurgence in the
last few years and new ones are emerging.}
}</pre></small><small></small><p><small></small>
</p><p><br />
</p><p><br />
</p><p><br />
</p><p><br />
</p></div>Wed, 08 Sep 2021 08:53:58 GMT | Bot | Combining Deep Learning and Mathematical Morphology for Historical Map Segmentation
https://www.lrde.epita.fr/wiki/Publications/chen.21.dgmm
https://www.lrde.epita.fr/wiki/Publications/chen.21.dgmm<div class="mw-parser-output"><p><a class="mw-selflink selflink">Combining Deep Learning and Mathematical Morphology for Historical Map Segmentation</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd>Yizi Chen, <a href="/wiki/User:Carlinet" title="User:Carlinet">Edwin Carlinet</a>, <a href="/wiki/User:Chazalon" title="User:Chazalon">Joseph Chazalon</a>, Clément Mallet, Bertrand Duménieu, Julien Perret</dd>
<dt>Where</dt>
<dd>Proceedings of the IAPR International Conference on Discrete Geometry and Mathematical Morphology (DGMM)</dd>
<dt>Place</dt>
<dd>Uppsala, Sweden</dd>
<dt>Type</dt>
<dd>inproceedings</dd>
<dt>Publisher</dt>
<dd><a href="/index.php?title=Springer&action=edit&redlink=1" class="new" title="Springer (page does not exist)">Springer</a></dd>
<dt>Keywords</dt>
<dd>Image</dd>
<dt>Date</dt>
<dd>2021-02-16</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>The digitization of historical maps enables the study of ancient, fragile, unique, and hardly accessible information sources. Main map features can be retrieved and tracked through the time for subsequent thematic analysis. The goal of this work is the vectorization step, i.e., the extraction of vector shapes of the objects of interest from raster images of maps. We are particularly interested in closed shape detection such as buildings, building blocks, gardens, rivers, etc. in order to monitor their temporal evolution. Historical map images present significant pattern recognition challenges. The extraction of closed shapes by using traditional Mathematical Morphology (MM) is highly challenging due to the overlapping of multiple map features and texts. Moreover, state-of-the-art Convolutional Neural Networks (CNN) are perfectly designed for content image filtering but provide no guarantee about closed shape detection. Also, the lack of textural and color information of historical maps makes it hard for CNN to detect shapes that are represented by only their boundaries. Our contribution is a pipeline that combines the strengths of CNN (efficient edge detection and filtering) and MM (guaranteed extraction of closed shapes) in order to achieve such a task. The evaluation of our approach on a public dataset shows its effectiveness for extracting the closed boundaries of objects in historical maps.
</p>
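<p>The guarantee that MM contributes to this pipeline, extraction of closed shapes, can be illustrated independently of the paper: flood-fill the background of a binary edge map from the image border; any pixel that is neither an edge nor reached by the fill lies inside a closed contour. This toy sketch (our own, not the authors' implementation) uses 4-connectivity:</p>

```python
import numpy as np
from collections import deque

def closed_regions(edges):
    """Return a boolean mask of pixels enclosed by edge curves.
    `edges` is a 2-D boolean array (True = boundary pixel)."""
    h, w = edges.shape
    outside = np.zeros_like(edges, dtype=bool)
    # Seed the flood fill with every non-edge border pixel.
    q = deque((r, c) for r in range(h) for c in range(w)
              if (r in (0, h - 1) or c in (0, w - 1)) and not edges[r, c])
    for r, c in q:
        outside[r, c] = True
    while q:  # 4-connected BFS over the background
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w \
                    and not edges[nr, nc] and not outside[nr, nc]:
                outside[nr, nc] = True
                q.append((nr, nc))
    return ~edges & ~outside

# A 5x5 edge map with one closed square: exactly its single interior
# pixel (2, 2) is reported as enclosed.
e = np.zeros((5, 5), dtype=bool)
e[1:4, 1] = e[1:4, 3] = e[1, 1:4] = e[3, 1:4] = True
assert closed_regions(e).sum() == 1 and closed_regions(e)[2, 2]
```

<p>A CNN that drops even one boundary pixel breaks this enclosure, which is why the paper lets MM, not the network, certify closedness.</p>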
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="http://www.lrde.epita.fr/dload/papers/chen.2021.dgmm.pdf">Paper</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@InProceedings{ chen.21.dgmm,
author = {Yizi Chen and Edwin Carlinet and Joseph Chazalon and
Cl\'ement Mallet and Bertrand Dum\'enieu and Julien Perret},
title = {Combining Deep Learning and Mathematical Morphology for
Historical Map Segmentation},
booktitle = {Proceedings of the IAPR International Conference on
Discrete Geometry and Mathematical Morphology (DGMM)},
year = {2021},
series = {Lecture Notes in Computer Science},
volume = {12708},
month = may,
address = {Uppsala, Sweden},
publisher = {Springer},
pages = {79--92},
abstract = {The digitization of historical maps enables the study of
ancient, fragile, unique, and hardly accessible information
sources. Main map features can be retrieved and tracked
through the time for subsequent thematic analysis. The goal
of this work is the vectorization step, i.e., the
extraction of vector shapes of the objects of interest from
raster images of maps. We are particularly interested in
closed shape detection such as buildings, building blocks,
gardens, rivers, etc. in order to monitor their temporal
evolution. Historical map images present significant
pattern recognition challenges. The extraction of closed
shapes by using traditional Mathematical Morphology (MM) is
highly challenging due to the overlapping of multiple map
features and texts. Moreover, state-of-the-art
Convolutional Neural Networks (CNN) are perfectly designed
for content image filtering but provide no guarantee about
closed shape detection. Also, the lack of textural and
color information of historical maps makes it hard for CNN
to detect shapes that are represented by only their
boundaries. Our contribution is a pipeline that combines
the strengths of CNN (efficient edge detection and
filtering) and MM (guaranteed extraction of closed shapes)
in order to achieve such a task. The evaluation of our
approach on a public dataset shows its effectiveness for
extracting the closed boundaries of objects in historical
maps.},
note = {Accepted},
doi = {10.1007/978-3-030-76657-3_5}
}</pre></small><small></small><p><small></small>
</p><p><br />
</p><p><br />
</p><p><br />
</p><p><br />
</p></div>Wed, 08 Sep 2021 08:55:07 GMT | Bot | Going beyond p-convolutions to learn grayscale morphological operators
https://www.lrde.epita.fr/wiki/Publications/kirszenberg.21.dgmm
https://www.lrde.epita.fr/wiki/Publications/kirszenberg.21.dgmm<div class="mw-parser-output"><p><a class="mw-selflink selflink">Going beyond p-convolutions to learn grayscale morphological operators</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd>Alexandre Kirszenberg, <a href="/wiki/User:Gtochon" title="User:Gtochon">Guillaume Tochon</a>, <a href="/wiki/User:Elodie" title="User:Elodie">Élodie Puybareau</a>, Jesus Angulo</dd>
<dt>Where</dt>
<dd>Proceedings of the IAPR International Conference on Discrete Geometry and Mathematical Morphology (DGMM)</dd>
<dt>Place</dt>
<dd>Uppsala, Sweden</dd>
<dt>Type</dt>
<dd>inproceedings</dd>
<dt>Publisher</dt>
<dd><a href="/index.php?title=Springer&action=edit&redlink=1" class="new" title="Springer (page does not exist)">Springer</a></dd>
<dt>Keywords</dt>
<dd>Image</dd>
<dt>Date</dt>
<dd>2021-02-16</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>Integrating mathematical morphology operations within deep neural networks has been subject to increasing attention lately. However, replacing standard convolution layers with erosions or dilations is particularly challenging because the min and max operations are not differentiable. Relying on the asymptotic behavior of the counter-harmonic mean, p-convolutional layers were proposed as a possible workaround to this issue since they can perform pseudo-dilation or pseudo-erosion operations (depending on the value of their inner parameter p), and very promising results were reported. In this work, we present two new morphological layers based on the same principle as the p-convolutional layer while circumventing its principal drawbacks, and demonstrate their potential interest in further implementations within deep convolutional neural network architectures.
</p>
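<p>The counter-harmonic mean behind p-convolutions has a simple closed form: with window weights w, the response is &Sigma; w&middot;f^(p+1) / &Sigma; w&middot;f^p, which tends to the window maximum as p &rarr; +&infin; (pseudo-dilation) and to the minimum as p &rarr; &minus;&infin; (pseudo-erosion), while staying differentiable in f and p. A small sketch of this limit behavior (ours, not the authors' layers, and assuming a strictly positive signal):</p>

```python
import numpy as np

def p_conv(f, w, p):
    """Counter-harmonic mean 'p-convolution' of a positive 1-D signal f
    with a sliding window of weights w: sum(w*f^(p+1)) / sum(w*f^p).
    Large positive p approximates dilation (max), large negative p
    erosion (min); p = 0 is an ordinary weighted mean."""
    k = len(w) // 2
    pad = np.pad(f.astype(float), k, mode="edge")
    out = np.empty(len(f))
    for i in range(len(f)):
        win = pad[i:i + len(w)]
        out[i] = np.sum(w * win ** (p + 1)) / np.sum(w * win ** p)
    return out

f = np.array([1.0, 2.0, 5.0, 3.0, 1.0])
w = np.ones(3)
# With a large p, each output is close to its window maximum.
assert np.allclose(p_conv(f, w, 50), [2, 5, 5, 5, 3], atol=1e-3)
```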
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="http://www.lrde.epita.fr/dload/papers/kirszie.2021.dgmm.pdf">Paper</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@InProceedings{ kirszenberg.21.dgmm,
author = {Alexandre Kirszenberg and Guillaume Tochon and \'{E}lodie
Puybareau and Jesus Angulo},
title = {Going beyond p-convolutions to learn grayscale
morphological operators},
booktitle = {Proceedings of the IAPR International Conference on
Discrete Geometry and Mathematical Morphology (DGMM)},
year = {2021},
series = {Lecture Notes in Computer Science},
volume = {12708},
month = may,
address = {Uppsala, Sweden},
publisher = {Springer},
pages = {470--482},
abstract = {Integrating mathematical morphology operations within deep
neural networks has been subject to increasing attention
lately. However, replacing standard convolution layers with
erosions or dilations is particularly challenging because
the min and max operations are not differentiable. Relying
on the asymptotic behavior of the counter-harmonic mean,
p-convolutional layers were proposed as a possible
workaround to this issue since they can perform
pseudo-dilation or pseudo-erosion operations (depending on
the value of their inner parameter p), and very promising
results were reported. In this work, we present two new
morphological layers based on the same principle as the
p-convolutional layer while circumventing its principal
drawbacks, and demonstrate their potential interest in
further implementations within deep convolutional neural
network architectures.},
doi = {10.1007/978-3-030-76657-3_34}
}</pre></small><small></small><p><small></small>
</p><p><br />
</p><p><br />
</p><p><br />
</p><p><br />
</p></div>Wed, 08 Sep 2021 08:56:52 GMT | Bot | Seminar/2021-02-10
https://www.lrde.epita.fr/wiki/Seminar/2021-02-10
https://www.lrde.epita.fr/wiki/Seminar/2021-02-10<div class="mw-parser-output"><p><a class="mw-selflink selflink">Seminar/2021-02-10</a>
</p></div><div class="mw-parser-output"><h3><span id="Mercredi_10_février_2021,_11h_-_12h,_{\small_https://meet.jit.si/Seminaire$_$LRDE$_$Uli"></span><span class="mw-headline" id="Mercredi_10_f.C3.A9vrier_2021.2C_11h_-_12h.2C_.7B.5Csmall_https:.2F.2Fmeet.jit.si.2FSeminaire.24_.24LRDE.24_.24Uli"><a class="mw-selflink selflink"> Mercredi 10 février 2021, 11h - 12h, {\small https://meet.jit.si/Seminaire$_$LRDE$_$Uli</a></span></h3>
<p><br />
</p>
<h4><span class="mw-headline" id="Generating_Posets_Beyond_N">Generating Posets Beyond N</span></h4>
<p><i>Uli Fahrenberg, Ecole Polytechnique</i>
<br />
<br />
</p><p>We introduce iposets - posets with interfaces - equipped with a novel gluing
composition along interfaces and the standard parallel composition. We study
their basic algebraic properties as well as the hierarchy of gluing-parallel
posets generated from singletons by finitary applications of the two
compositions. We show that not only series-parallel posets, but also
interval orders, which seem more interesting for modeling concurrent and
distributed systems, can be generated, but not all posets. Generating posets
is also important for constructing free algebras for concurrent semi-rings
and Kleene algebras that allow compositional reasoning about such systems.
<br />
<br />
</p><p><small>Ulrich (Uli) Fahrenberg holds a PhD in mathematics from Aalborg University, Denmark. He started his career in computer science as an assistant professor at Aalborg University, then worked as a postdoc at Inria Rennes, France; since 2016 he has been a researcher at the computer science lab at École polytechnique in Palaiseau, France. Uli Fahrenberg works in algebraic topology, concurrency theory, real-time verification, and general quantitative verification. He has published more than 80 papers in computer science and mathematics. He has been a member of numerous program committees, and since 2016 he has been a reviewer for AMS Mathematical Reviews.</small>
<br />
<br />
<a rel="nofollow" class="external text" href="http://www.lix.polytechnique.fr/~uli/bio.html">http://www.lix.polytechnique.fr/~uli/bio.html</a>
</p></div>Tue, 26 Jan 2021 19:07:08 GMT | Bot | Seminar/2020-12-16
https://www.lrde.epita.fr/wiki/Seminar/2020-12-16
https://www.lrde.epita.fr/wiki/Seminar/2020-12-16<div class="mw-parser-output"><p><a class="mw-selflink selflink">Seminar/2020-12-16</a>
</p></div><div class="mw-parser-output"><h3><span id="Mercredi_16_décembre_2020,_11h_-_12h,_{\small_https://eu.bbcollab.com/collab/ui/session/guest/95a72a9dc7b0405c8c281ea3157e9637}"></span><span class="mw-headline" id="Mercredi_16_d.C3.A9cembre_2020.2C_11h_-_12h.2C_.7B.5Csmall_https:.2F.2Feu.bbcollab.com.2Fcollab.2Fui.2Fsession.2Fguest.2F95a72a9dc7b0405c8c281ea3157e9637.7D"><a class="mw-selflink selflink"> Mercredi 16 décembre 2020, 11h - 12h, {\small https://eu.bbcollab.com/collab/ui/session/guest/95a72a9dc7b0405c8c281ea3157e9637}</a></span></h3>
<p><br />
</p>
<h4><span class="mw-headline" id="Diagnosis_and_Opacity_in_Partially_Observable_Systems">Diagnosis and Opacity in Partially Observable Systems</span></h4>
<p><i>Stefan Schwoon, ENS Paris-Saclay</i>
<br />
<br />
</p><p>In a partially observable system, diagnosis is the task of detecting certain events, for instance fault occurrences. In the presence of hostile observers, on the other hand, one is interested in rendering a system opaque, i.e. making it impossible to detect certain "secret" events. The talk will present some decidability and complexity results for these two problems
when the system is represented as a finite automaton or a Petri net. We then also consider the problem of active diagnosis, where the observer has some control over the system. In this context, we study problems such as the computational complexity of the synthesis problem, the memory required for the controller, and the delay between a fault occurrence and its detection by the diagnoser. The talk is based on joint work with B. Bérard, S. Haar, S. Haddad, T. Melliti, and S. Schmitz.
<br />
<br />
</p><p><small>Stefan Schwoon studied Computer Science at the University of Hildesheim and received a PhD from the Technical University of Munich in 2002. He held the position of Scientific Assistant at the University of Stuttgart from 2002 to 2007, and at the Technical University in Munich from 2007 to 2009. He is currently Associate Professor (Maître de conférences) at Laboratoire Spécification et Vérification (LSV), ENS Paris-Saclay, and a member of the INRIA team Mexico. His research interests include model checking and diagnosis on concurrent and partially-observable systems.</small>
<br />
<br />
<a rel="nofollow" class="external text" href="http://www.lsv.fr/~schwoon/">http://www.lsv.fr/~schwoon/</a>
</p></div>Tue, 26 Jan 2021 19:07:07 GMT | Bot | NewsEntry (2020/11/16)
https://www.lrde.epita.fr/wiki/NewsEntry_(2020/11/16)
https://www.lrde.epita.fr/wiki/NewsEntry_(2020/11/16)<div class="mw-parser-output"><p><a class="mw-selflink selflink">NewsEntry (2020/11/16)</a>
</p></div><div class="mw-parser-output"><table class="wikitable">
<tbody><tr>
<th>Title
</th>
<td>The LRDE hosts a new member, Baptiste Esteban, who joins the <a href="/wiki/Olena" title="Olena">Olena</a> team for his PhD studies.
</td></tr>
<tr>
<th>Sub-Title
</th>
<td>
<p>After completing <a rel="nofollow" class="external text" href="https://www.epita.fr/nos-formations/diplome-ingenieur/cycle-ingenieur/les-majeures/">EPITA's IMAGE and RDI double major</a>, Baptiste is back at LRDE for his PhD. Having worked on noise estimation in natural images with mathematical morphology approaches, he will now focus on how to conciliate genericity and performance of image processing algorithms in dynamic contexts, especially noise estimation as a validation framework.
</p>
</td></tr>
<tr>
<th>Date
</th>
<td>
<p>2020/11/16
</p>
</td></tr></tbody></table>
</div>Wed, 18 Nov 2020 09:45:01 GMT | Daniela | A Global Benchmark of Algorithms for Segmenting the Left Atrium from Late Gadolinium-Enhanced Cardiac Magnetic Resonance Imaging
https://www.lrde.epita.fr/wiki/Publications/xiong.20.media
https://www.lrde.epita.fr/wiki/Publications/xiong.20.media<div class="mw-parser-output"><p><a class="mw-selflink selflink">A Global Benchmark of Algorithms for Segmenting the Left Atrium from Late Gadolinium-Enhanced Cardiac Magnetic Resonance Imaging</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd>Zhaohan Xiong, Qing Xia, Zhiqiang Hu, Ning Huang, Cheng Bian, Yefeng Zheng, Sulaiman Vesal, Nishant Ravikumar, Andreas Maier, Xin Yang, Pheng-Ann Heng, Dong Ni, Caizi Li, Qianqian Tong, Weixin Si, <a href="/wiki/User:Elodie" title="User:Elodie">Élodie Puybareau</a>, Younes Khoudli, <a href="/wiki/User:Theo" title="User:Theo">Thierry Géraud</a>, Chen Chen, Wenjia Bai, Daniel Rueckert, Lingchao Xu, Xiahai Zhuang, Xinzhe Luo, Shuman Jia, Maxime Sermesant, Yashu Liu, Kuanquan Wang, Davide Borra, Alessandro Masci, Cristiana Corsi, Coen de Vente, Mitko Veta, Rashed Karim, Chandrakanth Jayachandran Preetha, Sandy Engelhardt, Menyun Qiao, Yuanyuan Wang, Qian Tao, Marta Nunez-Garcia, Oscar Camara, Nicolo Savioli, Pablo Lamata, Jichao Zhao</dd>
<dt>Journal</dt>
<dd>Medical Image Analysis</dd>
<dt>Type</dt>
<dd>article</dd>
<dt>Date</dt>
<dd>2020-11-10</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>Segmentation of medical images, particularly late gadolinium-enhanced magnetic resonance imaging (LGE-MRI) used for visualizing diseased atrial structures, is a crucial first step for ablation treatment of atrial fibrillation. However, direct segmentation of LGE-MRIs is challenging due to the varying intensities caused by contrast agents. Since most clinical studies have relied on manual, labor-intensive approaches, automatic methods are of high interest, particularly optimized machine learning approaches. To address this, we organized the 2018 Left Atrium Segmentation Challenge using 154 3D LGE-MRIs, currently the world's largest atrial LGE-MRI dataset, and associated labels of the left atrium segmented by three medical experts, ultimately attracting the participation of 27 international teams. In this paper, extensive analysis of the submitted algorithms using technical and biological metrics was performed by undergoing subgroup analysis and conducting hyper-parameter analysis, offering an overall picture of the major design choices of convolutional neural networks (CNNs) and practical considerations for achieving state-of-the-art left atrium segmentation. Results show that the top method achieved a Dice score of 93.2% and a mean surface to surface distance of 0.7 mm, significantly outperforming prior state-of-the-art. Particularly, our analysis demonstrated that double sequentially used CNNs, in which a first CNN is used for automatic region-of-interest localization and a subsequent CNN is used for refined regional segmentation, achieved superior results than traditional methods and machine learning approaches containing single CNNs. This large-scale benchmarking study makes a significant step towards much-improved segmentation methods for atrial LGE-MRIs, and will serve as an important benchmark for evaluating and comparing the future works in the field.
Furthermore, the findings from this study can potentially be extended to other imaging datasets and modalities, having an impact on the wider medical imaging community.
</p><p><br />
</p>
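<p>The headline metric above, the Dice score, is 2|A&cap;B| / (|A|+|B|) for the predicted and reference masks, so the reported 93.2% means the winning segmentation and the expert labels overlap almost entirely. A minimal sketch (illustrative only, not the challenge's evaluation code):</p>

```python
import numpy as np

def dice(a, b):
    """Dice similarity between two binary masks: 2|A∩B| / (|A|+|B|).
    Returns 1.0 for two empty masks by convention."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 1, 0], [0, 0, 1]])
# 2 shared voxels out of 3 + 3 labeled ones: Dice = 4/6.
assert abs(dice(pred, gt) - 2 / 3) < 1e-12
```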
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@Article{ xiong.20.media,
title = {A Global Benchmark of Algorithms for Segmenting the Left
Atrium from Late Gadolinium-Enhanced Cardiac Magnetic
Resonance Imaging},
journal = {Medical Image Analysis},
volume = {67},
pages = {101832},
year = {2021},
month = jan,
issn = {1361-8415},
doi = {10.1016/j.media.2020.101832},
author = {Zhaohan Xiong and Qing Xia and Zhiqiang Hu and Ning Huang
and Cheng Bian and Yefeng Zheng and Sulaiman Vesal and
Nishant Ravikumar and Andreas Maier and Xin Yang and
Pheng-Ann Heng and Dong Ni and Caizi Li and Qianqian Tong
and Weixin Si and \'Elodie Puybareau and Younes Khoudli and
Thierry G\'{e}raud and Chen Chen and Wenjia Bai and Daniel
Rueckert and Lingchao Xu and Xiahai Zhuang and Xinzhe Luo
and Shuman Jia and Maxime Sermesant and Yashu Liu and
Kuanquan Wang and Davide Borra and Alessandro Masci and
Cristiana Corsi and Coen {de Vente} and Mitko Veta and
Rashed Karim and Chandrakanth Jayachandran Preetha and
Sandy Engelhardt and Menyun Qiao and Yuanyuan Wang and Qian
Tao and Marta Nunez-Garcia and Oscar Camara and Nicolo
Savioli and Pablo Lamata and Jichao Zhao},
keywords = {Left atrium, Convolutional neural networks, Late
gadolinium-enhanced magnetic resonance imaging, Image
segmentation},
abstract = {Segmentation of medical images, particularly late
gadolinium-enhanced magnetic resonance imaging (LGE-MRI)
used for visualizing diseased atrial structures, is a
crucial first step for ablation treatment of atrial
fibrillation. However, direct segmentation of LGE-MRIs is
challenging due to the varying intensities caused by
contrast agents. Since most clinical studies have relied on
manual, labor-intensive approaches, automatic methods are
of high interest, particularly optimized machine learning
approaches. To address this, we organized the 2018 Left
Atrium Segmentation Challenge using 154 3D LGE-MRIs,
currently the world's largest atrial LGE-MRI dataset, and
associated labels of the left atrium segmented by three
medical experts, ultimately attracting the participation of
27 international teams. In this paper, extensive analysis
of the submitted algorithms using technical and biological
metrics was performed by undergoing subgroup analysis and
conducting hyper-parameter analysis, offering an overall
picture of the major design choices of convolutional neural
networks (CNNs) and practical considerations for achieving
state-of-the-art left atrium segmentation. Results show
that the top method achieved a Dice score of 93.2\% and a
mean surface to surface distance of 0.7 mm, significantly
outperforming prior state-of-the-art. Particularly, our
analysis demonstrated that double sequentially used CNNs,
in which a first CNN is used for automatic
region-of-interest localization and a subsequent CNN is
used for refined regional segmentation, achieved superior
results than traditional methods and machine learning
approaches containing single CNNs. This large-scale
benchmarking study makes a significant step towards
much-improved segmentation methods for atrial LGE-MRIs, and
will serve as an important benchmark for evaluating and
comparing the future works in the field. Furthermore, the
findings from this study can potentially be extended to
other imaging datasets and modalities, having an impact on
the wider medical imaging community.}
}</pre></small><small></small><p><small></small>
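<p>The challenge record above reports its top result as a Dice score of 93.2%. As an illustration only (this is not the challenge's evaluation code), a minimal sketch of the Dice overlap metric, with binary masks represented as sets of voxel coordinates:</p>

```python
def dice_score(pred, gt):
    """Dice overlap 2|A∩B| / (|A| + |B|); masks are sets of voxel coordinates."""
    if not pred and not gt:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * len(pred & gt) / (len(pred) + len(gt))

# Identical masks score 1.0; partial overlap scores proportionally lower.
a = {(0, 0), (0, 1)}
b = {(0, 0)}
print(dice_score(a, a))  # 1.0
print(dice_score(a, b))  # 2*1/(2+1) ≈ 0.667
```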
</p></div>Thu, 22 Jul 2021 09:47:23 GMTBotPAIP 2019: Liver Cancer Segmentation Challenge
https://www.lrde.epita.fr/wiki/Publications/kim.20.media
https://www.lrde.epita.fr/wiki/Publications/kim.20.media<div class="mw-parser-output"><p><a class="mw-selflink selflink">PAIP 2019: Liver Cancer Segmentation Challenge</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd>Yoo Jung Kim, Hyungjoon Jang, Kyoungbun Lee, Seongkeun Park, Sung-Gyu Min, Choyeon Hong, Jeong Hwan Park, Kanggeun Lee, Jisoo Kim, Wonjae Hong, Hyun Jung, Yanling Liu, Haran Rajkumar, Mahendra Khened, Ganapathy Krishnamurthi, Sen Yang, Xiyue Wang, Chang Hee Han, Jin Tae Kwak, Jianqiang Ma, Zhe Tang, Bahram Marami, Jack Zeineh, Zixu Zhao, Pheng-Ann Heng, Rudiger Schmitz, Frederic Madesta, Thomas Rosch, Rene Werner, Jie Tian, <a href="/wiki/User:Elodie" title="User:Elodie">Élodie Puybareau</a>, Matteo Bovio, Xiufeng Zhang, Yifeng Zhu, Se Young Chun, Won-Ki Jeong, Peom Park, Jinwook Choi</dd>
<dt>Journal</dt>
<dd>Medical Image Analysis</dd>
<dt>Type</dt>
<dd>article</dd>
<dt>Date</dt>
<dd>2020-11-10</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>Pathology Artificial Intelligence Platform (PAIP) is a free research platform in support of pathological artificial intelligence (AI). The main goal of the platform is to construct a high-quality pathology learning data set that will allow greater accessibility. The PAIP Liver Cancer Segmentation Challenge, organized in conjunction with the Medical Image Computing and Computer Assisted Intervention Society (MICCAI 2019), is the first image analysis challenge to apply PAIP datasets. The goal of the challenge was to evaluate new and existing algorithms for automated detection of liver cancer in whole-slide images (WSIs). Additionally, this year's PAIP attempted to address potential future problems of AI applicability in clinical settings. In the challenge, participants were asked to use analytical data and statistical metrics to evaluate the performance of automated algorithms in two different tasks: Task 1 involved Liver Cancer Segmentation and Task 2 involved Viable Tumor Burden Estimation. Team performance was strongly correlated across the two tasks: teams that performed well on Task 1 also performed well on Task 2. After evaluation, we summarized the top 11 teams' algorithms. We then discussed the pathological implications of the images that were easy to predict for cancer segmentation and of those that were challenging for viable tumor burden estimation. Of the 231 participants in the PAIP challenge, a total of 64 submissions were received from 28 teams. The submitted algorithms automatically segmented liver cancer in WSIs with a score of 0.78. The PAIP challenge was created in an effort to address the lack of research on liver cancer in digital pathology. It remains unclear how the applicability of the AI algorithms created during the challenge can affect clinical diagnoses. However, the dataset and evaluation metric provided have the potential to aid the development and benchmarking of cancer diagnosis and segmentation.
</p><p><br />
</p>
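<p>Task 2 of the challenge asks for the viable tumor burden, commonly defined as the fraction of the whole tumor region that is viable. A minimal sketch of that ratio, with masks represented as sets of pixel coordinates (an illustrative representation, not the challenge's evaluation code):</p>

```python
def tumor_burden(viable_mask, whole_mask):
    """Viable tumor burden: fraction of the whole tumor region that is viable.
    Masks are sets of pixel coordinates (illustrative representation)."""
    if not whole_mask:
        raise ValueError("whole tumor mask is empty")
    return len(viable_mask & whole_mask) / len(whole_mask)

# Half of the whole tumor region is viable here.
viable = {(0, 0), (0, 1)}
whole = {(0, 0), (0, 1), (1, 0), (1, 1)}
print(tumor_burden(viable, whole))  # 0.5
```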
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@Article{ kim.20.media,
title = {{PAIP} 2019: {L}iver Cancer Segmentation Challenge},
journal = {Medical Image Analysis},
volume = {67},
pages = {101854},
year = {2021},
month = jan,
issn = {1361-8415},
doi = {10.1016/j.media.2020.101854},
author = {Yoo Jung Kim and Hyungjoon Jang and Kyoungbun Lee and
Seongkeun Park and Sung-Gyu Min and Choyeon Hong and Jeong
Hwan Park and Kanggeun Lee and Jisoo Kim and Wonjae Hong
and Hyun Jung and Yanling Liu and Haran Rajkumar and
Mahendra Khened and Ganapathy Krishnamurthi and Sen Yang
and Xiyue Wang and Chang Hee Han and Jin Tae Kwak and
Jianqiang Ma and Zhe Tang and Bahram Marami and Jack Zeineh
and Zixu Zhao and Pheng-Ann Heng and Rudiger Schmitz and
Frederic Madesta and Thomas Rosch and Rene Werner and Jie
Tian and \'Elodie Puybareau and Matteo Bovio and Xiufeng
Zhang and Yifeng Zhu and Se Young Chun and Won-Ki Jeong and
Peom Park and Jinwook Choi},
keywords = {Liver cancer, Tumor burden, Digital pathology, Challenge,
Segmentation},
abstract = {Pathology Artificial Intelligence Platform (PAIP) is a
free research platform in support of pathological
artificial intelligence (AI). The main goal of the platform
is to construct a high-quality pathology learning data set
that will allow greater accessibility. The PAIP Liver
Cancer Segmentation Challenge, organized in conjunction
with the Medical Image Computing and Computer Assisted
Intervention Society (MICCAI 2019), is the first image
analysis challenge to apply PAIP datasets. The goal of the
challenge was to evaluate new and existing algorithms for
automated detection of liver cancer in whole-slide images
(WSIs). Additionally, the PAIP of this year attempted to
address potential future problems of AI applicability in
clinical settings. In the challenge, participants were
asked to use analytical data and statistical metrics to
evaluate the performance of automated algorithms in two
different tasks. The participants were given the two
different tasks: Task 1 involved investigating Liver Cancer
Segmentation and Task 2 involved investigating Viable Tumor
Burden Estimation. There was a strong correlation between
high performance of teams on both tasks, in which teams
that performed well on Task 1 also performed well on Task
2. After evaluation, we summarized the top 11 team's
algorithms. We then gave pathological implications on the
easily predicted images for cancer segmentation and the
challenging images for viable tumor burden estimation. Out
of the 231 participants of the PAIP challenge datasets, a
total of 64 were submitted from 28 team participants. The
submitted algorithms predicted the automatic segmentation
on the liver cancer with WSIs to an accuracy of a score
estimation of 0.78. The PAIP challenge was created in an
effort to combat the lack of research that has been done to
address Liver cancer using digital pathology. It remains
unclear of how the applicability of AI algorithms created
during the challenge can affect clinical diagnoses.
However, the results of this dataset and evaluation metric
provided has the potential to aid the development and
benchmarking of cancer diagnosis and segmentation.}
}</pre></small><small></small><p><small></small>
</p></div>Thu, 22 Jul 2021 09:46:13 GMTBotDo not Treat Boundaries and Regions Differently: An Example on Heart Left Atrial Segmentation
https://www.lrde.epita.fr/wiki/Publications/zhao.20.icpr.2
https://www.lrde.epita.fr/wiki/Publications/zhao.20.icpr.2<div class="mw-parser-output"><p><a class="mw-selflink selflink">Do not Treat Boundaries and Regions Differently: An Example on Heart Left Atrial Segmentation</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd>Zhou Zhao, <a href="/wiki/User:Nboutry" title="User:Nboutry">Nicolas Boutry</a>, <a href="/wiki/User:Elodie" title="User:Elodie">Élodie Puybareau</a>, <a href="/wiki/User:Theo" title="User:Theo">Thierry Géraud</a></dd>
<dt>Where</dt>
<dd>Proceedings of the 25th International Conference on Pattern Recognition (ICPR)</dd>
<dt>Place</dt>
<dd>Milan, Italy</dd>
<dt>Type</dt>
<dd>inproceedings</dd>
<dt>Publisher</dt>
<dd><a href="/index.php?title=IEEE&action=edit&redlink=1" class="new" title="IEEE (page does not exist)">IEEE</a></dd>
<dt>Projects</dt>
<dd><a href="/wiki/Olena" title="Olena">Olena</a></dd>
<dt>Keywords</dt>
<dd>Image</dd>
<dt>Date</dt>
<dd>2020-11-02</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>Atrial fibrillation is the most common heart rhythm disease. Due to a lack of understanding of the underlying atrial structures, current treatments are still not satisfactory. Recently, with the popularity of deep learning, many segmentation methods based on fully convolutional networks have been proposed to analyze atrial structures, especially from late gadolinium-enhanced magnetic resonance imaging. However, two problems still occur: 1) segmentation results include the atrial-like background; 2) boundaries are very hard to segment. Most segmentation approaches design a specific network that mainly focuses on the regions, to the detriment of the boundaries. Therefore, this paper proposes an attention fully convolutional network framework based on the ResNet-101 architecture, which focuses on boundaries as much as on regions. An additional attention module is added to make the network pay more attention to regions and thus reduce the impact of the misleading similarity of neighboring tissues. We also use a hybrid loss composed of a region loss and a boundary loss to treat boundaries and regions at the same time. We demonstrate the efficiency of the proposed approach on the MICCAI 2018 Atrial Segmentation Challenge public dataset.
</p>
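<p>The abstract above describes a hybrid loss combining a region loss and a boundary loss. A hedged sketch of that idea, using a Dice-style term for both the regions and their 4-connected boundaries over coordinate sets; the exact losses and weighting in the paper differ, and <code>alpha</code> is an assumed balance parameter:</p>

```python
def dice_loss(pred, gt):
    """1 - Dice overlap; pred/gt are sets of foreground pixel coordinates."""
    if not pred and not gt:
        return 0.0
    return 1.0 - 2.0 * len(pred & gt) / (len(pred) + len(gt))

def boundary(mask):
    """4-connected boundary: foreground pixels with a background neighbor."""
    return {(x, y) for (x, y) in mask
            if any((x + dx, y + dy) not in mask
                   for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))}

def hybrid_loss(pred, gt, alpha=0.5):
    """Weighted sum of a region term and a boundary term (illustrative)."""
    return (alpha * dice_loss(pred, gt)
            + (1 - alpha) * dice_loss(boundary(pred), boundary(gt)))

# A perfect prediction incurs zero loss on both terms.
print(hybrid_loss({(0, 0)}, {(0, 0)}))  # 0.0
```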
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="http://www.lrde.epita.fr/dload/papers/zhao.20.icpr.2.pdf">Paper</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@InProceedings{ zhao.20.icpr.2,
author = {Zhou Zhao and Nicolas Boutry and \'Elodie Puybareau and
Thierry G\'eraud},
title = {Do not Treat Boundaries and Regions Differently: {A}n
Example on Heart Left Atrial Segmentation},
booktitle = {Proceedings of the 25th International Conference on
Pattern Recognition (ICPR)},
year = 2021,
pages = {7447--7453},
month = jan,
address = {Milan, Italy},
publisher = {IEEE},
abstract = {Atrial fibrillation is the most common heart rhythm
disease. Due to a lack of understanding in matter of
underlying atrial structures, current treatments are still
not satisfying. Recently, with the popularity of deep
learning, many segmentation methods based on fully
convolutional networks have been proposed to analyze atrial
structures, especially from late gadolinium-enhanced
magnetic resonance imaging. However, two problems still
occur: 1) segmentation results include the atrial- like
background; 2) boundaries are very hard to segment. Most
segmentation approaches design a specific network that
mainly focuses on the regions, to the detriment of the
boundaries. Therefore, this paper proposes an attention
full convolutional network framework based on the
ResNet-101 architecture, which focuses on boundaries as
much as on regions. The additional attention module is
added to have the network pay more attention on regions and
then to reduce the impact of the misleading similarity of
neighboring tissues. We also use a hybrid loss composed of
a region loss and a boundary loss to treat boundaries and
regions at the same time. We demonstrate the efficiency of
the proposed approach on the MICCAI 2018 Atrial
Segmentation Challenge public dataset.},
doi = {10.1109/ICPR48806.2021.9412755}
}</pre></small><small></small><p><small></small>
</p></div>Wed, 08 Sep 2021 08:59:09 GMTBotFOANet: A Focus of Attention Network with Application to Myocardium Segmentation
https://www.lrde.epita.fr/wiki/Publications/zhao.20.icpr.1
https://www.lrde.epita.fr/wiki/Publications/zhao.20.icpr.1<div class="mw-parser-output"><p><a class="mw-selflink selflink">FOANet: A Focus of Attention Network with Application to Myocardium Segmentation</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd>Zhou Zhao, <a href="/wiki/User:Nboutry" title="User:Nboutry">Nicolas Boutry</a>, <a href="/wiki/User:Elodie" title="User:Elodie">Élodie Puybareau</a>, <a href="/wiki/User:Theo" title="User:Theo">Thierry Géraud</a></dd>
<dt>Where</dt>
<dd>Proceedings of the 25th International Conference on Pattern Recognition (ICPR)</dd>
<dt>Place</dt>
<dd>Milan, Italy</dd>
<dt>Type</dt>
<dd>inproceedings</dd>
<dt>Publisher</dt>
<dd><a href="/index.php?title=IEEE&action=edit&redlink=1" class="new" title="IEEE (page does not exist)">IEEE</a></dd>
<dt>Projects</dt>
<dd><a href="/wiki/Olena" title="Olena">Olena</a></dd>
<dt>Keywords</dt>
<dd>Image</dd>
<dt>Date</dt>
<dd>2020-11-02</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>In myocardium segmentation of cardiac magnetic resonance images, ambiguities often appear near the boundaries of the target domains due to tissue similarities. To address this issue, we propose a new architecture, called FOANet, which can be decomposed into three main steps: a localization step, a Gaussian-based contrast enhancement step, and a segmentation step. This architecture is supplied with a hybrid loss function that guides FOANet to study the transformation relationship between the input image and the corresponding label in a three-level hierarchy (pixel-, patch- and map-level), which helps to improve segmentation and the recovery of boundaries. We demonstrate the efficiency of our approach on two public datasets in terms of regional and boundary segmentations.
</p>
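<p>The second step above applies a Gaussian-based contrast enhancement around the localized region. As an illustrative stand-in for that idea (not the paper's actual operator), one can weight a grayscale image by a Gaussian window centered on the localized ROI, attenuating distant tissue; <code>sigma</code> is an assumed spread parameter:</p>

```python
import math

def gaussian_focus(image, center, sigma):
    """Weight a grayscale image by a Gaussian centered on the localized ROI.
    `image` is a dict mapping (x, y) -> intensity; far pixels are attenuated."""
    cx, cy = center
    return {
        (x, y): v * math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
        for (x, y), v in image.items()
    }

# The pixel at the center keeps its full intensity; distant ones fade out.
img = {(0, 0): 1.0, (3, 4): 1.0}
out = gaussian_focus(img, (0, 0), sigma=1.0)
print(out[(0, 0)])  # 1.0
```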
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="http://www.lrde.epita.fr/dload/papers/zhao.20.icpr.1.pdf">Paper</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@InProceedings{ zhao.20.icpr.1,
author = {Zhou Zhao and Nicolas Boutry and \'Elodie Puybareau and
Thierry G\'eraud},
title = {{FOANet}: {A} Focus of Attention Network with Application
to Myocardium Segmentation},
booktitle = {Proceedings of the 25th International Conference on
Pattern Recognition (ICPR)},
year = 2021,
pages = {1120--1127},
month = jan,
address = {Milan, Italy},
publisher = {IEEE},
abstract = {In myocardium segmentation of cardiac magnetic resonance
images, ambiguities often appear near the boundaries of the
target domains due to tissue similarities. To address this
issue, we propose a new architecture, called FOANet, which
can be decomposed in three main steps: a localization step,
a Gaussian-based contrast enhancement step, and a
segmentation step. This architecture is supplied with a
hybrid loss function that guides the FOANet to study the
transformation relationship between the input image and the
corresponding label in a three-level hierarchy (pixel-,
patch- and map-level), which is helpful to improve
segmentation and recovery of the boundaries. We demonstrate
the efficiency of our approach on two public datasets in
terms of regional and boundary segmentations.},
doi = {10.1109/ICPR48806.2021.9412016}
}</pre></small><small></small><p><small></small>
</p></div>Wed, 08 Sep 2021 08:59:05 GMTBotNewsEntry (2020/10/21)
https://www.lrde.epita.fr/wiki/NewsEntry_(2020/10/21)
https://www.lrde.epita.fr/wiki/NewsEntry_(2020/10/21)<div class="mw-parser-output"><p><a class="mw-selflink selflink">NewsEntry (2020/10/21)</a>
</p></div><div class="mw-parser-output"><table class="wikitable">
<tbody><tr>
<th>Title
</th>
<td>The LRDE is happy to welcome a new member, Caroline Mazini-Rodrigues, who joins the <a href="/wiki/Olena" title="Olena">Olena</a> team for her PhD studies.
</td></tr>
<tr>
<th>Sub-Title
</th>
<td>
<p>Holding a Master degree in Computer Science from <a rel="nofollow" class="external text" href="https://www.unicamp.br">Universidade Estadual de Campinas</a>, Caroline joins LRDE’s Image team where she will focus on Explainability of Convolutional Neural Networks. Her PhD will be conducted in cooperation with <a rel="nofollow" class="external text" href="http://ligm.u-pem.fr">Laboratoire d’Informatique Gaspard-Monge</a>.
</p>
</td></tr>
<tr>
<th>Date
</th>
<td>
<p>2020/10/21
</p>
</td></tr></tbody></table>
</div>Mon, 16 Nov 2020 15:50:34 GMTDanielaTwo Stages CNN-Based Segmentation of Gliomas, Uncertainty Quantification and Prediction of Overall Patient Survival
https://www.lrde.epita.fr/wiki/Publications/buatois.19.brainles
https://www.lrde.epita.fr/wiki/Publications/buatois.19.brainles<div class="mw-parser-output"><p><a class="mw-selflink selflink">Two Stages CNN-Based Segmentation of Gliomas, Uncertainty Quantification and Prediction of Overall Patient Survival</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd>Thibault Buatois, <a href="/wiki/User:Elodie" title="User:Elodie">Élodie Puybareau</a>, <a href="/wiki/User:Gtochon" title="User:Gtochon">Guillaume Tochon</a>, <a href="/wiki/User:Chazalon" title="User:Chazalon">Joseph Chazalon</a></dd>
<dt>Where</dt>
<dd>International MICCAI Brainlesion Workshop</dd>
<dt>Type</dt>
<dd>inproceedings</dd>
<dt>Publisher</dt>
<dd><a href="/index.php?title=Springer&action=edit&redlink=1" class="new" title="Springer (page does not exist)">Springer</a></dd>
<dt>Keywords</dt>
<dd>Image</dd>
<dt>Date</dt>
<dd>2020-09-03</dd></dl>
</div>
<p><br />
</p>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@InProceedings{ buatois.19.brainles,
title = {Two Stages {CNN}-Based Segmentation of Gliomas,
Uncertainty Quantification and Prediction of Overall
Patient Survival},
author = {Thibault Buatois and \'Elodie Puybareau and Guillaume
Tochon and Joseph Chazalon},
booktitle = {International MICCAI Brainlesion Workshop},
year = {2019},
editor = {A. Crimi and S. Bakas},
volume = {11992},
series = {Lecture Notes in Computer Science},
pages = {167--178},
publisher = {Springer},
doi = {10.1007/978-3-030-46643-5_16}
}</pre></small><small></small><p><small></small>
</p></div>Wed, 08 Sep 2021 08:54:36 GMTBotEquivalence between Digital Well-Composedness and Well-Composedness in the Sense of Alexandrov on n-D Cubical Grids
https://www.lrde.epita.fr/wiki/Publications/boutry.20.jmiv.2
https://www.lrde.epita.fr/wiki/Publications/boutry.20.jmiv.2<div class="mw-parser-output"><p><a class="mw-selflink selflink">Equivalence between Digital Well-Composedness and Well-Composedness in the Sense of Alexandrov on n-D Cubical Grids</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd><a href="/wiki/User:Nboutry" title="User:Nboutry">Nicolas Boutry</a>, Laurent Najman, <a href="/wiki/User:Theo" title="User:Theo">Thierry Géraud</a></dd>
<dt>Journal</dt>
<dd>Journal of Mathematical Imaging and Vision</dd>
<dt>Type</dt>
<dd>article</dd>
<dt>Projects</dt>
<dd><a href="/wiki/Olena" title="Olena">Olena</a></dd>
<dt>Keywords</dt>
<dd>Image</dd>
<dt>Date</dt>
<dd>2020-09-03</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>Among the different flavors of well-composedness on cubical grids, two of them, called respectively Digital Well-Composedness (DWCness) and Well-Composedness in the sense of Alexandrov (AWCness), are known to be equivalent in 2D and in 3D. The former means that a cubical set does not contain critical configurations, while the latter means that the boundary of a cubical set is made of a disjoint union of discrete surfaces. In this paper, we prove that this equivalence holds in <i>n</i>-D, which is of interest because today images are not only 2D or 3D but also 4D and beyond. The main benefit of this proof is that the topological properties available for AWC sets, mainly their separation properties, are also true for DWC sets, and the properties of DWC sets are also true for AWC sets: a locally computable Euler number, equivalent connectivities from a local or global point of view... This result is also true for gray-level images thanks to cross-section topology, which means that the sets of shapes of DWC gray-level images make a tree like the ones of AWC gray-level images.
</p>
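<p>In 2D, digital well-composedness amounts to forbidding the critical configuration: a 2×2 block whose two foreground pixels lie on one diagonal and whose two background pixels lie on the other. A minimal sketch of that check (illustrative; the paper works in <i>n</i>-D, where the definition also involves higher-dimensional critical configurations):</p>

```python
def is_dwc_2d(fg, width, height):
    """A 2D binary image is digitally well-composed iff no 2x2 block is a
    critical configuration (a diagonal foreground pair whose anti-diagonal
    pair is background). `fg` is the set of foreground pixel coordinates."""
    for x in range(width - 1):
        for y in range(height - 1):
            block = [(x, y) in fg, (x + 1, y) in fg,
                     (x, y + 1) in fg, (x + 1, y + 1) in fg]
            # The two critical 2x2 patterns: main diagonal vs anti-diagonal.
            if block[0] and block[3] and not block[1] and not block[2]:
                return False
            if block[1] and block[2] and not block[0] and not block[3]:
                return False
    return True

print(is_dwc_2d({(0, 0), (1, 1)}, 2, 2))  # False: checkerboard pinch
print(is_dwc_2d({(0, 0), (1, 0)}, 2, 2))  # True: no critical configuration
```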
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="http://www.lrde.epita.fr/dload/papers/boutry.20.jmiv.2.pdf">Paper</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@Article{ boutry.20.jmiv.2,
author = {Nicolas Boutry and Laurent Najman and Thierry G\'eraud},
title = {Equivalence between Digital Well-Composedness and
Well-Composedness in the Sense of {A}lexandrov on {$n$-D}
Cubical Grids},
journal = {Journal of Mathematical Imaging and Vision},
volume = {62},
pages = {1285--1333},
month = sep,
year = {2020},
doi = {10.1007/s10851-020-00988-z},
abstract = {Among the different flavors of well-composednesses on
cubical grids, two of them, called respectively Digital
Well-Composedness (DWCness) and Well-Composedness in the
sens of Alexandrov (AWCness), are known to be equivalent in
2D and in 3D. The former means that a cubical set does not
contain critical configurations when the latter means that
the boundary of a cubical set is made of a disjoint union
of discrete surfaces. In this paper, we prove that this
equivalence holds in $n$-D, which is of interest because
today images are not only 2D or 3D but also 4D and beyond.
The main benefit of this proof is that the topological
properties available for AWC sets, mainly their separation
properties, are also true for DWC sets, and the properties
of DWC sets are also true for AWC sets: an Euler number
locally computable, equivalent connectivities from a local
or global point of view... This result is also true for
gray-level images thanks to cross-section topology, which
means that the sets of shapes of DWC gray-level images make
a tree like the ones of AWC gray-level images. }
}</pre></small><small></small><p><small></small>
</p></div>Tue, 24 Nov 2020 10:46:45 GMTBotTopological Properties of the First Non-Local Digitally Well-Composed Interpolation on n-D Cubical Grids
https://www.lrde.epita.fr/wiki/Publications/boutry.20.jmiv.1
https://www.lrde.epita.fr/wiki/Publications/boutry.20.jmiv.1<div class="mw-parser-output"><p><a class="mw-selflink selflink">Topological Properties of the First Non-Local Digitally Well-Composed Interpolation on n-D Cubical Grids</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd><a href="/wiki/User:Nboutry" title="User:Nboutry">Nicolas Boutry</a>, Laurent Najman, <a href="/wiki/User:Theo" title="User:Theo">Thierry Géraud</a></dd>
<dt>Journal</dt>
<dd>Journal of Mathematical Imaging and Vision</dd>
<dt>Type</dt>
<dd>article</dd>
<dt>Projects</dt>
<dd><a href="/wiki/Olena" title="Olena">Olena</a></dd>
<dt>Keywords</dt>
<dd>Image</dd>
<dt>Date</dt>
<dd>2020-09-03</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>In discrete topology, we like digitally well-composed (DWC for short) interpolations because they remove pinches in cubical images. Usual well-composed interpolations are local and sometimes self-dual (they treat dark and bright components of the image in the same way). In our case, we are particularly interested in <i>n</i>-D self-dual DWC interpolations to obtain a purely self-dual tree of shapes. However, it has been proved that we cannot have an <i>n</i>-D interpolation which is at the same time local, self-dual, and well-composed. By removing the locality constraint, we have obtained an <i>n</i>-D interpolation with many properties in practice: it is self-dual, DWC, and in-between (this last property means that it preserves the contours). Since we have not published the proofs of these results before, we provide here the proofs of the last two properties (DWCness and in-betweenness) and a sketch of the proof of self-duality (the complete proof of self-duality requires more material and will come later). Some theoretical and practical results are given.
</p>
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="http://www.lrde.epita.fr/dload/papers/boutry.20.jmiv.1.pdf">Paper</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@Article{ boutry.20.jmiv.1,
author = {Nicolas Boutry and Laurent Najman and Thierry G\'eraud},
title = {Topological Properties of the First Non-Local Digitally
Well-Composed Interpolation on {$n$-D} Cubical Grids},
journal = {Journal of Mathematical Imaging and Vision},
volume = {62},
pages = {1256--1284},
month = sep,
year = {2020},
doi = {10.1007/s10851-020-00989-y},
abstract = {In discrete topology, we like digitally well-composed
(shortly DWC) interpolations because they remove pinches in
cubical images. Usual well-composed interpolations are
local and sometimes self-dual (they treat in a same way
dark and bright components in the image). In our case, we
are particularly interested in $n$-D self-dual DWC
interpolations to obtain a purely self-dual tree of shapes.
However, it has been proved that we cannot have an $n$-D
interpolation which is at the same time local, self-dual,
and well-composed. By removing the locality constraint, we
have obtained an $n$-D interpolation with many properties
in practice: it is self-dual, DWC, and in-between (this
last property means that it preserves the contours). Since
we did not published the proofs of these results before, we
propose to provide in a first time the proofs of the two
last properties here (DWCness and in-betweeness) and a
sketch of the proof of self-duality (the complete proof of
self-duality requires more material and will come later).
Some theoretical and practical results are given. }
}</pre></small><small></small><p><small></small>
</p></div>Tue, 24 Nov 2020 10:46:43 GMTBotA Machine Learning Based Splitting Heuristic for Divide-and-Conquer Solvers
https://www.lrde.epita.fr/wiki/Publications/nejati.20.cp
https://www.lrde.epita.fr/wiki/Publications/nejati.20.cp<div class="mw-parser-output"><p><a class="mw-selflink selflink">A Machine Learning Based Splitting Heuristic for Divide-and-Conquer Solvers</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd>Saeed Nejati, <a href="/wiki/User:Ludovic" title="User:Ludovic">Ludovic Le Frioux</a>, Vijay Ganesh</dd>
<dt>Where</dt>
<dd>Proceedings of the 26th International Conference on Principles and Practice of Constraint Programming (CP'20)</dd>
<dt>Type</dt>
<dd>inproceedings</dd>
<dt>Publisher</dt>
<dd><a href="/index.php?title=Springer,_Cham&action=edit&redlink=1" class="new" title="Springer, Cham (page does not exist)">Springer, Cham</a></dd>
<dt>Keywords</dt>
<dd>Parallel satisfiability, splitting heuristic, divide-and-conquer, machine learning</dd>
<dt>Date</dt>
<dd>2020-09-01</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>In this paper, we present a machine learning based splitting heuristic for divide-and-conquer parallel Boolean SAT solvers. Splitting heuristics, whether they are look-ahead or look-back, are designed using proxy metrics which, when optimized, approximate the true metric of minimizing solver runtime on sub-formulas resulting from a split. The rationale for such metrics is that they have been empirically shown to be excellent proxies for solver runtime, in addition to being cheap to compute in an online fashion. However, the design of traditional splitting methods is often ad hoc and does not leverage the copious amounts of data that solvers generate. To address these issues, we propose a machine learning based splitting heuristic that leverages the features of input formulas and data generated during the run of a divide-and-conquer (DC) parallel solver. More precisely, we reformulate the splitting problem as a ranking problem and develop two machine learning models for pairwise ranking and computing the minimum ranked variable. Our model can compare variables according to their splitting quality, which is based on a set of features extracted from structural properties of the input formula, as well as dynamic probing statistics collected during the solver's run. We derive the true labels through offline collection of runtimes of a parallel DC solver on sample formulas and variables within them. At each splitting point, we generate a predicted ranking (pairwise or minimum rank) of candidate variables and split the formula on the top variable. We implemented our heuristic in the Painless parallel SAT framework and evaluated our solver on a set of cryptographic instances encoding the SHA-1 preimage problem as well as SAT competition 2018 and 2019 benchmarks. We solve significantly more instances than the baseline Painless solver and outperform top divide-and-conquer solvers from recent SAT competitions, such as Treengeling. Furthermore, we are much faster than these top solvers on cryptographic benchmarks.
</p>
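<p>The pairwise-ranking formulation summarized in the abstract can be illustrated with a small sketch. The feature vectors, the linear comparator, and its weights below are hypothetical placeholders, not the paper's trained model or its actual feature set.</p>

```python
# Sketch of the pairwise-ranking idea for picking a splitting variable:
# a learned comparator scores ordered pairs of candidate variables, and
# the variable winning the most pairwise comparisons is chosen for the
# split. The features and the linear "model" here are illustrative only.

def pairwise_score(feats_a, feats_b, weights):
    """Return > 0 if variable a is predicted to be a better split than b."""
    diff = [fa - fb for fa, fb in zip(feats_a, feats_b)]
    return sum(w * d for w, d in zip(weights, diff))

def pick_split_variable(candidates, weights):
    """candidates: dict var -> feature vector. Returns the variable that
    wins the most pairwise comparisons (a round-robin tournament)."""
    wins = {v: 0 for v in candidates}
    vars_ = list(candidates)
    for i, a in enumerate(vars_):
        for b in vars_[i + 1:]:
            if pairwise_score(candidates[a], candidates[b], weights) > 0:
                wins[a] += 1
            else:
                wins[b] += 1
    return max(wins, key=wins.get)

# Toy usage: two features per variable (e.g. occurrence count, activity).
candidates = {1: [10.0, 0.5], 2: [3.0, 0.9], 3: [7.0, 0.7]}
weights = [0.1, 1.0]  # hypothetical learned weights
print(pick_split_variable(candidates, weights))  # -> 1
```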
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="http://www.lrde.epita.fr/dload/papers/nejati.20.cp.pdf">Paper</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@InProceedings{ nejati.20.cp,
author = {Saeed Nejati and Ludovic {Le Frioux} and Vijay Ganesh},
title = {A Machine Learning Based Splitting Heuristic for
Divide-and-Conquer Solvers},
booktitle = {Proceedings of the 26th International Conference on
Principles and Practice of Constraint Programming (CP'20)},
year = 2020,
month = sep,
volume = {12333},
pages = {899--916},
series = {Lecture Notes in Computer Science},
publisher = {Springer, Cham},
abstract = {In this paper, we present a machine learning based
splitting heuristic for divide-and-conquer parallel Boolean
SAT solvers. Splitting heuristics, whether they are
look-ahead or look-back, are designed using proxy metrics,
which when optimized, approximate the true metric of
minimizing solver runtime on sub-formulas resulting from a
split. The rationale for such metrics is that they have
been empirically shown to be excellent proxies for runtime
of solvers, in addition to being cheap to compute in an
online fashion. However, the design of traditional
splitting methods are often ad-hoc and do not leverage the
copious amounts of data that solvers generate. To address
the above-mentioned issues, we propose a machine learning
based splitting heuristic that leverages the features of
input formulas and data generated during the run of a
divide-and-conquer (DC) parallel solver. More precisely, we
reformulate the splitting problem as a ranking problem and
develop two machine learning models for pairwise ranking
and computing the minimum ranked variable. Our model can
compare variables according to their splitting quality,
which is based on a set of features extracted from
structural properties of the input formula, as well as
dynamic probing statistics, collected during the solver's
run. We derive the true labels through offline collection
of runtimes of a parallel DC solver on sample formulas and
variables within them. At each splitting point, we generate
a predicted ranking (pairwise or minimum rank) of candidate
variables and split the formula on the top variable. We
implemented our heuristic in the Painless parallel SAT
framework and evaluated our solver on a set of
cryptographic instances encoding the SHA-1 preimage as well
as SAT competition 2018 and 2019 benchmarks. We solve
significantly more instances compared to the baseline
Painless solver and outperform top divide-and-conquer
solvers from recent SAT competitions, such as Treengeling.
Furthermore, we are much faster than these top solvers on
cryptographic benchmarks.}
}</pre></small><small></small><p><small></small>
</p></div>Wed, 08 Sep 2021 08:57:27 GMTBotOn the Usefulness of Clause Strengthening in Parallel SAT Solving
https://www.lrde.epita.fr/wiki/Publications/vallade.20.nfm
https://www.lrde.epita.fr/wiki/Publications/vallade.20.nfm<div class="mw-parser-output"><p><a class="mw-selflink selflink">On the Usefulness of Clause Strengthening in Parallel SAT Solving</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd>Vincent Vallade, <a href="/wiki/User:Ludovic" title="User:Ludovic">Ludovic Le Frioux</a>, <a href="/wiki/User:Sbaarir" title="User:Sbaarir">Souheib Baarir</a>, Julien Sopena, Fabrice Kordon</dd>
<dt>Where</dt>
<dd>Proceedings of the 12th NASA Formal Methods Symposium (NFM'20)</dd>
<dt>Type</dt>
<dd>inproceedings</dd>
<dt>Publisher</dt>
<dd><a href="/index.php?title=Springer,_Cham&action=edit&redlink=1" class="new" title="Springer, Cham (page does not exist)">Springer, Cham</a></dd>
<dt>Keywords</dt>
<dd>Parallel satisfiability, tool, strengthening, clause sharing, portfolio, divide-and-conquer</dd>
<dt>Date</dt>
<dd>2020-08-01</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>In the context of parallel SATisfiability solving, this paper presents an implementation and evaluation of a clause strengthening algorithm. The developed component can be easily combined with (virtually) any CDCL-like SAT solver. Our implementation is integrated as a part of Painless, a generic and modular framework for building parallel SAT solvers.
</p>
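<p>One classic form of the clause strengthening mentioned in the abstract is self-subsuming resolution, sketched below in DIMACS-style integer literals. This is a generic illustration of the technique, not the component implemented in Painless.</p>

```python
# Self-subsuming resolution: given a clause C containing literal l, and
# another clause D such that D \ {-l} is a subset of C \ {l}, resolving
# C with D on l yields a clause that subsumes C, so l can be dropped
# from C. Literals are nonzero ints; negation is arithmetic negation.

def strengthen(c, clauses):
    """Return a (possibly) strengthened copy of clause c, using the
    other clauses as strengtheners."""
    c = set(c)
    changed = True
    while changed:
        changed = False
        for lit in list(c):
            for d in clauses:
                d = set(d)
                if -lit in d and d - {-lit} <= c - {lit}:
                    c.discard(lit)   # resolvent subsumes c: drop lit
                    changed = True
                    break
            if changed:
                break
    return c

# (x1 or x2 or x3) strengthened by (not x3 or x2): x3 can be dropped.
print(sorted(strengthen([1, 2, 3], [[-3, 2]])))  # -> [1, 2]
```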
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="http://www.lrde.epita.fr/dload/papers/vallade.20.nfm.pdf">Paper</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@InProceedings{ vallade.20.nfm,
author = {Vincent Vallade and Ludovic {Le Frioux} and Souheib Baarir
and Julien Sopena and Fabrice Kordon},
title = {On the Usefulness of Clause Strengthening in Parallel
{SAT} Solving},
booktitle = {Proceedings of the 12th NASA Formal Methods Symposium
(NFM'20)},
year = 2020,
month = aug,
volume = {12229},
pages = {222--229},
series = {Lecture Notes in Computer Science},
publisher = {Springer, Cham},
abstract = {In the context of parallel SATisfiability solving, this
paper presents an implementation and evaluation of a clause
strengthening algorithm. The developed component can be
easily combined with (virtually) any CDCL-like SAT solver.
Our implementation is integrated as a part of Painless, a
generic and modular framework for building parallel SAT
solvers.}
}</pre></small><small></small><p><small></small>
</p></div>Wed, 08 Sep 2021 08:58:24 GMTBotEuler Well-Composedness
https://www.lrde.epita.fr/wiki/Publications/boutry.20.iwcia1
https://www.lrde.epita.fr/wiki/Publications/boutry.20.iwcia1<div class="mw-parser-output"><p><a class="mw-selflink selflink">Euler Well-Composedness</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd><a href="/wiki/User:Nboutry" title="User:Nboutry">Nicolas Boutry</a>, Rocio Gonzalez-Diaz, Maria-Jose Jimenez, Eduardo Paluzo-Hildago</dd>
<dt>Where</dt>
<dd>Combinatorial Image Analysis: Proceedings of the 20th International Workshop, IWCIA 2020, Novi Sad, Serbia, July 16–18, 2020</dd>
<dt>Type</dt>
<dd>inproceedings</dd>
<dt>Publisher</dt>
<dd><a href="/index.php?title=Springer&action=edit&redlink=1" class="new" title="Springer (page does not exist)">Springer</a></dd>
<dt>Projects</dt>
<dd><a href="/wiki/Olena" title="Olena">Olena</a></dd>
<dt>Date</dt>
<dd>2020-07-21</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>In this paper, we define a new flavour of well-composedness, called Euler well-composedness, in the general setting of regular cell complexes: A regular cell complex is Euler well-composed if the Euler characteristic of the link of each boundary vertex is 1. A cell decomposition of a picture <i>I</i> is a pair of regular cell complexes (<i>K</i>(<i>I</i>), <i>K</i>(<i>Ī</i>)) such that <i>K</i>(<i>I</i>) (resp. <i>K</i>(<i>Ī</i>)) is a topological and geometrical model representing <i>I</i> (resp. its complementary, <i>Ī</i>). Then, a cell decomposition of a picture <i>I</i> is self-dual Euler well-composed if both <i>K</i>(<i>I</i>) and <i>K</i>(<i>Ī</i>) are Euler well-composed. We prove in this paper that, first, self-dual Euler well-composedness is equivalent to digital well-composedness in dimensions 2 and 3, and second, in dimension 4, self-dual Euler well-composedness implies digital well-composedness, though the converse is not true.
</p>
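<p>The Euler characteristic test at the heart of the definition above is an alternating sum of cell counts per dimension. The list-of-dimensions encoding of a link below is an illustrative simplification, not the paper's data structure.</p>

```python
# Euler well-composedness hinges on the Euler characteristic
# chi = n0 - n1 + n2 - ... of the link of each boundary vertex, where
# nk is the number of k-cells. With a link represented simply as the
# list of its cells' dimensions:

def euler_characteristic(cell_dims):
    """cell_dims: iterable of cell dimensions in the link."""
    return sum((-1) ** d for d in cell_dims)

def is_euler_well_composed(links):
    """A complex is Euler well-composed if every boundary-vertex link
    has Euler characteristic 1."""
    return all(euler_characteristic(link) == 1 for link in links)

# The link of a vertex on the boundary of a filled square in 2D is an
# arc: 3 vertices (0-cells) and 2 edges (1-cells), so chi = 3 - 2 = 1.
arc = [0, 0, 0, 1, 1]
print(euler_characteristic(arc))  # -> 1
```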
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="http://www.lrde.epita.fr/dload/papers/boutry.20.iwcia1.pdf">Paper</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@InProceedings{ boutry.20.iwcia1,
author = {Nicolas Boutry and Rocio Gonzalez-Diaz and Maria-Jose
Jimenez and Eduardo Paluzo-Hildago},
title = {Euler Well-Composedness},
booktitle = {Combinatorial Image Analysis: Proceedings of the 20th
International Workshop, IWCIA 2020, Novi Sad, Serbia, July
16--18, 2020},
year = 2020,
editor = {T. Lukic and R. P. Barneva and V. Brimkov and L. Comic and
N. Sladoje},
volume = {12148},
series = {Lecture Notes in Computer Science},
pages = {3--19},
publisher = {Springer},
doi = {10.1007/978-3-030-51002-2_1},
abstract = {In this paper, we define a new flavour of
well-composedness, called Euler well-composedness, in the
general setting of regular cell complexes: A regular cell
complex is Euler well-composed if the Euler characteristic
of the link of each boundary vertex is $1$. A cell
decomposition of a picture $I$ is a pair of regular cell
complexes $\big(K(I),K(\bar{I})\big)$ such that $K(I)$
(resp. $K(\bar{I})$) is a topological and geometrical model
representing $I$ (resp. its complementary, $\bar{I}$).
Then, a cell decomposition of a picture $I$ is self-dual
Euler well-composed if both $K(I)$ and $K(\bar{I})$ are
Euler well-composed. We prove in this paper that, first,
self-dual Euler well-composedness is equivalent to digital
well-composedness in dimension 2 and 3, and second, in
dimension 4, self-dual Euler well-composedness implies
digital well-composedness, though the converse is not true.}
}</pre></small><small></small><p><small></small>
</p></div>Fri, 05 Feb 2021 19:20:31 GMTBotA 4D Counter-Example Showing that DWCness Does Not Imply CWCness in n-D
https://www.lrde.epita.fr/wiki/Publications/boutry.20.iwcia2
https://www.lrde.epita.fr/wiki/Publications/boutry.20.iwcia2<div class="mw-parser-output"><p><a class="mw-selflink selflink">A 4D Counter-Example Showing that DWCness Does Not Imply CWCness in n-D</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd><a href="/wiki/User:Nboutry" title="User:Nboutry">Nicolas Boutry</a>, Rocio Gonzalez-Diaz, Laurent Najman, <a href="/wiki/User:Theo" title="User:Theo">Thierry Géraud</a></dd>
<dt>Where</dt>
<dd>Combinatorial Image Analysis: Proceedings of the 20th International Workshop, IWCIA 2020, Novi Sad, Serbia, July 16–18, 2020</dd>
<dt>Type</dt>
<dd>inproceedings</dd>
<dt>Publisher</dt>
<dd><a href="/index.php?title=Springer&action=edit&redlink=1" class="new" title="Springer (page does not exist)">Springer</a></dd>
<dt>Projects</dt>
<dd><a href="/wiki/Olena" title="Olena">Olena</a></dd>
<dt>Date</dt>
<dd>2020-07-21</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>In this paper, we prove that the two flavours of well-composedness called Continuous Well-Composedness (shortly CWCness), stating that the boundary of the continuous analog of a discrete set is a manifold, and Digital Well-Composedness (shortly DWCness), stating that a discrete set does not contain any critical configuration, are not equivalent in dimension 4. To prove this, we exhibit the example of a configuration of 8 tesseracts (4D cubes) sharing a common corner (vertex), which is DWC but not CWC. This result is surprising since we know that CWCness and DWCness are equivalent in 2D and 3D. To reach our goal, we use local homology.
</p>
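<p>For intuition about the critical configurations mentioned above: in 2D, digital well-composedness amounts to forbidding the two diagonal patterns in every 2×2 block. The sketch below checks this 2D condition; the paper's contribution is precisely that the analogous nD condition no longer implies CWCness in 4D.</p>

```python
# In 2D, a binary image is digitally well-composed iff no 2x2 block
# contains a "critical configuration", i.e. one of the two diagonal
# patterns [[1,0],[0,1]] or [[0,1],[1,0]].

def is_dwc_2d(img):
    """img: list of equal-length rows of 0/1 values."""
    for y in range(len(img) - 1):
        for x in range(len(img[0]) - 1):
            a, b = img[y][x], img[y][x + 1]
            c, d = img[y + 1][x], img[y + 1][x + 1]
            if (a, b, c, d) in ((1, 0, 0, 1), (0, 1, 1, 0)):
                return False
    return True

print(is_dwc_2d([[1, 1], [0, 1]]))  # -> True
print(is_dwc_2d([[1, 0], [0, 1]]))  # -> False (diagonal configuration)
```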
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="http://www.lrde.epita.fr/dload/papers/boutry.20.iwcia2.pdf">Paper</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@InProceedings{ boutry.20.iwcia2,
author = {Nicolas Boutry and Rocio Gonzalez-Diaz and Laurent Najman
and Thierry G\'eraud},
title = {A {4D} Counter-Example Showing that {DWCness} Does Not
Imply {CWCness} in $n$-{D}},
booktitle = {Combinatorial Image Analysis: Proceedings of the 20th
International Workshop, IWCIA 2020, Novi Sad, Serbia, July
16--18, 2020},
year = 2020,
editor = {T. Lukic and R. P. Barneva and V. Brimkov and L. Comic and
N. Sladoje},
volume = {12148},
series = {Lecture Notes in Computer Science},
pages = {73--87},
publisher = {Springer},
doi = {10.1007/978-3-030-51002-2_6},
abstract = {In this paper, we prove that the two flavours of
well-composedness called Continuous Well-Composedness
(shortly CWCness), stating that the boundary of the
continuous analog of a discrete set is a manifold, and
Digital Well-Composedness (shortly DWCness), stating that a
discrete set does not contain any critical configuration,
are not equivalent in dimension 4. To prove this, we
exhibit the example of a configuration of 8 tesseracts (4D
cubes) sharing a common corner (vertex), which is DWC but
not CWC. This result is surprising since we know that
CWCness and DWCness are equivalent in 2D and 3D. To reach
our goal, we use local homology.}
}</pre></small><small></small><p><small></small>
</p></div>Tue, 24 Nov 2020 10:46:41 GMTBotNon-iterative methods for image improvement in digital holography of the retina
https://www.lrde.epita.fr/wiki/Publications/rivet.20.phd
https://www.lrde.epita.fr/wiki/Publications/rivet.20.phd<div class="mw-parser-output"><p><a class="mw-selflink selflink">Non-iterative methods for image improvement in digital holography of the retina</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd><a href="/wiki/User:Jrivet" title="User:Jrivet">Julie Rivet</a></dd>
<dt>Place</dt>
<dd>Paris, France</dd>
<dt>Type</dt>
<dd>phdthesis</dd>
<dt>Projects</dt>
<dd><a href="/wiki/Olena" title="Olena">Olena</a></dd>
<dt>Date</dt>
<dd>2020-07-17</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>With the increase of the number of people with moderate to severe visual impairment, monitoring and treatment of vision disorders have become major issues in medicine today. At the Quinze-Vingts national ophthalmology hospital in Paris, two optical benches have been set up in recent years to develop two real-time digital holography techniques for the retina: holographic optical coherence tomography (OCT) and laser Doppler holography. The first reconstructs three-dimensional images, while the second allows visualization of blood flow in vessels. Besides problems inherent to the imaging system itself, optical devices are subject to external disturbance, which also brings imaging difficulties and loss of accuracy. The main obstacles these technologies face are eye motion and eye aberrations. In this thesis, we have introduced several methods for image quality improvement in digital holography, and validated them experimentally. The resolution of holographic images has been improved by robust non-iterative methods: lateral and axial tracking and compensation of translation movements, and measurement and compensation of optical aberrations. This allows us to be optimistic that structures on holographic images of the retina will be more visible and sharper, which could ultimately provide very valuable information to clinicians.
</p>
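<p>The non-iterative lateral tracking of translations mentioned in the abstract can be done generically by phase correlation; the sketch below illustrates that general technique with NumPy and is not the thesis's actual holographic pipeline.</p>

```python
import numpy as np

# Generic, non-iterative estimation of a lateral (in-plane) shift
# between two frames by phase correlation: keep only the phase of the
# cross-power spectrum, whose inverse FFT peaks at the translation.

def estimate_shift(a, b):
    """Return integer (dy, dx) such that np.roll(b, (dy, dx), (0, 1))
    best matches a, assuming a purely circular translation."""
    f = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    f /= np.abs(f) + 1e-12            # keep only the phase
    corr = np.fft.ifft2(f).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape                 # map peak to signed shifts
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (3, -5), axis=(0, 1))
print(estimate_shift(img, shifted))   # -> (-3, 5)
```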
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="http://www.lrde.epita.fr/dload/papers/rivet.20.phd.pdf">Paper</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@PhDThesis{ rivet.20.phd,
author = {Julie Rivet},
title = {Non-iterative methods for image improvement in digital
holography of the retina},
school = {Sorbonne Universit\'e},
year = 2020,
address = {Paris, France},
month = jul,
abstract = {With the increase of the number of people with moderate to
severe visual impairment, monitoring and treatment of
vision disorders have become major issues in medicine
today. At the Quinze-Vingts national ophthalmology hospital
in Paris, two optical benches have been settled in recent
years to develop two real-time digital holography
techniques for the retina: holographic optical coherence
tomography (OCT) and laser Doppler holography. The first
reconstructs three-dimensional images, while the second
allows visualization of blood flow in vessels. Besides
problems inherent to the imaging system itself, optical
devices are subject to external disturbance, bringing also
difficulties in imaging and loss of accuracy. The main
obstacles these technologies face are eye motion and eye
aberrations. In this thesis, we have introduced several
methods for image quality improvement in digital
holography, and validated them experimentally. The
resolution of holographic images has been improved by
robust non-iterative methods: lateral and axial tracking
and compensation of translation movements, and measurement
and compensation of optical aberrations. This allows us to
be optimistic that structures on holographic images of the
retina will be more visible and sharper, which could
ultimately provide very valuable information to
clinicians.}
}</pre></small><small></small><p><small></small>
</p></div>Wed, 08 Sep 2021 08:58:02 GMTBotNewsEntry (2020/07/17)
https://www.lrde.epita.fr/wiki/NewsEntry_(2020/07/17)
https://www.lrde.epita.fr/wiki/NewsEntry_(2020/07/17)<div class="mw-parser-output"><p><a class="mw-selflink selflink">NewsEntry (2020/07/17)</a>
</p></div><div class="mw-parser-output"><table class="wikitable">
<tbody><tr>
<th>Title
</th>
<td>Julie Rivet defends her <a href="/wiki/Affiche-these-JR" title="Affiche-these-JR"> PhD thesis</a> "Non-iterative methods for image improvement in digital holography of the retina" at EPITA at 2 pm.
</td></tr>
<tr>
<th>Sub-Title
</th>
<td>
</td></tr>
<tr>
<th>Date
</th>
<td>
<p>2020/07/17
</p>
</td></tr></tbody></table>
</div>Tue, 30 Jun 2020 09:53:28 GMTDanielaNewsEntry (2020/07/10)
https://www.lrde.epita.fr/wiki/NewsEntry_(2020/07/10)
https://www.lrde.epita.fr/wiki/NewsEntry_(2020/07/10)<div class="mw-parser-output"><p><a class="mw-selflink selflink">NewsEntry (2020/07/10)</a>
</p></div><div class="mw-parser-output"><table class="wikitable">
<tbody><tr>
<th>Title
</th>
<td>Didier Verna defends his <a href="/wiki/Affiche-these-HDR-DV" title="Affiche-these-HDR-DV"> Habilitation thesis</a> at EPITA at 2 pm.
</td></tr>
<tr>
<th>Sub-Title
</th>
<td>
</td></tr>
<tr>
<th>Date
</th>
<td>
<p>2020/07/10
</p>
</td></tr></tbody></table>
</div>Mon, 06 Jul 2020 10:08:06 GMTDaniela(Dynamic (Programming Paradigms)) ;; Performance and Expressivity
https://www.lrde.epita.fr/wiki/Publications/verna.20.hdr
https://www.lrde.epita.fr/wiki/Publications/verna.20.hdr<div class="mw-parser-output"><p><a class="mw-selflink selflink">(Dynamic (Programming Paradigms)) ;; Performance and Expressivity</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd><a href="/wiki/User:Didier" title="User:Didier">Didier Verna</a></dd>
<dt>Type</dt>
<dd>phdthesis</dd>
<dt>Date</dt>
<dd>2020-07-10</dd></dl>
</div>
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="http://www.lrde.epita.fr/dload/papers/verna.20.hdr.pdf">Paper</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@PhDThesis{ verna.20.hdr,
author = {Didier Verna},
title = {(Dynamic (Programming Paradigms)) ;; Performance and
Expressivity},
school = {Sorbonne Universit\'e},
type = {Habilitation Thesis},
month = jul,
year = 2020,
doi = {10.5281/zenodo.4244393}
}</pre></small><small></small><p><small></small>
</p></div>Wed, 04 Nov 2020 13:57:46 GMTBotPractical “Paritizing” of Emerson–Lei Automata
https://www.lrde.epita.fr/wiki/Publications/renkin.20.atva
https://www.lrde.epita.fr/wiki/Publications/renkin.20.atva<div class="mw-parser-output"><p><a class="mw-selflink selflink">Practical “Paritizing” of Emerson–Lei Automata</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd><a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/~frenkin/">Florian Renkin</a>, <a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/~adl/">Alexandre Duret-Lutz</a>, <a href="/wiki/User:Adrien" title="User:Adrien">Adrien Pommellet</a></dd>
<dt>Where</dt>
<dd>Proceedings of the 18th International Symposium on Automated Technology for Verification and Analysis (ATVA'20)</dd>
<dt>Type</dt>
<dd>inproceedings</dd>
<dt>Publisher</dt>
<dd><a href="/index.php?title=Springer&action=edit&redlink=1" class="new" title="Springer (page does not exist)">Springer</a></dd>
<dt>Keywords</dt>
<dd>Spot</dd>
<dt>Date</dt>
<dd>2020-07-07</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>We introduce a new algorithm that takes a <i>Transition-based Emerson-Lei Automaton</i> (TELA), that is, an <i>ω</i>-automaton whose acceptance condition is an arbitrary Boolean formula on sets of transitions to be seen infinitely or finitely often, and converts it into a <i>Transition-based Parity Automaton</i> (TPA). To reduce the size of the output TPA, the algorithm combines and optimizes two procedures based on a <i>latest appearance record</i> principle, and introduces a <i>partial degeneralization</i>. Our motivation is to use this algorithm to improve our LTL synthesis tool, where producing deterministic parity automata is an intermediate step.
</p>
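<p>The latest appearance record principle mentioned above can be sketched in a few lines: keep the acceptance marks (colors) ordered by most recent appearance, and on each step report where the seen color came from. This is a minimal illustration of the general LAR idea, not the optimized construction of the paper.</p>

```python
# Minimal "latest appearance record" (LAR) step: move the color just
# seen to the front of the record and return the index it moved from;
# paritization constructions derive a parity priority from that index.

def lar_step(record, color):
    """Move `color` to the front of `record`; return (new_record, pos),
    where pos is the old index of `color` (len(record) if unseen)."""
    if color in record:
        pos = record.index(color)
        record = [color] + record[:pos] + record[pos + 1:]
    else:
        pos = len(record)
        record = [color] + record
    return record, pos

record = []
for c in [0, 1, 0, 2, 1]:
    record, pos = lar_step(record, c)
    print(record, pos)
# Final record: [1, 2, 0] -- colors ordered by latest appearance.
```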
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="http://www.lrde.epita.fr/dload/papers/renkin.20.atva.pdf">Paper</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@InProceedings{ renkin.20.atva,
author = {Florian Renkin and Alexandre Duret-Lutz and Adrien
Pommellet},
title = {Practical ``Paritizing'' of {E}merson--{L}ei Automata},
booktitle = {Proceedings of the 18th International Symposium on
Automated Technology for Verification and Analysis
(ATVA'20)},
year = {2020},
volume = {12302},
series = {Lecture Notes in Computer Science},
pages = {127--143},
month = oct,
publisher = {Springer},
abstract = {We introduce a new algorithm that takes a
\emph{Transition-based Emerson-Lei Automaton} (TELA), that
is, an $\omega$-automaton whose acceptance condition is an
arbitrary Boolean formula on sets of transitions to be seen
infinitely or finitely often, and converts it into a
\emph{Transition-based Parity Automaton} (TPA). To reduce
the size of the output TPA, the algorithm combines and
optimizes two procedures based on a \emph{latest appearance
record} principle, and introduces a \emph{partial
degeneralization}. Our motivation is to use this algorithm
to improve our LTL synthesis tool, where producing
deterministic parity automata is an intermediate step.},
doi = {10.1007/978-3-030-59152-6_7}
}</pre></small><small></small><p><small></small>
</p></div>Thu, 05 Nov 2020 07:58:04 GMT (Bot)
Improving swarming using genetic algorithms
https://www.lrde.epita.fr/wiki/Publications/renault.20.isse
https://www.lrde.epita.fr/wiki/Publications/renault.20.isse<div class="mw-parser-output"><p><a class="mw-selflink selflink">Improving swarming using genetic algorithms</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd><a href="/wiki/User:Renault" title="User:Renault">Etienne Renault</a></dd>
<dt>Journal</dt>
<dd>Innovations in Systems and Software Engineering: a NASA journal (ISSE)</dd>
<dt>Type</dt>
<dd>article</dd>
<dt>Publisher</dt>
<dd><a href="/index.php?title=Springer&action=edit&redlink=1" class="new" title="Springer (page does not exist)">Springer</a></dd>
<dt>Projects</dt>
<dd><a href="/wiki/Spot" title="Spot">Spot</a></dd>
<dt>Date</dt>
<dd>2020-06-02</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>The verification of temporal properties against a given system may require the exploration of its full state space. In explicit model checking, this exploration uses a depth-first search and can be achieved with multiple randomized threads to increase performance. Nonetheless, the topology of the state space and the exploration order can cap the speedup beyond a certain number of threads. This paper proposes a new technique that aims to tackle this limitation by generating artificial initial states, using genetic algorithms. Threads are then launched from these states and thus explore different parts of the state space. Our prototype implementation is 10% faster than state-of-the-art algorithms on a general benchmark and 40% faster on a specialized benchmark. Even if these results fall short of the order-of-magnitude improvement we expected, they are still encouraging since they suggest a new way to handle existing limitations. Empirically, our technique seems well suited for "linear topology", i.e., the one we can obtain when combining model checking algorithms with partial-order reduction techniques.
</p>
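The genetic-algorithm loop described above (select, cross over, mutate to breed new candidate initial states) can be sketched generically. The bit-vector encoding and the `sum` fitness used below are placeholders for illustration only, not the paper's actual encoding of model-checking states or its fitness function.

```python
import random

def evolve(population, fitness, generations=50, mut_rate=0.05, seed=0):
    """Generic GA over equal-length bit vectors with elitism:
    keep the fitter half, breed the rest by one-point crossover
    plus bit-flip mutation."""
    rng = random.Random(seed)
    pop = list(population)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: len(pop) // 2]          # keep the fitter half
        children = []
        while len(elite) + len(children) < len(pop):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, len(a))    # one-point crossover
            child = a[:cut] + b[cut:]
            # bit-flip mutation with probability mut_rate per bit
            child = [bit ^ (rng.random() < mut_rate) for bit in child]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

Because the fitter half is carried over unchanged, the best fitness in the population never decreases; in the paper's setting, the individuals produced this way would serve as the artificial initial states handed to the exploration threads.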
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="http://www.lrde.epita.fr/dload/papers/renault.20.isse.pdf">Paper</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@Article{ renault.20.isse,
author = {Etienne Renault},
title = {Improving swarming using genetic algorithms},
journal = {Innovations in Systems and Software Engineering: a NASA
journal (ISSE)},
year = 2020,
volume = {16},
number = {2},
pages = {143--159},
month = jun,
publisher = {Springer},
abstract = { The verification of temporal properties against a given
system may require the exploration of its full state space.
In explicit model checking, this exploration uses a
depth-first search and can be achieved with multiple
randomized threads to increase performance. Nonetheless,
the topology of the state space and the exploration order
can cap the speedup up to a certain number of threads. This
paper proposes a new technique that aims to tackle this
limitation by generating artificial initial states, using
genetic algorithms. Threads are then launched from these
states and thus explore different parts of the state space.
Our prototype implementation is 10\% faster than
state-of-the-art algorithms on a general benchmark and 40\%
on a specialized benchmark. Even if we expected a decrease
in an order of magnitude, these results are still
encouraging since they suggest a new way to handle existing
limitations. Empirically, our technique seems well suited
for "linear topology", i.e., the one we can obtain when
combining model checking algorithms with partial-order
reduction techniques. },
doi = {10.1007/s11334-020-00362-7}
}</pre></small><small></small><p><small></small>
</p></div>Wed, 15 Jul 2020 14:48:46 GMT (Bot)
A New Minimum Barrier Distance for Multivariate Images with Applications to Salient Object Detection, Shortest Path Finding, and Segmentation
https://www.lrde.epita.fr/wiki/Publications/movn.20.cviu
https://www.lrde.epita.fr/wiki/Publications/movn.20.cviu<div class="mw-parser-output"><p><a class="mw-selflink selflink">A New Minimum Barrier Distance for Multivariate Images with Applications to Salient Object Detection, Shortest Path Finding, and Segmentation</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd><a href="/wiki/User:Movn" title="User:Movn">Minh Ôn Vũ Ngoc</a>, <a href="/wiki/User:Nboutry" title="User:Nboutry">Nicolas Boutry</a>, <a href="/wiki/User:Jonathan" title="User:Jonathan">Jonathan Fabrizio</a>, <a href="/wiki/User:Theo" title="User:Theo">Thierry Géraud</a></dd>
<dt>Journal</dt>
<dd>Computer Vision and Image Understanding</dd>
<dt>Type</dt>
<dd>article</dd>
<dt>Projects</dt>
<dd><a href="/wiki/Olena" title="Olena">Olena</a></dd>
<dt>Keywords</dt>
<dd>Image</dd>
<dt>Date</dt>
<dd>2020-06-02</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>Distance transforms and the saliency maps they induce are widely used in image processing, computer vision, and pattern recognition. One of the most commonly used distance transforms is the geodesic one. Unfortunately, this distance does not always achieve satisfying results on noisy or blurred images. Recently, a new (pseudo-)distance, called the minimum barrier distance (MBD), which is more robust to pixel variations, has been introduced. Some years later, Géraud et al. proposed a good and fast-to-compute approximation of this distance: the Dahu pseudo-distance. Since this distance was initially developed for grayscale images, we propose here an extension of this transform to multivariate images; we call it the vectorial Dahu pseudo-distance. An efficient way to compute it is provided in this paper. Besides, we provide benchmarks demonstrating that the vectorial Dahu pseudo-distance is more robust and more competitive than other MBD-based distances, which shows how promising this distance is for salient object detection, shortest path finding, and object segmentation.
</p>
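The "barrier" of a path is the spread between the largest and smallest pixel value encountered along it, and the minimum barrier distance minimizes that spread over all admissible paths. A brute-force toy restricted to monotone (right/down) paths on a tiny grid makes the definition concrete; it illustrates the distance itself, not the paper's efficient Dahu computation.

```python
from itertools import product

def barrier_monotone(grid):
    """Minimum 'barrier' (max - min of values seen) over all
    right/down paths from the top-left to the bottom-right corner."""
    rows, cols = len(grid), len(grid[0])
    best = None
    # enumerate every right/down step sequence of the right length
    for steps in product("RD", repeat=rows + cols - 2):
        if steps.count("D") != rows - 1:
            continue  # does not end at the bottom-right corner
        r, c = 0, 0
        lo = hi = grid[r][c]
        for s in steps:
            r, c = (r + 1, c) if s == "D" else (r, c + 1)
            lo, hi = min(lo, grid[r][c]), max(hi, grid[r][c])
        best = hi - lo if best is None else min(best, hi - lo)
    return best
```

On `[[1, 9], [2, 3]]` the path through 9 has barrier 8 while the path through 2 has barrier 2, so the distance is 2: the barrier rewards paths that stay within a narrow value band, which is what makes MBD-style distances robust to noise.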
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/movn.20.cviu.pdf">Paper</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@Article{ movn.20.cviu,
author = {Minh {\^On V\~{u} Ng\d{o}c} and Nicolas Boutry and
Jonathan Fabrizio and Thierry G\'eraud},
title = {A New Minimum Barrier Distance for Multivariate Images
with Applications to Salient Object Detection, Shortest
Path Finding, and Segmentation},
journal = {Computer Vision and Image Understanding},
year = {2020},
month = aug,
volume = {197--198},
doi = {10.1016/j.cviu.2020.102993},
abstract = {Distance transforms and the saliency maps they induce are
widely used in image processing, computer vision, and
pattern recognition. One of the most commonly used distance
transform is the geodesic one. Unfortunately, this distance
does not always achieve satisfying results on noisy or
blurred images. Recently, a new (pseudo-)distance, called
the minimum barrier distance (MBD), more robust to pixel
variations, has been introduced. Some years after, G\'eraud
et al. have proposed a good and fast-to compute
approximation of this distance: the Dahu pseudo-distance.
Since this distance was initially developped for grayscale
images, we propose here an extension of this transform to
multivariate images; we call it vectorial Dahu
pseudo-distance. An efficient way to compute it is provided
in this paper. Besides, we provide benchmarks demonstrating
how much the vectorial Dahu pseudo-distance is more robust
and competitive compared to other MB-based distances, which
shows how much this distance is promising for salient
object detection, shortest path finding, and object
segmentation.}
}</pre></small><small></small><p><small></small>
</p></div>Tue, 24 Nov 2020 10:47:42 GMT (Bot)
NewsEntry (2020/06/02)
https://www.lrde.epita.fr/wiki/NewsEntry_(2020/06/02)
https://www.lrde.epita.fr/wiki/NewsEntry_(2020/06/02)<div class="mw-parser-output"><p><a class="mw-selflink selflink">NewsEntry (2020/06/02)</a>
</p></div><div class="mw-parser-output"><table class="wikitable">
<tbody><tr>
<th>Title
</th>
<td>EPITA presents a webinar with Microsoft at <a rel="nofollow" class="external text" href="http://www.impact-ai.fr/education/exploria/">Explor'IA</a> on Artificial Intelligence and Medical Image Analysis.
</td></tr>
<tr>
<th>Sub-Title
</th>
<td>
<p>In this webinar, Nicolas Boutry from LRDE presents how to segment white and grey matter in multi-modal 3D MRI brain images of 6-month-old children using Convolutional Neural Networks (CNNs). His demonstration is based on a dataset from the <a rel="nofollow" class="external text" href="http://iseg2017.web.unc.edu/">iSeg2017 challenge</a>.
</p>
</td></tr>
<tr>
<th>Date
</th>
<td>
<p>2020/06/02
</p>
</td></tr></tbody></table>
</div>Mon, 18 May 2020 16:23:30 GMT (Daniela)
Community and LBD-based Clause Sharing Policy for Parallel SAT Solving
https://www.lrde.epita.fr/wiki/Publications/vallade.20.sat
https://www.lrde.epita.fr/wiki/Publications/vallade.20.sat<div class="mw-parser-output"><p><a class="mw-selflink selflink">Community and LBD-based Clause Sharing Policy for Parallel SAT Solving</a>
</p></div><div class="mw-parser-output"><div class="sideBox">
<dl><dt>Authors</dt>
<dd>Vincent Vallade, <a href="/wiki/User:Ludovic" title="User:Ludovic">Ludovic Le Frioux</a>, <a href="/wiki/User:Sbaarir" title="User:Sbaarir">Souheib Baarir</a>, Julien Sopena, Vijay Ganesh, Fabrice Kordon</dd>
<dt>Where</dt>
<dd>Proceedings of the 23rd International Conference on Theory and Applications of Satisfiability Testing (SAT'20)</dd>
<dt>Type</dt>
<dd>inproceedings</dd>
<dt>Publisher</dt>
<dd><a href="/index.php?title=Springer,_Cham&action=edit&redlink=1" class="new" title="Springer, Cham (page does not exist)">Springer, Cham</a></dd>
<dt>Keywords</dt>
<dd>Parallel satisfiability, clause sharing, community structure</dd>
<dt>Date</dt>
<dd>2020-06-01</dd></dl>
</div>
<h2><span class="mw-headline" id="Abstract">Abstract</span></h2>
<p>Modern parallel SAT solvers rely heavily on effective clause sharing policies for their performance. The core problem being addressed by these policies can be succinctly stated as "the problem of identifying high-quality learnt clauses" that, when shared between the worker nodes of parallel solvers, result in improved performance. The term "high-quality clauses" is often defined in terms of metrics that solver designers have identified over years of empirical study. Some of the more well-known metrics to identify high-quality clauses for sharing include clause length, literal block distance (LBD), and clause usage in propagation. In this paper, we propose a new metric aimed at identifying high-quality learnt clauses and a concomitant clause-sharing policy based on a combination of LBD and community structure of Boolean formulas. The concept of community structure has been proposed as a possible explanation for the extraordinary performance of SAT solvers on industrial instances. Hence, it is a natural candidate as a basis for a metric to identify high-quality clauses. To be more precise, our metric identifies clauses that have low LBD and low community number as ones that are high-quality for applications such as verification and testing. The community number of a clause C measures the number of different communities of a formula that the variables in C span. We perform extensive empirical analysis of our metric and clause-sharing policy, and show that our method significantly outperforms state-of-the-art techniques on the benchmark from the parallel track of the last four SAT competitions.
</p>
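The two quality measures combine into a simple sharing filter. The sketch below assumes DIMACS-style signed literals, a map from each variable to its current decision level, and a precomputed variable-to-community map; the thresholds are illustrative defaults, not the paper's tuned values.

```python
def lbd(clause, level):
    """Literal Block Distance: number of distinct decision levels
    among the clause's literals (lower is better)."""
    return len({level[abs(lit)] for lit in clause})

def community_number(clause, community):
    """Number of distinct communities spanned by the clause's variables."""
    return len({community[abs(lit)] for lit in clause})

def should_share(clause, level, community, max_lbd=4, max_comm=2):
    """Share only clauses that score well on both measures."""
    return (lbd(clause, level) <= max_lbd
            and community_number(clause, community) <= max_comm)
```

A worker would run `should_share` on each learnt clause before exporting it to its peers; the combined filter keeps clauses that are both logically tight (low LBD) and topologically local (low community number).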
<h2><span class="mw-headline" id="Documents">Documents</span></h2>
<ul><li><a rel="nofollow" class="external text" href="http://www.lrde.epita.fr/dload/papers/vallade.20.sat.pdf">Paper</a></li></ul>
<h2><span id="Bibtex_(lrde.bib)"></span><span class="mw-headline" id="Bibtex_.28lrde.bib.29">Bibtex (<a rel="nofollow" class="external text" href="https://www.lrde.epita.fr/dload/papers/lrde.bib">lrde.bib</a>)</span></h2>
<p><small>
</small></p><small><pre>@InProceedings{ vallade.20.sat,
author = {Vincent Vallade and Ludovic {Le Frioux} and Souheib Baarir
and Julien Sopena and Vijay Ganesh and Fabrice Kordon},
title = {Community and {LBD}-based Clause Sharing Policy for
Parallel {SAT} Solving},
booktitle = {Proceedings of the 23rd International Conference on Theory
and Applications of Satisfiability Testing (SAT'20)},
year = 2020,
month = jun,
volume = {12178},
pages = {11--27},
series = {Lecture Notes in Computer Science},
publisher = {Springer, Cham},
abstract = {Modern parallel SAT solvers rely heavily on effective
clause sharing policies for their performance. The core
problem being addressed by these policies can be succinctly
stated as "the problem of identifying high-quality learnt
clauses" that when shared between the worker nodes of
parallel solvers results in improved performance than
otherwise. The term "high-quality clauses" is often defined
in terms of metrics that solver designers have identified
over years of empirical study. Some of the more well-known
metrics to identify high-quality clauses for sharing
include clause length, literal block distance (LBD), and
clause usage in propagation. In this paper, we propose a
new metric aimed at identifying high-quality learnt clauses
and a concomitant clause-sharing policy based on a
combination of LBD and community structure of Boolean
formulas. The concept of community structure has been
proposed as a possible explanation for the extraordinary
performance of SAT solvers in industrial instances. Hence,
it is a natural candidate as a basis for a metric to
identify high-quality clauses. To be more precise, our
metric identifies clauses that have low LBD and low
community number as ones that are high-quality for
applications such as verification and testing. The
community number of a clause C measures the number of
different communities of a formula that the variables in C
span. We perform extensive empirical analysis of our metric
and clause-sharing policy, and show that our method
significantly outperforms state-of-the-art techniques on
the benchmark from the parallel track of the last four SAT
competitions.}
}</pre></small><small></small><p><small></small>
</p></div>Wed, 08 Sep 2021 08:58:28 GMT (Bot)