LRDE - New pages [en] (Atom feed, retrieved 2024-03-29T09:48:22Z, From LRDE, MediaWiki 1.35.3)
https://www.lrde.epita.fr/index.php?title=Special:NewPages&feed=atom&hideredirs=1&limit=100&offset=&namespace=0&username=&tagfilter=&size-mode=max&size=0

Publications/boutry.23.jmiv.2 (by Bot, 2023-07-29T10:17:30Z)
https://www.lrde.epita.fr/wiki/Publications/boutry.23.jmiv.2
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2023-01-01<br />
| authors = Nicolas Boutry<br />
| title = Introducing PC n-Manifolds and P-well-composedness in Partially Ordered Sets<br />
| journal = Journal of Mathematical Imaging and Vision<br />
| abstract = In discrete topology, discrete surfaces are well-known for their strong topological and regularity properties. Their definition is recursive, and checking if a poset is a discrete surface is tractable. Their applications are numerous: when domain unicoherence is ensured, they give access to the tree of shapes, and then to filtering in the shape space (shapings); they also lead to Laplacian zero-crossing extraction, to brain tumor segmentation, and many other applications related to mathematical morphology. They have many advantages in digital geometry and digital topology since discrete surfaces do not have any pinches (and then the underlying polyhedron of their geometric realization can be parameterized). However, contrary to topological manifolds known in continuous topology, discrete surfaces do not have any boundary, which is not always realizable in practice (finite hyper-rectangles cannot be discrete surfaces due to their non-empty boundary). For this reason, we propose the four following contributions: (1) we introduce a new definition of boundary, called border, based on the definition of discrete surfaces, and which allows us to delimit any partially ordered set whenever it is not embedded in a greater ambient space, (2) we introduce <math>P</math>-well-composedness similar to well-composedness in the sense of Alexandrov but based on borders, (3) we propose new (possibly geometrical) structures called (smooth) <math>n</math>-PCM's which represent almost the same regularity as discrete surfaces and that are tractable thanks to their recursive definition, and (4) we prove several fundamental theorems relative to PCM's and their relations with discrete surfaces. We deeply believe that these new <math>n</math>-dimensional structures are promising for the discrete topology and digital geometry fields.<br />
| type = article<br />
| id = boutry.23.jmiv.2<br />
| bibtex = <br />
@Article<nowiki>{</nowiki> boutry.23.jmiv.2,<br />
author = <nowiki>{</nowiki>Nicolas Boutry<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>Introducing PC $n$-Manifolds and $P$-well-composedness in<br />
Partially Ordered Sets<nowiki>}</nowiki>,<br />
year = <nowiki>{</nowiki>2023<nowiki>}</nowiki>,<br />
journal = <nowiki>{</nowiki>Journal of Mathematical Imaging and Vision<nowiki>}</nowiki>,<br />
abstract = <nowiki>{</nowiki>In discrete topology, discrete surfaces are well-known for<br />
their strong topological and regularity properties. Their<br />
definition is recursive, and checking if a poset is a<br />
discrete surface is tractable. Their applications are<br />
numerous: when domain unicoherence is ensured, they give<br />
access to the tree of shapes, and then to filtering in the<br />
shape space (shapings); they also lead to Laplacian<br />
zero-crossing extraction, to brain tumor segmentation, and<br />
many other applications related to mathematical morphology.<br />
They have many advantages in digital geometry and digital<br />
topology since discrete surfaces do not have any pinches<br />
(and then the underlying polyhedron of their geometric<br />
realization can be parameterized). However, contrary to<br />
topological manifolds known in continuous topology,<br />
discrete surfaces do not have any boundary, which is not<br />
always realizable in practice (finite hyper-rectangles<br />
cannot be discrete surfaces due to their non-empty<br />
boundary). For this reason, we propose the four following<br />
contributions: (1) we introduce a new definition of<br />
boundary, called border, based on the definition of<br />
discrete surfaces, and which allows us to delimit any<br />
partially ordered set whenever it is not embedded in a<br />
greater ambient space, (2) we introduce<br />
$P$-well-com\-po\-sed\-ness similar to<br />
well-com\-po\-sed\-ness in the sense of Alexandrov but<br />
based on borders, (3) we propose new (possibly geometrical)<br />
structures called (smooth) $n$-PCM's which represent almost<br />
the same regularity as discrete surfaces and that are<br />
tractable thanks to their recursive definition, and (4) we<br />
prove several fundamental theorems relative to PCM's and<br />
their relations with discrete surfaces. We deeply believe<br />
that these new $n$-dimensional structures are promising for<br />
the discrete topology and digital geometry fields. <nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>

Publications/boutry.23.jmiv (by Bot, 2023-06-05T10:00:42Z)
https://www.lrde.epita.fr/wiki/Publications/boutry.23.jmiv
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2023-01-01<br />
| authors = Gilles Bertrand, Nicolas Boutry, Laurent Najman<br />
| title = Discrete Morse Functions and Watersheds<br />
| journal = Journal of Mathematical Imaging and Vision (Special Edition)<br />
| abstract = Any watershed, when defined on a stack on a normal pseudomanifold of dimension <math>d</math>, is a pure <math>(d-1)</math>-subcomplex that satisfies a drop-of-water principle. In this paper, we introduce Morse stacks, a class of functions that are equivalent to discrete Morse functions. We show that the watershed of a Morse stack on a normal pseudomanifold is uniquely defined, and can be obtained with a linear-time algorithm relying on a sequence of collapses. Last, we prove that such a watershed is the cut of the unique minimum spanning forest, rooted in the minima of the Morse stack, of the facet graph of the pseudomanifold.<br />
| type = article<br />
| id = boutry.23.jmiv<br />
| bibtex = <br />
@Article<nowiki>{</nowiki> boutry.23.jmiv,<br />
author = <nowiki>{</nowiki>Gilles Bertrand and Nicolas Boutry and Laurent Najman<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>Discrete Morse Functions and Watersheds<nowiki>}</nowiki>,<br />
year = <nowiki>{</nowiki>2023<nowiki>}</nowiki>,<br />
journal = <nowiki>{</nowiki>Journal of Mathematical Imaging and Vision (Special<br />
Edition)<nowiki>}</nowiki>,<br />
abstract = <nowiki>{</nowiki>Any watershed, when defined on a stack on a normal<br />
pseudomanifold of dimension $d$, is a pure<br />
$(d-1)$-subcomplex that satisfies a drop-of-water<br />
principle. In this paper, we introduce Morse stacks, a<br />
class of functions that are equivalent to discrete Morse<br />
functions. We show that the watershed of a Morse stack on a<br />
normal pseudomanifold is uniquely defined, and can be<br />
obtained with a linear-time algorithm relying on a sequence<br />
of collapses. Last, we prove that such a watershed is the<br />
cut of the unique minimum spanning forest, rooted in the<br />
minima of the Morse stack, of the facet graph of the<br />
pseudomanifold. <nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>

Publications/tschora.23.ida (by Bot, 2023-04-04T05:30:11Z)
https://www.lrde.epita.fr/wiki/Publications/tschora.23.ida
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2023-04-10<br />
| authors = Léonard Tschora, Erwan Pierre, Marc Plantevit, Céline Robardet<br />
| editors = Bruno Crémilleux, Sibylle Hess, Siegfried Nijssen<br />
| title = Forecasting Electricity Prices: An Optimize Then Predict-Based Approach<br />
| booktitle = Advances in Intelligent Data Analysis XXI<br />
| publisher = Springer Nature Switzerland<br />
| address = Cham<br />
| pages = 446 to 458<br />
| abstract = We are interested in electricity price forecasting at the European scale. The electricity market is ruled by price regulation mechanisms that make it possible to adjust production to demand, as electricity is difficult to store. These mechanisms ensure the highest price for producers, the lowest price for consumers and a zero energy balance by setting day-ahead prices, i.e. prices for the next 24h. Most studies have focused on learning increasingly sophisticated models to predict the next day's 24 hourly prices for a given zone. However, the zones are interdependent and this last point has hitherto been largely underestimated. In the following, we show that estimating the energy cross-border transfer by solving an optimization problem and integrating it as input of a model improves the performance of the price forecasting for several zones together.<br />
| lrdekeywords = IA<br />
| lrdenewsdate = 2023-04-10<br />
| type = inproceedings<br />
| id = tschora.23.ida<br />
| identifier = doi:10.1007/978-3-031-30047-9_35<br />
| bibtex = <br />
@InProceedings<nowiki>{</nowiki> tschora.23.ida,<br />
author = <nowiki>{</nowiki>L<nowiki>{</nowiki>\'e<nowiki>}</nowiki>onard Tschora and Erwan Pierre and Marc Plantevit<br />
and C<nowiki>{</nowiki>\'e<nowiki>}</nowiki>line Robardet<nowiki>}</nowiki>,<br />
editor = <nowiki>{</nowiki>Cr<nowiki>{</nowiki>\'e<nowiki>}</nowiki>milleux, Bruno and Hess, Sibylle and Nijssen,<br />
Siegfried<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>Forecasting Electricity Prices: An Optimize Then<br />
Predict-Based Approach<nowiki>}</nowiki>,<br />
booktitle = <nowiki>{</nowiki>Advances in Intelligent Data Analysis XXI<nowiki>}</nowiki>,<br />
year = 2023,<br />
publisher = <nowiki>{</nowiki>Springer Nature Switzerland<nowiki>}</nowiki>,<br />
address = <nowiki>{</nowiki>Cham<nowiki>}</nowiki>,<br />
pages = <nowiki>{</nowiki>446--458<nowiki>}</nowiki>,<br />
abstract = <nowiki>{</nowiki>We are interested in electricity price forecasting at the<br />
European scale. The electricity market is ruled by price<br />
regulation mechanisms that make it possible to adjust<br />
production to demand, as electricity is difficult to store.<br />
These mechanisms ensure the highest price for producers,<br />
the lowest price for consumers and a zero energy balance by<br />
setting day-ahead prices, i.e. prices for the next 24h.<br />
Most studies have focused on learning increasingly<br />
sophisticated models to predict the next day's 24 hourly<br />
prices for a given zone. However, the zones are<br />
interdependent and this last point has hitherto been<br />
largely underestimated. In the following, we show that<br />
estimating the energy cross-border transfer by solving an<br />
optimization problem and integrating it as input of a model<br />
improves the performance of the price forecasting for<br />
several zones together.<nowiki>}</nowiki>,<br />
isbn = <nowiki>{</nowiki>978-3-031-30047-9<nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>10.1007/978-3-031-30047-9_35<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>

Publications/Xu.23.iceccs (by Bot, 2023-04-03T15:15:10Z)
https://www.lrde.epita.fr/wiki/Publications/Xu.23.iceccs
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2023-04-03<br />
| authors = Hao Xu, Souheib Baarir, Tewfik Ziadi, Siham Essodaigui, Yves Bossu, Lom Messan Hillah<br />
| title = An Experience Report on the Optimization of the Product Configuration System of Renault<br />
| booktitle = 26th International Conference on Engineering of Complex Computer Systems<br />
| publisher = IEEE<br />
| lrdeprojects = AA<br />
| lrdenewsdate = 2023-04-03<br />
| abstract = The problem of configuring a variability model is widespread in many different domains. A leading automobile manufacturer has developed its technology internally to model vehicle diversity. This technology relies on the approach known as knowledge compilation to explore the configurations space. However, the growing variability and complexity of the vehicles' range hardens the space representation problem and impacts performance requirements. This paper tackles these issues by exploiting symmetries that represent isomorphic parts in the configurations space. A new method describes how these symmetries are exploited and integrated. The extensive experiments we conducted on datasets from the automobile manufacturer show our approach's robustness and effectiveness: the achieved gain is a reduction of 52.13% in space representation and 49.81% in processing time on average.<br />
| note = To Appear<br />
| type = inproceedings<br />
| id = Xu.23.iceccs<br />
| bibtex = <br />
@InProceedings<nowiki>{</nowiki> xu.23.iceccs,<br />
author = <nowiki>{</nowiki>Hao Xu and Souheib Baarir and Tewfik Ziadi and Siham<br />
Essodaigui and Yves Bossu and Lom Messan Hillah<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>An Experience Report on the Optimization of the Product<br />
Configuration System of Renault<nowiki>}</nowiki>,<br />
booktitle = <nowiki>{</nowiki>26th International Conference on Engineering of Complex<br />
Computer Systems<nowiki>}</nowiki>,<br />
publisher = <nowiki>{</nowiki>IEEE<nowiki>}</nowiki>,<br />
year = <nowiki>{</nowiki>2023<nowiki>}</nowiki>,<br />
month = jun,<br />
abstract = <nowiki>{</nowiki>The problem of configuring a variability model is<br />
widespread in many different domains. A leading automobile<br />
manufacturer has developed its technology internally to<br />
model vehicle diversity. This technology relies on the<br />
approach known as knowledge compilation to explore the<br />
configurations space. However, the growing variability and<br />
complexity of the vehicles' range hardens the space<br />
representation problem and impacts performance<br />
requirements. This paper tackles these issues by exploiting<br />
symmetries that represent isomorphic parts in the<br />
configurations space. A new method describes how these<br />
symmetries are exploited and integrated. The extensive<br />
experiments we conducted on datasets from the automobile<br />
manufacturer show our approach's robustness and<br />
effectiveness: the achieved gain is a reduction of 52.13\%<br />
in space representation and 49.81\% in processing time on<br />
average. <nowiki>}</nowiki>,<br />
note = <nowiki>{</nowiki>To Appear<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>

Publications/Xu.23.sac (by Bot, 2023-04-03T15:15:09Z)
https://www.lrde.epita.fr/wiki/Publications/Xu.23.sac
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2023-04-03<br />
| authors = Hao Xu, Souheib Baarir, Tewfik Ziadi, Siham Essodaigui, Yves Bossu, Lom Messan Hillah<br />
| title = Optimization of the Product Configuration System of Renault<br />
| booktitle = SAC '23: The 38th ACM/SIGAPP Symposium on Applied Computing<br />
| publisher = ACM<br />
| lrdeprojects = AA<br />
| lrdenewsdate = 2023-04-03<br />
| note = To Appear<br />
| type = inproceedings<br />
| id = Xu.23.sac<br />
| bibtex = <br />
@InProceedings<nowiki>{</nowiki> xu.23.sac,<br />
author = <nowiki>{</nowiki> Hao Xu and Souheib Baarir and Tewfik Ziadi and Siham<br />
Essodaigui and Yves Bossu and Lom Messan Hillah<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>Optimization of the Product Configuration System of<br />
Renault<nowiki>}</nowiki>,<br />
booktitle = <nowiki>{</nowiki><nowiki>{</nowiki>SAC<nowiki>}</nowiki> '23: The 38th <nowiki>{</nowiki>ACM/SIGAPP<nowiki>}</nowiki> Symposium on Applied<br />
Computing<nowiki>}</nowiki>,<br />
publisher = <nowiki>{</nowiki>ACM<nowiki>}</nowiki>,<br />
year = <nowiki>{</nowiki>2023<nowiki>}</nowiki>,<br />
month = mar,<br />
note = <nowiki>{</nowiki>To Appear<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>

Publications/chen.23.phd (by Bot, 2023-04-03T08:20:40Z)
https://www.lrde.epita.fr/wiki/Publications/chen.23.phd
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2023-03-22<br />
| authors = Yizi Chen<br />
| title = Modern vectorization and alignment of historical maps: An application to Paris atlas (1789-1950)<br />
| school = Gustave Eiffel University<br />
| type = phdthesis<br />
| address = Saint-Mandé, France<br />
| lrdekeywords = Image, Historical Maps, Vectorization, Instance Segmentation, Alignment, Deep Edge Filtering, Watershed, Closed Shape Extraction<br />
| lrdenewsdate = 2023-03-22<br />
| lrdeprojects = Olena, SoDUCo<br />
| abstract = Maps have been a unique source of knowledge for centuries. Such historical documents provide invaluable information for analyzing complex spatial transformations over important time frames. This is particularly true for urban areas that encompass multiple interleaved research domains: humanities, social sciences, etc. The large amount and significant diversity of map sources call for automatic image processing techniques in order to extract the relevant objects as vector features. The complexity of maps (text, noise, digitization artifacts, etc.) has hindered the capacity of proposing a versatile and efficient raster-to-vector approach for decades. In this thesis, we propose a learnable, reproducible, and reusable solution for the automatic transformation of raster maps into vector objects (building blocks, streets, rivers), focusing on the extraction of closed shapes. Our approach is built upon the complementary strengths of convolutional neural networks which excel at filtering edges while presenting poor topological properties for their outputs, and mathematical morphology, which offers solid guarantees regarding closed shape extraction while being very sensitive to noise. In order to improve the robustness of deep edge filters to noise, we review several, and propose new topology-preserving loss functions which enable us to improve the topological properties of the results. We also introduce a new contrast convolution (CConv) layer to investigate how architectural changes can impact such properties. Finally, we investigate the different approaches which can be used to implement each stage, and how to combine them in the most efficient way. Thanks to a shape extraction pipeline, we propose a new alignment procedure for historical map images, and start to leverage the redundancies contained in map sheets with similar contents to propagate annotations, improve vectorization quality, and eventually detect evolution patterns for later analysis or to automatically assess vectorization quality. To evaluate the performance of all methods mentioned above, we released a new dataset of annotated historical map images. It is the first public and open dataset targeting the task of historical map vectorization. We hope that thanks to our publications, public and open releases of datasets, codes and results, our work will benefit a wide range of historical map-related applications.<br />
| id = chen.23.phd<br />
| bibtex = <br />
@PhDThesis<nowiki>{</nowiki> chen.23.phd,<br />
author = <nowiki>{</nowiki>Yizi Chen<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>Modern vectorization and alignment of historical maps: An<br />
application to Paris atlas (1789-1950)<nowiki>}</nowiki>,<br />
school = <nowiki>{</nowiki>Gustave Eiffel University<nowiki>}</nowiki>,<br />
year = <nowiki>{</nowiki>2023<nowiki>}</nowiki>,<br />
type = <nowiki>{</nowiki>phdthesis<nowiki>}</nowiki>,<br />
address = <nowiki>{</nowiki>Saint-Mand<nowiki>{</nowiki>\'e<nowiki>}</nowiki>, France<nowiki>}</nowiki>,<br />
month = mar,<br />
abstract = <nowiki>{</nowiki>Maps have been a unique source of knowledge for centuries.<br />
Such historical documents provide invaluable information<br />
for analyzing complex spatial transformations over<br />
important time frames. This is particularly true for urban<br />
areas that encompass multiple interleaved research domains:<br />
humanities, social sciences, etc. The large amount and<br />
significant diversity of map sources call for automatic<br />
image processing techniques in order to extract the<br />
relevant objects as vector features. The complexity of maps<br />
(text, noise, digitization artifacts, etc.) has hindered<br />
the capacity of proposing a versatile and efficient<br />
raster-to-vector approach for decades. In this thesis, we<br />
propose a learnable, reproducible, and reusable solution<br />
for the automatic transformation of raster maps into vector<br />
objects (building blocks, streets, rivers), focusing on the<br />
extraction of closed shapes. Our approach is built upon the<br />
complementary strengths of convolutional neural networks<br />
which excel at filtering edges while presenting poor<br />
topological properties for their outputs, and mathematical<br />
morphology, which offers solid guarantees regarding closed<br />
shape extraction while being very sensitive to noise. In<br />
order to improve the robustness of deep edge filters to<br />
noise, we review several, and propose new<br />
topology-preserving loss functions which enable us to improve<br />
the topological properties of the results. We also<br />
introduce a new contrast convolution (CConv) layer to<br />
investigate how architectural changes can impact such<br />
properties. Finally, we investigate the different<br />
approaches which can be used to implement each stage, and<br />
how to combine them in the most efficient way. Thanks to a<br />
shape extraction pipeline, we propose a new alignment<br />
procedure for historical map images, and start to leverage<br />
the redundancies contained in map sheets with similar<br />
contents to propagate annotations, improve vectorization<br />
quality, and eventually detect evolution patterns for later<br />
analysis or to automatically assess vectorization quality.<br />
To evaluate the performance of all methods mentioned above,<br />
we released a new dataset of annotated historical map<br />
images. It is the first public and open dataset targeting<br />
the task of historical map vectorization. We hope that<br />
thanks to our publications, public and open releases of<br />
datasets, codes and results, our work will benefit a wide<br />
range of historical map-related applications.<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>

Affiche-these-NN (by Daniela, 2023-03-13T07:30:31Z)
https://www.lrde.epita.fr/wiki/Affiche-these-NN
<hr />
<div>{{DISPLAYTITLE:PhD Defense Nicolas Nalpon}}<br />
<br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"><br />
[[File:enac.png|200px]]&nbsp;[[File:logo-ecole doctorale Systemes.jpg|200px]]&nbsp;[[File:Epita-logo-2.png|250px]]&nbsp;[[File:Lre-logo.png|200px]]<br />
</div><br />
<br />
<br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"><big>'''SOUTENANCE de THÈSE'''</big><br />
</div><br />
<br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"><big>'''Nicolas Nalpon'''</big><br />
</div><br />
<br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"><big>'''Lundi 13 mars 2023 '''</big><br />
</div><br />
<br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"><big>''' à 10h30 '''</big><br />
</div><br />
<br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"><big>'''ENAC, 7 Avenue Edouard Belin, 31400 Toulouse'''</big><br />
</div><br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"><big>''' Amphithéâtre Bréguet '''</big><br />
</div><br />
<br />
<br />
<br />
<br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"><big>'''Vers la vérification des langages de description d'interface utilisateur'''</big></div><br />
<br />
<br />
'''Résumé :'''<br />
<br />
Les UIDLs (User Interface Description Languages) sont des langages conçus pour faciliter la conception des interfaces utilisateurs. Ils permettent de se concentrer sur le développement de l’interface utilisateur sans se préoccuper du reste du programme tout en offrant une syntaxe adéquate à leur description. Cependant ces langages sont utilisés dans des domaines critiques, tels que l’aéronautique ou le domaine médical, alors qu’ils ne permettent pas, en l’état, d’apporter les garanties requises pour ce type d’applications critiques.<br />
<br />
Dans cette thèse, nous nous questionnons sur les UIDLs spécialisés dans la description des interfaces graphiques et leur utilisation dans les contextes critiques. Notre approche porte sur l’étude de la sémantique de ces langages et de leur formalisation. Les sémantiques des UIDLs ont été peu étudiées dans la littérature et pourtant, leur formalisation pourrait permettre de vérifier l’ensemble des interfaces descriptibles. Nous présentons des propriétés communes aux UIDLs afin de réfléchir sur la façon de les formaliser.<br />
<br />
Pour répondre à cette question, nous proposons d'utiliser les bigraphes de Robin Milner, un formalisme mathématique permettant de modéliser un système évoluant en espace et en temps. Nous montrons que la théorie des bigraphes est adéquate pour la formalisation de la sémantique des UIDLs et définissons un UIDL ayant pour fondement théorique les bigraphes. La définition d'un tel UIDL permet de l'utiliser en tant que langage intermédiaire pour la compilation d'autres UIDLs et, par son intermédiaire, de vérifier des interfaces graphiques. Nous validons notre approche en compilant le langage Smala, un UIDL utilisé dans le domaine de l'aviation, vers l'UIDL défini et en vérifiant certaines propriétés sur des exemples d'interfaces.<br />
<br />
'''Mots clés : langage de description d'interface graphique, bigraphe, sémantique formelle'''<br />
<br />
<br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"><big>'''Towards the verification of user interface description languages'''</big></div><br />
<br />
<br />
'''Abstract:'''<br />
<br />
UIDLs (User Interface Description Languages) are programming languages specifically designed to simplify the creation of user interfaces. These languages enable developers to concentrate on interface design, with a syntax optimised for describing interfaces, without worrying about other aspects of the program. However, these languages are used in critical domains such as aeronautics or medicine, where, in their current state, they do not offer the necessary safety and reliability guarantees.<br />
<br />
In this study, we explore specialised UIDLs for describing graphical interfaces and their use in critical contexts. Our approach focuses on analysing the semantics of these languages and their formalisation. The semantics of UIDLs have not been extensively studied in the literature, and yet their formalisation could facilitate the verification of all possible interfaces. We identify common properties of UIDLs and explore ways to formalise them.<br />
<br />
In order to formalise these languages, we propose using Robin Milner's bigraphs, a mathematical formalism that models a system's evolution over time and space. We demonstrate that the theory of bigraphs is suitable for formalising the semantics of UIDLs, and we define a UIDL with bigraphs as its theoretical foundation. This definition enables us to use the UIDL as an intermediate language for compiling other UIDLs and, through it, verifying graphical interfaces. We validate our approach by compiling the Smala language, a UIDL used in the aviation domain, to the defined UIDL and verifying certain properties on interfaces.<br />
<br />
'''Keywords: user interface description languages, bigraphs, formal semantics'''<br />
<br />
'''Composition du Jury :'''<br />
<br />
* Sylvain CONCHON, Reviewer, Professor, Université Paris-Saclay<br />
* Christine TASSON, Reviewer, Professor, Sorbonne Université<br />
* Timothy BOURKE, Examiner, Associate Professor, INRIA - École Normale Supérieure Paris<br />
* Frédéric DABROWSKI, Examiner, Associate Professor, Université d'Orléans<br />
* Célia MARTINIE, Examiner, Associate Professor, Université Toulouse III - Paul Sabatier<br />
* Xavier THIRIOUX, Examiner, Professor, ISAE-SUPAERO <br />
* Pierre-Loïc GAROCHE, Supervisor, Professor, ENAC<br />
* Célia PICARD, Co-supervisor, Associate Professor, ENAC<br />
* Cyril Allignol, Invited, Associate Professor, ENAC</div>

Publications/kheireddine.22.apsec (by Bot, 2023-03-10T16:49:06Z)
https://www.lrde.epita.fr/wiki/Publications/kheireddine.22.apsec
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2022-12-09<br />
| authors = Anissa Kheireddine, Étienne Renault, Souheib Baarir<br />
| title = Tuning SAT Solvers for LTL Model Checking<br />
| booktitle = Proceedings of the 29th Asia-Pacific Software Engineering Conference (APSEC'22)<br />
| pages = 259 to 268<br />
| volume = ???<br />
| publisher = IEEE<br />
| abstract = Bounded model checking (BMC) aims at checking whether a model satisfies a property. Most of the existing SAT-based BMC approaches rely on generic strategies, which are supposed to work for any SAT problem. The key idea defended in this paper is to tune SAT solver algorithms using: (1) a static classification based on the variables used to encode the BMC into a Boolean formula; and (2) the hierarchy of Manna & Pnueli, which classifies any property expressed through Linear-time Temporal Logic (LTL). By combining these two pieces of information with the classical Literal Block Distance (LBD) measure, we designed a new heuristic well suited for solving BMC problems. In particular, our work identifies and exploits a new set of relevant (learnt) clauses. We experiment with these ideas by developing a tool dedicated to SAT-based LTL BMC solving, called BSaLTic. Our experiments over a large database of BMC problems show promising results. In particular, BSaLTic provides good performance on UNSAT problems. This work highlights the importance of considering the structure of the underlying problem in SAT procedures.<br />
| lrdeprojects = Spot<br />
| lrdepaper = http://www.lrde.epita.fr/dload/papers/kheireddine.22.apsec.pdf<br />
| lrdenewsdate = 2022-12-09<br />
| type = inproceedings<br />
| id = kheireddine.22.apsec<br />
| identifier = doi:10.1109/APSEC57359.2022.00038<br />
| bibtex = <br />
@InProceedings<nowiki>{</nowiki> kheireddine.22.apsec,<br />
author = <nowiki>{</nowiki>Anissa Kheireddine and \'Etienne Renault and Souheib<br />
Baarir<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>Tuning <nowiki>{</nowiki>SAT<nowiki>}</nowiki> Solvers for <nowiki>{</nowiki>LTL<nowiki>}</nowiki> Model Checking<nowiki>}</nowiki>,<br />
booktitle = <nowiki>{</nowiki>Proceedings of the 29th Asia-Pacific Software Engineering<br />
Conference (APSEC'22)<nowiki>}</nowiki>,<br />
year = <nowiki>{</nowiki>2022<nowiki>}</nowiki>,<br />
pages = <nowiki>{</nowiki>259--268<nowiki>}</nowiki>,<br />
volume = <nowiki>{</nowiki>???<nowiki>}</nowiki>,<br />
publisher = <nowiki>{</nowiki>IEEE<nowiki>}</nowiki>,<br />
month = dec,<br />
abstract = <nowiki>{</nowiki>Bounded model checking (BMC) aims at checking whether a<br />
model satisfies a property. Most of the existing SAT-based<br />
BMC approaches rely on generic strategies, which are<br />
supposed to work for any SAT problem. The key idea defended<br />
in this paper is to tune SAT solver algorithms using: (1) a<br />
static classification based on the variables used to encode<br />
the BMC into a Boolean formula; (2) and use the hierarchy<br />
of Manna\&Pnueli that classifies any property expressed<br />
through Linear-time Temporal Logic (LTL). By combining<br />
these two information with the classical Literal Block<br />
Distance (LBD) measure, we designed a new heuristic, well<br />
suited for solving BMC problems. In particular, our work<br />
identifies and exploits a new set of relevant (learnt)<br />
clauses. We experiment with these ideas by developing a<br />
tool dedicated for SAT-based LTL BMC solvers, called<br />
BSaLTic. Our experiments over a large database of BMC<br />
problems, show promising results. In particular, BSaLTic<br />
provides good performance on UNSAT problems. This work<br />
highlights the importance of considering the structure of<br />
the underlying problem in SAT procedures. <nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>10.1109/APSEC57359.2022.00038<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Affiche-these-YCAffiche-these-YC2023-03-07T10:50:20Z<p>Ychen: </p>
<hr />
<div>{{DISPLAYTITLE: Ph.D. Defense Yizi Chen}}<br />
<br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"><br />
[[File:Logo_IGN-ENSG.png|250px]]&nbsp;[[File:Logo_UGE.png|250px]]&nbsp;[[File:Lastig_1920_EN.png|200px]]&nbsp;[[File:Epita-logo-2.png|200px]]&nbsp;[[File:Lre-logo.png|200px]]<br />
</div><br />
<br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"><big>'''SOUTENANCE de THÈSE'''</big><br />
</div><br />
<br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"><big>'''Yizi CHEN'''</big><br />
</div><br />
<br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"><big>'''Mercredi 22 mars 2023 '''</big><br />
</div><br />
<br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"><big>''' à 14h00 '''</big><br />
</div><br />
<br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"><big>'''73 Av. de Paris, 94160 Saint-Mandé'''</big><br />
</div><br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"><big>'''BAT A 1er étage pièce 182 François ARAGO'''</big><br />
</div><br />
<br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">Plan d’accès :<br />
{{#widget:Iframe<br />
|url=https://www.google.com/maps/embed?pb=!1m18!1m12!1m3!1d2407.8036370477002!2d2.4213783760873584!3d48.84432266549369!2m3!1f0!2f0!3f0!3m2!1i1024!2i768!4f13.1!3m3!1m2!1s0x47e6729aa18311b7%3A0x601fe46bf859396!2sIGN%20Institut%20national%20de%20l&#39;information%20g%C3%A9ographique%20et%20foresti%C3%A8re!5e0!3m2!1sfr!2sfr!4v1678727360025!5m2!1sfr!2sfr<br />
|width=650<br />
|height=450<br />
|border=0<br />
}}<br />
</div><br />
<br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"><big>'''Modern vectorization and alignment of historical maps: An application to Paris Atlas (1789-1950)'''</big><br />
</div><br />
<br />
'''Résumé: '''<br />
<br />
Les cartes sont une source unique de connaissances depuis des siècles.<br />
Ces documents historiques fournissent des informations inestimables pour analyser des transformations spatiales complexes sur des périodes importantes.<br />
Cela est particulièrement vrai pour les zones urbaines qui englobent de multiples domaines de recherche imbriqués : humanités, sciences sociales, etc.<br />
La complexité des cartes (texte, bruit, artefacts de numérisation, etc.) a entravé la capacité à proposer des approches de vectorisation polyvalentes et efficaces pendant des décennies.<br />
<br />
Dans cette thèse, nous proposons une solution apprenable, reproductible et réutilisable pour la transformation automatique de cartes raster en objets vectoriels (îlots, rues, rivières),<br />
en nous focalisant sur le problème d'extraction de formes closes.<br />
Notre approche s'appuie sur la complémentarité des réseaux de neurones convolutifs, qui excellent dans le filtrage des contours mais n'offrent que peu de garanties topologiques sur leurs sorties, et de la morphologie mathématique, qui présente de solides garanties au regard de l'extraction de formes closes tout en étant très sensible au bruit.<br />
<br />
Afin d'améliorer la robustesse au bruit des filtres convolutifs,<br />
nous comparons plusieurs fonctions de coût visant spécifiquement à préserver les propriétés topologiques des résultats, et en proposons de nouvelles.<br />
À cette fin, nous introduisons également un nouveau type de couche convolutive (CConv) exploitant le contraste des images,<br />
pour explorer les possibilités de telles améliorations à l'aide de transformations architecturales des réseaux.<br />
Finalement, nous comparons les différentes approches et architectures qui peuvent être utilisées pour implémenter chaque étape de notre chaîne de traitements, et comment combiner ces dernières de la meilleure façon possible.<br />
<br />
Grâce à une chaîne de traitement fonctionnelle, nous proposons une nouvelle procédure d'alignement d'images de plans historiques,<br />
et commençons à tirer profit de la redondance des données extraites dans des images similaires pour propager des annotations, améliorer la qualité de la vectorisation, et éventuellement détecter des cas d'évolution en vue d'analyse thématique, ou encore l'estimation automatique de la qualité de la vectorisation.<br />
<br />
Afin d'évaluer la performance des méthodes mentionnées précédemment, nous avons publié un nouveau jeu de données composé d'images de plans historiques annotées.<br />
C'est le premier jeu de données en libre accès dédié à la vectorisation de plans historiques.<br />
Nous espérons qu'au travers de nos publications, et de la diffusion ouverte et publique de nos résultats, sources et jeux de données, cette recherche pourra être utile à un large éventail d'applications liées aux cartes historiques.<br />
<br />
'''Mots-clés:''' Cartes historiques, vision par ordinateur et apprentissage en profondeur<br />
<br />
<br />
'''Abstract: '''<br />
<br />
Maps have been a unique source of knowledge for centuries. <br />
Such historical documents provide invaluable information for analyzing complex spatial transformations over important time frames. <br />
This is particularly true for urban areas that encompass multiple interleaved research domains: humanities, social sciences, etc.<br />
The large amount and significant diversity of map sources call for automatic image processing techniques in order to extract the relevant objects as vector features. <br />
The complexity of maps (text, noise, digitization artifacts, etc.) has hindered the capacity of proposing versatile and efficient raster-to-vector approaches for decades. <br />
<br />
In this thesis, we propose a learnable, reproducible, and reusable solution for the automatic transformation of raster maps into vector objects (building blocks, streets, rivers),<br />
focusing on the extraction of closed shapes.<br />
Our approach is built upon the complementary strengths of convolutional neural networks which excel at filtering edges while presenting poor topological properties for their outputs, and mathematical morphology, which offers solid guarantees regarding closed shape extraction while being very sensitive to noise.<br />
<br />
In order to improve the robustness of deep edge filters to noise, <br />
we review several topology-preserving loss functions and propose new ones, which improve the topological properties of the results.<br />
We also introduce a new contrast convolution (CConv) layer to investigate how architectural changes can impact such properties.<br />
Finally, we investigate the different approaches which can be used to implement each stage, and how to combine them in the most efficient way.<br />
<br />
Thanks to a shape extraction pipeline, we propose a new alignment procedure for historical map images,<br />
and start to leverage the redundancies contained in map sheets with similar contents to propagate annotations, improve vectorization quality, and eventually detect evolution patterns for later analysis or to automatically assess vectorization quality.<br />
<br />
To evaluate the performance of all methods mentioned above, we released a new dataset of annotated historical map images.<br />
It is the first public and open dataset targeting the task of historical map vectorization.<br />
We hope that thanks to our publications, public and open releases of datasets, codes, and results,<br />
our work will benefit a wide range of historical map-related applications.<br />
<br />
'''Keywords: ''' Historical Maps, Computer Vision and Deep Learning<br />
<br />
<br />
'''Composition du Jury:'''<br />
<br />
Reviewers:<br />
<br />
* Véronique Eglin, Professor, INSA Lyon, Imagine/LIRIS<br />
* Lorenz Hurni, Professor, ETH Zürich, IKG<br />
<br />
Examiners:<br />
<br />
* Mathieu Aubry, Senior researcher, ENPC, IMAGINE/LIGM <br />
* Stefan Leyk, Professor, University of Colorado Boulder<br />
* Nicole Vincent, Professor, Université Paris Cité, LIPADE<br />
<br />
Supervisors:<br />
<br />
* Julien Perret, Senior researcher, LASTIG, Univ Gustave Eiffel, IGN-ENSG<br />
* Joseph Chazalon, Lecturer, EPITA, LRE<br />
* Clément Mallet, Senior researcher, LASTIG, Univ Gustave Eiffel, IGN-ENSG</div>Ychenhttps://www.lrde.epita.fr/wiki/Publications/fahrenberg.23.algunivPublications/fahrenberg.23.alguniv2023-03-05T14:45:29Z<p>Bot: Created page with "{{Publication | published = true | date = 2023-03-05 | authors = Uli Fahrenberg, Christian Johansen, Georg Struth, Krzysztof Ziemiański | title = Catoids and Modal Convolutio..."</p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2023-03-05<br />
| authors = Uli Fahrenberg, Christian Johansen, Georg Struth, Krzysztof Ziemiański<br />
| title = Catoids and Modal Convolution Algebras<br />
| journal = Algebra Universalis<br />
| volume = 84<br />
| number = 10<br />
| None = https://link.springer.com/article/10.1007/s00012-023-00805-9<br />
| lrdenewsdate = 2023-03-05<br />
| lrdeprojects = AA<br />
| abstract = We show how modal quantales arise as convolution algebras <math>Q^X</math> of functions from catoids <math>X</math>, that is, multisemigroups with a source map <math>\ell</math> and a target map <math>r</math>, into modal quantales <math>Q</math>, which can be seen as weight or value algebras. In the tradition of boolean algebras with operators, we study modal correspondences between algebraic laws in <math>X</math>, <math>Q</math> and <math>Q^X</math>. The class of catoids we introduce generalises Schweizer and Sklar's function systems and object-free categories to a setting isomorphic to algebras of ternary relations, as they are used for boolean algebras with operators and substructural logics. Our results provide a generic construction of weighted modal quantales from such multisemigroups. It is illustrated by many examples. We also discuss how these results generalise to a setting that supports reasoning with stochastic matrices or probabilistic predicate transformers.<br />
| type = article<br />
| id = fahrenberg.23.alguniv<br />
| identifier = doi:10.1007/s00012-023-00805-9<br />
| bibtex = <br />
@Article<nowiki>{</nowiki> fahrenberg.23.alguniv,<br />
author = <nowiki>{</nowiki>Uli Fahrenberg and Christian Johansen and Georg Struth and<br />
Krzysztof Ziemia<nowiki>{</nowiki>\'n<nowiki>}</nowiki>ski<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>Catoids and Modal Convolution Algebras<nowiki>}</nowiki>,<br />
journal = <nowiki>{</nowiki>Algebra Universalis<nowiki>}</nowiki>,<br />
volume = 84,<br />
number = <nowiki>{</nowiki>10<nowiki>}</nowiki>,<br />
year = 2023,<br />
month = mar,<br />
url = <nowiki>{</nowiki>https://link.springer.com/article/10.1007/s00012-023-00805-9<nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>10.1007/s00012-023-00805-9<nowiki>}</nowiki>,<br />
abstract = <nowiki>{</nowiki> We show how modal quantales arise as convolution algebras<br />
$Q^X$ of functions from catoids $X$, that is,<br />
multisemigroups with a source map $\ell$ and a target map<br />
$r$, into modal quantales $Q$, which can be seen as weight<br />
or value algebras. In the tradition of boolean algebras<br />
with operators we study modal correspondences between<br />
algebraic laws in $X$, $Q$ and $Q^X$. The class of catoids<br />
we introduce generalises Schweizer and Sklar's function<br />
systems and object-free categories to a setting isomorphic<br />
to algebras of ternary relations, as they are used for<br />
boolean algebras with operators and substructural logics.<br />
Our results provide a generic construction of weighted<br />
modal quantales from such multisemigroups. It is<br />
illustrated by many examples. We also discuss how these<br />
results generalise to a setting that supports reasoning<br />
with stochastic matrices or probabilistic predicate<br />
transformers.<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/fahrenberg.23.pnPublications/fahrenberg.23.pn2023-03-05T14:12:30Z<p>Bot: </p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2023-03-05<br />
| authors = Uli Fahrenberg, Krzysztof Ziemiański<br />
| title = A Myhill-Nerode Theorem for Higher-Dimensional Automata<br />
| series = Lecture Notes in Computer Science<br />
| publisher = Springer<br />
| booktitle = Application and Theory of Petri Nets and Concurrency (PETRI NETS)<br />
| lrdeprojects = AA<br />
| abstract = We establish a Myhill-Nerode type theorem for higher-dimensional automata (HDAs), stating that a language is regular precisely if it has finite prefix quotient. HDAs extend standard automata with additional structure, making it possible to distinguish between interleavings and concurrency. We also introduce deterministic HDAs and show that not all HDAs are determinizable, that is, there exist regular languages that cannot be recognised by a deterministic HDA. Using our theorem, we develop an internal characterisation of deterministic languages.<br />
| lrdenewsdate = 2023-03-05<br />
| note = Accepted<br />
| type = inproceedings<br />
| id = fahrenberg.23.pn<br />
| identifier = doi:FIXME<br />
| bibtex = <br />
@InProceedings<nowiki>{</nowiki> fahrenberg.23.pn,<br />
author = <nowiki>{</nowiki>Fahrenberg, Uli and Ziemia<nowiki>{</nowiki>\'n<nowiki>}</nowiki>ski, Krzysztof<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>A <nowiki>{</nowiki>Myhill<nowiki>}</nowiki>-<nowiki>{</nowiki>Nerode<nowiki>}</nowiki> Theorem for Higher-Dimensional<br />
Automata<nowiki>}</nowiki>,<br />
series = <nowiki>{</nowiki>Lecture Notes in Computer Science<nowiki>}</nowiki>,<br />
publisher = <nowiki>{</nowiki>Springer<nowiki>}</nowiki>,<br />
booktitle = <nowiki>{</nowiki>Application and Theory of Petri Nets and Concurrency<br />
(PETRI NETS)<nowiki>}</nowiki>,<br />
year = <nowiki>{</nowiki>2023<nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>FIXME<nowiki>}</nowiki>,<br />
abstract = <nowiki>{</nowiki>We establish a Myhill-Nerode type theorem for<br />
higher-dimensional automata (HDAs), stating that a language<br />
is regular precisely if it has finite prefix quotient. HDAs<br />
extend standard automata with additional structure, making<br />
it possible to distinguish between interleavings and<br />
concurrency. We also introduce deterministic HDAs and show<br />
that not all HDAs are determinizable, that is, there exist<br />
regular languages that cannot be recognised by a<br />
deterministic HDA. Using our theorem, we develop an<br />
internal characterisation of deterministic languages.<nowiki>}</nowiki>,<br />
month = mar,<br />
note = <nowiki>{</nowiki>Accepted<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/mazini.22.expPublications/mazini.22.exp2022-12-16T21:26:34Z<p>Bot: </p>
<hr />
<div>{{Publication<br />
| published = false<br />
| date = 2022-12-15<br />
| authors = Caroline Mazini-Rodrigues, Nicolas Boutry, Laurent Najman<br />
| title = Gradients Intégrés Renforcés<br />
| booktitle = Explain'AI - EGC Workshop<br />
| note = Accepted<br />
| lrdeprojects = Olena<br />
| lrdenewsdate = 2022-12-15<br />
| nodoi = <br />
| type = misc<br />
| id = mazini.22.exp<br />
| bibtex = <br />
@Misc<nowiki>{</nowiki> mazini.22.exp,<br />
author = <nowiki>{</nowiki>Mazini-Rodrigues, Caroline and Boutry, Nicolas and Najman,<br />
Laurent<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>Gradients Int\'egr\'es Renforc\'es<nowiki>}</nowiki>,<br />
booktitle = <nowiki>{</nowiki>Explain'AI - EGC Workshop<nowiki>}</nowiki>,<br />
year = 2022,<br />
note = <nowiki>{</nowiki>Accepted<nowiki>}</nowiki>,<br />
nodoi = <nowiki>{</nowiki><nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/boutry.21.jmivPublications/boutry.21.jmiv2022-12-16T13:32:10Z<p>Bot: Created page with "{{Publication | published = true | date = 2021-11-09 | authors = Nicolas Boutry, Rocio Gonzalez-Diaz, Laurent Najman, Thierry Géraud | title = Continuous Well-Composedness im..."</p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2021-11-09<br />
| authors = Nicolas Boutry, Rocio Gonzalez-Diaz, Laurent Najman, Thierry Géraud<br />
| title = Continuous Well-Composedness implies Digital Well-Composedness in n-D<br />
| journal = Journal of Mathematical Imaging and Vision<br />
| volume = 64<br />
| number = 2<br />
| pages = 131 to 150<br />
| lrdeprojects = Olena<br />
| abstract = In this paper, we prove that when an <math>n</math>-D cubical set is continuously well-composed (CWC), that is, when the boundary of its continuous analog is a topological <math>(n-1)</math>-manifold, then it is digitally well-composed (DWC), which means that it does not contain any critical configuration. We prove this result thanks to local homology. This paper is the sequel of a previous paper where we proved that DWCness does not imply CWCness in 4D.<br />
| lrdepaper = https://www.lrde.epita.fr/dload/papers/boutry.22.jmiv.pdf<br />
| lrdekeywords = Image<br />
| lrdenewsdate = 2021-11-09<br />
| type = article<br />
| id = boutry.21.jmiv<br />
| identifier = doi:10.1007/s10851-021-01058-8<br />
| bibtex = <br />
@Article<nowiki>{</nowiki> boutry.21.jmiv,<br />
author = <nowiki>{</nowiki>Nicolas Boutry and Rocio Gonzalez-Diaz and Laurent Najman<br />
and Thierry G\'eraud<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>Continuous Well-Composedness implies Digital<br />
Well-Composedness in $n$-<nowiki>{</nowiki>D<nowiki>}</nowiki><nowiki>}</nowiki>,<br />
journal = <nowiki>{</nowiki>Journal of Mathematical Imaging and Vision<nowiki>}</nowiki>,<br />
volume = <nowiki>{</nowiki>64<nowiki>}</nowiki>,<br />
number = <nowiki>{</nowiki>2<nowiki>}</nowiki>,<br />
pages = <nowiki>{</nowiki>131--150<nowiki>}</nowiki>,<br />
month = jan,<br />
year = <nowiki>{</nowiki>2022<nowiki>}</nowiki>,<br />
abstract = <nowiki>{</nowiki>In this paper, we prove that when a $n$-D cubical set is<br />
continuously well-composed (CWC), that is, when the<br />
boundary of its continuous analog is a topological<br />
$(n-1)$-manifold, then it is digitally well-composed (DWC),<br />
which means that it does not contain any critical<br />
configuration. We prove this result thanks to local<br />
homology. This paper is the sequel of a previous paper<br />
where we proved that DWCness does not imply CWCness in<br />
4D.<nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>10.1007/s10851-021-01058-8<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Affiche-these-ZZAffiche-these-ZZ2022-12-15T14:54:30Z<p>Zz: </p>
<hr />
<div>{{DISPLAYTITLE:PhD Defense Zhou ZHAO}}<br />
<br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"><br />
[[File:Logo of Sorbonne University.png|250px]]&nbsp;[[File:EDITE Logo.png|200px]]&nbsp;[[File:Epita-logo-2.png|250px]]&nbsp;[[File:Lre-logo.png|200px]]<br />
</div><br />
<br />
<br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"><big>'''SOUTENANCE de THÈSE'''</big><br />
</div><br />
<br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"><big>'''Zhou ZHAO'''</big><br />
</div><br />
<br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"><big>'''Mercredi 11 janvier 2023 '''</big><br />
</div><br />
<br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"><big>''' à 14h00 '''</big><br />
</div><br />
<br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"><big>'''EPITA, 14-16 Rue Voltaire, 94270 Le Kremlin-Bicêtre'''</big><br />
</div><br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"><big>''' Amphi 401 '''</big><br />
</div><br />
<br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">Plan d’accès :<br />
{{#widget:Iframe<br />
|url=https://maps.google.fr/maps?q=14-16+rue+Voltaire+94276+Kremlin+Bic%C3%AAtre&amp;oe=utf-8&amp;client=firefox-a&amp;ie=UTF8&amp;hq=&amp;hnear=16+Rue+Voltaire,+94270+Le+Kremlin-Bic%C3%A8tre,+Val-de-Marne,+%C3%8Ele-de-France&amp;t=m&amp;ll=48.815681,2.362833&amp;spn=0.019781,0.05579&amp;z=14&amp;iwloc=A&amp;output=embed&iwloc=near<br />
|width=650<br />
|height=350<br />
|border=0<br />
}}<br />
</div><br />
<br />
<br />
<br />
<br />
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"><big>'''Heart Segmentation and Evaluation of Fibrosis'''</big><br />
</div><br />
<br />
<br />
'''Résumé: '''<br />
<br />
La fibrillation auriculaire est la maladie du rythme cardiaque la plus courante. En raison d’un manque de compréhension des structures auriculaires sous-jacentes, les<br />
traitements actuels ne sont toujours pas satisfaisants. Récemment, avec la popularité de l’apprentissage profond, de nombreuses méthodes de segmentation basées<br />
sur l’apprentissage profond ont été proposées pour analyser les structures auriculaires, en particulier à partir de l’imagerie par résonance magnétique renforcée au<br />
gadolinium tardif. Cependant, deux problèmes subsistent : 1) les résultats de la segmentation incluent le fond de type atrial ; 2) les limites sont très difficiles à segmenter. La plupart des approches de segmentation conçoivent un réseau spécifique qui se concentre principalement sur les régions, au détriment des frontières.<br />
<br />
Par conséquent, dans cette thèse, nous proposons deux méthodes différentes pour segmenter le cœur, une méthode en deux étapes et une méthode entraînable de<br />
bout en bout. La méthode en deux étapes peut être décomposée en trois étapes principales : une étape de localisation, une étape d’amélioration du contraste à base de<br />
gaussienne et une étape de segmentation. Cette architecture est dotée d’une fonction de perte hybride qui guide le réseau pour étudier la relation de transformation entre l’image d’entrée et l’étiquette correspondante dans une hiérarchie à trois niveaux (pixel-, patch- et carte), ce qui permet d’améliorer la segmentation et la récupération des frontières. Nous démontrons l’efficacité de notre approche sur trois ensembles de données publiques en termes de segmentations régionales et de frontières. Pour la méthode entraînable de bout en bout, nous proposons un cadre de réseau convolutif complet d’attention basé sur l’architecture ResNet-101, qui se concentre sur les frontières autant que sur les régions. Le module d’attention supplémentaire est ajouté pour que le réseau accorde plus d’attention aux régions et pour réduire l’impact de la similarité trompeuse des tissus voisins. Nous utilisons également une perte hybride composée d’une perte de région et d’une perte de frontière pour traiter les frontières et les régions en même temps. L’efficacité de l’approche proposée est vérifiée sur trois jeux de données publics.<br />
<br />
Enfin, pour évaluer le degré de fibrose, nous avons proposé deux méthodes, l’une consistant à combiner l’apprentissage profond avec la morphologie, et l’autre à<br />
utiliser directement l’apprentissage profond. Pour la première méthode, nous calculons la paroi auriculaire gauche à partir des résultats de segmentation du chapitre précédent par dilatation morphologique, puis appliquons des seuils pour évaluer le degré de fibrose. Pour la seconde méthode, nous fournissons une architecture UNet en cascade et utilisons des informations multi-modalités pour compléter la segmentation du myocarde, de la cicatrice et de l’œdème. Nous démontrons l’efficacité de notre<br />
approche sur un jeu de données public. <br />
<br />
'''Mots-clés:''' Apprentissage profond, cardiaque, segmentation, attention, réseau entièrement convolutif, perte hybride, évaluation de la fibrose, traitement morphologique des images.<br />
<br />
<br />
'''Abstract: '''<br />
<br />
Atrial fibrillation is the most common heart rhythm disease. Due to a lack of understanding of the underlying atrial structures, current treatments are still not<br />
satisfactory. Recently, with the popularity of deep learning, many segmentation methods based on deep learning have been proposed to analyze atrial structures, especially from late gadolinium-enhanced magnetic resonance imaging. However, two problems still occur: 1) segmentation results include the atrial-like background; 2)<br />
boundaries are very hard to segment. Most segmentation approaches design a specific network that mainly focuses on the regions, to the detriment of the boundaries.<br />
<br />
Therefore, in this dissertation, we propose two different methods to segment the heart: a two-stage method and an end-to-end trainable method. The two-stage<br />
method can be decomposed into three main steps: a localization step, a Gaussian-based contrast enhancement step, and a segmentation step. This architecture is supplied with a hybrid loss function that guides the network to study the transformation relationship between the input image and the corresponding label in a three-level hierarchy (pixel-, patch- and map-level), which helps to improve segmentation and recovery of the boundaries. We demonstrate the efficiency of our approach on three public datasets in terms of regional and boundary segmentations. For the end-to-end trainable method, we propose an attention fully convolutional network framework based on the ResNet-101 architecture, which focuses on boundaries as much as on regions. The additional attention module is added to have the network pay<br />
more attention to regions and to reduce the impact of the misleading similarity of neighboring tissues. We also use a hybrid loss composed of a region loss and a<br />
boundary loss to treat boundaries and regions at the same time. The efficiency of the proposed approach is verified on three public datasets.<br />
<br />
Finally, to evaluate the fibrosis degree, we propose two methods: one combines deep learning with morphology, and the other uses deep learning directly. For the first method, we compute the left atrial wall by morphologically dilating the segmentation results of the previous chapter, and then apply thresholds<br />
to evaluate the fibrosis degree. For the second method, we provide a cascaded UNet architecture and use multi-modality information to complete the segmentation of the myocardium, scar, and edema. We demonstrate the efficiency of our approach on one public dataset.<br />
<br />
'''Keywords: ''' Deep Learning, Cardiac, Segmentation, Attention, Fully Convolutional Network, Hybrid Loss, Fibrosis Assessment, Morphological Image Processing<br />
<br />
<br />
'''Composition du Jury :'''<br />
<br />
Reviewers:<br />
<br />
* Frédérique Frouin, Doc., INSERM-Institut Curie<br />
* Antoine Vacavant, Pr., Université Clermont Auvergne<br />
<br />
Examiners:<br />
<br />
* Isabelle Bloch, Pr., Sorbonne Université<br />
* Alasdair Newson, MdC., Télécom ParisTech<br />
* Florence Rossant, Pr., Institut Supérieur d’Electronique de Paris<br />
* Caroline Petitjean, Pr., Université de Rouen Normandie<br />
<br />
Supervisors:<br />
<br />
* Thierry Géraud, Pr., EPITA, LRE<br />
* Élodie Puybareau, Dr., EPITA, LRE<br />
<br />
Invitee:<br />
<br />
* Jérôme Lacotte, Doc., Institut Cardiovasculaire Paris Sud</div>Zzhttps://www.lrde.epita.fr/wiki/Publications/kheireddine.22.constraintsPublications/kheireddine.22.constraints2022-12-15T10:52:53Z<p>Bot: </p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2022-12-09<br />
| authors = Anissa Kheireddine, Étienne Renault, Souheib Baarir<br />
| title = Towards Better Heuristics for Solving Bounded Model Checking Problems<br />
| journal = Constraints<br />
| editors = Mark Wallace<br />
| series = Leibniz International Proceedings in Informatics (LIPIcs)<br />
| pages = 45 to 66<br />
| volume = 28<br />
| publisher = Springer<br />
| abstract = This paper presents a new way to improve the performance of the SAT-based bounded model checking problem on sequential and parallel procedures by exploiting relevant information identified through the characteristics of the original problem. This led us to design a new way of building interesting heuristics based on the structure of the underlying problem. The proposed methodology is generic and can be applied to any SAT problem. This paper compares the state-of-the-art approaches with two new heuristics for sequential procedures: Structure-based and Linear Programming heuristics. We extend this study and apply the above methodology to parallel approaches, especially to refine the sharing measure, which shows promising results.<br />
| lrdeprojects = Spot<br />
| lrdepaper = http://www.lrde.epita.fr/dload/papers/kheireddine.22.constraints.pdf<br />
| lrdenewsdate = 2022-12-09<br />
| note = First published online on 27 December 2022.<br />
| type = article<br />
| id = kheireddine.22.constraints<br />
| identifier = doi:10.1007/s10601-022-09339-8<br />
| bibtex = <br />
@Article<nowiki>{</nowiki> kheireddine.22.constraints,<br />
author = <nowiki>{</nowiki>Anissa Kheireddine and \'Etienne Renault and Souheib<br />
Baarir<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>Towards Better Heuristics for Solving Bounded Model<br />
Checking Problems<nowiki>}</nowiki>,<br />
journal = <nowiki>{</nowiki>Constraints<nowiki>}</nowiki>,<br />
editor = <nowiki>{</nowiki>Mark Wallace<nowiki>}</nowiki>,<br />
series = <nowiki>{</nowiki>Leibniz International Proceedings in Informatics<br />
(LIPIcs)<nowiki>}</nowiki>,<br />
year = <nowiki>{</nowiki>2023<nowiki>}</nowiki>,<br />
pages = <nowiki>{</nowiki>45--66<nowiki>}</nowiki>,<br />
volume = <nowiki>{</nowiki>28<nowiki>}</nowiki>,<br />
publisher = <nowiki>{</nowiki>Springer<nowiki>}</nowiki>,<br />
month = mar,<br />
abstract = <nowiki>{</nowiki>This paper presents a new way to improve the performance<br />
of the SAT-based bounded model checking problem on<br />
sequential and parallel procedures by exploiting relevant<br />
information identified through the characteristics of the<br />
original problem. This led us to design a new way of<br />
building interesting heuristics based on the structure of<br />
the underlying problem. The proposed methodology is generic<br />
and can be applied for any SAT problem. This paper compares<br />
the state-of-the-art approaches with two new heuristics for<br />
sequential procedures: Structure-based and Linear<br />
Programming heuristics. We extend this study and apply<br />
the above methodology to parallel approaches, especially to<br />
refine the sharing measure, which shows promising results.<nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>10.1007/s10601-022-09339-8<nowiki>}</nowiki>,<br />
note = <nowiki>{</nowiki>First published online on 27 December 2022.<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/beaudoin.22.jmsePublications/beaudoin.22.jmse2022-12-14T13:57:22Z<p>Bot: </p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2022-01-01<br />
| authors = Laurent Beaudoin, Loïca Avanthey, Corentin Bunel, Charles Villard<br />
| title = Automatically Guided Selection of a Set of Underwater Calibration Images<br />
| journal = Journal of Marine Science and Engineering (JMSE) [MDPI]<br />
| url = https://doi.org/10.3390/jmse10060741<br />
| pages = 1 to 15<br />
| volume = 10<br />
| number = 6<br />
| type = article<br />
| id = beaudoin.22.jmse<br />
| identifier = doi:10.3390/jmse10060741<br />
| bibtex = <br />
@Article<nowiki>{</nowiki> beaudoin.22.jmse,<br />
author = <nowiki>{</nowiki>Beaudoin, Laurent and Avanthey, Lo\"<nowiki>{</nowiki>i<nowiki>}</nowiki>ca and Bunel,<br />
Corentin and Villard, Charles<nowiki>}</nowiki>,<br />
year = 2022,<br />
title = <nowiki>{</nowiki><nowiki>{</nowiki>A<nowiki>}</nowiki>utomatically <nowiki>{</nowiki>G<nowiki>}</nowiki>uided <nowiki>{</nowiki>S<nowiki>}</nowiki>election of a <nowiki>{</nowiki>S<nowiki>}</nowiki>et of<br />
<nowiki>{</nowiki>U<nowiki>}</nowiki>nderwater <nowiki>{</nowiki>C<nowiki>}</nowiki>alibration <nowiki>{</nowiki>I<nowiki>}</nowiki>mages<nowiki>}</nowiki>,<br />
journal = <nowiki>{</nowiki><nowiki>{</nowiki>J<nowiki>}</nowiki>ournal of <nowiki>{</nowiki>M<nowiki>}</nowiki>arine <nowiki>{</nowiki>S<nowiki>}</nowiki>cience and <nowiki>{</nowiki>E<nowiki>}</nowiki>ngineering (<nowiki>{</nowiki>JMSE<nowiki>}</nowiki>)<br />
[<nowiki>{</nowiki>MDPI<nowiki>}</nowiki>]<nowiki>}</nowiki>,<br />
url = <nowiki>{</nowiki>https://doi.org/10.3390/jmse10060741<nowiki>}</nowiki>,<br />
pages = <nowiki>{</nowiki>1--15<nowiki>}</nowiki>,<br />
volume = <nowiki>{</nowiki>10<nowiki>}</nowiki>,<br />
number = <nowiki>{</nowiki>6<nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>10.3390/jmse10060741<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/beaudoin.22.eitPublications/beaudoin.22.eit2022-12-14T13:57:22Z<p>Bot: Created page with "{{Publication | published = true | date = 2022-01-01 | authors = Laurent Beaudoin, Loïca Avanthey | title = How to help digital-native students to successfully take control o..."</p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2022-01-01<br />
| authors = Laurent Beaudoin, Loïca Avanthey<br />
| title = How to help digital-native students to successfully take control of their learning : A return of 8 years of experience on a computer science e-learning platform in higher education<br />
| journal = Education and Information Technologies (EIT) [Springer Nature]<br />
| url = https://doi.org/10.1007/s10639-022-11407-8<br />
| pages = 1 to 21<br />
| volume = -<br />
| number = -<br />
| type = article<br />
| id = beaudoin.22.eit<br />
| identifier = doi:10.1007/s10639-022-11407-8<br />
| bibtex = <br />
@Article<nowiki>{</nowiki> beaudoin.22.eit,<br />
author = <nowiki>{</nowiki>Beaudoin, Laurent and Avanthey, Lo\"<nowiki>{</nowiki>i<nowiki>}</nowiki>ca<nowiki>}</nowiki>,<br />
year = 2022,<br />
title = <nowiki>{</nowiki><nowiki>{</nowiki>H<nowiki>}</nowiki>ow to help digital-native students to successfully take<br />
control of their learning : <nowiki>{</nowiki>A<nowiki>}</nowiki> return of 8 years of<br />
experience on a computer science e-learning platform in<br />
higher education<nowiki>}</nowiki>,<br />
journal = <nowiki>{</nowiki><nowiki>{</nowiki>E<nowiki>}</nowiki>ducation and <nowiki>{</nowiki>I<nowiki>}</nowiki>nformation <nowiki>{</nowiki>T<nowiki>}</nowiki>echnologies (<nowiki>{</nowiki>EIT<nowiki>}</nowiki>)<br />
[<nowiki>{</nowiki>S<nowiki>}</nowiki>pringer Nature]<nowiki>}</nowiki>,<br />
url = <nowiki>{</nowiki>https://doi.org/10.1007/s10639-022-11407-8<nowiki>}</nowiki>,<br />
pages = <nowiki>{</nowiki>1--21<nowiki>}</nowiki>,<br />
volume = <nowiki>{</nowiki>-<nowiki>}</nowiki>,<br />
number = <nowiki>{</nowiki>-<nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>10.1007/s10639-022-11407-8<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/NewsEntry_(2022/11/22)2NewsEntry (2022/11/22)22022-12-13T16:44:14Z<p>Daniela: </p>
<hr />
<div>{{News<br />
|title=Joseph Chazalon and Edwin Carlinet from LRE at Seminar on IA at DGFIP<br />
|subtitle=At the Seminar on Automatic Information Extraction in Administrative Forms organized by the [https://www.economie.gouv.fr/dgfip Public Finances General Directorate (DGFIP)], Joseph Chazalon and Edwin Carlinet from LRE presented the challenges and perspectives of data extraction in documents. After a review of the limitations of traditional document analysis techniques, they introduced recent advances brought by the latest deep-learning methods, and the new problems these methods make it possible to solve.<br />
|date=2022/11/22<br />
}}</div>Danielahttps://www.lrde.epita.fr/wiki/Publications/saouli.23.vmcaiPublications/saouli.23.vmcai2022-12-12T21:23:11Z<p>Bot: </p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2022-12-08<br />
| authors = S Saouli, S Baarir, C Dutheillet, J Devriendt<br />
| title = CosySEL: Improving SAT Solving Using Local Symmetries<br />
| booktitle = 24th International Conference on Verification, Model Checking, and Abstract Interpretation<br />
| volume = 13881<br />
| pages = 252 to 266<br />
| publisher = Springer<br />
| url = https://doi.org/10.1007/978-3-031-24950-1_12<br />
| lrdenewsdate = 2022-12-08<br />
| lrdeprojects = AA<br />
| abstract = Many satisfiability problems exhibit symmetry properties. Thus, the development of symmetry exploitation techniques seems a natural way to try to improve the efficiency of solvers by preventing them from exploring isomorphic parts of the search space. These techniques can be classified into two categories: dynamic and static symmetry breaking. Static approaches have often appeared to be more effective than dynamic ones. But although these approaches can be considered as complementary, very few works have tried to combine them. In this paper, we present a new tool, CosySEL, that implements a composition of the static Effective Symmetry Breaking Predicates (esbp) technique with the dynamic Symmetric Explanation Learning (sel). esbp exploits symmetries to prune the search tree and sel uses symmetries to speed up the tree traversal. These two accelerations are complementary and their combination was made possible by the introduction of Local symmetries. We conduct our experiments on instances issued from the last ten sat competitions and the results show that our tool outperforms the existing tools on highly symmetrical problems.<br />
| type = inproceedings<br />
| id = saouli.23.vmcai<br />
| identifier = doi:10.1007/978-3-031-24950-1_12<br />
| bibtex = <br />
@InProceedings<nowiki>{</nowiki> saouli.23.vmcai,<br />
author = <nowiki>{</nowiki>S. Saouli and S. Baarir and C. Dutheillet and J.<br />
Devriendt<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki><nowiki>{</nowiki>CosySEL<nowiki>}</nowiki>: <nowiki>{</nowiki>I<nowiki>}</nowiki>mproving <nowiki>{</nowiki>SAT<nowiki>}</nowiki> Solving Using Local<br />
Symmetries<nowiki>}</nowiki>,<br />
booktitle = <nowiki>{</nowiki>24th International Conference on Verification, Model<br />
Checking, and Abstract Interpretation<nowiki>}</nowiki>,<br />
volume = <nowiki>{</nowiki>13881<nowiki>}</nowiki>,<br />
pages = <nowiki>{</nowiki>252--266<nowiki>}</nowiki>,<br />
publisher = <nowiki>{</nowiki>Springer<nowiki>}</nowiki>,<br />
year = <nowiki>{</nowiki>2023<nowiki>}</nowiki>,<br />
url = <nowiki>{</nowiki>https://doi.org/10.1007/978-3-031-24950-1\_12<nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>10.1007/978-3-031-24950-1\_12<nowiki>}</nowiki>,<br />
month = jan,<br />
abstract = <nowiki>{</nowiki>Many satisfiability problems exhibit symmetry properties.<br />
Thus, the development of symmetry exploitation techniques<br />
seems a natural way to try to improve the efficiency of<br />
solvers by preventing them from exploring isomorphic parts<br />
of the search space. These techniques can be classified<br />
into two categories: dynamic and static symmetry breaking.<br />
Static approaches have often appeared to be more effective<br />
than dynamic ones. But although these approaches can be<br />
considered as complementary, very few works have tried to<br />
combine them. In this paper, we present a new tool,<br />
CosySEL, that implements a composition of the static<br />
Effective Symmetry Breaking Predicates (esbp) technique<br />
with the dynamic Symmetric Explanation Learning (sel). esbp<br />
exploits symmetries to prune the search tree and sel uses<br />
symmetries to speed up the tree traversal. These two<br />
accelerations are complementary and their combination was<br />
made possible by the introduction of Local symmetries. We<br />
conduct our experiments on instances issued from the last<br />
ten sat competitions and the results show that our tool<br />
outperforms the existing tools on highly symmetrical<br />
problems.<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/rida.22.racciscPublications/rida.22.raccisc2022-12-12T21:23:08Z<p>Bot: Created page with "{{Publication | published = true | date = 2022-01-01 | authors = A Abou Rida, R Amhaz, P Parrend | title = Anomaly Detection on Static and Dynamic Graphs using Graph Convoluti..."</p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2022-01-01<br />
| authors = A Abou Rida, R Amhaz, P Parrend<br />
| title = Anomaly Detection on Static and Dynamic Graphs using Graph Convolutional Neural Networks<br />
| booktitle = Robotics and AI for Cybersecurity and Critical Infrastructure in Smart Cities<br />
| series = Studies in Computational Intelligence Series<br />
| pages = 23<br />
| editors = Nadia Nedjah, Ahmed A Abd Al-Latif, Brij B Gupta, Luiza M Mourelle<br />
| publisher = Springer<br />
| x-language = EN<br />
| url = http://icube-publis.unistra.fr/1-AAP22<br />
| type = inbook<br />
| id = rida.22.raccisc<br />
| identifier = doi:10.1007/978-3-030-96737-6_12<br />
| bibtex = <br />
@InBook<nowiki>{</nowiki> rida.22.raccisc,<br />
author = <nowiki>{</nowiki>Abou Rida, A. and Amhaz, R. and Parrend, P.<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>Anomaly Detection on Static and Dynamic Graphs using Graph<br />
Convolutional Neural Networks<nowiki>}</nowiki>,<br />
booktitle = <nowiki>{</nowiki>Robotics and AI for Cybersecurity and Critical<br />
Infrastructure in Smart Cities<nowiki>}</nowiki>,<br />
series = <nowiki>{</nowiki>Studies in Computational Intelligence Series<nowiki>}</nowiki>,<br />
pages = <nowiki>{</nowiki>23<nowiki>}</nowiki>,<br />
year = <nowiki>{</nowiki>2022<nowiki>}</nowiki>,<br />
editor = <nowiki>{</nowiki>Nadia Nedjah, Ahmed A. Abd Al-Latif, Brij B. Gupta, Luiza<br />
M. Mourelle<nowiki>}</nowiki>,<br />
publisher = <nowiki>{</nowiki>Springer<nowiki>}</nowiki>,<br />
x-language = <nowiki>{</nowiki>EN<nowiki>}</nowiki>,<br />
url = <nowiki>{</nowiki>http://icube-publis.unistra.fr/1-AAP22<nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>10.1007/978-3-030-96737-6_12<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/rida.21.cnPublications/rida.21.cn2022-12-12T21:23:07Z<p>Bot: Created page with "{{Publication | published = true | date = 2021-10-01 | authors = A Abou Rida, P Parrend, R Amhaz | title = Evaluation of Anomaly Detection for Cybersecurity Using Inductive No..."</p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2021-10-01<br />
| authors = A Abou Rida, P Parrend, R Amhaz<br />
| title = Evaluation of Anomaly Detection for Cybersecurity Using Inductive Node Embedding with Convolutional Graph Neural Networks<br />
| booktitle = Complex Network 2021<br />
| abstract = In the face of continuous cyberattacks, many scientists have proposed machine learning-based network anomaly detection methods. While deep learning effectively captures unseen patterns of Euclidean data, there is a huge number of applications where data are described in the form of graphs. Graph analysis has improved the detection of anomalies in non-Euclidean domains, but suffers from high computational cost. Graph embeddings solve this problem by converting each node in the network into a low-dimensional representation, but lack the ability to generalize to unseen nodes. Graph convolution neural network methods solve this problem through inductive node embedding (inductive GNN). Inductive GNN shows better performance in detecting anomalies with less complexity than graph analysis and graph embedding methods.<br />
| x-international-audience = Yes<br />
| x-language = EN<br />
| url = http://icube-publis.unistra.fr/4-APA21<br />
| type = inproceedings<br />
| id = rida.21.cn<br />
| identifier = doi:10.1007/978-3-030-93413-2_47<br />
| bibtex = <br />
@InProceedings<nowiki>{</nowiki> rida.21.cn,<br />
author = <nowiki>{</nowiki>Abou Rida, A. and Parrend, P. and Amhaz, R.<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>Evaluation of Anomaly Detection for Cybersecurity Using<br />
Inductive Node Embedding with Convolutional Graph Neural<br />
Networks<nowiki>}</nowiki>,<br />
booktitle = <nowiki>{</nowiki>Complex Network 2021<nowiki>}</nowiki>,<br />
month = oct,<br />
year = <nowiki>{</nowiki>2021<nowiki>}</nowiki>,<br />
abstract = <nowiki>{</nowiki>In the face of continuous cyberattacks, many scientists<br />
have proposed machine learning-based network anomaly<br />
detection methods. While deep learning effectively captures<br />
unseen patterns of Euclidean data, there is a huge number<br />
of applications where data are described in the form of<br />
graphs. Graph analysis has improved the detection of<br />
anomalies in non-Euclidean domains, but suffers from high<br />
computational cost. Graph embeddings solve this problem by<br />
converting each node in the network into a low-dimensional<br />
representation, but lack the ability to<br />
generalize to unseen nodes. Graph convolution neural<br />
network methods solve this problem through inductive node<br />
embedding (inductive GNN). Inductive GNN shows better<br />
performance in detecting anomalies with less complexity<br />
than graph analysis and graph embedding methods.<nowiki>}</nowiki>,<br />
x-international-audience=<nowiki>{</nowiki>Yes<nowiki>}</nowiki>,<br />
x-language = <nowiki>{</nowiki>EN<nowiki>}</nowiki>,<br />
url = <nowiki>{</nowiki>http://icube-publis.unistra.fr/4-APA21<nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>10.1007/978-3-030-93413-2_47<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/raymon.21.uraiPublications/raymon.21.urai2022-12-12T21:23:01Z<p>Bot: </p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2021-10-01<br />
| authors = A Raymond, B Brument, P Parrend<br />
| title = VizNN: Visual Data Augmentation with Convolutional Neural Networks for Cybersecurity Investigation<br />
| booktitle = Upper-Rhine Artificial Intelligence Symposium<br />
| abstract = One of the key challenges of Security Operating Centers (SOCs) is to provide rich information to the security analyst to ease the investigation phase in front of a cyberattack. This requires the combination of supervision with detection capabilities. Supervision enables the security analysts to gain an overview on the security state of the information system under protection. Detection uses advanced algorithms to extract suspicious events from the huge amount of traces produced by the system. To enable coupling an efficient supervision with performance detection, the use of visualisation-based analysis is an appealing approach, which into the bargain provides an elegant solution for data augmentation and thus improved detection performance. We propose VizNN, a Convolutional Neural Network for analysing trace features through their graphical representation. VizNN enables the analyst to gain a visual overview of the traces of interest, and Convolutional Neural Networks leverage a scalability capability. An evaluation of the proposed scheme is performed against reference classifiers for detecting attacks, XGBoost and Random Forests.<br />
| x-international-audience = Yes<br />
| x-language = EN<br />
| url = http://icube-publis.unistra.fr/4-RBP21<br />
| nodoi = <br />
| type = inproceedings<br />
| id = raymon.21.urai<br />
| bibtex = <br />
@InProceedings<nowiki>{</nowiki> raymon.21.urai,<br />
author = <nowiki>{</nowiki>Raymond, A. and Brument, B. and Parrend, P.<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki><nowiki>{</nowiki>VizNN<nowiki>}</nowiki>: <nowiki>{</nowiki>V<nowiki>}</nowiki>isual Data Augmentation with Convolutional<br />
Neural Networks for Cybersecurity Investigation<nowiki>}</nowiki>,<br />
booktitle = <nowiki>{</nowiki>Upper-Rhine Artificial Intelligence Symposium<nowiki>}</nowiki>,<br />
month = oct,<br />
year = <nowiki>{</nowiki>2021<nowiki>}</nowiki>,<br />
abstract = <nowiki>{</nowiki>One of the key challenges of Security Operating Centers<br />
(SOCs) is to provide rich information to the security<br />
analyst to ease the investigation phase in front of a<br />
cyberattack. This requires the combination of supervision<br />
with detection capabilities. Supervision enables the<br />
security analysts to gain an overview on the security state<br />
of the information system under protection. Detection uses<br />
advanced algorithms to extract suspicious events from the<br />
huge amount of traces produced by the system. To enable<br />
coupling an efficient supervision with performance<br />
detection, the use of visualisation-based analysis is an<br />
appealing approach, which into the bargain provides an<br />
elegant solution for data augmentation and thus improved<br />
detection performance. We propose VizNN, a Convolutional<br />
Neural Network for analysing trace features through their<br />
graphical representation. VizNN enables the analyst to gain<br />
a visual overview of the traces of interest, and Convolutional<br />
Neural Networks leverage a scalability capability. An<br />
evaluation of the proposed scheme is performed against<br />
reference classifiers for detecting attacks, XGBoost and<br />
Random Forests.<nowiki>}</nowiki>,<br />
x-international-audience=<nowiki>{</nowiki>Yes<nowiki>}</nowiki>,<br />
x-language = <nowiki>{</nowiki>EN<nowiki>}</nowiki>,<br />
url = <nowiki>{</nowiki>http://icube-publis.unistra.fr/4-RBP21<nowiki>}</nowiki>,<br />
nodoi = <nowiki>{</nowiki><nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/mandel.22.aghmPublications/mandel.22.aghm2022-12-12T21:22:47Z<p>Bot: </p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2022-02-01<br />
| authors = J-L Mandel, P Burger, A Strehle, F Colin, T Mazzucotelli, N Collot, S Baer, B Durand, A Piton, R Coutelle, E Schaefer, P Parrend, L Faivre, K Jobard Garou, D Geneviève, V Ruault, D Martin, R Caumes, T Smol, J Ghoumid, F Ropert Conquer, J Kummeling, C Ockeloen, T Kleefstra, D Koolen<br />
| title = GenIDA, une base de données participative internationale permettant de mieux connaître l'histoire naturelle et les comorbidités des formes génétiques de troubles neurodéveloppementaux<br />
| booktitle = Assises de Génétique Humaine et Médicale<br />
| x-international-audience = No<br />
| x-language = EN<br />
| url = http://icube-publis.unistra.fr/7-MBSC22<br />
| nodoi = <br />
| type = inproceedings<br />
| id = mandel.22.aghm<br />
| bibtex = <br />
@InProceedings<nowiki>{</nowiki> mandel.22.aghm,<br />
author = <nowiki>{</nowiki>Mandel, J-L. and Burger, P. and Strehle, A. and Colin, F.<br />
and Mazzucotelli, T. and Collot, N. and Baer, S. and<br />
Durand, B. and Piton, A. and Coutelle, R. and Schaefer, E.<br />
and Parrend, P. and Faivre, L. and Jobard Garou, K. and<br />
Genevi\`eve, D. and Ruault, V. and Martin, D. and Caumes,<br />
R. and Smol, T. and Ghoumid, J. and Ropert Conquer, F. and<br />
Kummeling, J. and Ockeloen, C. and Kleefstra, T. and<br />
Koolen, D.<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki><nowiki>{</nowiki>GenIDA<nowiki>}</nowiki>, une base de donn\'ees participative<br />
internationale permettant de mieux conna\^itre l'histoire<br />
naturelle et les comorbidit\'es des formes g\'en\'etiques<br />
de troubles neurod\'eveloppementaux<nowiki>}</nowiki>,<br />
booktitle = <nowiki>{</nowiki>Assises de G\'en\'etique Humaine et M\'edicale<nowiki>}</nowiki>,<br />
month = feb,<br />
year = <nowiki>{</nowiki>2022<nowiki>}</nowiki>,<br />
x-international-audience=<nowiki>{</nowiki>No<nowiki>}</nowiki>,<br />
x-language = <nowiki>{</nowiki>EN<nowiki>}</nowiki>,<br />
url = <nowiki>{</nowiki>http://icube-publis.unistra.fr/7-MBSC22<nowiki>}</nowiki>,<br />
nodoi = <nowiki>{</nowiki><nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/maldonado-ruiz.22.ieeePublications/maldonado-ruiz.22.ieee2022-12-12T21:22:47Z<p>Bot: Created page with "{{Publication | published = true | date = 2022-01-01 | authors = Daniel Maldonado-Ruiz, Jenny Torres, El MadhounNour, Mohamad Badra | journal = IEEE Access | title = Current T..."</p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2022-01-01<br />
| authors = Daniel Maldonado-Ruiz, Jenny Torres, Nour El Madhoun, Mohamad Badra<br />
| journal = IEEE Access<br />
| title = Current Trends in Blockchain Implementations on the Paradigm of Public Key Infrastructure: A Survey<br />
| abstract = Since the emergence of the Bitcoin cryptocurrency, the blockchain technology has become the new Internet tool with which researchers claim to be able to solve any existing online problem. From immutable log ledger applications to authorisation systems applications, the current technological consensus implies that most of Internet problems could be effectively solved by deploying some form of blockchain environment. Regardless of this 'consensus', there are decentralised Internet-based applications on which blockchain technology can actually solve several problems and improve the functionality of these applications. The development of these new blockchain-based solutions is grouped into a new paradigm called Blockchain 3.0 and its concepts go far beyond the well-known cryptocurrencies. In this paper, we study the current trends in the application of blockchain on the paradigm of Public Key Infrastructures (PKI). In particular, we focus on how these current trends can guide the exploration of a fully Decentralised Identity System, with blockchain as part of the core technology.<br />
| url = https://ieeexplore.ieee.org/abstract/document/9687536<br />
| type = article<br />
| id = maldonado-ruiz.22.ieee<br />
| identifier = doi:10.1109/ACCESS.2022.3145156<br />
| bibtex = <br />
@Article<nowiki>{</nowiki> maldonado-ruiz.22.ieee,<br />
author = <nowiki>{</nowiki>Maldonado-Ruiz, Daniel and Torres, Jenny and El Madhoun,<br />
Nour and Badra, Mohamad<nowiki>}</nowiki>,<br />
journal = <nowiki>{</nowiki>IEEE Access<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>Current Trends in Blockchain Implementations on the<br />
Paradigm of Public Key Infrastructure: <nowiki>{</nowiki>A<nowiki>}</nowiki> Survey<nowiki>}</nowiki>,<br />
year = <nowiki>{</nowiki>2022<nowiki>}</nowiki>,<br />
abstract = <nowiki>{</nowiki>Since the emergence of the Bitcoin cryptocurrency, the<br />
blockchain technology has become the new Internet tool with<br />
which researchers claim to be able to solve any existing<br />
online problem. From immutable log ledger applications to<br />
authorisation systems applications, the current<br />
technological consensus implies that most of Internet<br />
problems could be effectively solved by deploying some form<br />
of blockchain environment. Regardless of this 'consensus',<br />
there are decentralised Internet-based applications on<br />
which blockchain technology can actually solve several<br />
problems and improve the functionality of these<br />
applications. The development of these new blockchain-based<br />
solutions is grouped into a new paradigm called Blockchain<br />
3.0 and its concepts go far beyond the well-known<br />
cryptocurrencies. In this paper, we study the current<br />
trends in the application of blockchain on the paradigm of<br />
Public Key Infrastructures (PKI). In particular, we focus<br />
on how these current trends can guide the exploration of a<br />
fully Decentralised Identity System, with blockchain as<br />
part of the core technology.<nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>10.1109/ACCESS.2022.3145156<nowiki>}</nowiki>,<br />
url = <nowiki>{</nowiki>https://ieeexplore.ieee.org/abstract/document/9687536<nowiki>}</nowiki>,<br />
issn = <nowiki>{</nowiki>2169-3536<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/maldonado-ruiz.21.ifipPublications/maldonado-ruiz.21.ifip2022-12-12T21:22:46Z<p>Bot: Created page with "{{Publication | published = true | date = 2021-04-01 | authors = Daniel Maldonado-Ruiz, Jenny Torres, El MadhounNour, Mohamad Badra | booktitle = 2021 11th IFIP International..."</p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2021-04-01<br />
| authors = Daniel Maldonado-Ruiz, Jenny Torres, Nour El Madhoun, Mohamad Badra<br />
| booktitle = 2021 11th IFIP International Conference on New Technologies, Mobility and Security (NTMS)<br />
| title = An Innovative and Decentralized Identity Framework Based on Blockchain Technology<br />
| pages = 1 to 8<br />
| abstract = Network users usually need a third party validation to prove that they are who they claim to be. Authentication systems mostly assume the existence of a Trusted Third Party (TTP) in the form of a Certificate Authority (CA) or as an authentication server. However, relying on a TTP implies that users do not directly manage their identities, but delegate this role to a third party. This intrinsic issue can generate trust concerns (e.g., identity theft), as well as privacy concerns towards the third party. The main objective of this research is to present an autonomous and independent solution where users can store their self created credentials without depending on TTPs. To this aim, the use of a TTP autonomous and independent network is needed, where users can manage and assess their identities themselves. In this paper, we propose the framework called Three Blockchains Identity Management with Elliptic Curve Cryptography (3BI-ECC). With our proposed framework, the users' identities are self-generated and validated by their owners. Moreover, it allows the users to customize the information they want to share with third parties.<br />
| url = https://ieeexplore.ieee.org/document/9432656<br />
| type = inproceedings<br />
| id = maldonado-ruiz.21.ifip<br />
| identifier = doi:10.1109/NTMS49979.2021.9432656<br />
| bibtex = <br />
@InProceedings<nowiki>{</nowiki> maldonado-ruiz.21.ifip,<br />
author = <nowiki>{</nowiki>Maldonado-Ruiz, Daniel and Torres, Jenny and El Madhoun,<br />
Nour and Badra, Mohamad<nowiki>}</nowiki>,<br />
booktitle = <nowiki>{</nowiki>2021 11th IFIP International Conference on New<br />
Technologies, Mobility and Security (NTMS)<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>An Innovative and Decentralized Identity Framework Based<br />
on Blockchain Technology<nowiki>}</nowiki>,<br />
year = <nowiki>{</nowiki>2021<nowiki>}</nowiki>,<br />
pages = <nowiki>{</nowiki>1--8<nowiki>}</nowiki>,<br />
abstract = <nowiki>{</nowiki>Network users usually need a third party validation to<br />
prove that they are who they claim to be. Authentication<br />
systems mostly assume the existence of a Trusted Third<br />
Party (TTP) in the form of a Certificate Authority (CA) or<br />
as an authentication server. However, relying on a TTP<br />
implies that users do not directly manage their identities,<br />
but delegate this role to a third party. This intrinsic<br />
issue can generate trust concerns (e.g., identity theft),<br />
as well as privacy concerns towards the third party. The<br />
main objective of this research is to present an autonomous<br />
and independent solution where users can store their self<br />
created credentials without depending on TTPs. To this aim,<br />
the use of an TTP autonomous and independent network is<br />
needed, where users can manage and assess their identities<br />
themselves. In this paper, we propose the framework called<br />
Three Blockchains Identity Management with Elliptic Curve<br />
Cryptography (3BI-ECC). With our proposed framework, the<br />
users' identities are self-generated and validated by their<br />
owners. Moreover, it allows the users to customize the<br />
information they want to share with third parties.<nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>10.1109/NTMS49979.2021.9432656<nowiki>}</nowiki>,<br />
url = <nowiki>{</nowiki>https://ieeexplore.ieee.org/document/9432656<nowiki>}</nowiki>,<br />
issn = <nowiki>{</nowiki>2157-4960<nowiki>}</nowiki>,<br />
month = apr<br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/hammi.21.ieeePublications/hammi.21.ieee2022-12-12T21:22:31Z<p>Bot: Created page with "{{Publication | published = true | date = 2021-01-01 | authors = Badis Hammi, Sherali Zeadally, Yves Christian Elloh Adja, Manlio Del Giudice, Jamel Nebhen | journal = IEEE Tr..."</p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2021-01-01<br />
| authors = Badis Hammi, Sherali Zeadally, Yves Christian Elloh Adja, Manlio Del Giudice, Jamel Nebhen<br />
| journal = IEEE Transactions on Engineering Management<br />
| title = Blockchain-Based Solution for Detecting and Preventing Fake Check Scams<br />
| pages = 1 to 16<br />
| url = https://ieeexplore.ieee.org/document/9469218<br />
| abstract = Fake check scam is one of the most common attacks used to commit fraud against consumers. This fraud is particularly costly for victims because they generally lose thousands of dollars as well as being exposed to judicial proceedings. Currently, there is no existing solution to authenticate checks and detect fake ones instantly. Instead, banks must wait for a period of more than 48 h to detect the scam. In this context, we propose a blockchain-based scheme to authenticate checks and detect fake check scams. Moreover, our approach allows the revocation of used checks. More precisely, our approach helps the banks to share information about provided checks and used ones, without exposing the banks' customers' personal data. We demonstrate a proof of concept of our proposed approach using Namecoin and Hyperledger blockchain technologies.<br />
| type = article<br />
| id = hammi.21.ieee<br />
| identifier = doi:10.1109/TEM.2021.3087112<br />
| bibtex = <br />
@Article<nowiki>{</nowiki> hammi.21.ieee,<br />
author = <nowiki>{</nowiki>Hammi, Badis and Zeadally, Sherali and Adja, Yves<br />
Christian Elloh and Giudice, Manlio Del and Nebhen, Jamel<nowiki>}</nowiki>,<br />
journal = <nowiki>{</nowiki>IEEE Transactions on Engineering Management<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>Blockchain-Based Solution for Detecting and Preventing<br />
Fake Check Scams<nowiki>}</nowiki>,<br />
year = <nowiki>{</nowiki>2021<nowiki>}</nowiki>,<br />
pages = <nowiki>{</nowiki>1--16<nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>10.1109/TEM.2021.3087112<nowiki>}</nowiki>,<br />
url = <nowiki>{</nowiki>https://ieeexplore.ieee.org/document/9469218<nowiki>}</nowiki>,<br />
abstract = <nowiki>{</nowiki>Fake check scam is one of the most common attacks used to<br />
commit fraud against consumers. This fraud is particularly<br />
costly for victims because they generally lose thousands of<br />
dollars as well as being exposed to judicial proceedings.<br />
Currently, there is no existing solution to authenticate<br />
checks and detect fake ones instantly. Instead, banks must<br />
wait for a period of more than 48 h to detect the scam. In<br />
this context, we propose a blockchain-based scheme to<br />
authenticate checks and detect fake check scams. Moreover,<br />
our approach allows the revocation of used checks. More<br />
precisely, our approach helps the banks to share<br />
information about provided checks and used ones, without<br />
exposing the banks' customers' personal data. We<br />
demonstrate a proof of concept of our proposed approach<br />
using Namecoin and Hyperledger blockchain technologies.<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/guillaume.22.egcPublications/guillaume.22.egc2022-12-12T21:22:29Z<p>Bot: </p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2022-01-12<br />
| authors = Pierre Guillaume, Corentin Duchene, Reda Dehak<br />
| title = Hate Speech and Toxic Comment Detection using Transformers<br />
| booktitle = Workshop EGC 2022 DL for NLP<br />
| abstract = Hate speech and toxic comment detection on social media has proven to be an essential issue for content moderation. This paper displays a comparison between different Transformer models for Hate Speech detection such as Hate BERT, a BERT-based model, RoBERTa and BERTweet which is a RoBERTa based model. These Transformer models are tested on Jibes&amp;Delight 2021 reddit dataset using the same training and testing conditions. Multiple approaches are detailed in this paper considering feature extraction and data augmentation. The paper concludes that our RoBERTa st4-aug model trained with data augmentation outperforms simple RoBERTa and HateBERT models.<br />
| lrdekeywords = IA<br />
| category = national<br />
| lrdenewsdate = 2022-01-12<br />
| note = accepted<br />
| nodoi = <br />
| type = inproceedings<br />
| id = guillaume.22.egc<br />
| bibtex = <br />
@InProceedings<nowiki>{</nowiki> guillaume.22.egc,<br />
author = <nowiki>{</nowiki>Pierre Guillaume and Corentin Duchene and Reda Dehak<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>Hate Speech and Toxic Comment Detection using<br />
Transformers<nowiki>}</nowiki>,<br />
booktitle = <nowiki>{</nowiki>Workshop EGC 2022 DL for NLP<nowiki>}</nowiki>,<br />
month = jan,<br />
year = <nowiki>{</nowiki>2022<nowiki>}</nowiki>,<br />
abstract = <nowiki>{</nowiki>Hate speech and toxic comment detection on social media<br />
has proven to be an essential issue for content moderation.<br />
This paper displays a comparison between different<br />
Transformer models for Hate Speech detection such as Hate<br />
BERT, a BERT-based model, RoBERTa and BERTweet which is a<br />
RoBERTa based model. These Transformer models are tested on<br />
Jibes&amp;Delight 2021 reddit dataset using the same<br />
training and testing conditions. Multiple approaches are<br />
detailed in this paper considering feature extraction and<br />
data augmentation. The paper concludes that our RoBERTa<br />
st4-aug model trained with data augmentation outperforms<br />
simple RoBERTa and HateBERT models.<nowiki>}</nowiki>,<br />
category = <nowiki>{</nowiki>national<nowiki>}</nowiki>,<br />
note = <nowiki>{</nowiki>accepted<nowiki>}</nowiki>,<br />
nodoi = <nowiki>{</nowiki><nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/grelot.21.cesarPublications/grelot.21.cesar2022-12-12T21:22:27Z<p>Bot: </p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2021-01-01<br />
| title = Automation of Binary Analysis: From Open Source Collection to Threat Intelligence<br />
| authors = Frederic Grelot, Sébastien Larinier, Marie Salmon<br />
| booktitle = Proceedings of the 28th C&ESAR<br />
| pages = 41<br />
| abstract = Many open sources of binaries, including malware, have emerged in the landscape in recent years. Their quality compares very favourably with commercial sources, as emphasised by Thibaud Binetruy (Twitter influencer under a pseudonym, Société Générale CERT, 2020): "Integrating operational threat intel in your defense mechanisms doesn't mean buying Threat Intel. You can start by using the [mass] of open source indicators available for free." Some are provided by official sources (Abuse.ch, with data supplied by the Swiss national CERT, among others), while others are made available in more obscure ways, sometimes anonymously (VirusShare, VX-Underground, etc.). Our examination of these sources underlines the wide disparity in quality and quantity between them. We have had to take this diversity into account in our research, designing a dedicated platform that enables us to supply information to our binary analysis products and to conduct daily analyses of correlations between and within malware families on a large scale. This work can then be applied to concrete cases such as Babuk, Ryuk and Conti. We have been able to highlight links for these families by immediately identifying correlations, with additional manual analysis then confirming the genealogy of the samples precisely.<br />
| nodoi = <br />
| type = inproceedings<br />
| id = grelot.21.cesar<br />
| bibtex = <br />
@InProceedings<nowiki>{</nowiki> grelot.21.cesar,<br />
title = <nowiki>{</nowiki>Automation of Binary Analysis: <nowiki>{</nowiki>F<nowiki>}</nowiki>rom Open Source<br />
Collection to Threat Intelligence<nowiki>}</nowiki>,<br />
author = <nowiki>{</nowiki>Grelot, Frederic and Larinier, S\'ebastien and Salmon,<br />
Marie<nowiki>}</nowiki>,<br />
booktitle = <nowiki>{</nowiki>Proceedings of the 28th C\&ESAR<nowiki>}</nowiki>,<br />
pages = <nowiki>{</nowiki>41<nowiki>}</nowiki>,<br />
year = <nowiki>{</nowiki>2021<nowiki>}</nowiki>,<br />
abstract = <nowiki>{</nowiki>Many open sources of binaries, including malware, have<br />
emerged in the landscape in recent years. Their quality<br />
compares very favourably with commercial sources, as<br />
emphasised by Thibaud Binetruy (Twitter influencer under a<br />
pseudonym, Soci<nowiki>{</nowiki>\'e<nowiki>}</nowiki>t<nowiki>{</nowiki>\'e<nowiki>}</nowiki> G<nowiki>{</nowiki>\'e<nowiki>}</nowiki>n<nowiki>{</nowiki>\'e<nowiki>}</nowiki>rale CERT, 2020):<br />
"Integrating operational threat intel in your defense<br />
mechanisms doesn't mean buying Threat Intel. You can start<br />
by using the [mass] of open source indicators available for<br />
free." Some are provided by official sources (Abuse.ch,<br />
with data supplied by the Swiss national CERT, among<br />
others), while others are made available in more obscure<br />
ways, sometimes anonymously (VirusShare, VX-Underground,<br />
etc.). Our examination of these sources underlines the wide<br />
disparity in quality and quantity between them. We have had<br />
to take this diversity into account in our research,<br />
designing a dedicated platform that enables us to supply<br />
information to our binary analysis products and to conduct<br />
daily analyses of correlations between and within malware<br />
families on a large scale. This work can then be applied to<br />
concrete cases such as Babuk, Ryuk and Conti. We have been<br />
able to highlight links for these families by immediately<br />
identifying correlations, with additional manual analysis<br />
then confirming the genealogy of the samples precisely.<nowiki>}</nowiki>,<br />
nodoi = <nowiki>{</nowiki><nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/espie.21.euroconPublications/espie.21.eurocon2022-12-12T21:22:14Z<p>Bot: </p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2021-01-01<br />
| authors = Marc Espie<br />
| booktitle = EuroBSDCon 2021<br />
| title = Debug Packages in OpenBSD<br />
| url = https://www.openbsd.org/papers/eurobsdcon2021-espie-debug.pdf<br />
| type = inproceedings<br />
| id = espie.21.eurocon<br />
| bibtex = <br />
@InProceedings<nowiki>{</nowiki> espie.21.eurocon,<br />
author = <nowiki>{</nowiki>Marc Espie<nowiki>}</nowiki>,<br />
booktitle = <nowiki>{</nowiki>EuroBSDCon 2021<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>Debug Packages in <nowiki>{</nowiki>OpenBSD<nowiki>}</nowiki><nowiki>}</nowiki>,<br />
year = <nowiki>{</nowiki>2021<nowiki>}</nowiki>,<br />
url = <nowiki>{</nowiki>https://www.openbsd.org/papers/eurobsdcon2021-espie-debug.pdf<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/el-madhoune.21.csnetPublications/el-madhoune.21.csnet2022-12-12T21:22:13Z<p>Bot: Created page with "{{Publication | published = true | date = 2021-10-01 | authors = Darine Al-Mohtar, Amani Ramzi Daou, Nour El Madhoun, Rachad Maallawi | booktitle = 2021 5th Cyber Security in..."</p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2021-10-01<br />
| authors = Darine Al-Mohtar, Amani Ramzi Daou, Nour El Madhoun, Rachad Maallawi<br />
| booktitle = 2021 5th Cyber Security in Networking Conference (CSNet)<br />
| title = A Secure Blockchain-Based Architecture for the COVID-19 Data Network<br />
| pages = 1 to 5<br />
| abstract = The COVID-19 pandemic has impacted the world economy and mainly all activities where social distancing cannot be respected. In order to control this pandemic, screening tests such as PCR have become essential. For example, in the case of a trip, the traveler must carry out a PCR test within 72 hours before his departure and if he is not a carrier of the COVID-19, he can therefore travel by presenting, during check-in and boarding, the negative result sheet to the agent. The latter will then verify the presented sheet by trusting: (a) the medical biology laboratory, (b) the credibility of the traveler for not having changed the PCR result from "positive to negative". Therefore, this confidence and this verification are made without being based on any mechanism of security and integrity, despite the great importance of the PCR test results to control the COVID-19 pandemic. Consequently, we propose in this paper a blockchain-based decentralized trust architecture that aims to guarantee the integrity, immutability and traceability of COVID-19 test results. Our proposal also aims to ensure the interconnection between several organizations (airports, medical laboratories, cinemas, etc.) in order to access COVID-19 test results in a secure and decentralized manner.<br />
| url = https://ieeexplore.ieee.org/document/9614272<br />
| type = inproceedings<br />
| id = el-madhoune.21.csnet<br />
| identifier = doi:10.1109/CSNet52717.2021.9614272<br />
| bibtex = <br />
@InProceedings<nowiki>{</nowiki> el-madhoune.21.csnet,<br />
author = <nowiki>{</nowiki>Al-Mohtar, Darine and Daou, Amani Ramzi and Madhoun, Nour<br />
El and Maallawi, Rachad<nowiki>}</nowiki>,<br />
booktitle = <nowiki>{</nowiki>2021 5th Cyber Security in Networking Conference (CSNet)<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>A Secure Blockchain-Based Architecture for the <nowiki>{</nowiki>COVID<nowiki>}</nowiki>-19<br />
Data Network<nowiki>}</nowiki>,<br />
year = <nowiki>{</nowiki>2021<nowiki>}</nowiki>,<br />
pages = <nowiki>{</nowiki>1-5<nowiki>}</nowiki>,<br />
abstract = <nowiki>{</nowiki>The COVID-19 pandemic has impacted the world economy and<br />
mainly all activities where social distancing cannot be<br />
respected. In order to control this pandemic, screening<br />
tests such as PCR have become essential. For example, in<br />
the case of a trip, the traveler must carry out a PCR test<br />
within 72 hours before his departure and if he is not a<br />
carrier of the COVID-19, he can therefore travel by<br />
presenting, during check-in and boarding, the negative<br />
result sheet to the agent. The latter will then verify the<br />
presented sheet by trusting: (a) the medical biology<br />
laboratory, (b) the credibility of the traveler for not<br />
having changed the PCR result from "positive to negative".<br />
Therefore, this confidence and this verification are made<br />
without being based on any mechanism of security and<br />
integrity, despite the great importance of the PCR test<br />
results to control the COVID-19 pandemic. Consequently, we<br />
propose in this paper a blockchain-based decentralized<br />
trust architecture that aims to guarantee the integrity,<br />
immutability and traceability of COVID-19 test results. Our<br />
proposal also aims to ensure the interconnection between<br />
several organizations (airports, medical laboratories,<br />
cinemas, etc.) in order to access COVID-19 test results in<br />
a secure and decentralized manner.<nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>10.1109/CSNet52717.2021.9614272<nowiki>}</nowiki>,<br />
url = <nowiki>{</nowiki>https://ieeexplore.ieee.org/document/9614272<nowiki>}</nowiki>,<br />
issn = <nowiki>{</nowiki>2768-0029<nowiki>}</nowiki>,<br />
month = oct<br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/el-madhoun.22.ainaPublications/el-madhoun.22.aina2022-12-12T21:22:12Z<p>Bot: </p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2022-01-01<br />
| authors = Nour El Madhoun, Emmanuel Bertin, Mohamad Badra, Guy Pujolle<br />
| booktitle = 36th International Conference on Advanced Information Networking and Applications (AINA)<br />
| title = New Security Protocols for Offline Point-of-Sale Machines<br />
| abstract = EMV (Europay MasterCard Visa) is the protocol implemented to secure the communication, between a client's payment device and a Point-of-Sale machine, during a contact or an NFC (Near Field Communication) purchase transaction. In several studies, researchers have analyzed the operation of this protocol in order to verify its safety: unfortunately, they have identified two security vulnerabilities that lead to multiple attacks and dangerous risks threatening both clients and merchants. In this paper, we are interested in proposing new security solutions that aim to overcome the two dangerous EMV vulnerabilities. Our solutions address the case of Point-of-Sale machines that do not have access to the banking network and are therefore in the "offline" connectivity mode. We verify the accuracy of our proposals by using the Scyther security verification tool.<br />
| publisher = Springer<br />
| series = Lecture Notes in Networks and Systems<br />
| volume = 450<br />
| type = inproceedings<br />
| id = el-madhoun.22.aina<br />
| identifier = doi:10.1007/978-3-030-99587-4_38<br />
| bibtex = <br />
@InProceedings<nowiki>{</nowiki> el-madhoun.22.aina,<br />
author = <nowiki>{</nowiki>Nour El Madhoun and Emmanuel Bertin and Mohamad Badra and<br />
Guy Pujolle<nowiki>}</nowiki>,<br />
booktitle = <nowiki>{</nowiki>36th International Conference on Advanced Information<br />
Networking and Applications (AINA)<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>New Security Protocols for Offline Point-of-Sale<br />
Machines<nowiki>}</nowiki>,<br />
year = <nowiki>{</nowiki>2022<nowiki>}</nowiki>,<br />
abstract = <nowiki>{</nowiki>EMV (Europay MasterCard Visa) is the protocol implemented<br />
to secure the communication, between a client's payment<br />
device and a Point-of-Sale machine, during a contact or an<br />
NFC (Near Field Communication) purchase transaction. In<br />
several studies, researchers have analyzed the operation of<br />
this protocol in order to verify its safety: unfortunately,<br />
they have identified two security vulnerabilities that lead<br />
to multiple attacks and dangerous risks threatening both<br />
clients and merchants. In this paper, we are interested in<br />
proposing new security solutions that aim to overcome the<br />
two dangerous EMV vulnerabilities. Our solutions address<br />
the case of Point-of-Sale machines that do not have access<br />
to the banking network and are therefore in the "offline"<br />
connectivity mode. We verify the accuracy of our proposals<br />
by using the Scyther security verification tool.<nowiki>}</nowiki>,<br />
publisher = <nowiki>{</nowiki>Springer<nowiki>}</nowiki>,<br />
series = <nowiki>{</nowiki>Lecture Notes in Networks and Systems<nowiki>}</nowiki>,<br />
volume = <nowiki>{</nowiki>450<nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>10.1007/978-3-030-99587-4_38<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/dziadek.23.fmPublications/dziadek.23.fm2022-12-12T21:22:11Z<p>Bot: </p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2022-12-08<br />
| authors = Sven Dziadek, Uli Fahrenberg, Philipp Schlehuber-Caissier<br />
| title = Energy Problems in Finite and Timed Automata with Büchi Conditions<br />
| series = Lecture Notes in Computer Science<br />
| publisher = Springer<br />
| booktitle = International Symposium on Formal Methods (FM)<br />
| lrdeprojects = AA, Spot<br />
| abstract = We show how to efficiently solve energy Büchi problems in finite weighted automata and in one-clock weighted timed automata. Solving the former problem is our main contribution and is handled by a modified version of Bellman-Ford interleaved with Couvreur's algorithm. The latter problem is handled via a reduction to the former relying on the corner-point abstraction. All our algorithms are freely available and implemented in a tool based on the open-source tools TChecker and Spot.<br />
| lrdenewsdate = 2022-12-08<br />
| note = Accepted<br />
| type = inproceedings<br />
| id = dziadek.23.fm<br />
| identifier = doi:10.1007/978-3-031-27481-7_14<br />
| bibtex = <br />
@InProceedings<nowiki>{</nowiki> dziadek.23.fm,<br />
author = <nowiki>{</nowiki>Sven Dziadek and Uli Fahrenberg and Philipp<br />
Schlehuber-Caissier<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>Energy Problems in Finite and Timed Automata with<br />
<nowiki>{</nowiki>B<nowiki>{</nowiki>\"u<nowiki>}</nowiki>chi<nowiki>}</nowiki> Conditions<nowiki>}</nowiki>,<br />
series = <nowiki>{</nowiki>Lecture Notes in Computer Science<nowiki>}</nowiki>,<br />
publisher = <nowiki>{</nowiki>Springer<nowiki>}</nowiki>,<br />
booktitle = <nowiki>{</nowiki>International Symposium on Formal Methods (FM)<nowiki>}</nowiki>,<br />
year = 2023,<br />
month = mar,<br />
doi = <nowiki>{</nowiki>10.1007/978-3-031-27481-7_14<nowiki>}</nowiki>,<br />
abstract = <nowiki>{</nowiki>We show how to efficiently solve energy B<nowiki>{</nowiki>\"u<nowiki>}</nowiki>chi problems<br />
in finite weighted automata and in one-clock weighted timed<br />
automata. Solving the former problem is our main<br />
contribution and is handled by a modified version of<br />
Bellman-Ford interleaved with Couvreur's algorithm. The<br />
latter problem is handled via a reduction to the former<br />
relying on the corner-point abstraction. All our algorithms<br />
are freely available and implemented in a tool based on the<br />
open-source tools TChecker and Spot.<nowiki>}</nowiki>,<br />
note = <nowiki>{</nowiki>Accepted<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/christian.21.csPublications/christian.21.cs2022-12-12T21:21:49Z<p>Bot: Created page with "{{Publication | published = true | date = 2021-01-01 | title = A blockchain-based certificate revocation management and status verification system | journal = Computers & Secu..."</p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2021-01-01<br />
| title = A blockchain-based certificate revocation management and status verification system<br />
| journal = Computers & Security<br />
| volume = 104<br />
| pages = 102209<br />
| url = https://www.sciencedirect.com/science/article/pii/S016740482100033X<br />
| authors = Yves Christian Elloh Adja, Badis Hammi, Ahmed Serhrouchni, Sherali Zeadally<br />
| abstract = Revocation management is one of the main tasks of the Public Key Infrastructure (PKI). It is also critical to the security of any PKI. As a result of the increase in the number and sizes of networks as well as the adoption of novel paradigms such as the Internet of Things and their usage of the web, current revocation mechanisms are vulnerable to single points of failure as the network loads increase. To address this challenge, we take advantage of the blockchain's power and resiliency in order to propose an efficient decentralized certificate revocation management and status verification system. We use the extension field of the X509 certificate's structure to introduce a field that describes to which distribution point the certificate will belong if revoked. Each distribution point is represented by a Bloom filter filled with revoked certificates. Bloom filters and revocation information are stored in a public blockchain. We developed a real implementation of our proposed mechanism in Python and the Namecoin blockchain. Then, we conducted an extensive evaluation of our scheme using performance metrics such as execution time and data consumption to demonstrate that it can meet the needed requirements with high efficiency and low cost. Moreover, we compare the performance of our approach with two of the most well-known/used revocation techniques which are Online Certificate Status Protocol (OCSP) and Certificate Revocation List (CRL). The results obtained show that our proposed approach outperforms these current schemes.<br />
| type = article<br />
| id = christian.21.cs<br />
| identifier = doi:10.1016/j.cose.2021.102209<br />
| bibtex = <br />
@Article<nowiki>{</nowiki> christian.21.cs,<br />
title = <nowiki>{</nowiki>A blockchain-based certificate revocation management and<br />
status verification system<nowiki>}</nowiki>,<br />
journal = <nowiki>{</nowiki>Computers \& Security<nowiki>}</nowiki>,<br />
volume = <nowiki>{</nowiki>104<nowiki>}</nowiki>,<br />
pages = <nowiki>{</nowiki>102209<nowiki>}</nowiki>,<br />
year = <nowiki>{</nowiki>2021<nowiki>}</nowiki>,<br />
issn = <nowiki>{</nowiki>0167-4048<nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>10.1016/j.cose.2021.102209<nowiki>}</nowiki>,<br />
url = <nowiki>{</nowiki>https://www.sciencedirect.com/science/article/pii/S016740482100033X<nowiki>}</nowiki>,<br />
author = <nowiki>{</nowiki>Yves Christian <nowiki>{</nowiki>Elloh Adja<nowiki>}</nowiki> and Badis Hammi and Ahmed<br />
Serhrouchni and Sherali Zeadally<nowiki>}</nowiki>,<br />
keywords = <nowiki>{</nowiki>Authentication, Blockchain, Bloom filter, Certificate,<br />
Revocation, Decentralization, PKI, Security, X509<nowiki>}</nowiki>,<br />
abstract = <nowiki>{</nowiki>Revocation management is one of the main tasks of the<br />
Public Key Infrastructure (PKI). It is also critical to the<br />
security of any PKI. As a result of the increase in the<br />
number and sizes of networks as well as the adoption of<br />
novel paradigms such as the Internet of Things and their<br />
usage of the web, current revocation mechanisms are<br />
vulnerable to single points of failure as the network loads<br />
increase. To address this challenge, we take advantage of<br />
the blockchain's power and resiliency in order to propose an<br />
efficient decentralized certificate revocation management<br />
and status verification system. We use the extension field<br />
of the X509 certificate's structure to introduce a field<br />
that describes to which distribution point the certificate<br />
will belong if revoked. Each distribution point is<br />
represented by a Bloom filter filled with revoked<br />
certificates. Bloom filters and revocation information are<br />
stored in a public blockchain. We developed a real<br />
implementation of our proposed mechanism in Python and the<br />
Namecoin blockchain. Then, we conducted an extensive<br />
evaluation of our scheme using performance metrics such as<br />
execution time and data consumption to demonstrate that it<br />
can meet the needed requirements with high efficiency and<br />
low cost. Moreover, we compare the performance of our<br />
approach with two of the most well-known/used revocation<br />
techniques which are Online Certificate Status Protocol<br />
(OCSP) and Certificate Revocation List (CRL). The results<br />
obtained show that our proposed approach outperforms these<br />
current schemes.<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/avanthey.22.rsPublications/avanthey.22.rs2022-12-12T21:21:23Z<p>Bot: Created page with "{{Publication | published = true | date = 2022-01-01 | authors = Loïca Avanthey, Laurent Beaudoin | title = How to Boost Close-Range Remote Sensing Courses Using a Serious Ga..."</p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2022-01-01<br />
| authors = Loïca Avanthey, Laurent Beaudoin<br />
| title = How to Boost Close-Range Remote Sensing Courses Using a Serious Game: Uncover in a Fun Way the Complexity and Transversality of Multi-Domain Field Acquisitions<br />
| journal = Remote Sensing<br />
| volume = 14<br />
| number = 4<br />
| article-number = 817<br />
| url = https://www.mdpi.com/2072-4292/14/4/817<br />
| abstract = Close-range remote sensing, and more particularly, its acquisition part that is linked to field robotics, is at the crossroads of many scientific and engineering fields. Thus, it takes time for students to acquire the solid foundations needed before practicing on real systems. Therefore, we are interested in a means that allows students without prerequisites to quickly appropriate the fundamentals of this interdisciplinary field. For this, we adapted a haggle game to the close-range remote sensing theme. In this article, we explain the mechanics that serve our educational purposes. We have used it, so far, for four academic years with hundreds of students. The experience was assessed through quality surveys and quizzes to calculate success indicators. The results show that the serious game is well appreciated by the students. It allows them to better structure information and acquire a good global vision of multi-domain acquisition and data processing in close-range remote sensing. The students are also more involved in the rest of the lessons; all of this helps to facilitate their learning of the theoretical parts. Thus, we were able to shorten the time before moving on to real practice by replacing three lesson sessions with one serious game session, with an increase in mastering fundamental skills. The designed serious game can be useful for close-range remote sensing teachers looking for an effective starting lesson. In addition, teachers from other technical fields can draw inspiration from the creation mechanisms described in this article to create their own adapted version. Such a serious game is also a good asset for selecting promising students in a recruitment context.<br />
| type = article<br />
| id = avanthey.22.rs<br />
| identifier = doi:10.3390/rs14040817<br />
| bibtex = <br />
@Article<nowiki>{</nowiki> avanthey.22.rs,<br />
author = <nowiki>{</nowiki>Avanthey, Lo<nowiki>{</nowiki>\"\i<nowiki>}</nowiki>ca and Beaudoin, Laurent<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>How to Boost Close-Range Remote Sensing Courses Using a<br />
Serious Game: <nowiki>{</nowiki>U<nowiki>}</nowiki>ncover in a Fun Way the Complexity and<br />
Transversality of Multi-Domain Field Acquisitions<nowiki>}</nowiki>,<br />
journal = <nowiki>{</nowiki>Remote Sensing<nowiki>}</nowiki>,<br />
volume = <nowiki>{</nowiki>14<nowiki>}</nowiki>,<br />
year = <nowiki>{</nowiki>2022<nowiki>}</nowiki>,<br />
number = <nowiki>{</nowiki>4<nowiki>}</nowiki>,<br />
article-number= <nowiki>{</nowiki>817<nowiki>}</nowiki>,<br />
url = <nowiki>{</nowiki>https://www.mdpi.com/2072-4292/14/4/817<nowiki>}</nowiki>,<br />
issn = <nowiki>{</nowiki>2072-4292<nowiki>}</nowiki>,<br />
abstract = <nowiki>{</nowiki>Close-range remote sensing, and more particularly, its<br />
acquisition part that is linked to field robotics, is at<br />
the crossroads of many scientific and engineering fields.<br />
Thus, it takes time for students to acquire the solid<br />
foundations needed before practicing on real systems.<br />
Therefore, we are interested in a means that allows students<br />
without prerequisites to quickly appropriate the<br />
fundamentals of this interdisciplinary field. For this, we<br />
adapted a haggle game to the close-range remote sensing<br />
theme. In this article, we explain the mechanics that serve<br />
our educational purposes. We have used it, so far, for four<br />
academic years with hundreds of students. The experience<br />
was assessed through quality surveys and quizzes to<br />
calculate success indicators. The results show that the<br />
serious game is well appreciated by the students. It allows<br />
them to better structure information and acquire a good<br />
global vision of multi-domain acquisition and data<br />
processing in close-range remote sensing. The students are<br />
also more involved in the rest of the lessons; all of this<br />
helps to facilitate their learning of the theoretical<br />
parts. Thus, we were able to shorten the time before moving<br />
on to real practice by replacing three lesson sessions with<br />
one serious game session, with an increase in mastering<br />
fundamental skills. The designed serious game can be useful<br />
for close-range remote sensing teachers looking for an<br />
effective starting lesson. In addition, teachers from other<br />
technical fields can draw inspiration from the creation<br />
mechanisms described in this article to create their own<br />
adapted version. Such a serious game is also a good asset<br />
for selecting promising students in a recruitment<br />
context.<nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>10.3390/rs14040817<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/lepage.22.interspeechPublications/lepage.22.interspeech2022-12-09T15:44:55Z<p>Bot: Created page with "{{Publication | published = true | date = 2022-08-28 | title = Label-Efficient Self-Supervised Speaker Verification With Information Maximization and Contrastive Learning | au..."</p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2022-08-28<br />
| title = Label-Efficient Self-Supervised Speaker Verification With Information Maximization and Contrastive Learning<br />
| authors = Théo Lepage, Réda Dehak<br />
| booktitle = Proc. Interspeech 2022<br />
| pages = 4018 to 4022<br />
| organization = ISCA<br />
| lrdekeywords = IA<br />
| lrdenewsdate = 2022-08-28<br />
| note = accepted<br />
| type = inproceedings<br />
| id = lepage.22.interspeech<br />
| identifier = doi:10.21437/Interspeech.2022-802<br />
| bibtex = <br />
@InProceedings<nowiki>{</nowiki> lepage.22.interspeech,<br />
title = <nowiki>{</nowiki>Label-Efficient Self-Supervised Speaker Verification With<br />
Information Maximization and Contrastive Learning<nowiki>}</nowiki>,<br />
author = <nowiki>{</nowiki>Th\'<nowiki>{</nowiki>e<nowiki>}</nowiki>o Lepage and R\'<nowiki>{</nowiki>e<nowiki>}</nowiki>da Dehak<nowiki>}</nowiki>,<br />
booktitle = <nowiki>{</nowiki>Proc. Interspeech 2022<nowiki>}</nowiki>,<br />
pages = <nowiki>{</nowiki>4018--4022<nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>10.21437/Interspeech.2022-802<nowiki>}</nowiki>,<br />
month = sep,<br />
year = <nowiki>{</nowiki>2022<nowiki>}</nowiki>,<br />
organization = <nowiki>{</nowiki>ISCA<nowiki>}</nowiki>,<br />
note = <nowiki>{</nowiki>accepted<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/renault.22.stttPublications/renault.22.sttt2022-12-09T15:29:11Z<p>Bot: </p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2022-12-09<br />
| authors = Alexandre Kirszenberg, Antoine Martin, Hugo Moreau, Etienne Renault<br />
| title = Go2Pins: A framework for the LTL verification of Go programs (Extended Version)<br />
| journal = International Journal on Software Tools for Technology Transfer (STTT)<br />
| volume = 25<br />
| pages = 77 to 94<br />
| publisher = Springer<br />
| lrdeprojects = Spot<br />
| lrdenewsdate = 2022-12-09<br />
| abstract = We introduce Go2Pins, a tool that takes a program written in Go and links it with two model-checkers: LTSMin and Spot. Go2Pins is an effort to promote the integration of both formal verification and testing inside industrial-size projects. With this goal in mind, we introduce black-box transitions, an efficient and scalable technique for handling the Go runtime. This approach, inspired by hardware verification techniques, allows easy, automatic and efficient abstractions. Go2Pins also handles basic concurrent programs through the use of a dedicated scheduler. Moreover, in order to efficiently handle recursive programs, we introduce PSLRec, a formalism that augments PSL without changing the complexity of the underlying verification process.<br />
| lrdepaper = http://www.lrde.epita.fr/dload/papers/renault.22.sttt.pdf<br />
| note = First published online on 06 January 2023.<br />
| type = article<br />
| id = renault.22.sttt<br />
| identifier = doi:10.1007/s10009-022-00692-w<br />
| bibtex = <br />
@Article<nowiki>{</nowiki> renault.22.sttt,<br />
author = <nowiki>{</nowiki>Alexandre Kirszenberg and Antoine Martin and Hugo Moreau<br />
and Etienne Renault<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>Go2<nowiki>{</nowiki>P<nowiki>}</nowiki>ins: <nowiki>{</nowiki>A<nowiki>}</nowiki> framework for the <nowiki>{</nowiki>LTL<nowiki>}</nowiki> verification of<br />
<nowiki>{</nowiki>Go<nowiki>}</nowiki> programs (Extended Version)<nowiki>}</nowiki>,<br />
journal = <nowiki>{</nowiki>International Journal on Software Tools for Technology<br />
Transfer (STTT)<nowiki>}</nowiki>,<br />
year = 2023,<br />
volume = <nowiki>{</nowiki>25<nowiki>}</nowiki>,<br />
pages = <nowiki>{</nowiki>77--94<nowiki>}</nowiki>,<br />
month = feb,<br />
publisher = <nowiki>{</nowiki>Springer<nowiki>}</nowiki>,<br />
abstract = <nowiki>{</nowiki>We introduce Go2Pins, a tool that takes a program written<br />
in Go and links it with two model-checkers: LTSMin and<br />
Spot. Go2Pins is an effort to promote the integration of<br />
both formal verification and testing inside industrial-size<br />
projects. With this goal in mind, we introduce black-box<br />
transitions, an efficient and scalable technique for<br />
handling the Go runtime. This approach, inspired by<br />
hardware verification techniques, allows easy, automatic<br />
and efficient abstractions. Go2Pins also handles basic<br />
concurrent programs through the use of a dedicated<br />
scheduler. Moreover, in order to efficiently handle<br />
recursive programs, we introduce PSL<nowiki>{</nowiki>Rec<nowiki>}</nowiki>, a formalism that<br />
augments PSL without changing the complexity of the<br />
underlying verification process.<nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>10.1007/s10009-022-00692-w<nowiki>}</nowiki>,<br />
note = <nowiki>{</nowiki>First published online on 06 January 2023.<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/bouarour.22.ieeebigdataPublications/bouarour.22.ieeebigdata2022-12-09T13:53:39Z<p>Bot: </p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2022-12-12<br />
| authors = Nassim Bouarour, Idir Benouaret, Sihem Amer-Yahia<br />
| booktitle = 2022 IEEE International Conference on Big Data (Big Data)<br />
| title = Learning Diversity Attributes in Multi-Session Recommendations<br />
| address = Osaka, Japan<br />
| abstract = Diversity in recommendation has been studied extensively. It has been shown that maximizing diversity subject to constrained relevance yields high user engagement over time. Existing work largely relies on setting some attributes that are used to craft an item similarity function and diversify results. In this paper, we examine the question of learning diversity attributes. That is particularly important when users receive recommendations over multiple sessions. We devise two main approaches to look for the best diversity attribute in each session: the first is a generalization of traditional diversity algorithms and the second is based on reinforcement learning. We implement both approaches and run extensive experiments on a semi-synthetic dataset. Our results demonstrate that learning diversity attributes yields a higher overall diversity than traditional diversity algorithms. We also find that training policies using reinforcement learning is more efficient in terms of response time, in particular for high dimensional data.<br />
| pages = 1 to 10<br />
| publisher = IEEE<br />
| lrdekeywords = IA<br />
| lrdenewsdate = 2022-12-12<br />
| note = accepted<br />
| type = inproceedings<br />
| id = bouarour.22.ieeebigdata<br />
| identifier = doi:10.1109/BigDataXXXX<br />
| bibtex = <br />
@InProceedings<nowiki>{</nowiki> bouarour.22.ieeebigdata,<br />
author = <nowiki>{</nowiki>Bouarour, Nassim and Benouaret, Idir and Amer-Yahia,<br />
Sihem<nowiki>}</nowiki>,<br />
booktitle = <nowiki>{</nowiki>2022 IEEE International Conference on Big Data (Big<br />
Data)<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>Learning Diversity Attributes in Multi-Session<br />
Recommendations<nowiki>}</nowiki>,<br />
year = <nowiki>{</nowiki>2022<nowiki>}</nowiki>,<br />
address = <nowiki>{</nowiki>Osaka, Japan<nowiki>}</nowiki>,<br />
month = dec,<br />
abstract = <nowiki>{</nowiki>Diversity in recommendation has been studied extensively.<br />
It has been shown that maximizing diversity subject to<br />
constrained relevance yields high user engagement over<br />
time. Existing work largely relies on setting some<br />
attributes that are used to craft an item similarity<br />
function and diversify results. In this paper, we examine<br />
the question of learning diversity attributes. That is<br />
particularly important when users receive recommendations<br />
over multiple sessions. We devise two main approaches to<br />
look for the best diversity attribute in each session: the<br />
first is a generalization of traditional diversity<br />
algorithms and the second is based on reinforcement<br />
learning. We implement both approaches and run extensive<br />
experiments on a semi-synthetic dataset. Our results<br />
demonstrate that learning diversity attributes yields a<br />
higher overall diversity than traditional diversity<br />
algorithms. We also find that training policies using<br />
reinforcement learning is more efficient in terms of<br />
response time, in particular for high dimensional data.<nowiki>}</nowiki>,<br />
pages = <nowiki>{</nowiki>1--10<nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>10.1109/BigDataXXXX<nowiki>}</nowiki>,<br />
publisher = <nowiki>{</nowiki>IEEE<nowiki>}</nowiki>,<br />
note = <nowiki>{</nowiki>accepted<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/kamal.22.xkddPublications/kamal.22.xkdd2022-12-09T11:02:38Z<p>Bot: </p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2022-09-12<br />
| authors = Ataollah Kamal, Elouan Vincent, Marc Plantevit, Céline Robardet<br />
| booktitle = Workshop on eXplainable Knowledge Discovery in Data Mining. Machine Learning and Principles and Practice of Knowledge Discovery in Databases - International Workshops of ECML PKDD 2022, Grenoble, France, September 19-23, 2022, Proceedings, Part I<br />
| title = Improving the Quality of Rule-Based GNN Explanations<br />
| address = Grenoble, France<br />
| abstract = Recent works have proposed to explain GNNs using activation rules. Activation rules allow to capture specific configurations in the embedding space of a given layer that is discriminant for the GNN decision. These rules also catch hidden features of input graphs. This requires to associate these rules to representative graphs. In this paper, we propose on the one hand an analysis of heuristic-based algorithms to extract the activation rules, and on the other hand the use of transport-based optimal graph distances to associate each rule with the most specific graph that triggers them.<br />
| pages = 467 to 482<br />
| lrdekeywords = IA<br />
| lrdenewsdate = 2022-09-12<br />
| note = accepted<br />
| type = inproceedings<br />
| id = kamal.22.xkdd<br />
| identifier = doi:10.1007/978-3-031-23618-1_31<br />
| bibtex = <br />
@InProceedings<nowiki>{</nowiki> kamal.22.xkdd,<br />
author = <nowiki>{</nowiki>Ataollah Kamal and Elouan Vincent and Marc Plantevit and<br />
C\'<nowiki>{</nowiki>e<nowiki>}</nowiki>line Robardet<nowiki>}</nowiki>,<br />
booktitle = <nowiki>{</nowiki>Workshop on eXplainable Knowledge Discovery in Data<br />
Mining. Machine Learning and Principles and Practice of<br />
Knowledge Discovery in Databases - International Workshops<br />
of <nowiki>{</nowiki>ECML<nowiki>}</nowiki> <nowiki>{</nowiki>PKDD<nowiki>}</nowiki> 2022, Grenoble, France, September 19-23,<br />
2022, Proceedings, Part <nowiki>{</nowiki>I<nowiki>}</nowiki><nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>Improving the Quality of Rule-Based <nowiki>{</nowiki>GNN<nowiki>}</nowiki> Explanations<nowiki>}</nowiki>,<br />
year = <nowiki>{</nowiki>2022<nowiki>}</nowiki>,<br />
address = <nowiki>{</nowiki>Grenoble, France<nowiki>}</nowiki>,<br />
month = sep,<br />
abstract = <nowiki>{</nowiki>Recent works have proposed to explain GNNs using<br />
activation rules. Activation rules allow to capture<br />
specific configurations in the embedding space of a given<br />
layer that is discriminant for the GNN decision. These<br />
rules also catch hidden features of input graphs. This<br />
requires to associate these rules to representative graphs.<br />
In this paper, we propose on the one hand an analysis of<br />
heuristic-based algorithms to extract the activation rules,<br />
and on the other hand the use of transport-based optimal<br />
graph distances to associate each rule with the most<br />
specific graph that triggers them.<nowiki>}</nowiki>,<br />
pages = <nowiki>{</nowiki>467--482<nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>10.1007/978-3-031-23618-1\_31<nowiki>}</nowiki>,<br />
note = <nowiki>{</nowiki>accepted<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/iferroudjene.22.damiPublications/iferroudjene.22.dami2022-12-09T11:02:37Z<p>Bot: </p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2022-12-10<br />
| authors = Mouloud Iferroudjene, Corentin Lonjarret, Céline Robardet, Marc Plantevit, Martin Atzmueller<br />
| title = Methods for Explaining Top-N Recommendations Through Subgroup Discovery<br />
| journal = Data Mining and Knowledge Discovery<br />
| publisher = Springer<br />
| volume = 313<br />
| number = 118752<br />
| None = Recommender Systems, Explainable AI (XAI), Subgroup Discovery<br />
| abstract = Explainable Artificial Intelligence (XAI) has received a lot of attention over the past decade, with the proposal of many methods explaining black box classifiers such as neural networks. Despite the ubiquity of recommender systems in the digital world, only few researchers have attempted to explain their functioning, whereas one major obstacle to their use is the problem of societal acceptability and trustworthiness. Indeed, recommender systems direct user choices to a large extent and their impact is important as they give access to only a small part of the range of items (e.g., products and/or services), as the submerged part of the iceberg. Consequently, they limit access to other resources. The potentially negative effects of these systems have been pointed out as phenomena like echo chambers and winner-take-all effects, because the internal logic of these systems is to likely enclose the consumer in a deja vu loop. Therefore, it is crucial to provide explanations of such recommender systems and to identify the user data that led the respective system to make the individual recommendations. This then makes it possible to evaluate recommender systems not only regarding their effectiveness (i.e., their capability to recommend an item that was actually chosen by the user), but also with respect to the diversity, relevance and timeliness of the active data used for the recommendation. In this paper, we propose a deep analysis of two state-of-the-art models learnt on four datasets based on the identification of the items or the sequences of items actively used by the models. Our proposed methods are based on subgroup discovery with different pattern languages (i.e., itemsets and sequences). Specifically, we provide interpretable explanations of the recommendations of the Top-N items, which are useful to compare different models. Ultimately, these can then be used to present simple and understandable patterns to explain the reasons behind a generated recommendation to the user.<br />
| lrdekeywords = IA<br />
| lrdenewsdate = 2022-12-10<br />
| type = article<br />
| id = iferroudjene.22.dami<br />
| identifier = doi:10.1007/s10618-022-00897-2<br />
| bibtex = <br />
@Article<nowiki>{</nowiki> iferroudjene.22.dami,<br />
author = <nowiki>{</nowiki>Iferroudjene, Mouloud and Lonjarret, Corentin and<br />
Robardet, C<nowiki>{</nowiki>\'e<nowiki>}</nowiki>line and Plantevit, Marc and Atzmueller,<br />
Martin<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>Methods for Explaining Top-<nowiki>{</nowiki>N<nowiki>}</nowiki> Recommendations Through<br />
Subgroup Discovery<nowiki>}</nowiki>,<br />
journal = <nowiki>{</nowiki>Data Mining and Knowledge Discovery<nowiki>}</nowiki>,<br />
publisher = <nowiki>{</nowiki>Springer<nowiki>}</nowiki>,<br />
volume = <nowiki>{</nowiki>313<nowiki>}</nowiki>,<br />
number = <nowiki>{</nowiki>118752<nowiki>}</nowiki>,<br />
year = <nowiki>{</nowiki>2022<nowiki>}</nowiki>,<br />
month = nov,<br />
doi = <nowiki>{</nowiki>10.1007/s10618-022-00897-2<nowiki>}</nowiki>,<br />
keywords = <nowiki>{</nowiki>Recommender Systems, Explainable AI (XAI), Subgroup<br />
Discovery<nowiki>}</nowiki>,<br />
abstract = <nowiki>{</nowiki>Explainable Artificial Intelligence (XAI) has received a<br />
lot of attention over the past decade, with the proposal of<br />
many methods explaining black box classifiers such as<br />
neural networks. Despite the ubiquity of recommender<br />
systems in the digital world, only few researchers have<br />
attempted to explain their functioning, whereas one major<br />
obstacle to their use is the problem of societal<br />
acceptability and trustworthiness. Indeed, recommender<br />
systems direct user choices to a large extent and their<br />
impact is important as they give access to only a small<br />
part of the range of items (e.g., products and/or<br />
services), as the submerged part of the iceberg.<br />
Consequently, they limit access to other resources. The<br />
potentially negative effects of these systems have been<br />
pointed out as phenomena like echo chambers and<br />
winner-take-all effects, because the internal logic of<br />
these systems is to likely enclose the consumer in a <nowiki>{</nowiki>\em<br />
deja vu<nowiki>}</nowiki> loop. Therefore, it is crucial to provide<br />
explanations of such recommender systems and to identify<br />
the user data that led the respective system to make the<br />
individual recommendations. This then makes it possible to<br />
evaluate recommender systems not only regarding their<br />
effectiveness (i.e., their capability to recommend an item<br />
that was actually chosen by the user), but also with<br />
respect to the diversity, relevance and timeliness of the<br />
active data used for the recommendation. In this paper, we<br />
propose a deep analysis of two state-of-the-art models<br />
learnt on four datasets based on the identification of the<br />
items or the sequences of items actively used by the<br />
models. Our proposed methods are based on subgroup<br />
discovery with different pattern languages (i.e., itemsets<br />
and sequences). Specifically, we provide interpretable<br />
explanations of the recommendations of the Top-N items,<br />
which are useful to compare different models. Ultimately,<br />
these can then be used to present simple and understandable<br />
patterns to explain the reasons behind a generated<br />
recommendation to the user.<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/diop.22.ieeebigdataPublications/diop.22.ieeebigdata2022-12-09T11:02:13Z<p>Bot: </p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2022-12-12<br />
| authors = Lamine Diop, Cheikh Talibouya Diop, Arnaud Giacometti, Dominique Li, Arnaud Soulet<br />
| booktitle = 2022 IEEE International Conference on Big Data (Big Data)<br />
| title = Trie-based Output Itemset Sampling<br />
| address = Osaka, Japan<br />
| abstract = Pattern sampling algorithms produce interesting patterns with a probability proportional to a given utility measure. Utility changes need quick re-preprocessing when sampling patterns from large databases. In this context, existing sampling techniques require storing all data in memory, which is costly. To tackle these issues, this work enriches D. Knuth's trie structure, avoiding 1) the need to access the database to sample since patterns are drawn directly from the enriched trie and 2) the necessity to reprocess the whole dataset when the utility changes. We define the trie of occurrences that our first algorithm TPSpace (Trie-based Pattern Space) uses to materialize all of the database patterns. Factorizing transaction prefixes compresses the transactional database. TPSampling (Trie-based Pattern Sampling), our second algorithm, draws patterns from a trie of occurrences under a length-based utility measure. Experiments show that TPSampling produces thousands of patterns in seconds.<br />
| pages = 1 to 10<br />
| publisher = IEEE<br />
| lrdekeywords = IA<br />
| lrdenewsdate = 2022-12-12<br />
| note = accepted<br />
| type = inproceedings<br />
| id = diop.22.ieeebigdata<br />
| identifier = doi:10.1109/BigDataXXXX<br />
| bibtex = <br />
@InProceedings<nowiki>{</nowiki> diop.22.ieeebigdata,<br />
author = <nowiki>{</nowiki>Diop, Lamine and Diop, Cheikh Talibouya and Giacometti,<br />
Arnaud and Li, Dominique and Soulet, Arnaud<nowiki>}</nowiki>,<br />
booktitle = <nowiki>{</nowiki>2022 IEEE International Conference on Big Data (Big<br />
Data)<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>Trie-based Output Itemset Sampling<nowiki>}</nowiki>,<br />
year = <nowiki>{</nowiki>2022<nowiki>}</nowiki>,<br />
address = <nowiki>{</nowiki>Osaka, Japan<nowiki>}</nowiki>,<br />
month = dec,<br />
abstract = <nowiki>{</nowiki>Pattern sampling algorithms produce interesting patterns<br />
with a probability proportional to a given utility measure.<br />
Utility changes need quick re-preprocessing when sampling<br />
patterns from large databases. In this context, existing<br />
sampling techniques require storing all data in memory,<br />
which is costly. To tackle these issues, this work enriches<br />
D. Knuth's trie structure, avoiding 1) the need to access<br />
the database to sample since patterns are drawn directly<br />
from the enriched trie and 2) the necessity to reprocess<br />
the whole dataset when the utility changes. We define the<br />
trie of occurrences that our first algorithm TPSpace<br />
(Trie-based Pattern Space) uses to materialize all of the<br />
database patterns. Factorizing transaction prefixes<br />
compresses the transactional database. TPSampling<br />
(Trie-based Pattern Sampling), our second algorithm, draws<br />
patterns from a trie of occurrences under a length-based<br />
utility measure. Experiments show that TPSampling produces<br />
thousands of patterns in seconds.<nowiki>}</nowiki>,<br />
pages = <nowiki>{</nowiki>1--10<nowiki>}</nowiki>,<br />
publisher = <nowiki>{</nowiki>IEEE<nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>10.1109/BigDataXXXX<nowiki>}</nowiki>,<br />
note = <nowiki>{</nowiki>accepted<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/vallade.22.settaPublications/vallade.22.setta2022-12-08T20:59:18Z<p>Bot: </p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2022-12-08<br />
| authors = V Vallade, S Nejati, J Sopena, V Ganesh, S Baarir<br />
| title = Diversifying a Parallel SAT Solver with Bayesian Moment Matching<br />
| booktitle = Symposium on Dependable Software Engineering Theories, Tools and Applications<br />
| address = Beijing, China<br />
| lrdenewsdate = 2022-12-08<br />
| lrdeprojects = AA<br />
| abstract = In this paper, we present a Bayesian Moment Matching (BMM) in-processing technique for Conflict-Driven Clause-Learning (CDCL) SAT solvers. BMM is a probabilistic algorithm which takes as input a Boolean formula in conjunctive normal form and a prior on a possible satisfying assignment, and outputs a posterior for a new assignment most likely to maximize the number of satisfied clauses. We invoke this BMM method, as an in-processing technique, with the goal of updating the polarity and branching activity scores. The key insight underpinning our method is that Bayesian reasoning is a powerful way to guide the CDCL search procedure away from fruitless parts of the search space of a satisfiable Boolean formula, and towards those regions that are likely to contain satisfying assignments.<br />
| type = inproceedings<br />
| id = vallade.22.setta<br />
| identifier = doi:10.1007/978-3-031-21213-0_14<br />
| bibtex = <br />
@InProceedings<nowiki>{</nowiki> vallade.22.setta,<br />
author = <nowiki>{</nowiki>V. Vallade and S. Nejati and J. Sopena and V. Ganesh and<br />
S. Baarir<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>Diversifying a Parallel <nowiki>{</nowiki>SAT<nowiki>}</nowiki> Solver with Bayesian Moment<br />
Matching<nowiki>}</nowiki>,<br />
booktitle = <nowiki>{</nowiki>Symposium on Dependable Software Engineering Theories,<br />
Tools and Applications<nowiki>}</nowiki>,<br />
year = 2022,<br />
address = <nowiki>{</nowiki>Beijing, China<nowiki>}</nowiki>,<br />
month = oct,<br />
abstract = <nowiki>{</nowiki>In this paper, we present a Bayesian Moment Matching (BMM)<br />
in-processing technique for Conflict-Driven Clause-Learning<br />
(CDCL) SAT solvers. BMM is a probabilistic algorithm which<br />
takes as input a Boolean formula in conjunctive normal form<br />
and a prior on a possible satisfying assignment, and<br />
outputs a posterior for a new assignment most likely to<br />
maximize the number of satisfied clauses. We invoke this<br />
BMM method, as an in-processing technique, with the goal of<br />
updating the polarity and branching activity scores. The<br />
key insight underpinning our method is that Bayesian<br />
reasoning is a powerful way to guide the CDCL search<br />
procedure away from fruitless parts of the search space of<br />
a satisfiable Boolean formula, and towards those regions<br />
that are likely to contain satisfying assignments. <nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>10.1007/978-3-031-21213-0_14<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/or.22.transparencePublications/or.22.transparence2022-12-08T13:45:34Z<p>Bot: </p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2022-05-01<br />
| authors = Olivier Ricou<br />
| title = Données, Transparence et Démocratie<br />
| publisher = AFNIL<br />
| None = http://opendata.ricou.eu.org/<br />
| abstract = Bienvenue dans l'âge des données. Nos actions sur Internet sont enregistrées au profit d'entreprises qui valorisent ces données et offrent des services en échange. Pouvons-nous faire de même ? Pouvons-nous utiliser les données de l'état pour améliorer notre démocratie ? Depuis 2016, les données publiques doivent être ouvertes à tous. Les citoyens peuvent les analyser pour mesurer l'efficacité de l'action publique ou pour leur compte personnel. Les data journalistes les utilisent pour nous éclairer, les chercheurs pour comprendre. Ainsi la transparence permet de lutter contre la corruption et les intox, tout comme elle est source de progrès. Ce changement de paradigme, l'accès aux données de l'état, est surtout une opportunité pour participer. Noter un dysfonctionnement permet de suggérer une amélioration, un manque peut être une opportunité économique, même un jeu de données incomplet est une occasion pour tisser des liens entre l'administration, les associations et les citoyens. À travers cet essai, l'auteur nous propose un voyage optimiste dans le monde des données. Chemin faisant, les liens entre ces données et la transparence ouvrent la voie vers une démocratie plus ouverte, plus interactive et donc plus juste.<br />
| nodoi = <br />
| type = book<br />
| id = or.22.transparence<br />
| identifier = isbn:978-2958187309<br />
| bibtex = <br />
@Book<nowiki>{</nowiki> or.22.transparence,<br />
author = <nowiki>{</nowiki>Olivier Ricou<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>Donn\'ees, Transparence et D\'emocratie<nowiki>}</nowiki>,<br />
publisher = <nowiki>{</nowiki>AFNIL<nowiki>}</nowiki>,<br />
month = may,<br />
year = 2022,<br />
isbn = <nowiki>{</nowiki>978-2958187309<nowiki>}</nowiki>,<br />
url = <nowiki>{</nowiki>http://opendata.ricou.eu.org/<nowiki>}</nowiki>,<br />
abstract = <nowiki>{</nowiki>Bienvenue dans l'\^age des donn\'ees. Nos actions sur<br />
Internet sont enregistr\'ees au profit d'entreprises qui<br />
valorisent ces donn\'ees et offrent des services en<br />
\'echange. Pouvons-nous faire de m\^eme ? Pouvons-nous<br />
utiliser les donn\'ees de l'\'etat pour am\'eliorer notre<br />
d\'emocratie ?<br />
<br />
Depuis 2016, les donn\'ees publiques doivent \^etre<br />
ouvertes <nowiki>{</nowiki>\`a<nowiki>}</nowiki> tous. Les citoyens peuvent les analyser pour<br />
mesurer l'efficacit\'e de l'action publique ou pour leur<br />
compte personnel. Les data journalistes les utilisent pour<br />
nous \'eclairer, les chercheurs pour comprendre. Ainsi la<br />
transparence permet de lutter contre la corruption et les<br />
intox, tout comme elle est source de progr\`es.<br />
<br />
Ce changement de paradigme, l'acc\`es aux donn\'ees de<br />
l'\'etat, est surtout une opportunit\'e pour participer.<br />
Noter un dysfonctionnement permet de sugg\'erer une<br />
am\'elioration, un manque peut \^etre une opportunit\'e<br />
\'economique, m\^eme un jeu de donn\'ees incomplet est une<br />
occasion pour tisser des liens entre l'administration, les<br />
associations et les citoyens.<br />
<br />
\`A travers cet essai, l'auteur nous propose un voyage<br />
optimiste dans le monde des donn\'ees. Chemin faisant, les<br />
liens entre ces donn\'ees et la transparence ouvrent la<br />
voie vers une d\'emocratie plus ouverte, plus interactive et donc plus juste.<nowiki>}</nowiki>,<br />
nodoi = <nowiki>{</nowiki><nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/fahrenberg.22.litesPublications/fahrenberg.22.lites2022-12-08T13:45:00Z<p>Bot: Created page with "{{Publication | published = true | date = 2022-12-08 | title = Higher-Dimensional Timed and Hybrid Automata | volume = 8 | None = https://ojs.dagstuhl.de/index.php/lites/artic..."</p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2022-12-08<br />
| title = Higher-Dimensional Timed and Hybrid Automata<br />
| volume = 8<br />
| None = https://ojs.dagstuhl.de/index.php/lites/article/view/lites-v008-i002-a003<br />
| abstract = We introduce a new formalism of higher-dimensional timed automata, based on Pratt and van Glabbeek's higher-dimensional automata and Alur and Dill's timed automata. We prove that their reachability is PSPACE-complete and can be decided using zone-based algorithms. We also extend the setting to higher-dimensional hybrid automata. The interest of our formalism is in modeling systems which exhibit both real-time behavior and concurrency. Other existing formalisms for real-time modeling identify concurrency and interleaving, which, as we shall argue, is problematic.<br />
| number = 2<br />
| journal = Leibniz Transactions on Embedded Systems<br />
| authors = Uli Fahrenberg<br />
| lrdenewsdate = 2022-12-08<br />
| lrdeprojects = AA<br />
| pages = 03:1 to 03:16<br />
| type = article<br />
| id = fahrenberg.22.lites<br />
| identifier = doi:10.4230/LITES.8.2.3<br />
| bibtex = <br />
@Article<nowiki>{</nowiki> fahrenberg.22.lites,<br />
title = <nowiki>{</nowiki>Higher-Dimensional Timed and Hybrid Automata<nowiki>}</nowiki>,<br />
volume = 8,<br />
url = <nowiki>{</nowiki>https://ojs.dagstuhl.de/index.php/lites/article/view/lites-v008-i002-a003<nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>10.4230/LITES.8.2.3<nowiki>}</nowiki>,<br />
abstract = <nowiki>{</nowiki>We introduce a new formalism of higher-dimensional timed<br />
automata, based on Pratt and van Glabbeek's<br />
higher-dimensional automata and Alur and Dill's timed<br />
automata. We prove that their reachability is<br />
PSPACE-complete and can be decided using zone-based<br />
algorithms. We also extend the setting to<br />
higher-dimensional hybrid automata. The interest of our<br />
formalism is in modeling systems which exhibit both<br />
real-time behavior and concurrency. Other existing<br />
formalisms for real-time modeling identify concurrency and<br />
interleaving, which, as we shall argue, is problematic.<nowiki>}</nowiki>,<br />
number = 2,<br />
journal = <nowiki>{</nowiki>Leibniz Transactions on Embedded Systems<nowiki>}</nowiki>,<br />
author = <nowiki>{</nowiki>Fahrenberg, Uli<nowiki>}</nowiki>,<br />
year = 2022,<br />
month = dec,<br />
pages = <nowiki>{</nowiki>03:1-03:16<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/abate.22.litesPublications/abate.22.lites2022-12-08T13:44:00Z<p>Bot: </p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2022-12-08<br />
| title = Introduction to the Special Issue on Distributed Hybrid Systems<br />
| volume = 8<br />
| abstract = This special issue contains seven papers within the broad subject of Distributed Hybrid Systems, that is, systems combining hybrid discrete-continuous state spaces with elements of concurrency and logical or spatial distribution. It follows up on several workshops on the same theme which were held between 2017 and 2019 and organized by the editors of this volume. The first of these workshops was held in Aalborg, Denmark, in August 2017 and associated with the MFCS conference. It featured invited talks by Alessandro Abate, Martin Fränzle, Kim G. Larsen, Martin Raussen, and Rafael Wisniewski. The second workshop was held in Palaiseau, France, in July 2018, with invited talks by Luc Jaulin, Thao Dang, Lisbeth Fajstrup, Emmanuel Ledinot, and André Platzer. The third workshop was held in Amsterdam, The Netherlands, in August 2019, associated with the CONCUR conference. It featured a special theme on distributed robotics and had invited talks by Majid Zamani, Hervé de Forges, and Xavier Urbain. The vision and purpose of the DHS workshops was to connect researchers working in real-time systems, hybrid systems, control theory, formal verification, distributed computing, and concurrency theory, in order to advance the subject of distributed hybrid systems. Such systems are abundant and often safety-critical, but ensuring their correct functioning can in general be challenging. The investigation of their dynamics by analysis tools from the aforementioned domains remains fragmentary, providing the rationale behind the workshops: it was conceived that convergence and interaction of theories, methods, and tools from these different areas was needed in order to advance the subject.<br />
| number = 2<br />
| journal = Leibniz Transactions on Embedded Systems<br />
| authors = Alessandro Abate, Uli Fahrenberg, Martin Fränzle<br />
| lrdenewsdate = 2022-12-08<br />
| lrdeprojects = AA<br />
| pages = 00:1 to 00:3<br />
| type = article<br />
| id = abate.22.lites<br />
| identifier = doi:10.4230/LITES.8.2.0<br />
| bibtex = <br />
@Article<nowiki>{</nowiki> abate.22.lites,<br />
title = <nowiki>{</nowiki>Introduction to the Special Issue on Distributed Hybrid<br />
Systems<nowiki>}</nowiki>,<br />
volume = 8,<br />
doi = <nowiki>{</nowiki>10.4230/LITES.8.2.0<nowiki>}</nowiki>,<br />
abstract = <nowiki>{</nowiki>This special issue contains seven papers within the broad<br />
subject of Distributed Hybrid Systems, that is, systems<br />
combining hybrid discrete-continuous state spaces with<br />
elements of concurrency and logical or spatial<br />
distribution. It follows up on several workshops on the<br />
same theme which were held between 2017 and 2019 and<br />
organized by the editors of this volume. The first of these<br />
workshops was held in Aalborg, Denmark, in August 2017 and<br />
associated with the MFCS conference. It featured invited<br />
talks by Alessandro Abate, Martin Fr<nowiki>{</nowiki>\"a<nowiki>}</nowiki>nzle, Kim G.<br />
Larsen, Martin Raussen, and Rafael Wisniewski. The second<br />
workshop was held in Palaiseau, France, in July 2018, with<br />
invited talks by Luc Jaulin, Thao Dang, Lisbeth Fajstrup,<br />
Emmanuel Ledinot, and Andr<nowiki>{</nowiki>\'e<nowiki>}</nowiki> Platzer. The third workshop<br />
was held in Amsterdam, The Netherlands, in August 2019,<br />
associated with the CONCUR conference. It featured a<br />
special theme on distributed robotics and had invited talks<br />
by Majid Zamani, Herv<nowiki>{</nowiki>\'e<nowiki>}</nowiki> de Forges, and Xavier Urbain.<br />
The vision and purpose of the DHS workshops was to connect<br />
researchers working in real-time systems, hybrid systems,<br />
control theory, formal verification, distributed computing,<br />
and concurrency theory, in order to advance the subject of<br />
distributed hybrid systems. Such systems are abundant and<br />
often safety-critical, but ensuring their correct<br />
functioning can in general be challenging. The<br />
investigation of their dynamics by analysis tools from the<br />
aforementioned domains remains fragmentary, providing the<br />
rationale behind the workshops: it was conceived that<br />
convergence and interaction of theories, methods, and tools<br />
from these different areas was needed in order to advance<br />
the subject.<nowiki>}</nowiki>,<br />
number = 2,<br />
journal = <nowiki>{</nowiki>Leibniz Transactions on Embedded Systems<nowiki>}</nowiki>,<br />
author = <nowiki>{</nowiki>Abate, Alessandro and Fahrenberg, Uli and Fr<nowiki>{</nowiki>\"a<nowiki>}</nowiki>nzle,<br />
Martin<nowiki>}</nowiki>,<br />
year = 2022,<br />
month = dec,<br />
pages = <nowiki>{</nowiki>00:1-00:3<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/Publications/mehta.22.melbaPublications/mehta.22.melba2022-12-08T10:51:26Z<p>Bot: </p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2022-01-09<br />
| authors = Raghav Mehta, Angelos Filos, Ujjwal Baid, Chiharu Sako, Richard McKinley, Michael Rebsamen, Katrin Dätwyler, Raphael Meier, Piotr Radojewski, Gowtham Krishnan Murugesan, Sahil Nalawade, Chandan Ganesh, Ben Wagner, Fang F. Yu, Baowei Fei, Ananth J Madhuranthakam, Joseph A Maldjian, Laura Daza, Catalina Gómez, Pablo Arbeláez, Chengliang Dai, Shuo Wang, Hadrien Reynaud, Yuanhan Mo, Elsa Angelini, Yike Guo, Wenjia Bai, Subhashis Banerjee, Linmin Pei, Murat AK, Sarahi Rosas-González, Ilyess Zemmoura, Clovis Tauber, Minh Hoang Vu, Tufve Nyholm, Tommy Löfstedt, Laura Mora Ballestar, Veronica Vilaplana, Hugh McHugh, Gonzalo Maso Talou, Alan Wang, Jay Patel, Ken Chang, Katharina Hoebel, Mishka Gidwani, Nishanth Arun, Sharut Gupta, Mehak Aggarwal, Praveer Singh, Elizabeth R Gerstner, Jayashree Kalpathy-Cramer, Nicolas Boutry, Alexis Huard, Lasitha Vidyaratne, Md Monibor Rahman, Khan M Iftekharuddin, Joseph Chazalon, Elodie Puybareau, Guillaume Tochon, Jun Ma, Mariano Cabezas, Xavier Llado, Arnau Oliver, Liliana Valencia, Sergi Valverde, Mehdi Amian, Mohammadreza Soltaninejad, Andriy Myronenko, Ali Hatamizadeh, Xue Feng, Quan Dou, Nicholas Tustison, Craig Meyer, Nisarg A Shah, Sanjay Talbar, Marc-André Weber, Abhishek Mahajan, Andras Jakab, Roland Wiest, Hassan M Fathallah-Shaykh, Arash Nazeri, Mikhail Milchenko, Daniel Marcus, Aikaterini Kotrotsou, Rivka Colen, John Freymann, Justin Kirby, Christos Davatzikos, Bjoern Menze, Spyridon Bakas, Yarin Gal, Tal Arbel<br />
| title = QU-BraTS: MICCAI BraTS 2020 Challenge on Quantifying Uncertainty in Brain Tumor Segmentation — Analysis of Ranking Scores and Benchmarking Results<br />
| journal = Journal of Machine Learning for Biomedical Imaging (MELBA)<br />
| volume = 26<br />
| pages = 1 to 54<br />
| lrdeprojects = Olena<br />
| abstract = Deep learning (DL) models have provided state-of-the-art performance in various medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder translating DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way toward clinical translation. Several uncertainty estimation methods have recently been introduced for DL medical image segmentation tasks. Developing scores to evaluate and compare the performance of uncertainty measures will assist the end-user in making more informed decisions. In this study, we explore and evaluate a score developed during the BraTS 2019 and BraTS 2020 task on uncertainty quantification (QU-BraTS) and designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This score (1) rewards uncertainty estimates that produce high confidence in correct assertions and those that assign low confidence levels at incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by 14 independent participating teams of QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, highlighting the need for uncertainty quantification in medical image analyses. Finally, in favor of transparency and reproducibility, our evaluation code is made publicly available at https://github.com/RagMeh11/QU-BraTS.<br />
| lrdepaper = https://www.lrde.epita.fr/dload/papers/boutry.22.melba.pdf<br />
| lrdekeywords = Image<br />
| lrdenewsdate = 2022-01-09<br />
| nodoi = <br />
| type = article<br />
| id = mehta.22.melba<br />
| bibtex = <br />
@Article<nowiki>{</nowiki> mehta.22.melba,<br />
author = <nowiki>{</nowiki>Mehta, Raghav and Filos, Angelos and Baid, Ujjwal and<br />
Sako, Chiharu and McKinley, Richard and Rebsamen, Michael<br />
and D<nowiki>{</nowiki>\"<nowiki>{</nowiki>a<nowiki>}</nowiki><nowiki>}</nowiki>twyler, Katrin and Meier, Raphael and<br />
Radojewski, Piotr and Murugesan, Gowtham Krishnan and<br />
Nalawade, Sahil and Ganesh, Chandan and Wagner, Ben and Yu,<br />
Fang F. and Fei, Baowei and Madhuranthakam, Ananth J. and<br />
Maldjian, Joseph A. and Daza, Laura and G<nowiki>{</nowiki>\'<nowiki>{</nowiki>o<nowiki>}</nowiki><nowiki>}</nowiki>mez,<br />
Catalina and Arbel<nowiki>{</nowiki>\'<nowiki>{</nowiki>a<nowiki>}</nowiki><nowiki>}</nowiki>ez, Pablo and Dai, Chengliang and<br />
Wang, Shuo and Reynaud, Hadrien and Mo, Yuanhan and<br />
Angelini, Elsa and Guo, Yike and Bai, Wenjia and Banerjee,<br />
Subhashis and Pei, Linmin and AK, Murat and<br />
Rosas-Gonz<nowiki>{</nowiki>\'<nowiki>{</nowiki>a<nowiki>}</nowiki><nowiki>}</nowiki>lez, Sarahi and Zemmoura, Ilyess and<br />
Tauber, Clovis and Vu, Minh Hoang and Nyholm, Tufve and<br />
L<nowiki>{</nowiki>\"<nowiki>{</nowiki>o<nowiki>}</nowiki><nowiki>}</nowiki>fstedt, Tommy and Ballestar, Laura Mora and<br />
Vilaplana, Veronica and McHugh, Hugh and Talou, Gonzalo<br />
Maso and Wang, Alan and Patel, Jay and Chang, Ken and<br />
Hoebel, Katharina and Gidwani, Mishka and Arun, Nishanth<br />
and Gupta, Sharut and Aggarwal, Mehak and Singh, Praveer<br />
and Gerstner, Elizabeth R. and Kalpathy-Cramer, Jayashree<br />
and Boutry, Nicolas and Huard, Alexis and Vidyaratne,<br />
Lasitha and Rahman, Md Monibor and Iftekharuddin, Khan M.<br />
and Chazalon, Joseph and Puybareau, Elodie and Tochon,<br />
Guillaume and Ma, Jun and Cabezas, Mariano and Llado,<br />
Xavier and Oliver, Arnau and Valencia, Liliana and<br />
Valverde, Sergi and Amian, Mehdi and Soltaninejad,<br />
Mohammadreza and Myronenko, Andriy and Hatamizadeh, Ali and<br />
Feng, Xue and Dou, Quan and Tustison, Nicholas and Meyer,<br />
Craig and Shah, Nisarg A. and Talbar, Sanjay and Weber,<br />
Marc-Andr<nowiki>{</nowiki>\'<nowiki>{</nowiki>e<nowiki>}</nowiki><nowiki>}</nowiki> and Mahajan, Abhishek and Jakab, Andras<br />
and Wiest, Roland and Fathallah-Shaykh, Hassan M. and<br />
Nazeri, Arash and Milchenko, Mikhail and Marcus, Daniel and<br />
Kotrotsou, Aikaterini and Colen, Rivka and Freymann, John<br />
and Kirby, Justin and Davatzikos, Christos and Menze,<br />
Bjoern and Bakas, Spyridon and Gal, Yarin and Arbel, Tal<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki><nowiki>{</nowiki>QU-BraTS<nowiki>}</nowiki>: <nowiki>{</nowiki>MICCAI<nowiki>}</nowiki> <nowiki>{</nowiki>BraTS<nowiki>}</nowiki> 2020 Challenge on Quantifying<br />
Uncertainty in Brain Tumor Segmentation --- <nowiki>{</nowiki>A<nowiki>}</nowiki>nalysis of<br />
Ranking Scores and Benchmarking Results<nowiki>}</nowiki>,<br />
journal = <nowiki>{</nowiki>Journal of Machine Learning for Biomedical Imaging<br />
(MELBA)<nowiki>}</nowiki>,<br />
volume = <nowiki>{</nowiki>26<nowiki>}</nowiki>,<br />
pages = <nowiki>{</nowiki>1--54<nowiki>}</nowiki>,<br />
month = sep,<br />
year = <nowiki>{</nowiki>2022<nowiki>}</nowiki>,<br />
abstract = <nowiki>{</nowiki>Deep learning (DL) models have provided state-of-the-art<br />
performance in various medical imaging benchmarking<br />
challenges, including the Brain Tumor Segmentation (BraTS)<br />
challenges. However, the task of focal pathology<br />
multi-compartment segmentation (e.g., tumor and lesion<br />
sub-regions) is particularly challenging, and potential<br />
errors hinder translating DL models into clinical<br />
workflows. Quantifying the reliability of DL model<br />
predictions in the form of uncertainties could enable<br />
clinical review of the most uncertain regions, thereby<br />
building trust and paving the way toward clinical<br />
translation. Several uncertainty estimation methods have<br />
recently been introduced for DL medical image segmentation<br />
tasks. Developing scores to evaluate and compare the<br />
performance of uncertainty measures will assist the<br />
end-user in making more informed decisions. In this study,<br />
we explore and evaluate a score developed during the BraTS<br />
2019 and BraTS 2020 task on uncertainty quantification<br />
(QU-BraTS) and designed to assess and rank uncertainty<br />
estimates for brain tumor multi-compartment segmentation.<br />
This score (1) rewards uncertainty estimates that produce<br />
high confidence in correct assertions and those that assign<br />
low confidence levels at incorrect assertions, and (2)<br />
penalizes uncertainty measures that lead to a higher<br />
percentage of under-confident correct assertions. We<br />
further benchmark the segmentation uncertainties generated<br />
by 14 independent participating teams of QU-BraTS 2020, all<br />
of which also participated in the main BraTS segmentation<br />
task. Overall, our findings confirm the importance and<br />
complementary value that uncertainty estimates provide to<br />
segmentation algorithms, highlighting the need for<br />
uncertainty quantification in medical image analyses.<br />
Finally, in favor of transparency and reproducibility, our<br />
evaluation code is made publicly available at<br />
https://github.com/RagMeh11/QU-BraTS. <nowiki>}</nowiki>,<br />
nodoi = <nowiki>{</nowiki><nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/NewsEntry_(2022/11/30)NewsEntry (2022/11/30)2022-12-07T09:34:49Z<p>Daniela: Created page with "{{News |title=Thierry Géraud represents LRE at EDITE-Day |subtitle=The doctoral school of Paris EDITE has organized a conference day [https://www.edite-de-paris.fr/journee-de..."</p>
<hr />
<div>{{News<br />
|title=Thierry Géraud represents LRE at EDITE-Day<br />
|subtitle=The doctoral school of Paris EDITE has organized a conference day [https://www.edite-de-paris.fr/journee-de-ledite-2022/ Journée de l’EDITE] dedicated to PhD students, their supervisors, and the best PhD Thesis Award. Among the attending labs, Thierry presented the unified EPITA Research Laboratory (LRE) with its newly defined research groups and axes.<br />
|date=2022/11/30<br />
}}</div>Danielahttps://www.lrde.epita.fr/wiki/Publications/movn.22.prPublications/movn.22.pr2022-12-03T10:22:47Z<p>Bot: </p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2022-12-03<br />
| authors = Minh Ôn Vũ Ngoc, Edwin Carlinet, Jonathan Fabrizio, Thierry Géraud<br />
| title = The Dahu Graph-Cut for Interactive Segmentation on 2D/3D Images<br />
| journal = Pattern Recognition<br />
| volume = 136<br />
| number = 109207<br />
| abstract = Interactive image segmentation is an important application in computer vision for selecting objects of interest in images. Several interactive segmentation methods are based on distance transform algorithms. However, the most known distance transform, geodesic distance, is sensitive to noise in the image and to seed placement. Recently, the Dahu pseudo-distance, a continuous version of the minimum barrier distance (MBD), is proved to be more powerful than the geodesic distance in noisy and blurred images. This paper presents a method for combining the Dahu pseudo-distance with edge information in a graph-cut optimization framework and leveraging each's complementary strengths. Our method works efficiently on both 2D/3D images and videos. Results show that our method achieves better performance than other distance-based and graph-cut methods, thereby reducing the user's efforts.<br />
| lrdeprojects = Olena<br />
| lrdekeywords = Image<br />
| lrdepaper = https://www.lrde.epita.fr/dload/papers/movn.22.pr.pdf<br />
| lrdenewsdate = 2022-12-03<br />
| type = article<br />
| id = movn.22.pr<br />
| identifier = doi:10.1016/j.patcog.2022.109207<br />
| bibtex = <br />
@Article<nowiki>{</nowiki> movn.22.pr,<br />
author = <nowiki>{</nowiki>Minh \^On V\~<nowiki>{</nowiki>u<nowiki>}</nowiki> Ng\d<nowiki>{</nowiki>o<nowiki>}</nowiki>c and Edwin Carlinet and Jonathan<br />
Fabrizio and Thierry G\'eraud<nowiki>}</nowiki>,<br />
title = <nowiki>{</nowiki>The <nowiki>{</nowiki>D<nowiki>}</nowiki>ahu Graph-Cut for Interactive Segmentation on<br />
<nowiki>{</nowiki>2D/3D<nowiki>}</nowiki> Images<nowiki>}</nowiki>,<br />
journal = <nowiki>{</nowiki>Pattern Recognition<nowiki>}</nowiki>,<br />
year = <nowiki>{</nowiki>2023<nowiki>}</nowiki>,<br />
volume = <nowiki>{</nowiki>136<nowiki>}</nowiki>,<br />
number = <nowiki>{</nowiki>109207<nowiki>}</nowiki>,<br />
month = apr,<br />
abstract = <nowiki>{</nowiki>Interactive image segmentation is an important application<br />
in computer vision for selecting objects of interest in<br />
images. Several interactive segmentation methods are based<br />
on distance transform algorithms. However, the most known<br />
distance transform, geodesic distance, is sensitive to<br />
noise in the image and to seed placement. Recently, the<br />
Dahu pseudo-distance, a continuous version of the minimum<br />
barrier distance (MBD), is proved to be more powerful than<br />
the geodesic distance in noisy and blurred images. This<br />
paper presents a method for combining the Dahu<br />
pseudo-distance with edge information in a graph-cut<br />
optimization framework and leveraging each's complementary<br />
strengths. Our method works efficiently on both 2D/3D<br />
images and videos. Results show that our method achieves<br />
better performance than other distance-based and graph-cut<br />
methods, thereby reducing the user's efforts.<nowiki>}</nowiki>,<br />
doi = <nowiki>{</nowiki>10.1016/j.patcog.2022.109207<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bothttps://www.lrde.epita.fr/wiki/NewsEntry_()2NewsEntry ()22022-12-01T15:13:00Z<p>Cd: </p>
<hr />
<div>{{News<br />
|title=The LRE hosts a new PhD student, Théo Lepage, who joins the Artificial Intelligence group.<br />
|subtitle=After obtaining EPITA's engineering degree from the [https://www.epita.fr/diplome-ingenieur/cycle-ingenieur/les-majeures/ IMAGE and RDI double major], Théo joins the LRE to complete his PhD. He will focus on improving the robustness of representations used by speaker and language recognition systems while considering the growing risk of spoofing attacks.<br />
|date=2022/11/02<br />
}}</div>Danielahttps://www.lrde.epita.fr/wiki/NewsEntry_(2022/11/22)NewsEntry (2022/11/22)2022-12-01T14:26:26Z<p>Daniela: </p>
<hr />
<div>{{News<br />
|title=Guillaume Tochon from LRE invited speaker at GT-GDMM<br />
|subtitle=Guillaume Tochon gave a plenary talk at the annual seminar of the [https://gdmm2022.sciencesconf.org Discrete Geometry and Mathematical Morphology Research Group], on the topic of learning mathematical morphology operations with morphological neural networks. He presented the latest results obtained at LRE with RDI students (from [https://www.epita.fr/diplome-ingenieur/cycle-ingenieur/les-majeures/ EPITA’s research double major]) and in collaboration with the Center for Mathematical Morphology ([https://www.cmm.minesparis.psl.eu CMM, Mines ParisTech]).<br />
|date=2022/11/22<br />
}}</div>Danielahttps://www.lrde.epita.fr/wiki/NewsEntry_(2022/11/10)NewsEntry (2022/11/10)2022-12-01T14:17:52Z<p>Daniela: </p>
<hr />
<div>{{News<br />
|title=Researchers from LRE at Seminar at the French National Library (BNF) on data extraction from 19th century directories<br />
|subtitle=Edwin Carlinet and Joseph Chazalon from LRE presented at the [https://soduco.github.io/soduco_bnf_seminars BNF Seminar] the latest advances of the [https://anr.fr/Projet-ANR-18-CE38-0013 ANR SoDUCo project] regarding mass extraction of 10 M entries from historical directories. Marie Puren presented the results of the [https://github.com/mpuren/agoda AGODA project], funded by the BNF, showcasing structured transcriptions of French Parliament debates from the 19th century, extracted in a semi-automated fashion.<br />
|date=2022/11/10<br />
}}</div>Danielahttps://www.lrde.epita.fr/wiki/Publications/fahrenberg.22.scpPublications/fahrenberg.22.scp2022-11-01T10:19:16Z<p>Bot: </p>
<hr />
<div>{{Publication<br />
| published = true<br />
| date = 2022-11-01<br />
| title = Featured Games<br />
| journal = Science of Computer Programming<br />
| volume = 223<br />
| pages = 102874<br />
| lrdekeywords = Featured transition system, Two-player game, Family-based model checking<br />
| authors = Uli Fahrenberg, Axel Legay<br />
| lrdepaper = http://www.lrde.epita.fr/dload/papers/fahrenberg.22.scp.pdf<br />
| lrdenewsdate = 2022-11-01<br />
| lrdeprojects = AA<br />
| abstract = Feature-based analysis of software product lines and family-based model checking have seen rapid development. Many model checking problems can be reduced to two-player games on finite graphs. A prominent example is mu-calculus model checking, which is generally done by translating to parity games, but also many quantitative model-checking problems can be reduced to (quantitative) games. As part of a program to make game-based model checking available for software product lines, we introduce featured reachability games, featured minimum reachability games, featured discounted games, featured energy games, and featured parity games. We show that all these admit optimal featured strategies, which project to optimal strategies for any product, and how to compute winners and values of such games in a family-based manner.<br />
| type = article<br />
| id = fahrenberg.22.scp<br />
| identifier = doi:10.1016/j.scico.2022.102874<br />
| bibtex = <br />
@Article<nowiki>{</nowiki> fahrenberg.22.scp,<br />
title = <nowiki>{</nowiki>Featured Games<nowiki>}</nowiki>,<br />
journal = <nowiki>{</nowiki>Science of Computer Programming<nowiki>}</nowiki>,<br />
volume = 223,<br />
pages = 102874,<br />
year = 2022,<br />
issn = <nowiki>{</nowiki>0167-6423<nowiki>}</nowiki>,<br />
  doi = <nowiki>{</nowiki>10.1016/j.scico.2022.102874<nowiki>}</nowiki>,<br />
url = <nowiki>{</nowiki>https://www.sciencedirect.com/science/article/pii/S0167642322001071<nowiki>}</nowiki>,<br />
author = <nowiki>{</nowiki>Uli Fahrenberg and Axel Legay<nowiki>}</nowiki>,<br />
keywords = <nowiki>{</nowiki>Featured transition system, Two-player game, Family-based<br />
model checking<nowiki>}</nowiki>,<br />
month = nov,<br />
abstract = <nowiki>{</nowiki>Feature-based analysis of software product lines and<br />
family-based model checking have seen rapid development.<br />
Many model checking problems can be reduced to two-player<br />
games on finite graphs. A prominent example is mu-calculus<br />
model checking, which is generally done by translating to<br />
parity games, but also many quantitative model-checking<br />
problems can be reduced to (quantitative) games. As part of<br />
a program to make game-based model checking available for<br />
software product lines, we introduce featured reachability<br />
games, featured minimum reachability games, featured<br />
discounted games, featured energy games, and featured<br />
parity games. We show that all these admit optimal featured<br />
strategies, which project to optimal strategies for any<br />
product, and how to compute winners and values of such<br />
games in a family-based manner.<nowiki>}</nowiki><br />
<nowiki>}</nowiki><br />
<br />
}}</div>Bot