Talk abstract
From LRDE
In this work, in collaboration with Axel Davy, Mauricio Delbracio,
and Thibaud Ehret, I will review the classes of algorithms whose goal
is to detect anomalies in digital images. These detectors address the
difficult problem of automatically finding exceptions in background
images, which can be as diverse as a piece of fabric or a mammogram.
Detection methods have been proposed by the thousands, because each
problem requires a different background model. By analyzing the
existing approaches, we will show that the problem can be reduced to
detecting anomalies in residual images (extracted from the target
image) in which noise and anomalies predominate. The general,
intractable problem of modeling an arbitrary background is thus
replaced by that of modeling noise, and a noise model permits the
computation of rigorous detection thresholds. The approach can
therefore be unsupervised and work on arbitrary images. We will
illustrate the use of the so-called a-contrario detection theory,
which avoids over-detection by setting detection thresholds that take
the multiplicity of tests into account. +
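To make the last point concrete, here is a minimal sketch (pure Python, with hypothetical function names) of an a-contrario threshold on a Gaussian residual: the threshold is chosen so that the expected number of false alarms (NFA) over all tests stays below a target epsilon.

```python
from statistics import NormalDist

def a_contrario_threshold(n_tests, epsilon=1.0, sigma=1.0):
    # Keep NFA(t) = n_tests * P(|N(0, sigma^2)| >= t) below epsilon
    p_tail = epsilon / (2 * n_tests)          # two-sided tail per test
    return sigma * NormalDist().inv_cdf(1 - p_tail)

def detect(residual, sigma, epsilon=1.0):
    # flag residual samples whose amplitude is too large to be noise
    t = a_contrario_threshold(len(residual), epsilon, sigma)
    return [i for i, r in enumerate(residual) if abs(r) >= t]
```

Note how the threshold grows only slowly with the number of tests (about 4.9 sigma for a million tests), which is what makes the control of over-detection practical on full images.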
Recent advances in medical image computing have resulted in automated systems that closely assist physicians in patient
therapy. Computational and personalized patient models benefit diagnosis, prognosis and treatment planning, with a
decreased risk for the patient, as well as potentially lower cost. HeartFlow Inc. is a successful example of a company
providing such a service in the cardiovascular context. Based on a patient-specific vascular model extracted from X-ray CT
images, they identify functionally significant disease in large coronary arteries. Their combined anatomical and
functional analysis is nonetheless limited by the image resolution. At the downstream scale, a functional exam called
Myocardial Perfusion Imaging (MPI) highlights myocardium regions with a blood flow deficit. However, MPI does not
functionally relate perfusion to the upstream coronary disease. The goal of our project is to build the functional
bridge between the coronary arteries and the myocardium. To this aim, we propose an anatomical and functional
extrapolation. We introduce an innovative vascular network generation method that extends the coronary model down to the
microvasculature. On the resulting vascular model, we run a functional analysis pipeline to simulate flow from the large
coronaries to the myocardium and to enable comparison with MPI ground-truth data. +
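As a very schematic illustration of the kind of 1D flow computation involved, the sketch below (pure Python; all parameter values and the tree geometry are our own assumptions, not HeartFlow's model) combines Poiseuille segment resistances in series and parallel over an idealized symmetric binary tree whose daughter radii follow Murray's law.

```python
import math

def poiseuille_resistance(length, radius, mu=3.5e-3):
    # Hagen-Poiseuille law: R = 8 * mu * L / (pi * r^4)
    # mu: blood viscosity in Pa.s (typical value, an assumption here)
    return 8.0 * mu * length / (math.pi * radius ** 4)

def tree_resistance(radius, length, depth):
    # idealized symmetric binary tree; daughter radii follow Murray's law
    # (r_parent^3 = 2 * r_daughter^3), lengths shrink by an assumed factor
    R = poiseuille_resistance(length, radius)
    if depth == 0:
        return R
    r_d = radius / 2 ** (1.0 / 3.0)
    R_child = tree_resistance(r_d, 0.8 * length, depth - 1)
    return R + R_child / 2.0          # two identical daughter subtrees in parallel

def flow(pressure_drop, resistance):
    # Q = dP / R, the hydraulic analogue of Ohm's law
    return pressure_drop / resistance
```

The r^4 dependence explains why extending the model down to the microvasculature changes the computed resistance so drastically.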
Restoring sight to those who have lost it was long considered a topic of science fiction. Over the last twenty years,
however, intensified efforts in the field of visual prostheses have led to significant advances, and several hundred
patients worldwide have received such devices. This seminar will briefly present the field of retinal prostheses, with
a particular focus on image processing aspects. We will cover the main approaches, their known limitations, and the
results obtained. +
Neural networks have been producing impressive results in computer vision in recent years, in image classification and
segmentation in particular. To be transferred to remote sensing, this tool needs to be adapted to the specifics of the
domain: large images, many small objects per image, the need for high-resolution output, and unreliable (usually
mis-registered) ground truth. We will review the work done in our group on remote sensing semantic segmentation, explaining the
evolution of our neural net architecture design to face these challenges, and finally training a network to register
binary cadaster maps to RGB images while detecting new buildings if any, in a multi-scale approach. We will show in
particular that it is possible to train on noisy datasets, and to make predictions at an accuracy much better than the
variance of the original noise. To explain this phenomenon, we build theoretical tools to express input similarity from
the neural network point of view, and use them to quantify data redundancy and associated expected denoising effects.
If time permits, we might also present work on hurricane track forecast from reanalysis data (2-3D coverage of the
Earth's surface with temperature/pressure/etc. fields) using deep learning. +
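The notion of input similarity "from the neural network point of view" can be sketched, under our own simplifying assumptions, as the cosine between the parameter gradients of the network output at two inputs: it measures how much a training step on one input would move the prediction at the other. Below is a toy one-hidden-layer network in pure Python; the definitions used in the actual work may differ in details.

```python
import math

def mlp_grads(x, W, v):
    # one-hidden-layer net f(x) = sum_j v[j] * tanh(W[j] . x);
    # return the gradient of f w.r.t. all parameters, flattened
    h = [math.tanh(sum(W[j][i] * x[i] for i in range(len(x))))
         for j in range(len(v))]
    grad_W = [v[j] * (1.0 - h[j] ** 2) * x[i]
              for j in range(len(v)) for i in range(len(x))]
    grad_v = h                                  # df/dv_j = h_j
    return grad_W + grad_v

def similarity(x1, x2, W, v):
    # cosine of parameter gradients: inputs the network "sees as similar"
    # have nearly parallel gradients
    g1, g2 = mlp_grads(x1, W, v), mlp_grads(x2, W, v)
    dot = sum(a * b for a, b in zip(g1, g2))
    norm = math.sqrt(sum(a * a for a in g1)) * math.sqrt(sum(b * b for b in g2))
    return dot / norm
```

High similarity across many redundant training samples is what averages out independent label noise and yields the denoising effect mentioned above.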
The Loci Auto-Parallelizing framework provides a Domain Specific
Language (DSL) for the creation of high performance numerical
models. The framework uses a logic-relation model to describe
irregular computations, guarantee internal logical consistency, and
provide automatic parallel execution. The framework has been used to
develop a number of advanced computational models used in production
engineering processes. Currently, Loci-based tools form the backbone
of the computational fluid dynamics tools used by NASA Marshall, and
Loci-based codes account for more than 20% of the
computational workload on NASA’s Pleiades supercomputer. This talk
will provide an overview of the framework, discuss its general
approach, and provide comparisons to other programming models through
a mini-app benchmark. In addition, future plans for developing
efficient schedules of fine-grained parallel and memory bandwidth
constrained computations will be discussed. Finally, some examples of
the range of engineering simulations enabled by the technology will be
introduced and briefly discussed. +
The relationship between neighboring pixels plays an
important role in many vision applications. A typical example of a
relationship between neighboring pixels is the intensity order, which
gives rise to some morphological tree-based image representations
(e.g., Min/Max tree and tree of shapes). These trees have been shown
useful for many applications, ranging from image filtering to object
detection and segmentation. Yet, these intensity order based trees do
not always perform well for analyzing complex natural images. The
success of deep learning in many vision tasks motivates us to resort
to convolutional neural networks (CNNs) for learning such a
relationship instead of relying on the simple intensity order. As a
starting point, we propose the flux or direction field representation
that encodes the relationship between neighboring pixels. We then
leverage CNNs to learn such a representation and develop some
customized post-processings for several vision tasks, such as symmetry
detection, scene text detection, generic image segmentation, and crowd
counting by localization. This talk is based on [1] and [2], as well
as on extensions of those works that are currently under review.
[1] Xu, Y., Wang, Y., Zhou, W., Wang, Y., Yang, Z. and Bai, X.,
2019. TextField: Learning a deep direction field for irregular scene
text detection. IEEE Transactions on Image Processing.
[2] Wang, Y., Xu, Y., Tsogkas, S., Bai, X., Dickinson, S. and Siddiqi,
K., 2019. DeepFlux for Skeletons in the Wild. In Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition. +
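The direction-field idea can be illustrated with a toy computation: for every pixel, a unit vector pointing toward the nearest foreground pixel. This brute-force sketch is only a caricature of the representation; the works above learn such fields with CNNs and use more refined definitions.

```python
import math

def direction_field(mask):
    # mask: 2D list of 0/1 values; for every pixel, return the unit vector
    # pointing toward the nearest foreground (1) pixel
    ones = [(i, j) for i, row in enumerate(mask)
            for j, v in enumerate(row) if v]
    field = []
    for i in range(len(mask)):
        row = []
        for j in range(len(mask[0])):
            ni, nj = min(ones, key=lambda p: (p[0] - i) ** 2 + (p[1] - j) ** 2)
            d = math.hypot(ni - i, nj - j)
            row.append((0.0, 0.0) if d == 0
                       else ((ni - i) / d, (nj - j) / d))
        field.append(row)
    return field
```

The field is zero exactly on the foreground, so its singularities encode object locations, which is what the customized post-processings exploit.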
In this seminar, we will discuss an emerging technology, quantum computing, which exploits the quantum phenomena of the infinitely small. We will see that, whereas in classical computing data are represented by bits, each of which is exclusively 0 or 1, quantum computing is disconcerting in that qubits (quantum bits) can hold 0 and 1 simultaneously. To grasp this technology, we will review wave/particle duality, the superposition of states, and quantum entanglement. We will also see how IBM built the first quantum processing unit (QPU) a few decades after the revolutionary idea of the father of quantum computing, Richard Feynman, and what technological challenges follow from it. We will see that quantum computing opens new perspectives in fields such as cryptography and artificial intelligence, to name only two. A study of the complexity of the various algorithms presented during the seminar will also be discussed. During this interactive plenary session, a demonstration will be given using the Qiskit development environment, with remote access to an IBM quantum machine. So please bring your laptop! +
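A minimal pure-Python sketch of the superposition mentioned above: applying a Hadamard gate to the state |0⟩ yields a state in which both measurement outcomes have probability 1/2. A real demonstration would of course use Qiskit; this toy version needs only the standard library.

```python
import math, random

# Hadamard gate as a 2x2 real matrix
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(gate, state):
    # matrix-vector product on the 2-dimensional amplitude vector
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

state = apply(H, [1.0, 0.0])       # H|0> = (|0> + |1>) / sqrt(2)
probs = [a * a for a in state]     # Born rule: each outcome with p = 0.5

counts = {0: 0, 1: 0}              # simulate 1000 measurements
for _ in range(1000):
    counts[0 if random.random() < probs[0] else 1] += 1
```

The amplitudes are both 1/sqrt(2), so the qubit is genuinely "0 and 1 at once" until a measurement collapses it to one of the two outcomes.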
In a partially observable system, diagnosis is the task of detecting certain events, for instance fault occurrences. In the presence of hostile observers, on the other hand, one is interested in rendering a system opaque, i.e. making it impossible to detect certain "secret" events. The talk will present some decidability and complexity results for these two problems
when the system is represented as a finite automaton or a Petri net. We then also consider the problem of active diagnosis, where the observer has some control over the system. In this context, we study problems such as the computational complexity of the synthesis problem, the memory required for the controller, and the delay between a fault occurrence and its detection by the diagnoser. The talk is based on joint work with B. Bérard, S. Haar, S. Haddad, T. Melliti, and S. Schmitz. +
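To fix ideas, here is a rough sketch (our own toy construction, not taken from the talk) of the classical diagnoser for a finite automaton: a subset construction over pairs (state, fault-seen), where an observer state mixing faulty and non-faulty pairs is one in which the fault status is still ambiguous. Full diagnosability checking additionally requires analyzing cycles of ambiguous states, which this sketch omits.

```python
def diagnoser(trans, init, observable, fault):
    # trans: dict state -> list of (event, next_state);
    # track pairs (state, fault_seen); unobservable events (including the
    # fault itself) are absorbed into the closure
    def closure(pairs):
        stack, seen = list(pairs), set(pairs)
        while stack:
            s, f = stack.pop()
            for e, t in trans.get(s, []):
                if e not in observable:
                    p = (t, f or e == fault)
                    if p not in seen:
                        seen.add(p)
                        stack.append(p)
        return frozenset(seen)

    start = closure({(init, False)})
    states, todo, ambiguous = {start}, [start], []
    while todo:
        cur = todo.pop()
        if {f for _, f in cur} == {True, False}:
            ambiguous.append(cur)      # fault status still uncertain here
        for e in observable:
            nxt = closure({(t, f) for s, f in cur
                           for e2, t in trans.get(s, []) if e2 == e})
            if nxt and nxt not in states:
                states.add(nxt)
                todo.append(nxt)
    return states, ambiguous
```

For Petri nets the same questions become much harder, which is where the decidability and complexity results of the talk come in.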
We introduce iposets - posets with interfaces - equipped with a novel gluing
composition along interfaces and the standard parallel composition. We study
their basic algebraic properties as well as the hierarchy of gluing-parallel
posets generated from singletons by finitary applications of the two
compositions. We show that not only series-parallel posets but also
interval orders, which seem more interesting for modeling concurrent
and distributed systems, can be generated, though not all posets can.
Generating posets is also important for constructing free algebras for
concurrent semirings and Kleene algebras, which allow compositional
reasoning about such systems. +
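The two basic compositions can be illustrated on plain posets, without interfaces (the gluing composition of the talk generalizes the serial case). In this sketch a poset is a pair (elements, strict order relation), and elements are tagged to keep the two operands disjoint.

```python
def parallel(p, q):
    # disjoint union: tag elements so the two posets stay distinct
    ep = {(0, x) for x in p[0]} | {(1, x) for x in q[0]}
    rp = ({((0, a), (0, b)) for a, b in p[1]} |
          {((1, a), (1, b)) for a, b in q[1]})
    return ep, rp

def series(p, q):
    # parallel composition, then every element of p below every element of q
    e, r = parallel(p, q)
    below = {(a, b) for a in e if a[0] == 0 for b in e if b[0] == 1}
    return e, r | below
```

Closing singletons under these two operations yields exactly the series-parallel posets; the interfaces and gluing of the talk are what allow interval orders to be reached as well.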
Despite its NP-completeness, propositional Boolean satisfiability (SAT) covers a broad spectrum of applications. Nowadays, it is an active research area finding applications in many contexts such as planning, cryptology, computational biology, and hardware and software analysis. Hence, the development of approaches able to handle increasingly challenging SAT problems has become a major focus: during the past eight years, SAT solving has been the main subject of my research work. This talk presents some of the main results we obtained in the field. +
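For a concrete taste of SAT solving, here is a minimal DPLL procedure (unit propagation plus branching) in Python; clauses use DIMACS-style signed integers. This is a didactic sketch, far removed from the conflict-driven clause-learning solvers used in practice.

```python
def dpll(clauses, assignment=None):
    # clauses: list of lists of DIMACS-style signed ints (-x means "not x")
    assignment = dict(assignment or {})
    changed = True
    while changed:                     # unit propagation to a fixed point
        changed = False
        for c in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in c):
                continue               # clause already satisfied
            unassigned = [l for l in c if abs(l) not in assignment]
            if not unassigned:
                return None            # conflict: clause falsified
            if len(unassigned) == 1:
                l = unassigned[0]
                assignment[abs(l)] = l > 0
                changed = True
    free = {abs(l) for c in clauses for l in c} - set(assignment)
    if not free:
        return assignment              # all variables assigned: model found
    v = min(free)
    for val in (True, False):          # branch on an unassigned variable
        model = dpll(clauses, {**assignment, v: val})
        if model is not None:
            return model
    return None
```

For example, `dpll([[1, 2], [-1], [-2, 3]])` is solved by unit propagation alone, while an unsatisfiable input such as `[[1], [-1]]` returns `None`.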
Topological Data Analysis (TDA) is a recent area of computer science that focuses on discovering intrinsic structures hidden in data. Based on solid mathematical tools such as Morse theory and Persistent Homology, TDA enables the robust extraction of the main features of a data set into stable, concise, and multi-scale descriptors that facilitate data analysis and visualization. In this talk, I will give an intuitive overview of the main tools used in TDA (persistence diagrams, Reeb graphs, Morse-Smale complexes, etc.) with applications to concrete use cases in computational fluid dynamics, medical imaging, quantum chemistry, and climate modeling. This talk will be illustrated with results produced with the "Topology ToolKit" (TTK), an open-source library (BSD license) that we develop with collaborators to showcase our research. Tutorials for reproducing these experiments are available on the TTK website. +
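As a small concrete example of persistent homology, here is a pure-Python computation of the 0-dimensional persistence pairs of a 1D signal under the sublevel-set filtration, using union-find and the elder rule. This is a didactic sketch, not TTK code.

```python
def persistence_0d(values):
    # 0-dimensional sublevel-set persistence of a 1D signal
    n = len(values)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    birth = {}                         # component root -> birth value
    pairs = []
    processed = set()
    for i in sorted(range(n), key=lambda j: values[j]):
        roots = {find(j) for j in (i - 1, i + 1) if j in processed}
        if not roots:
            birth[i] = values[i]       # local minimum: a component is born
        else:
            oldest = min(roots, key=lambda r: birth[r])
            for r in roots - {oldest}: # elder rule: younger components die
                pairs.append((birth[r], values[i]))
                parent[r] = oldest
            parent[i] = oldest
        processed.add(i)
    for r in {find(i) for i in range(n)}:  # survivors never die
        pairs.append((birth[r], float('inf')))
    return sorted(pairs)
```

Each pair (birth, death) is a point of the persistence diagram; short-lived pairs correspond to noise, long-lived ones to robust features.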
Optimal transport (OT) has recently gained a lot of interest in machine learning. It is a natural tool for comparing probability distributions in a geometrically faithful way. It finds applications in both supervised learning (using geometric loss functions) and unsupervised learning (to perform generative model fitting). OT is however plagued by the curse of dimensionality, since it might require a number of samples which grows exponentially with the dimension. In this talk, I will review entropic regularization methods which define geometric loss functions approximating OT with a better sample complexity. +
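Entropic regularization can be made concrete with Sinkhorn's algorithm: starting from the Gibbs kernel K = exp(-C/ε), one alternately rescales rows and columns until the transport plan matches the two marginals. A pure-Python sketch (small dense problems only; practical implementations work in log-domain for stability):

```python
import math

def sinkhorn(a, b, cost, eps=0.05, iters=500):
    # entropic OT between histograms a and b with cost matrix `cost`
    n, m = len(a), len(b)
    K = [[math.exp(-cost[i][j] / eps) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):             # alternate marginal-matching scalings
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]
```

As ε decreases, the plan concentrates on the unregularized optimal transport; larger ε trades accuracy for the improved sample complexity discussed in the talk.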
We present a framework for modelling and verifying epistemic properties over
parameterized multi-agent systems that communicate by truthful public
announcements. In this framework, the number of agents or the amount of certain
resources are parameterized (i.e. not known a priori), and the corresponding
verification problem asks whether a given epistemic property is true regardless
of the instantiation of the parameters. As in other regular model checking (RMC)
techniques, a finite-state automaton is used to specify a parameterized family
of systems.
Parameterized systems might also require an arbitrary number of announcements,
leading to the introduction of the so-called iterated public announcement.
Although model checking becomes undecidable because of this operator, we provide
a semi-decision procedure based on Angluin's L*-algorithm for learning finite
automata. Moreover, the procedure is guaranteed to terminate when some
regularity properties are met. We illustrate the approach on the Muddy Children
puzzle, and we further discuss dynamic protocol encodings through the Dining
Cryptographer example.
Initial publication at AAMAS21; joint work with Anthony Lin and Felix Thomas. +
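The Muddy Children puzzle itself (stripped of the parameterized/automata machinery) can be simulated directly with possible worlds and iterated public announcements. A pure-Python sketch, using our own toy encoding: a world is the set of muddy children, and the announcement "no one knows yet" removes every world in which some muddy child would already know.

```python
from itertools import combinations

def muddy_children(n, muddy):
    # possible worlds: which subset of the n children is muddy
    worlds = {frozenset(s) for k in range(n + 1)
              for s in combinations(range(n), k)}
    worlds.discard(frozenset())        # announcement: "at least one is muddy"
    actual = frozenset(muddy)

    def knows(i, world, ws):
        # child i sees everyone but itself; it knows its own status iff
        # all remaining worlds compatible with what it sees agree on i
        compatible = {w for w in ws if w - {i} == world - {i}}
        return len({i in w for w in compatible}) == 1

    rounds = 0
    while not any(knows(i, actual, worlds) for i in actual):
        # "no one steps forward" eliminates worlds with a knower
        worlds = {w for w in worlds if not any(knows(i, w, worlds)
                                               for i in w)}
        rounds += 1
    return rounds + 1   # round in which the muddy children step forward
```

With k muddy children the muddy ones step forward in round k, which is the behavior the RMC encoding must reproduce for every parameter value at once.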