Séminaire Performance et Généricité


About the seminar

Object-oriented modeling makes it possible to classify scientific-computing problems and, through the factoring it enables, provides excellent support for federating development efforts. Unfortunately, performance often suffers. New languages and new programming techniques now reconcile performance and genericity, enabling a new generation of libraries (Boost, Olena, Vcsn, etc.).

The purpose of this seminar is to disseminate knowledge and skills in the design of generic, high-performance domain-specific libraries.

Keywords: scientific computing, distribution, software engineering, genericity, grid computing, languages, multi-core, programming paradigms, parallelism, reproducible research.


Upcoming sessions


Tuesday, December 17, 2019, 10–11 a.m., IP12A

Learning the relationship between neighboring pixels for some vision tasks

Yongchao Xu, Associate Professor at the School of Electronic Information and Communications, HUST, China

The relationship between neighboring pixels plays an important role in many vision applications. A typical example is the intensity order, which gives rise to morphological tree-based image representations (e.g., the Min/Max tree and the tree of shapes). These trees have proven useful for many applications, ranging from image filtering to object detection and segmentation. Yet, these intensity-order-based trees do not always perform well on complex natural images. The success of deep learning in many vision tasks motivates us to resort to convolutional neural networks (CNNs) to learn such a relationship instead of relying on the simple intensity order. As a starting point, we propose the flux or direction field representation, which encodes the relationship between neighboring pixels. We then leverage CNNs to learn this representation and develop customized post-processing steps for several vision tasks, such as symmetry detection, scene text detection, generic image segmentation, and crowd counting by localization. This talk is based on [1] and [2], as well as on extensions of those works that are currently under review.
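To give a concrete sense of the tree representations mentioned above: a max-tree can be built with a single union-find pass over the pixels sorted by decreasing intensity. The sketch below is illustrative only (it is not the speaker's implementation, and real libraries add a canonicalization pass so each flat zone has one representative):

```python
import numpy as np

def max_tree(img):
    """Build a max-tree parent array by processing pixels in
    decreasing intensity order with union-find (4-connectivity)."""
    h, w = img.shape
    flat = img.ravel()
    order = np.argsort(flat, kind="stable")[::-1]  # brightest first
    parent = np.full(h * w, -1)
    zpar = np.full(h * w, -1)                      # union-find parents

    def find(x):
        while zpar[x] != x:
            zpar[x] = zpar[zpar[x]]                # path halving
            x = zpar[x]
        return x

    for p in order:
        parent[p] = p
        zpar[p] = p
        y, x = divmod(p, w)
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                q = ny * w + nx
                if zpar[q] != -1:                  # neighbor already processed
                    r = find(q)
                    if r != p:
                        parent[r] = p              # attach its component under p
                        zpar[r] = p
    return parent

img = np.array([[1, 1, 1],
                [1, 3, 1],
                [1, 1, 1]])
parent = max_tree(img)
```

Here the bright center pixel becomes a subtree attached to the darker surrounding component, so filtering the tree (e.g., pruning small bright nodes) directly filters the image.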

[1] Xu, Y., Wang, Y., Zhou, W., Wang, Y., Yang, Z. and Bai, X., 2019. TextField: Learning a deep direction field for irregular scene text detection. IEEE Transactions on Image Processing.
[2] Wang, Y., Xu, Y., Tsogkas, S., Bai, X., Dickinson, S. and Siddiqi, K., 2019. DeepFlux for skeletons in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.

Yongchao Xu received both an engineering degree in electronics & embedded systems from Polytech Paris Sud and a master's degree in signal & image processing from Université Paris Sud in 2010, and a Ph.D. in image processing and mathematical morphology from Université Paris Est in 2013. After completing his Ph.D. at LRDE, EPITA, ESIEE Paris, and LIGM, he worked at LRDE as an assistant professor (Maître de Conférences). He is currently an Associate Professor at the School of Electronic Information and Communications, HUST. His research interests include mathematical morphology, image segmentation, medical image analysis, and deep learning.


Tuesday, October 1, 2019, 11 a.m.–12 p.m., Amphi 4

The Loci Auto-Parallelizing Framework: An Overview and Future Directions

Edward A. Luke, Professor, Department of Computer Science and Engineering, Mississippi State University

The Loci Auto-Parallelizing framework provides a Domain Specific Language (DSL) for creating high-performance numerical models. The framework uses a logic-relation model to describe irregular computations, guarantee internal logical consistency, and provide automatic parallel execution. It has been used to develop a number of advanced computational models used in production engineering processes. Loci-based tools currently form the backbone of the computational fluid dynamics tools used at NASA Marshall, and Loci-based codes account for more than 20% of the computational workload on NASA's Pleiades supercomputer. This talk will give an overview of the framework, discuss its general approach, and compare it to other programming models through a mini-app benchmark. In addition, future plans for developing efficient schedules for fine-grained parallel and memory-bandwidth-constrained computations will be discussed. Finally, some examples of the range of engineering simulations enabled by the technology will be introduced and briefly discussed.
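The logic-relation idea can be caricatured in a few lines: each rule declares the facts it needs and the facts it produces, and a scheduler deduces a valid execution order by forward chaining from the given inputs. The rule names and format below are invented for illustration and bear no relation to Loci's actual C++ API:

```python
# Hypothetical mini rule system sketching the logic-relation idea:
# rules declare needed/produced facts; the scheduler derives an order.
rules = [
    {"name": "gradient", "needs": {"pressure"},           "gives": {"grad_p"}},
    {"name": "flux",     "needs": {"grad_p", "velocity"}, "gives": {"flux"}},
    {"name": "residual", "needs": {"flux"},               "gives": {"residual"}},
]

def schedule(rules, given):
    """Forward-chain: repeatedly run every rule whose needs are satisfied."""
    known = set(given)
    order = []
    pending = list(rules)
    while pending:
        ready = [r for r in pending if r["needs"] <= known]
        if not ready:
            raise ValueError("stuck: missing facts for " +
                             ", ".join(r["name"] for r in pending))
        for r in ready:
            order.append(r["name"])
            known |= r["gives"]
            pending.remove(r)
    return order

order = schedule(rules, {"pressure", "velocity"})
```

Because the schedule is deduced rather than hand-written, the same rule base can be re-scheduled for different targets, which is the property Loci exploits for automatic parallelization.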

Dr. Ed Luke is a professor in the Department of Computer Science and Engineering at Mississippi State University. He received his Ph.D. in Computational Engineering in 1999 and conducts research at the intersection of applied mathematics and computer science. His research focuses on systems that automatically parallelize numerical algorithms, particularly those used to solve systems of partial differential equations. Dr. Luke is currently engaged in active collaborations with INRIA in Paris on solver parallelization and mesh generation.


Wednesday, April 10, 2019, 11 a.m.–12 p.m., Amphi 4

Deep Learning for Satellite Imagery: Semantic Segmentation, Non-Rigid Alignment, and Self-Denoising

Guillaume Charpiat (TAU team, INRIA Saclay / LRI, Université Paris-Sud)

Neural networks have produced impressive results in computer vision in recent years, in particular for image classification and segmentation. To be transferred to remote sensing, this tool needs to be adapted to the field's specifics: large images, many small objects per image, the need for high-resolution output, and unreliable (usually mis-registered) ground truth. We will review our group's work on semantic segmentation for remote sensing, explaining how our neural network architecture design evolved to face these challenges, and finally how we trained a network to register binary cadaster maps to RGB images, in a multi-scale approach, while detecting new buildings if any. We will show in particular that it is possible to train on noisy datasets and to make predictions with an accuracy much better than the variance of the original noise. To explain this phenomenon, we build theoretical tools that express input similarity from the neural network's point of view, and use them to quantify data redundancy and the associated expected denoising effects. If time permits, we will also present work on hurricane track forecasting from reanalysis data (2-3D coverage of the Earth's surface with temperature/pressure/etc. fields) using deep learning.
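The claim that predictions can be more accurate than the label noise may sound surprising, but it follows from data redundancy: many noisy observations jointly constrain few degrees of freedom. A toy numpy illustration (unrelated to the talk's actual networks and datasets), fitting a line to heavily corrupted labels:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 10_000, 1.0
x = rng.uniform(-1.0, 1.0, n)
y_true = 2.0 * x + 0.5                         # ground-truth signal
y_noisy = y_true + rng.normal(0.0, sigma, n)   # labels with heavy noise

# Fit a linear model using only the noisy labels.
A = np.stack([x, np.ones(n)], axis=1)
coef, *_ = np.linalg.lstsq(A, y_noisy, rcond=None)
pred = A @ coef

# Prediction error vs. the (unseen) ground truth: roughly
# sigma * sqrt(2 / n), i.e. far below the label-noise level sigma.
rmse = float(np.sqrt(np.mean((pred - y_true) ** 2)))
```

The model averages out the noise over the whole dataset, so its error scales with the number of parameters per sample rather than with the noise variance itself; the talk's theoretical tools quantify the analogous redundancy inside a trained CNN.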

After a PhD thesis at ENS on shape statistics for image segmentation, and a year in Bernhard Schölkopf's team at MPI Tübingen on kernel methods for medical imaging, Guillaume Charpiat joined INRIA Sophia-Antipolis to work on computer vision, and later INRIA Saclay to work on machine learning. Lately he has focused on deep learning, with remote sensing imagery as a particular application field.