CSI Seminar 2018-07-03


Laboratoire de Recherche et Développement de l’EPITA
Student Researchers Seminar
3 July 2018
10h00-15h30, Amphi 4
http://www.lrde.epita.fr
EPITA / LRDE
14-16 rue Voltaire
94276 Le Kremlin-Bicêtre


Spot

10h00 Two-automaton accepting run search in Spot – Clément Gillard

In a previous paper, we introduced a two-automaton emptiness check in the Spot library, which checks the emptiness of the intersection of the languages of two ω-automata without building their product, thus circumventing a limitation in Spot on the number of acceptance sets of the automata. When the intersection is not empty, this operation is usually followed by an accepting run search on the product, which returns an example of a word that belongs to the languages of both automata. We introduce a new method that computes an accepting run from the data gathered during the two-automaton emptiness check, allowing us to use the two-automaton approach in algorithms that require a counterexample.
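To make the workflow concrete, here is a minimal sketch using Spot's Python bindings of the classical product-based approach next to a product-free query; the formulas and the use of intersecting_run are illustrative assumptions, not the new method presented in the talk:

```python
import spot

# Build two small omega-automata from LTL formulas (stand-ins for the
# inputs of the two-automaton emptiness check).
a = spot.translate('G F a')   # "a" holds infinitely often
b = spot.translate('G F b')   # "b" holds infinitely often

# Classical approach: build the synchronized product explicitly, then
# search for an accepting run on that product.
prod = spot.product(a, b)
run = prod.accepting_run()    # None when the intersection is empty
print("product-based search:", run)

# Product-free approach: ask one automaton for a run it accepts jointly
# with the other, without materializing the product automaton.
run2 = a.intersecting_run(b)  # also None when the intersection is empty
print("two-automaton search:", run2)
```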

10h30 Creation of an antichain library – Arthur Remaud

In order theory, an antichain is a subset of elements that are pairwise incomparable with respect to a given order. In automata theory, some algorithms use antichains as data structures to store pairwise incomparable sets of states, for example in simulation algorithms used to reduce an automaton. Implementing an antichain requires verifying, at insertion time, that the new element is not comparable to any element already in the antichain, which is costly because each insertion needs a linear scan. In this report, we detail different algorithms that implement antichains while optimizing the insertion of a new element. We then show the results of these implementations within our model checker Spot and compare their performance.
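As a rough illustration of the baseline insertion described above (not the optimized algorithms of the talk), here is a small Python sketch where elements are sets ordered by inclusion; the class and method names are purely illustrative:

```python
class Antichain:
    """Antichain of sets, ordered by inclusion: elements are pairwise incomparable."""

    def __init__(self):
        self.elements = []

    def insert(self, new):
        # Baseline insertion: one linear scan over the current elements.
        kept = []
        for e in self.elements:
            if new <= e:        # new is subsumed by an existing element
                return False    # antichain unchanged
            if not (e <= new):  # e is not subsumed by new, keep it
                kept.append(e)
        kept.append(new)        # new is incomparable with everything kept
        self.elements = kept
        return True

# Example: {1, 2} and {3} are incomparable; {1} is subsumed by {1, 2}.
a = Antichain()
a.insert({1, 2})
a.insert({3})
print(a.insert({1}))   # False: {1} is included in {1, 2}
print(a.elements)      # [{1, 2}, {3}]
```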


11h00 Implementation of Invisible and Transparent Transitions in Spot – Vincent Tourneur

In the model checking field, Partial Order Reduction (POR) is a method that notably reduces the size of the data structures used to represent the different possible executions of a program model, by considering only representative execution paths instead of all of them. This has a cost: information is lost. When the goal is only to check the absence of deadlocks, enough information is kept, so POR works well. But when it comes to LTL model checking, relevant information may be dropped: execution paths which modify the value of the atomic propositions (AP) occurring in the LTL formula. Moreover, POR can only be used for LTL\X (LTL without the next operator). Invisible and transparent transitions are methods that add conditions to fulfill during POR, in order to keep all execution paths which may modify the value of the AP. We explain these methods and how they have been implemented in the Spot model checking library.
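To make the notion of visibility concrete, here is a small, hypothetical sketch (not Spot's actual implementation) that flags a transition as invisible when it leaves every atomic proposition of the formula unchanged:

```python
# Hypothetical model: states are dicts mapping atomic propositions (AP)
# to booleans; a transition is a pair of states (source, destination).
def is_invisible(src, dst, formula_aps):
    """A transition is invisible w.r.t. an LTL formula if it does not
    change the value of any AP occurring in that formula."""
    return all(src[ap] == dst[ap] for ap in formula_aps)

s0 = {'p': False, 'q': False}
s1 = {'p': False, 'q': True}   # only q changes

print(is_invisible(s0, s1, {'p'}))       # True: p is untouched
print(is_invisible(s0, s1, {'p', 'q'}))  # False: q occurs in the formula and changes
```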

11h30 go2pins: A model checking toolset for the Go programming language – Alexandre Kirszenberg

Model checking is a set of techniques and algorithms for verifying that programs respect certain properties. In this report, we present a toolchain for applying existing model checking algorithms to Go programs. Through a set of transformations, we compile a Go program to an equivalent Go program whose state can be iterated upon. This new program exposes a standard interface to plug into model checking solutions.
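As a rough analogue of such a state-iteration interface (purely illustrative, not the actual interface exposed by go2pins), a model checker essentially needs an initial state and a way to enumerate successors:

```python
# Hypothetical minimal interface a model checker could iterate on:
# an initial state plus a successor function.
def initial_state():
    return (0, 0)  # e.g. two integer variables of the compiled program

def successors(state):
    x, y = state
    # Each yielded state corresponds to one possible next step of the program.
    if x < 2:
        yield (x + 1, y)
    if y < 2:
        yield (x, y + 1)

# Naive exhaustive exploration of the reachable state space.
seen, todo = set(), [initial_state()]
while todo:
    s = todo.pop()
    if s in seen:
        continue
    seen.add(s)
    todo.extend(successors(s))
print(sorted(seen))  # the 9 reachable states (x, y) with 0 <= x, y <= 2
```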

Climb

13h30 SUBTYPEP: An Implementation of Baker's Algorithm – Leo Valais

The Common Lisp language provides a predicate function, SUBTYPEP, for introspecting subtype relationships. In some situations, and given the type system of this language, knowing whether a type is a subtype of another would require enumerating all the elements of that type, possibly an infinite number of them. Because of that, SUBTYPEP is allowed to return the two values (NIL NIL), indicating that it couldn't answer the question. Common Lisp implementations have a tendency to frequently not answer, even in situations where they could. Such an abusive behavior prevents potential optimizations from occurring, or even leads to violating the standard. In his paper entitled “A Decision Procedure for Common Lisp's SUBTYPEP Predicate”, Henry Baker proposes an algorithm that he claims to be both more accurate and more efficient than the average SUBTYPEP implementation. In this report, we present a partial implementation of his algorithm, discuss some technical aspects, present some benchmarks and, most importantly, compare the accuracy of both implementations.
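To illustrate the two-valued answer convention that SUBTYPEP follows, here is a small, hypothetical Python sketch in which types are either finite sets of values or opaque predicates; it only mirrors the "(NIL NIL)" don't-know case and is in no way Baker's algorithm:

```python
# Types are modeled either as finite sets of values (fully known) or as
# opaque predicates (only membership can be tested, like SATISFIES types).
def subtypep(t1, t2):
    """Return (subtype?, certain?), mirroring CL's SUBTYPEP convention:
    (True, True)   -> t1 is a subtype of t2
    (False, True)  -> t1 is not a subtype of t2
    (False, False) -> could not decide (the "(NIL NIL)" answer)."""
    if isinstance(t1, frozenset) and isinstance(t2, frozenset):
        return (t1 <= t2, True)   # finite sets: fully decidable
    return (False, False)         # opaque predicate involved: give up

BIT = frozenset({0, 1})
DIGIT = frozenset(range(10))
EVEN = lambda x: x % 2 == 0       # an opaque, predicate-only "type"

print(subtypep(BIT, DIGIT))       # (True, True)
print(subtypep(DIGIT, BIT))       # (False, True)
print(subtypep(BIT, EVEN))        # (False, False): cannot decide
```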


Olena

14h00 Optimization of an image processing library – Virgile Hirtz

With the diversification of acquisition devices (satellite imagery, medical imagery, HDR photography), the types of images to process have been steadily increasing (2D, 3D, etc.), and with them the need for algorithms able to handle all these types. Pylene, an image processing library, aims to bring a simple and generic solution to this problem thanks to high-level interfaces. However, such an abstraction often has a cost, and we have to find a compromise between a too-generic solution that would be inefficient and an efficient solution that would be too specific. Moreover, these interfaces do not comply with the recent versions of the standard, which limits their ease of use. We propose here a solution that does not sacrifice performance for genericity. This solution relies on the "range" concept introduced by Eric Niebler, adapted to the needs of image processing, to which we add the new concept of segmented ranges. It follows a new design that reconciles our interfaces with the new standard in order to take advantage of its syntactic simplicity at zero-cost overhead.
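As a rough Python analogue of the segmented-range idea (Pylene itself is a C++ library; the names below are purely illustrative), an image can be traversed either as one flat range of pixels or segment by segment, typically row by row:

```python
import numpy as np

def rows(image):
    """Segmented view: yield one contiguous segment (row) at a time."""
    for r in range(image.shape[0]):
        yield image[r]

def pixels(image):
    """Flat view: yield every pixel, built on top of the segmented view."""
    for row in rows(image):
        yield from row

img = np.arange(12).reshape(3, 4)

# Segment-aware algorithms can work row by row (better locality),
# while generic algorithms can still see the image as a flat range.
print([row.sum() for row in rows(img)])  # [6, 22, 38]
print(sum(pixels(img)))                  # 66
```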

14h30 Integration of histograms in the NL-Mean algorithm for image denoising – Aliona Dangla

Pictures contain noise, and denoising algorithms are applied to reduce it. One of them is the NL-Mean algorithm, implemented as a patch-based strategy. The main idea is to denoise a local region based on other similar neighboring patches. A spatial metric (the Euclidean distance) is used to determine the similarity between patches. The main objective of this project is to incorporate the information related to the statistical distribution of colors within a patch, thanks to the use of histograms, to potentially highlight characteristics that have been missed with the standard strategy. It may give some interesting information which could be helpful to improve picture denoising. This method adds robustness to any transformation which does not affect the shape of the histograms (overall change of luminance or rotation) and, after proper post-processing of the patch, can allow the use of patches that would have been discarded by the standard strategy.
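A minimal sketch of the kind of patch comparison involved (illustrative only; the actual weighting and normalization used in the project may differ): the standard Euclidean patch distance can be complemented by a distance between the patches' intensity histograms, which ignores the spatial layout and is therefore insensitive to rotations of the patch content:

```python
import numpy as np

def euclidean_patch_distance(p, q):
    """Standard NL-Mean similarity: pixel-wise squared distance."""
    return np.sum((p.astype(float) - q.astype(float)) ** 2)

def histogram_distance(p, q, bins=16):
    """Distance between intensity histograms: ignores the spatial layout,
    hence robust to rotations and other shape-preserving rearrangements."""
    hp, _ = np.histogram(p, bins=bins, range=(0, 256), density=True)
    hq, _ = np.histogram(q, bins=bins, range=(0, 256), density=True)
    return np.sum((hp - hq) ** 2)

rng = np.random.default_rng(0)
p = rng.integers(0, 256, size=(7, 7))
q = np.rot90(p)  # same content, rotated

print(euclidean_patch_distance(p, q))  # large: pixels no longer align
print(histogram_distance(p, q))        # 0.0: identical intensity distribution
```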

15h00 Automatic Heart Segmentation – Younes Khoudli

Atrial fibrillation is a common illness that can be detected easily if the human atria can be localized in an MRI image. Manual segmentation is labor-intensive; therefore, we adapt here an automatic method developed for brain image segmentation. Using transfer learning, we retrain a convolutional neural network used for natural image categorization (VGG), and compare the advantages of using a pseudo-3D technique. We also explore ways to preprocess input data to improve the final results.
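As a rough sketch of such a transfer-learning setup (using PyTorch/torchvision as an assumed toolchain; the talk's actual architecture, framework, and output head may differ), one starts from a VGG pretrained on natural images, freezes its convolutional features, and retrains a new head on the medical images:

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from VGG16 pretrained on natural images (ImageNet).
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)

# Freeze the convolutional feature extractor: only the new head is trained.
for param in vgg.features.parameters():
    param.requires_grad = False

# Replace the last classifier layer; here a toy 2-class output
# (e.g. atrium vs. background), purely for illustration.
vgg.classifier[6] = nn.Linear(4096, 2)
optimizer = torch.optim.Adam(vgg.classifier[6].parameters(), lr=1e-4)

# One dummy training step on a fake batch of slices resized to 224x224.
x = torch.randn(4, 3, 224, 224)
y = torch.randint(0, 2, (4,))
loss = nn.CrossEntropyLoss()(vgg(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```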