Paragraph Justification: the Knuth-Plass
- GUTenberg talk
-
Article:
PDF,
HTML
-
Reference:
BibTeX
- Live recording
- Despite its age, TeX remains to this day a de facto standard for
typographic layout. A non-negligible part of its success is due to the
paragraph justification algorithm it is equipped with, the famous
"Knuth-Plass", designed and developed between 1977 and 1982, and which Donald
Knuth himself described as "probably the most interesting algorithm in
TeX". But the Knuth-Plass is a somewhat intimidating artifact, one that is
generally kept at a safe distance...
From the user's point of view, its great flexibility comes at the price of a
complex parametrization: no fewer than ten numerical "knobs" let you act on
the internal machinery, each influencing the behavior of the other nine in
the process. From the implementer's point of view, the literature describing
it inextricably mixes general principles with implementation details, all in
a very imperative algorithmic pseudo-language with very low-level data
structures; in short, in a formalism dating from... its era.
In this talk, I propose to show that it is possible to approach the
Knuth-Plass without getting bitten. We will start by retracing the broad
lines of its overall behavior and of its parametrization. We will then
describe its internals in terms general enough to be understood by
everyone. In particular, we will see how the original algorithm was
optimized, in a context where the computing resources of the time (both
space and time) were limited. Finally, time permitting, we will see that
there are still many ways to improve the Knuth-Plass at little cost, even to
the point of dropping those famous optimizations that the power of today's
computers has rendered essentially obsolete.
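The core idea sketched above (choosing breakpoints over the whole paragraph at once rather than greedily, line after line) can be illustrated with a deliberately naive dynamic program. This Python sketch is an assumption-laden toy, not TeX's actual implementation: it ignores glue stretchability, penalties, hyphenation, demerits, and all ten parameters.

```python
def break_paragraph(word_lengths, line_width):
    """Toy paragraph breaker: choose line breaks minimizing total squared
    slack over the whole paragraph, in the spirit of (but much simpler
    than) the Knuth-Plass algorithm."""
    n = len(word_lengths)
    INF = float("inf")
    # best[i]: minimal cost to typeset words i..end; choice[i]: next break
    best = [INF] * (n + 1)
    best[n] = 0.0
    choice = [n] * (n + 1)
    for i in range(n - 1, -1, -1):
        width = -1  # so that the first word adds no leading space
        for j in range(i, n):
            width += word_lengths[j] + 1  # word plus one inter-word space
            if width > line_width:
                break
            slack = line_width - width
            cost = 0.0 if j == n - 1 else slack ** 2  # last line is free
            if cost + best[j + 1] < best[i]:
                best[i] = cost + best[j + 1]
                choice[i] = j + 1
    # Recover the chosen break positions as (start, end) word ranges.
    lines, i = [], 0
    while i < n:
        lines.append((i, choice[i]))
        i = choice[i]
    return lines

words = "in olden times when wishing still helped one there lived a king".split()
lines = break_paragraph([len(w) for w in words], 17)
```

On this toy cost function (squared slack, last line free), the program considers every feasible breakpoint combination, which is exactly the kind of global search that TeX's optimizations make affordable.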
The Music of Programs
- EPITA evening lecture
-
Slides:
PDF,
HTML
-
Reference:
BibTeX
- What are the aesthetic links between music and
computer science? In particular, between the Lisp language and Jazz? Here
are a few...
A Taste of Julia
- LRDE seminar
-
Reference:
BibTeX
- Julia is a relatively young programming language,
developed at MIT, and sold as a high-performance dynamic language for
numerical scientific computing. One of the language's co-authors has a
Scheme background, and Julia indeed draws largely on Scheme, Common Lisp and
Dylan, to the point that it could almost claim kinship with Lisp. All of
this is already enough to catch our attention, but there is more: Julia also
seems to take advantage of modern optimization techniques for dynamic
languages, notably thanks to its LLVM-based "Just-in-Time" compiler.
In this presentation, we will tour the most salient aspects of the language,
with a slight emphasis on what makes it (or not) a Lisp, sometimes even
(though not always) a better Lisp than Lisp itself.
A taste of Julia
- ACCU 2016
-
Reference:
BibTeX
- Julia is a recent programming language developed at MIT and
sold as a high level, high performance dynamic language for scientific
computing. One of the co-authors of Julia has a Scheme background, and in
fact, it appears that Julia borrows a lot from Scheme, Common Lisp and
Dylan. This is to the point that Julia may even be considered as a new Lisp
dialect. This is enough, already, to catch our attention, but there's
more. Julia also seems to benefit from modern optimization techniques for
dynamic languages, notably through its LLVM based JIT compiler. In this talk,
we will give a tour of the language's most prominent features, with a slight
focus on what makes it a Lisp, sometimes (not always) an even better one than
the existing alternatives.
Referential Transparency is Overrated
- ACCU 2015
-
Slides:
PDF,
HTML
-
Reference:
BibTeX
- The expression 'referential transparency' is itself already
confusing and subject to interpretation, depending on whether you are in the
context of natural, functional or imperative languages. In any of those
contexts however, referential transparency is generally regarded as a Good
Thing™. In computer science for example, it helps to reason about programs,
prove them, optimize them, and even enables some paradigms such as normal
order (lazy) evaluation.
In this talk, we claim that referential transparency is overrated because it
also limits your expressive power. We demonstrate some neat and tricky
things that we can do only when referential transparency is broken, and we
explain the language constructs and techniques that allow us to break it
intentionally, both at the regular and meta-programming levels. Such tools
include duality of syntax and syntax extension, mixing of different scoping
policies, intentional variable capture and free variable injection, lexical
communication channels, and anaphoric macros.
Please fasten your seat belts, as we're going to explore mostly uncharted
territory. Whether these techniques are considered extremely powerful,
extremely unsafe, or extremely bad style is a matter of personal taste. In
fact, they are probably all of that, and much more…
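Python has no macro system, so the techniques listed above cannot be reproduced faithfully, but a rough functional rendition of one of them, the anaphoric 'if' (all names below are invented for the demo), hints at what intentional variable capture buys: in Lisp, the macro binds the conventional name `it` invisibly around its branches, whereas a function must thread it explicitly.

```python
def aif(test, then, orelse=lambda it: None):
    # A functional stand-in for Lisp's anaphoric if: the test's value is
    # handed to both branches under the conventional name 'it'. A real
    # anaphoric macro would capture 'it' in the caller's code directly,
    # which is exactly the kind of intentional break of referential
    # transparency the talk is about.
    it = test
    return then(it) if it else orelse(it)

lookup = {"x": 41}.get
result = aif(lookup("x"), lambda it: it + 1, lambda it: "absent")
# result == 42: the branch used the looked-up value without re-evaluating the test
```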
Extensibility for DSL Design and Implementation: a
Case Study in Lisp
- DSLDI 2013
-
Slides:
PDF,
HTML
-
Reference:
BibTeX
- Out of a concern for focus and concision, domain-specific
languages (DSLs) are usually very different from general purpose programming
languages (GPLs), both at the syntactic and the semantic levels. One approach
to DSL implementation is to write a full language infrastructure, including
parser, interpreter or even compiler. Another approach, however, is to ground
the DSL into an extensible GPL, giving you control over its own syntax and
semantics. The DSL may then be designed merely as an extension to the
original GPL, and its implementation may boil down to expressing only the
differences with it. The task of DSL implementation is hence considerably
eased. The purpose of this talk is to provide a tour of the features that
make a GPL extensible, and to demonstrate how, in this context, the
distinction between DSL and GPL can blur, sometimes to the point of complete
disappearance.
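As a minimal illustration of this embedding approach (in Python rather than Lisp, and with names invented for the demo), a record-filtering mini-language can be expressed as a thin extension of the host language: operator overloading does the "parsing" for us, so the DSL is just the GPL plus a few definitions.

```python
class Field:
    """One DSL 'word': a named field whose comparison operators build
    predicates instead of evaluating immediately."""
    def __init__(self, name):
        self.name = name
    def __gt__(self, value):
        return lambda record: record[self.name] > value
    def __eq__(self, value):
        return lambda record: record[self.name] == value

def where(*predicates):
    # Conjunction of predicates: a record is kept only if all of them hold.
    return lambda record: all(p(record) for p in predicates)

age, city = Field("age"), Field("city")
adults_in_paris = where(age > 17, city == "Paris")

people = [{"age": 20, "city": "Paris"},
          {"age": 12, "city": "Paris"},
          {"age": 30, "city": "Lyon"}]
selected = [p for p in people if adults_in_paris(p)]
# selected keeps only the first record
```

The point is that no parser or interpreter was written: the host language's own evaluation rules carry the DSL.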
The Bright Side of Exceptions
- ACCU 2013
-
Slides:
PDF,
HTML
-
Reference:
BibTeX
- In many programming languages, the term 'exception' really means
'error'. This is rather unfortunate because an exception is normally just
something that does not happen very often; not necessarily something bad or
wrong.
Languages with explicit support for exceptions typically provide the
'try/catch/throw' paradigm. As it turns out, this paradigm suffers from
limitations which affect its usability in complex situations. The two major
problems are 1. the obligatory stack unwinding on error recovery, and 2. a
two-level-only separation of concerns (throwing / handling).
In this talk, we demonstrate the benefits of using a system which does not
suffer from these limitations: 1. the stack is not necessarily unwound on
error recovery (the full execution context at the time the error was signaled
is still available), and 2. the separation of concerns is threefold: the code that
signals an error (throw) is different from the code that handles the error
(restart) which itself is different from the code that chooses how to handle
the error (catch).
Such an exception handling mechanism, like the Common Lisp 'condition system'
is able to handle more than just errors and in fact, even more than just
exceptional events. We provide two examples of 'condition-driven
development'. The first one shows how to handle actual errors, only in a more
expressive and cleaner fashion. The second example demonstrates the
implementation of something completely unrelated to error handling: a
user-level coroutine facility.
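The threefold separation described above maps poorly onto try/catch, but a bare-bones sketch of it is possible even in Python. All names below are invented for the demo; a real condition system such as Common Lisp's offers far more (interactive restart selection, condition classes, declining handlers, and so on).

```python
import contextlib

handlers = []  # a dynamically scoped stack of condition handlers

@contextlib.contextmanager
def handler_bind(handler):
    # The 'choosing' role (catch): install, for the dynamic extent of the
    # block, a policy that picks a restart by name.
    handlers.append(handler)
    try:
        yield
    finally:
        handlers.pop()

def signal(condition, restarts):
    # The 'signaling' role (throw): report the problem *without unwinding*.
    # The handler runs right here, in the full context of the error, and
    # names one of the restarts offered by the signaling code.
    for handler in reversed(handlers):
        choice = handler(condition, restarts)
        if choice is not None:
            return restarts[choice]()
    raise RuntimeError(condition)  # no handler accepted: give up

def parse_int(token):
    # The 'restarting' role: detect the problem and *offer* recovery
    # strategies, without deciding which one applies.
    if token.isdigit():
        return int(token)
    return signal(f"bad integer: {token!r}",
                  {"use-zero": lambda: 0, "skip": lambda: None})

with handler_bind(lambda condition, restarts: "use-zero"):
    values = [parse_int(t) for t in ["12", "oops", "7"]]
# values == [12, 0, 7]: recovery happened in place, without unwinding parse_int
```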
Lisp extensibility: Impact on DSL Design and Implementation
- ILC 2012
-
Slides:
PDF,
HTML
-
Reference:
BibTeX
- Out of a concern for focus and concision, domain-specific languages
(DSLs) are usually very different from general purpose programming languages
(GPLs), both at the syntactic and the semantic levels. One approach to DSL
implementation is to write a full language infrastructure, including parser,
interpreter or even compiler. Another approach, however, is to ground the DSL
into an extensible GPL, giving you control over its own syntax and
semantics. The DSL may then be designed merely as an extension to the
original GPL, and its implementation may boil down to expressing only the
differences with it. The task of DSL implementation is hence considerably
eased. The purpose of this talk is to provide a tour of the features that
make a GPL extensible, and to demonstrate how, in this context, the
distinction between DSL and GPL can blur, sometimes to the point of complete
disappearance.
DSLs from the perspective of extensible languages
- ACCU 2012
-
Slides:
PDF,
HTML
-
Reference:
BibTeX
- The purpose of this presentation is to envision the process of
DSL design and implementation from the perspective of extensible GPLs, and to
show how the capabilities of the original application's GPL can have a
dramatic impact on DSL design. More precisely, the objective is
twofold: 1/ showing that by using a truly extensible GPL, implementing a DSL
is considerably simplified, to the point that it may sometimes boil down to
writing a single line of code, and 2/ exhibiting the most important
characteristics of a programming language that make it truly extensible, and
hence suitable for DSL'ification.
Such characteristics are most notably dynamicity (not only dynamic typing, but
in general all things that can be deferred to the run-time), introspection,
intercession, structural or procedural reflexivity, meta-object protocols,
macro systems and JIT-compilation.
Meta-Circularity, and Vice-Versa
- ACCU 2011
-
Slides:
PDF,
HTML
-
Reference:
BibTeX
- As complexity increases, one often feels limited by the use of
a single language, and incorporates new technology in order to express the
original problem more abstractly, more precisely, and design solutions more
efficiently. Using better-suited languages also has the advantage of letting
you think about your problem in new and different ways, perhaps ways that you
had not thought of before. It is thus no surprise to see the profusion of new
languages that we face today, notably scripting and domain-specific ones.
But then, why the need for all this new and different technology? Wouldn't it
be better if your primary language could evolve the way you want it to? And
why is it not generally possible? Perhaps, because your primary language is
not really extensible...
Meta-linguistic abstraction, that is, the art of language design, plays a
capital role in computer science because we have the ability to actually
implement the languages we design, for instance by creating interpreters for
them. A fundamental idea in this context is that an interpreter is just
another program (by extension, one could argue that any program is an
interpreter for a particular language).
In this session, we will revive a historical moment in computer science: the
birth of meta-circularity. When, in 1958, John McCarthy invented Lisp, he
hadn't foreseen that, given the seven core operators of the language, it was
possible to write Lisp in itself, by way of an interpreter. The practical
implication of meta-circularity is that a meta-circular language gives you
direct control over the semantics of the language itself, and as a
consequence, the means to modify or extend it. No wonder, then, that Lispers never
felt the need for external DSLs, scripting languages, XML or whatever. The
reason is that Lisp, being extensible, can do all that by itself. Lisp is, by
essence, the 'programmable programming language'.
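The seven operators mentioned above fit in a few lines of any language. This toy evaluator, sketched here in Python with an invented representation (lists and strings instead of cons cells and symbols), omits lambda and label, so it cannot yet run itself, but it makes the "an interpreter is just another program" idea concrete.

```python
def lisp_eval(expr, env):
    """Toy evaluator for a McCarthy-style mini-Lisp covering the seven
    historical operators: quote, atom, eq, car, cdr, cons, cond."""
    if isinstance(expr, str):          # a symbol: look it up
        return env[expr]
    op, *args = expr
    if op == "quote":
        return args[0]
    if op == "atom":
        return not isinstance(lisp_eval(args[0], env), list)
    if op == "eq":
        return lisp_eval(args[0], env) == lisp_eval(args[1], env)
    if op == "car":
        return lisp_eval(args[0], env)[0]
    if op == "cdr":
        return lisp_eval(args[0], env)[1:]
    if op == "cons":
        return [lisp_eval(args[0], env)] + lisp_eval(args[1], env)
    if op == "cond":
        for test, branch in args:      # evaluate clauses until one is true
            if lisp_eval(test, env):
                return lisp_eval(branch, env)
        return None
    raise ValueError(f"unknown operator: {op}")

result = lisp_eval(["cons", ["quote", "a"], ["quote", ["b", "c"]]], {})
# result == ["a", "b", "c"]
```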
Clon, the Command-Line Options Nuker
- ILC 2010
-
Reference:
BibTeX
- This talk demonstrates the features of Clon, a Common Lisp
library for managing command-line options in standalone
executables.
CLOS Efficiency: Instantiation
- Invited talk at the Vrije Universiteit Brussel
-
Reference:
BibTeX
- This talk reports the results of an ongoing experimental
research on the behavior and performance of CLOS, the Common Lisp Object
System. Our purpose is to evaluate the behavior and performance of the 3 most
important characteristics of any dynamic object oriented system: class
instantiation, slot access and dynamic dispatch. This talk describes the
results of our experiments on instantiation. We evaluate the efficiency of the
instantiation process in both C++ and Lisp under a combination of parameters
such as slot types or class hierarchies. We show that in a non-optimized
configuration where safety is given priority over speed, the behavior of C++ and
Lisp instantiation can be quite different, which is also the case amongst
different Lisp compilers. On the other hand, we demonstrate that when
compilation is tuned for speed, instantiation in Lisp can become faster than
in C++.
Revisiting the Visitor: the 'Just Do It' Pattern
- ACCU 2009
-
Slides:
PDF,
HTML
-
Reference:
BibTeX
- A software design pattern is a three-part rule which expresses a
relation between a certain context, a problem, and a solution. The well-known
'GoF Book' describes 23 software design patterns. Its influence in the
software engineering community has been dramatic. However, Peter Norvig notes
that '16 of [these] 23 patterns are either invisible or simpler [...]' in
Dylan or Lisp (Design Patterns in Dynamic Programming, Object World, 1996).
We claim that this is not a consequence of the notion of 'pattern' itself, but
rather of the way patterns are generally described; the GoF book being typical
in this matter. Whereas patterns are supposed to be general and abstract, the
GoF book is actually very much oriented towards mainstream object languages
such as C++. As a result, most of its 23 'design patterns' are actually closer
to 'programming patterns', or 'idioms', if you choose to adopt the terminology
of the POSA Book.
In this talk, we would like to envision software design patterns from the
point of view of dynamic languages and specifically from the angle of CLOS,
the Common Lisp Object System. Taking the Visitor pattern as an illustration,
we will show how a generally useful pattern can be blurred into the language,
sometimes to the point of complete disappearance.
The lesson to be learned is that software design patterns should be used with
care, and in particular, will never replace an in-depth knowledge of your
preferred language (in our case, the mastering of first-class and generic
functions, lexical closures and the meta-object protocol). By using patterns
blindly, you risk missing the obvious and often simpler solution:
the 'Just Do It' pattern.
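A flavor of the point above can be given even outside CLOS: with generic functions, the traversal machinery of the Visitor pattern simply dissolves. Python's standard `functools.singledispatch` (single dispatch only, unlike CLOS) is enough for the classic expression-tree example; the class and function names are made up for the demo.

```python
from functools import singledispatch

# The tree to 'visit': plain classes, with no accept() methods anywhere.
class Num:
    def __init__(self, value):
        self.value = value

class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right

@singledispatch
def evaluate(node):
    raise TypeError(f"no applicable method for {type(node).__name__}")

@evaluate.register
def _(node: Num):
    return node.value

@evaluate.register
def _(node: Add):
    # Dispatch does the work the Visitor pattern simulates by hand.
    return evaluate(node.left) + evaluate(node.right)

tree = Add(Num(1), Add(Num(2), Num(3)))
result = evaluate(tree)
# result == 6
```

Adding a new operation is just another generic function; no class in the tree needs to change.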
Performance and Genericity: the Forgotten Power of Lisp
- ACCU 2008
-
Slides:
PDF,
HTML
-
Reference:
BibTeX
- Lisp is one of the oldest languages around, and probably still
the most versatile of them. At a time when there seems to be renewed
interest in dynamic and functional programming, many recent languages
(Ruby, to name one) acknowledge that they were inspired by Lisp, while not
being quite as powerful.
So why is it that so many people seem to acknowledge the power of Lisp but so
few of us are actually using it? Two important reasons are that people either
still think it is slow, or think that being so old, it must be dead, so they
simply have forgotten all about it.
The purpose of this session is twofold: first, we want to remind people of the
power of Lisp, and second, we want to break the myth of slowness. In a first
step, we illustrate the expressive power of Lisp by showing how
straightforward it is to implement binary methods, a concept otherwise
difficult to achieve in traditional OO languages. This will allow us to provide
a guided tour of some of the powerful features of Common Lisp: CLOS (the
Object System) and its multiple-dispatch paradigm, the CLOS MOP (the
Meta-Object Protocol) and its ability to let us write new, specialized
object systems for our own purposes, and finally the distinctive Common Lisp
package system.
In a second step, we present a recent research demonstrating that Lisp can run
as fast as C, given that it is properly typed and optimized. This is done by
analyzing the behavior and performance of pixel access and arithmetic
operations in equivalent Lisp and C code for some simple image processing
algorithms.
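The binary-method problem mentioned above is exactly what multiple dispatch solves: the applicable method is chosen on the classes of both arguments at once. A toy multimethod table (an invented helper, nothing like CLOS's real generic functions) makes the idea concrete.

```python
# A minimal multimethod sketch: methods are stored per *pair* of argument
# classes, so neither class 'owns' the operation.
methods = {}

def defmethod(types, fn):
    methods[types] = fn

def collide(a, b):
    # Dispatch on the classes of BOTH arguments, unlike single dispatch
    # where only the receiver's class would be consulted.
    fn = methods.get((type(a), type(b)))
    if fn is None:
        raise TypeError("no applicable method")
    return fn(a, b)

class Asteroid: pass
class Ship: pass

defmethod((Asteroid, Asteroid), lambda a, b: "asteroids bounce")
defmethod((Asteroid, Ship), lambda a, b: "ship is destroyed")
defmethod((Ship, Asteroid), lambda a, b: "ship is destroyed")

outcome = collide(Asteroid(), Ship())
# outcome == "ship is destroyed"
```

In CLOS this table and its lookup come for free from generic functions, with inheritance-aware method selection on every argument.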
Scientific Computing in Lisp: beyond the performance of C
- LaBRI, upon invitation by Robert Strandh
-
Slides:
PDF,
HTML
-
Reference:
BibTeX