Difference between revisions of "Publications/newton.18.phd"

From LRDE

Revision as of 13:40, 17 January 2020

Abstract

In this report, we present code generation techniques related to run-time type checking of heterogeneous sequences. Traditional regular expressions can be used to recognize well defined sets of character strings called rational languages or sometimes regular languages. Newton et al. present an extension whereby a dynamic programming language may recognize a well defined set of heterogeneous sequences, such as lists and vectors. As with the analogous string matching regular expression theory, matching these regular type expressions can also be achieved by using a finite state machine (deterministic finite automata, DFA). Constructing such a DFA can be time consuming. The approach we chose uses meta-programming to intervene at compile-time, generating efficient functions specific to each DFA, and allowing the compiler to further optimize the functions if possible. The functions are made available for use at run-time. Without this use of meta-programming, the program might otherwise be forced to construct the DFA at run-time. The excessively high cost of such a construction would likely far outweigh the time needed to match a string against the expression. Our technique involves hooking into the Common Lisp type system via the DEFTYPE macro. The first time the compiler encounters a relevant type specifier, the appropriate DFA is created, which may be an Ω(2^n) operation, from which specific low-level code is generated to match that specific expression. Thereafter, when the type specifier is encountered again, the same pre-generated function can be used. The code generated has Θ(n) complexity at run-time. A complication of this approach, which we explain in this report, is that to build the DFA we must calculate a disjoint type decomposition which is time consuming, and also leads to sub-optimal use of TYPECASE in machine generated code. To handle this complication, we use our own macro OPTIMIZED-TYPECASE in our machine generated code. Uses of this macro are also implicitly expanded at compile time. Our macro expansion uses BDDs (Binary Decision Diagrams) to optimize the OPTIMIZED-TYPECASE into low level code, maintaining the TYPECASE semantics but eliminating redundant type checks. In the report we also describe an extension of BDDs to accommodate subtyping in the Common Lisp type system as well as an in-depth analysis of worst-case sizes of BDDs.
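To make the compile-time strategy concrete, here is a minimal Common Lisp sketch. It is not code from the thesis: the regular type expression (NUMBER STRING)*, the matcher MATCH-NUMBER-STRING-STAR, and the type name ALTERNATING-NUMBER-STRING are all illustrative assumptions. The hand-written two-state loop stands in for the kind of linear-time matcher that would be generated from a DFA, and DEFTYPE plus SATISFIES stand in for the hook that lets TYPEP reach it.

;;; Hypothetical example (not from the thesis): match the regular type
;;; expression (NUMBER STRING)*, i.e. a proper list alternating NUMBER,
;;; STRING, NUMBER, STRING, ...  The two-state loop is the shape of the
;;; Theta(n) matcher that a DFA would compile down to.
(defun match-number-string-star (seq)
  "Return T if SEQ is a list of alternating NUMBER and STRING elements."
  (let ((state :expect-number))
    (dolist (item seq (eq state :expect-number)) ; accept iff we end on a pair boundary
      (setf state
            (ecase state
              (:expect-number (if (typep item 'number) :expect-string (return nil)))
              (:expect-string (if (typep item 'string) :expect-number (return nil))))))))

;; Hypothetical DEFTYPE hook: TYPEP dispatches to the pre-built matcher.
(deftype alternating-number-string ()
  '(and list (satisfies match-number-string-star)))

;; (typep '(1 "a" 2 "b") 'alternating-number-string)  => T
;; (typep '(1 "a" 2)     'alternating-number-string)  => NIL

The TYPECASE complication can be illustrated in the same hedged spirit. In the hypothetical clauses below, a naive expansion re-tests NUMBER in the second clause even though, once the first clause has failed, being a NUMBER already implies being an INTEGER; a BDD-driven expansion of the kind the report describes can keep the same semantics while dropping such redundant checks.

;; Hypothetical clauses showing the redundancy OPTIMIZED-TYPECASE targets.
(defun classify (x)
  (typecase x
    ((and number (not integer)) :non-integer-number)
    (number                     :integer)   ; among the remaining values, only integers match
    (t                          :other)))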

Documents

Paper: http://www.lrde.epita.fr/dload/papers/newton.18.phd.pdf
Slides: http://www.lrde.epita.fr/dload/papers/newton.18.phd.slides.pdf

Bibtex (lrde.bib)

@PhDThesis{	  newton.18.phd,
  author	= {Jim Newton},
  title		= {Representing and Computing with Types in Dynamically Typed
		  Languages},
  school	= {Sorbonne Universit\'e},
  year		= 2018,
  address	= {Paris, France},
  month		= nov,
  abstract	= {In this report, we present code generation techniques
		  related to run-time type checking of heterogeneous
		  sequences. Traditional regular expressions can be used to
		  recognize well defined sets of character strings called
		  rational languages or sometimes regular languages. Newton
		  et al. present an extension whereby a dynamic programming
		  language may recognize a well defined set of heterogeneous
		  sequences, such as lists and vectors. As with the analogous
		  string matching regular expression theory, matching these
		  regular type expressions can also be achieved by using a
		  finite state machine (deterministic finite automata, DFA).
		  Constructing such a DFA can be time consuming. The approach
		  we chose uses meta-programming to intervene at
		  compile-time, generating efficient functions specific to
		  each DFA, and allowing the compiler to further optimize the
		  functions if possible. The functions are made available for
		  use at run-time. Without this use of meta-programming, the
		  program might otherwise be forced to construct the DFA at
		  run-time. The excessively high cost of such a construction
		  would likely far outweigh the time needed to match a string
		  against the expression. Our technique involves hooking into
		  the Common Lisp type system via the DEFTYPE macro. The
		  first time the compiler encounters a relevant type
		  specifier, the appropriate DFA is created, which may be an
		  $\Omega(2^n)$ operation, from which specific low-level code is
		  generated to match that specific expression. Thereafter,
		  when the type specifier is encountered again, the same
		  pre-generated function can be used. The code generated has
		  $\Theta(n)$ complexity at run-time. A complication of this
		  approach, which we explain in this report, is that to build
		  the DFA we must calculate a disjoint type decomposition
		  which is time consuming, and also leads to sub-optimal use
		  of TYPECASE in machine generated code. To handle this
		  complication, we use our own macro OPTIMIZED-TYPECASE in
		  our machine generated code. Uses of this macro are also
		  implicitly expanded at compile time. Our macro expansion
		  uses BDDs (Binary Decision Diagrams) to optimize the
		  OPTIMIZED-TYPECASE into low level code, maintaining the
		  TYPECASE semantics but eliminating redundant type checks.
		  In the report we also describe an extension of BDDs to
		  accommodate subtyping in the Common Lisp type system as well
		  as an in-depth analysis of worst-case sizes of BDDs. }
}