
4. Schedule Of The Congress
Wednesday: 1 invited talk, 12 contributed talks.
09.00-10.00 Jan Willem Klop: Infinity and Productivity
10.00-10.30 Ferruccio Guidi. Towards the Unification of Terms, Types and Contexts
10.30-11.00 COFFEE BREAK
11.00-11.30 Thorsten Altenkirch and Nicolas Oury. A core language for Dependently Typed Programming
11.30-12.00 Pierre Corbineau. Towards the end of the Streicher K axiom (as an axiom)
12.00-12.30 Stefan Berghofer and Christian Urban. Nominal Inversion Principles
12.30-13.00 Cezary Kaliszyk and Russell O'Connor. Computing with classical real numbers
13.00-14.30 LUNCH BREAK
14.30-15.00 Andrew Polonsky. Translation from First Order Logic to Coherent Logic
15.00-15.30 Zhaohui Luo. Modelling object-oriented features in intensional type theory
15.30-16.00 Michele Pagani. Between interaction and semantics: visible acyclic nets
16.00-16.30 Alexis Saurin. Interactive Proof Search: Toward Ludics Programming
16.30-17.00 COFFEE BREAK
17.00-17.30 Mauro Piccolo and Luca Paolini. Processing Programs: the linear case
17.30-18.00 Marco Gaboardi. Towards a Soft logic for PSPACE
18.00-18.30 Michele Basaldella and Claudia Faggian. Ludics with repetitions: Hyland-Ong style exponentials
Thursday: 1 invited talk, 12 contributed talks.
09.00-10.00 Andrea Asperti: About the formalization of some results by Chebyshev in number theory
10.00-10.30 Carsten Schürmann and Jeffrey Sarnat. Structural Logical Relations
10.30-11.00 COFFEE BREAK
11.00-11.30 Aarne Ranta, Björn Bringert and Krasimir Angelov. PGF: A Run-Time Format for Type-Theoretical Grammars
11.30-12.00 Ralph Matthes. A representation of untyped lambda-calculus with explicit flattening through a nested datatype
12.00-12.30 Herman Geuvers. Formalizing Hybrid Systems in Coq
12.30-13.00 François Garillot. A small reflection on group automorphisms
13.00-14.30 LUNCH BREAK
14.30-15.00 Aleksy Schubert. A characterisation of solutions for non-structural subtype constraints
15.00-15.30 Dulma Rodriguez. Algorithmic Subtyping for Higher Order Bounded Quantification Revisited
15.30-16.00 René David and Delia Kesner. Strong normalisation of MELL proof-nets
16.00-16.30 Andreas Abel. Normalization by Evaluation for Martin-Löf Type Theory
16.30-17.00 COFFEE BREAK
17.00-17.30 Christophe Raffalli. PML and strong normalisation
17.30-18.00 José Espírito Santo, Ralph Matthes and Luís Pinto. CGPS-translations for intuitionistic higher-order sequent calculi
18.00-18.30 Stéphane Lengrand. Functional-logic programming: a call for interest
18.30-19.00 BUSINESS MEETING
Friday: 1 invited talk, 12 contributed talks.
09.00-10.00 Gilles Dowek: TBA
10.00-10.30 Alexandre Miquel. Classical program extraction in the calculus of constructions
10.30-11.00 COFFEE BREAK
11.00-11.30 Luca Chiarabini. Program Development by Proof Transformation
11.30-12.00 Silvia Ghilezan, Hugo Herbelin and Alexis Saurin. A Uniform Approach to Call-by-Name and Call-by-Value Delimited Control
12.00-12.30 Maria Emilia Maietti and Giovanni Sambin. A minimalist two-level foundation for constructive mathematics
12.30-13.00 Aldo Ursini. Algebraic Types
13.00-14.30 LUNCH BREAK
14.30-15.00 Anton Setzer. The strength of Martin-Löf Type Theory with the logical framework
15.00-15.30 Jean-François Monin. On defining f91 in Coq
15.30-16.00 Yves Bertot and Ekaterina Komendantskaya. Inductive and Coinductive Components of Corecursive Functions in Coq
16.00-16.30 Marcin Benke. Multiple Recursion in Type Theory and Generic Programming
16.30-17.00 COFFEE BREAK
17.00-17.30 Johannes Kanig, Sylvain Conchon and Evelyne Contejean. Lightweight Equality Certificates
17.30-18.00 Evelyne Contejean. Coccinelle, a Coq library for rewriting
18.00-18.30 Zhaohui Luo and Serguei Soloviev. Coercive subtyping and multilingual semantics
19.30-20.00 Bus from Villa Gualino to Restaurant "CIACOLON"
20.00-23.30 Social dinner
Saturday: 5 contributed talks.
09.00-09.30 Guillaume Burel. Encoding Pure Type Systems in Superdeduction
09.30-10.00 Paul Brauner and Lisa Allali. A semantic normalization proof for a system with recursors
10.00-10.30 Yves Bertot and Vladimir Komendantsky. Fixed point semantics and partial recursion in Coq
10.30-11.00 COFFEE BREAK
11.00-11.30 Muhammad Humayoun. Software Specifications and Mathematical Proofs in Natural Languages
11.30-12.00 Davide Ancona, Giovanni Lagorio and Elena Zucca. Type inference for Java-like programs by coinductive logic programming
LIST OF PRESENTATIONS (sorted by topic)

Typed Lambda Calculus

- Ferruccio Guidi. Towards the Unification of Terms, Types and Contexts (typed lambda calculus: abbreviations)
  KEYWORDS: abbreviations, contexts as terms, types as terms, uniformly dependent types
  ABSTRACT: lambda-delta is a typed lambda calculus intended as the first step towards the verification of the following conjecture: the adoption of a single set of constructions for terms, types, and contexts (i.e. the ``contexts as types as terms'' paradigm) is compatible with the presence of a desirable metatheory. lambda-delta realizes the ``types as terms'' paradigm in full and features a uniform typing policy for all binders. Therefore this calculus is lambda-typed, but differs from the Automath-related calculi in that it provides for an abbreviation construction at the level of terms. In the ``propositions as types'' perspective it can encode the implicative fragment of predicative logic without quantifiers, because uniformly dependent types are allowed.
- Thorsten Altenkirch and Nicolas Oury. A core language for Dependently Typed Programming (typed lambda calculus: dependent types)
  KEYWORDS: functional programming, core language, recursion
  ABSTRACT: We discuss the design of a core language which should play a role for dependently typed programming similar to the one extensions of System F play for conventional functional languages like Haskell. The core language incorporates mutual dependent recursive definitions, type:type, pi and sigma types, and finite sets of labels. We show that standard constructions from Type Theory, including universes, can be easily encoded in our language. We address some important issues: having an equality checker which unfolds recursion only when needed, avoiding looping when typechecking "sensible" programs; and the simplification of type checking for eliminators like case by using equational constraints, allowing the flexible use of case expressions within dependently typed programming. We discuss some interesting extensions: lazy datatypes, and integrating equality types with constraints and reflection.
- Pierre Corbineau. Towards the end of the Streicher K axiom (as an axiom) (typed lambda calculus: Streicher K axiom)
  KEYWORDS: Type Theory, Streicher K axiom, equality, inductive types
  ABSTRACT: It is a well-known result that the Streicher K axiom, which states that all proofs of x=x are canonical, is independent of the rules of the Calculus of Inductive Constructions. This result has several annoying consequences for CIC: the heterogeneous equality (a.k.a. John Major equality) and the homogeneous equality are not equivalent, and the injectivity of the second projection does not hold for dependent pairs. In this talk, we present a simple extension of the Calculus of Inductive Constructions that makes the K axiom admissible by allowing more pattern-matching terms to be correctly typed. We show that, conversely, the new theory can be simulated by CIC extended with a constant representing the K axiom, equipped with a corresponding reduction rule, and hence is consistent. We finally give several examples of the possibilities offered by the new type system and discuss the adoption of the heterogeneous equality as the default equality for CIC.
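  For orientation, the K axiom under discussion can be stated as follows; this is only a sketch in Lean-style notation (the talk itself is set in CIC/Coq, so this rendering is illustrative):

  ```lean
  -- Streicher's K axiom, stated here as an axiom:
  -- any predicate on proofs of x = x that holds for rfl
  -- holds for every proof of x = x.
  axiom K : ∀ {A : Type} (x : A) (P : x = x → Prop),
    P rfl → ∀ (h : x = x), P h
  ```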
- Stefan Berghofer and Christian Urban. Nominal Inversion Principles (typed lambda calculus: alpha rule)
  KEYWORDS: nominal syntax, inductive definitions
  ABSTRACT: When reasoning about inductively defined predicates, such as typing judgements or evaluation relations, proofs are often done by case analysis on the last rule in a derivation. This case analysis involves solving equational constraints on the arguments of the inductively defined predicate at hand. This is well understood in the case where term constructors are injective. In the case of alpha-equivalence classes, whose constructors are not injective, this case analysis gives rise to annoying variable renamings. In this paper, we show that more convenient inversion principles can be derived, where one does not have to deal with explicit variable renamings. An interesting observation is that this fact relies on the inductive rules satisfying the variable convention compatibility condition, which was introduced to justify the admissibility of Barendregt's variable convention in rule inductions.
- Zhaohui Luo. Modelling object-oriented features in intensional type theory (typed lambda calculus: object-oriented features)
  KEYWORDS: intensional type theory, object-oriented programming
  ABSTRACT: Functional interpretations of object-oriented programming have been studied in the literature by means of type systems with powerful but questionable features, such as bounded quantification and manifest types, which have unwelcome consequences, including serious metatheoretic difficulties. In this talk, the modelling of OO features of class-based languages is studied in intensional type theory. In particular, differently from other existing approaches, classes are interpreted as types and their instances ("objects") as objects of the corresponding types. This is made possible in an intensional type theory by considering the parameterised family of inductive unit types and coercive subtyping. Based on such an interpretation, it is then possible to consider the verification of programs with object-oriented features in a proof assistant that implements an intensional type theory.
- Aleksy Schubert. A characterisation of solutions for non-structural subtype constraints
  KEYWORDS: subtypes, constraint solving, entailment
  ABSTRACT: The subtype entailment problem is the problem of deciding whether each solution of a given set of subtype inequality constraints is also a solution of a fixed inequality. This is a long-standing open problem, which has been tackled using various techniques. In this work, we present a direct characterisation of all the solutions of non-structural subtype constraint inequalities. We hope that this characterisation can lead to a better understanding of the subtype entailment problem.
Linear Logic

- René David and Delia Kesner. Strong normalisation of MELL proof-nets (linear logic)
  KEYWORDS: proof-nets, strong normalisation
  ABSTRACT: We prove strong normalisation for the multiplicative exponential fragment of linear logic (MELL) proof-nets. In contrast to known proofs of the same result, which are based on the combination of weak normalisation and confluence properties, our proof is inspired by the simple arithmetical proof of strong normalisation for the lambda-calculus, where a natural notion of commutation is used.
- Michele Pagani. Between interaction and semantics: visible acyclic nets (linear logic)
  KEYWORDS: Linear Logic, denotational semantics, weak normalization, finiteness spaces, differential nets
  ABSTRACT: I present a strict correspondence between cut-elimination, acyclicity and finiteness spaces in the framework of differential nets with promotion (DiffNets for short). DiffNets have been defined by Ehrhard and Regnier as an extension of multiplicative exponential linear logic, introducing the semantical constructions defined by Ehrhard in his finiteness spaces. Define the orthogonal P^⊥ of a net P with conclusion C as the set of all nets R with conclusion not-C such that the cut between C and not-C is weakly normalizable. Let P be a cut-free net with conclusion C; my results show that the following conditions are equivalent:
  1. P is interpreted as a finitary relation in the finiteness space associated with C;
  2. P is visible acyclic (a geometric condition more general than the usual Danos-Regnier correctness criterion);
  3. P^⊥ contains all visible acyclic nets with conclusion not-C.
  These equivalences generalize the main theorems of the theory of linear logic proof-nets, namely: Retoré's theorem (1 -> 2), the semantic soundness theorem (2 -> 1), the weak normalization theorem (2 -> 3), and Béchet's theorem (3 -> 2). Above all, they disclose a deep unity in DiffNets among normalization, visible acyclicity and finiteness spaces, which was present (even if never actually remarked) only in the multiplicative fragment of linear logic.
- Alexis Saurin. Interactive Proof Search: Toward Ludics Programming (linear logic)
  KEYWORDS: Ludics, (interactive) proof search, logic programming, interactive types
  ABSTRACT: Proof theory and computation are research areas with very strong relationships: new concepts in logic and proof theory often apply to the theory of programming languages. The use of proofs to model computation led to the modelling of two main programming paradigms: functional programming, based on proof normalization, and logic programming, based on proof search. This approach has been very successful, in the sense that it has made it possible to capture many programming primitives logically. Nevertheless, important fragments of real logic programming languages are still hardly understood from the logical point of view, and it has been found very difficult to give a logical semantics to control primitives. Girard introduced Ludics as a new theory to study interaction; in Ludics, everything is built on interaction, or in an interactive way. In this talk, we explain how and why Ludics could be considered a foundation for interactive proof search. We present the SLAM, an abstract machine (extending Faggian's Loci Abstract Machine for normalization of designs) for interactively constructing designs. We exemplify interactive proof search on several levels of examples (IPS in MALL, runs of the SLAM on simple designs, and toy programming examples). We then discuss the flexibility we obtain and how it can be used to capture control primitives from logic programming.
- Mauro Piccolo and Luca Paolini. Processing Programs: the linear case (linear logic)
  KEYWORDS: process calculi, linear programming languages, PCF, linear functions
  ABSTRACT: We study how to express the computational processes generated by programs of SlPCF, a very simple paradigmatic programming language conceived in order to program functions that are linear in a semantic sense. For this purpose, we adapt the Solos calculus introduced by Laneve, Parrow and Victor. We show a fully abstract embedding of SlPCF into Solos, using a typed version of barbed congruence.
- Marco Gaboardi. Towards a Soft logic for PSPACE (linear logic)
  KEYWORDS: linear logic, implicit computational complexity, type assignment systems
  ABSTRACT: In a previous work we introduced a type assignment system, named STA_B, that is correct and complete for PSPACE. STA_B is the first type system characterizing this class using the light-logics approach. The expressiveness of STA_B depends on the use, both at the type and at the language level, of Boolean constants managed additively. Soundness is obtained by giving a big-step abstract machine that performs program evaluation in polynomial space. Starting from the ideas underlying STA_B, we here exploit the proofs-as-programs correspondence to design a logic characterizing PSPACE. The logic system obtained by forgetting terms in STA_B is unsatisfactory: Boolean constants in Soft Linear Logic are redundant, and the STA_B evaluation mechanism does not directly translate into a standard cut-elimination procedure. Starting from these considerations, we propose a logic system extending Soft Linear Logic with an additive cut rule. This system is expressive enough to characterize all the PSPACE functions. The correctness of the cut-elimination process is ensured by the use of closed reduction.
- Michele Basaldella and Claudia Faggian. Ludics with repetitions: Hyland-Ong style exponentials (linear logic)
  KEYWORDS: Ludics, Linear Logic, game semantics, interactive types
  ABSTRACT: Ludics is a research program introduced by J.-Y. Girard whose aim is to provide an interactive foundation of logic. It can be seen as a form of Game Semantics whose approach is completely based on interaction: strategies are untyped (given on a universal arena) and they can interact with each other (by composing them). An interactive type is a set of strategies which react the same way to a set of tests (which are strategies themselves). The semantical types which are interpretations of logical formulas enjoy a closure property, called internal completeness, which is essential to prove full completeness. In this talk, we extend the original setting of Ludics by introducing the exponentials, in order to interpret proofs of polarized MELL (Multiplicative Exponential Linear Logic). To this purpose, we consider Hyland-Ong strategies, which naturally admit repetition of moves. Moreover, we show that we need to extend the universe of strategies by introducing non-uniform strategies: non-uniformity is essential to build exponential types interactively. We show that in our setting it is possible to define interactive types, prove internal completeness, and from this full completeness.
Typed Lambda Calculi: Program Extraction

- Alexandre Miquel. Classical program extraction in the calculus of constructions (classical logic)
  KEYWORDS: program extraction, classical logic, calculus of constructions, lambda-calculus, continuations
  ABSTRACT: In this talk, I will show that Krivine's techniques of classical realizability can be extended to the calculus of constructions with universes (CCw), which makes it possible to define an extraction mechanism from classical proofs (with proof irrelevance and the countable axiom of choice) formalized in Coq (in sort Prop). For that, I will recall the basics of classical realizability, and I will show how to extend Krivine's realizability model (initially defined for 2nd-order arithmetic) up to CCw, using a structure of Pi-set which is reminiscent of Hyland and Moggi's omega-sets, Streicher's D-sets and Altenkirch's lambda-sets. After having presented the basic extraction scheme, I will discuss possible ways to extend it to inductively defined datatypes, using optimized representations of data structures in the target language.
- Luca Chiarabini. Program Development by Proof Transformation (typed lambda calculus: program extraction)
  KEYWORDS: pruning, program extraction
  ABSTRACT: To "prune" a constructive proof means to drop some computational part of the proof depending, for example, on additional knowledge about the input data. Christopher Alan Goad (1980) showed that a program extracted from a pruned proof can perform better than a program extracted from the same, unpruned proof. We will show an implementation of the pruning method in the MINLOG proof assistant, with an application to the Bin Packing problem. At the end, some ideas on how to extend pruning will be presented.
- Silvia Ghilezan, Hugo Herbelin and Alexis Saurin. A Uniform Approach to Call-by-Name and Call-by-Value Delimited Control
  KEYWORDS: classical logic, observational completeness, delimited control
  ABSTRACT: We present a comparative study of the call-by-name and call-by-value variants of an extension of the lambda-mu-calculus with delimited control, introduced by Ariola, Herbelin and Sabry, covering in particular the questions of simple typing, operational semantics, and continuation-passing-style semantics. Finally, we discuss the relevance of this calculus as a uniform framework for representing four different calculi of delimited continuations: two CBV calculi (Danvy and Filinski's shift/reset; Sabry's shift/lazy reset) and two CBN calculi (a CBN variant of Danvy and Filinski's shift/reset; de Groote and Saurin's Lambda-mu-calculus, unexpectedly!).
Typed Lambda Calculi: Induction, Coinduction and Recursion

- Jean-François Monin. On defining f91 in Coq (typed lambda calculus: Prop-bounded recursive functions)
  KEYWORDS: Coq, general recursion
  ABSTRACT: We propose a way to define Prop-bounded recursive functions in the Calculus of Inductive Constructions, using only inductive types (i.e. without auxiliary coinductive types or Dybjer induction-recursion). The technique is illustrated on McCarthy's f91 function.
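  McCarthy's f91, mentioned above, is the classic example of a nested recursion whose termination is not structural and therefore defeats Coq's syntactic guard; a plain sketch of its definition (in Python, not the author's Coq development):

  ```python
  def f91(n: int) -> int:
      """McCarthy's f91: nested recursion, f91(n) = n - 10 if n > 100,
      and f91(f91(n + 11)) otherwise. Termination is not structural."""
      return n - 10 if n > 100 else f91(f91(n + 11))

  # For every input up to 100 the nested calls collapse to the constant 91.
  assert all(f91(n) == 91 for n in range(101))
  assert f91(150) == 140
  ```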
- Yves Bertot and Ekaterina Komendantskaya. Inductive and Coinductive Components of Corecursive Functions in Coq
  KEYWORDS: Coq, coinduction, productiveness, guardedness, accessibility predicates
  ABSTRACT: In Constructive Type Theory, recursive and corecursive definitions are subject to syntactic restrictions which guarantee termination for recursive functions and productivity for corecursive functions. However, many terminating and productive functions do not pass the syntactic tests. Bove proposed in her thesis an elegant reformulation of the method of accessibility predicates that widens the range of terminating recursive functions formalisable in Constructive Type Theory. In this talk, we pursue the same goal for productive corecursive functions. Notably, our method of formalisation of coinductive definitions of productive functions in Coq requires not only the use of ad-hoc predicates, but also a systematic algorithm that separates the inductive and coinductive parts of functions.
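  Productivity, as opposed to termination, means that every finite prefix of an infinite output can be computed in finite time. A guarded corecursive stream can be sketched with a generator (a hypothetical Python analogue, not the authors' Coq development):

  ```python
  from itertools import islice

  def fib():
      """A productive corecursive stream: each loop iteration is guarded
      by a yield, so every finite prefix is computable in finite time."""
      a, b = 0, 1
      while True:
          yield a           # produce one element before the next step
          a, b = b, a + b

  print(list(islice(fib(), 8)))  # -> [0, 1, 1, 2, 3, 5, 8, 13]
  ```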
- Marcin Benke. Multiple Recursion in Type Theory and Generic Programming
  KEYWORDS: type theory, recursion, generic programming
  ABSTRACT: Many functions (such as, e.g., boolean equality) are naturally defined by simultaneous recursion in two or more arguments. Yet most formulations of Type Theory allow only primitive recursion in one argument at a time. We discuss some ways in which multiple recursion can be justified and explained in the context of generic programming in Type Theory.
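  The boolean-equality example from the abstract can be made concrete: on unary naturals, equality recurses on both arguments at once. An illustrative Python encoding (Z and S are hypothetical constructor names, not from the talk):

  ```python
  # Unary naturals as nested tuples: Z is zero, S(n) is the successor of n.
  Z = ()
  def S(n): return (n,)

  def nat_eq(m, n) -> bool:
      """Boolean equality by simultaneous structural recursion on both arguments,
      the pattern that single-argument primitive recursion cannot express directly."""
      if m == Z and n == Z:
          return True
      if m == Z or n == Z:
          return False
      return nat_eq(m[0], n[0])     # recurse on the predecessors of both

  two, three = S(S(Z)), S(S(S(Z)))
  assert nat_eq(two, two) and not nat_eq(two, three)
  ```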
Typed Lambda Calculi: Type Assignment Algorithms

- Dulma Rodriguez. Algorithmic Subtyping for Higher Order Bounded Quantification Revisited (typed lambda calculus: type assignment algorithm)
  KEYWORDS: subtyping, bounded quantification
  ABSTRACT: Algorithmic subtyping for the system F-omega-sub of higher-order bounded polymorphism has been investigated extensively in the past, yielding positive results. The reason for this new revision of the topic is the development of new proof techniques in recent years, and the question whether the completeness of algorithmic subtyping for this system can be proved with these techniques, i.e. syntactically, without the construction of models. We provide a revision of the algorithm and prove directly, using lexicographic induction on kinds and derivations, its soundness, completeness and termination.
- Davide Ancona, Giovanni Lagorio and Elena Zucca. Type inference for Java-like programs by coinductive logic programming (typed lambda calculus: type assignment algorithm)
  KEYWORDS: class analysis, coinduction, logic programming, object-oriented programming
  ABSTRACT: Although coinductive logic programming (CoLP) has proved to have several applications, including for instance verification of infinitary properties, model checking and bisimilarity proofs, type inference by CoLP has not been properly investigated yet. In this presentation we show a novel approach to the problem of type inference for Java-like programs, based on CoLP and on the following generic scheme: first, the program P to be analyzed is translated into a corresponding logic program P' and a goal G; then, the solution of the type inference problem corresponds to finding a most general substitution theta such that G theta belongs to the greatest model of P', that is, the coinductive model of P'. Operationally, this corresponds to finding a co-SLD derivation of G in P', according to the operational semantics of CoLP recently defined by Simon et al. [ICLP06, ICALP07]. A complete example of instantiation of this scheme is considered for a simple object-oriented language with nominal subtyping, where type annotations are optional. In particular, the following two type inference problems are considered: (1) type inference for a complete program, that is, a collection of classes equipped with a main expression; (2) type inference for a library of classes, where no main expression is provided. In the final part of the talk we address some scalability issues of the approach, by sketching some possible instantiations able to deal with more expressive object-oriented languages.
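  The difference between the least (inductive) and greatest (coinductive) model that CoLP exploits is visible already for propositional programs: the greatest model keeps self-supporting atoms such as p in "p :- p." that the least model discards. A small illustrative sketch for the propositional case (names hypothetical, unrelated to the authors' system):

  ```python
  def gfp_model(rules, atoms):
      """Greatest-fixpoint (coinductive) model of a propositional logic program.
      rules maps each atom to a list of rule bodies (each body a set of atoms).
      Starting from all atoms, repeatedly remove any atom no body of which is
      still supported; what survives is the greatest model."""
      model = set(atoms)
      changed = True
      while changed:
          changed = False
          for a in list(model):
              if not any(body <= model for body in rules.get(a, [])):
                  model.discard(a)
                  changed = True
      return model

  # p :- p.   q :- r.   (no rule for r)
  rules = {"p": [{"p"}], "q": [{"r"}]}
  print(gfp_model(rules, {"p", "q", "r"}))  # keeps p; the least model would not
  ```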
Typed Lambda Calculi: Normalization

- Andreas Abel. Normalization by Evaluation for Martin-Löf Type Theory (typed lambda calculus: normalization)
  KEYWORDS: normalization by evaluation, Martin-Löf Type Theory
  ABSTRACT: The decidability of equality is proved for Martin-Löf type theory with a universe à la Russell and typed beta-eta-equality judgements. A corollary of this result is that the constructor for dependent function types is injective, a property which is crucial for establishing the correctness of the type-checking algorithm. The decision procedure uses normalization by evaluation, an algorithm which first interprets terms in a domain with untyped semantic elements and then extracts normal forms. The correctness of this algorithm is established using a PER model and a logical relation between syntax and semantics.
- Christophe Raffalli. PML and strong normalisation (typed lambda calculus: normalization)
  KEYWORDS: strong normalisation, lambda-calculus, types
  ABSTRACT: The development of the PML proof assistant is progressing: the language is now stable, one can start to write proofs, and the theoretical work is also maturing. In this talk, we will concentrate on the last aspect, mainly the strong normalisation proof for a subset of the language without fixpoints. The proof method is quite different from the usual ones, mainly because we use a kind of constraint checking algorithm (as opposed to the usual typing rules or constraint solving).
- José Espírito Santo, Ralph Matthes and Luís Pinto. CGPS-translations for intuitionistic higher-order sequent calculi (typed lambda calculus: strong normalization)
  KEYWORDS: CGPS-translation, strong normalisation, sequent calculus, higher-order logic
  ABSTRACT: In this talk we consider type theories defined in the sequent calculus format. Specifically, we want to obtain strong normalisation of these theories as a corollary of strong normalisation of the corresponding theories in the natural deduction format. The technique used is called continuation-and-garbage-passing-style translation, a refinement of CPS-translation achieving a strict simulation of reduction in the source calculus by reduction in the target calculus. We review the technique in the propositional / simply typed setting, and then we extend it to system F^omega and intuitionistic higher-order logic. Joint work with R. Matthes and L. Pinto.
Foundations of Constructive Mathematics

- Maria Emilia Maietti and Giovanni Sambin. A minimalist two-level foundation for constructive mathematics (foundations of constructive mathematics)
  KEYWORDS: intuitionistic logic, quotients, type theory
  ABSTRACT: We present a two-level theory to formalize constructive mathematics. One level is given by an intensional type theory previously introduced, called Minimal Type Theory. The other level is given by an extensional set theory, which is interpreted in the first one by means of a quotient model. This two-level theory has two main features: it is minimal among the most relevant foundations for constructive mathematics, and it fulfils the ``proofs-as-programs'' paradigm thanks to the way the two levels are linked.
- Aldo Ursini. Algebraic Types (foundations of typed lambda calculus)
  KEYWORDS: types, algebraic terms
  ABSTRACT: To a philosopher, a type is a 'coset'; to a logician, a type is a 'formula'; to a computer scientist, a type is a datatype. What, then, is a type to a poor mathematician? The proposal is: a type is a (FOL) term, i.e. an algebraic term. This is not so silly, as far as one can give an algebraic account of the crucial notions: an object a being of a given type t; such an object a being in normal form; a procedure for reducing such an object a to normal form; and some (denotational) semantics for types and objects. This can be achieved using (a common generalization of) classical algebraic notions (ideals of rings, normal subgroups, filters in Heyting or Boolean algebras, etc.) and equational logic. Thus we get a "one-dimensional world": objects, types and derivations are all just algebraic terms. Hence, reduction is handled via equational logic; and, at least in lucky cases, namely when there is a "generic" algebra available, denotational semantics is likewise dealt with via algebraic terms.
- Anton Setzer. The strength of Martin-Löf Type Theory with the logical framework (typed lambda calculus: foundations)
  KEYWORDS: Martin-Löf Type Theory, proof theory, universes, W-type, Mahlo universe
  ABSTRACT: Our previous investigations into Martin-Löf Type Theory were restricted to the version without the logical framework. This avoided having to give an interpretation of Set: all that was needed was to state when the judgement A : Set is correct, which was the case essentially when the interpretation of A is a partial equivalence relation. We show how to overcome this problem by interpreting Set and other higher types as proper set-theoretic classes. We then determine upper bounds for Martin-Löf Type Theory with one universe and the W-type, and for the same type theory with one Mahlo universe and the W-type.
- Guillaume Burel. Encoding Pure Type Systems in Superdeduction (typed lambda calculus: subsuming PTS into superdeduction)
  KEYWORDS: first-order logic, deduction modulo, pure type systems, supernatural deduction
  ABSTRACT: Superdeduction is a formalism closely related to deduction modulo which makes it possible to enrich a first-order deductive system, such as natural deduction or sequent calculus, with new inference rules automatically computed from the presentation of a theory. We give a natural encoding of every Pure Type System (PTS) into superdeduction by defining an appropriate first-order theory. We prove that this translation is correct and conservative, showing a correspondence between valid typing judgements in the PTS and provable sequents in the corresponding superdeductive system. This implies that superdeduction can easily be used as a logical framework, and leads to a better understanding of the implementation and the automation of proof search for PTS, as well as to more cooperation between proof assistants. This also opens new approaches to the normalization of PTS.
Implementations

- Johannes Kanig, Sylvain Conchon and Evelyne Contejean. Lightweight Equality Certificates (applications: implementing proof assistants)
  KEYWORDS: automation, computer-assisted reasoning
  ABSTRACT: We present a mechanism to automate equality proofs in skeptical proof assistants. The equality we consider is the combination of the pure theory of equality over uninterpreted symbols and a built-in decidable theory X. Our approach is intermediate between traditional complete proof reconstruction and verified provers. Reasoning about X is implemented independently in the proof assistant and in an external decision procedure. This enables both tools to interact only through lightweight traces about pure equalities, which are produced by the decision procedure and interpreted by the proof assistant. We validate our approach with the Coq proof assistant and our decision procedure CC(X), a congruence closure algorithm modulo a built-in theory. CC(X) is implemented as a module parameterized by X, and the Coq development follows the same modular architecture. Currently, we have instantiated X with the theory of linear arithmetic.
- Evelyne Contejean. Coccinelle, a Coq library for rewriting (applications: implementing proof assistants)
  KEYWORDS: term rewriting, termination, equational theory
  ABSTRACT: The main purpose of the Coq library Coccinelle is to model term rewriting and to use this modelling, together with automatically generated traces, in order to prove some key properties of rewrite systems, such as termination and local confluence. For the moment, Coccinelle contains: universal (first-order) term algebras; generic equational theories over terms; associativity and commutativity (AC) as a built-in equational theory; matching modulo AC; multisets (thanks to permutations modulo an arbitrary relation) and Dickson's lemma; RPO; and several criteria for termination of rewriting, such as general dependency pairs, modular termination, and innermost termination. The traces are produced by the automated prover CiME, and currently Coccinelle + CiME can automatically prove the termination of a significant part of the TPDB (Termination Problems Data Base).
Stéphane Lengrand. Functional-logic programming: a call for interest (implementing proof assistants: proof search). KEYWORDS: Proof search, Logic Programming, Metavariables, Unification, Workshop / project proposal
ABSTRACT: At TYPES'06 we presented a sequent calculus for Pure Type Systems, building on pioneering works by Pym, Dowek, Pinto, Muñoz and others. We have since turned it into a framework with metavariables representing the proofs to be found. The aim is to formalise the process of proof search in proof assistants based on PTS and to connect it to the computation paradigm of (pure) logic programming and its recent developments that build on, e.g., λProlog. They indeed share some standard techniques and issues, such as resolution and unification, but also face challenges such as object-level binders (cf. Nominal Isabelle) and searching for proofs by induction. Also, programming in proof assistants based on PTS can require tedious extra work (namely, the proving) from the programmer, for which it is highly desirable to raise the level of automation, inspired by computation in logic programming. While the addition of uncontrolled fixpoints (e.g., let rec in ML) breaks the Curry-Howard paradigm of programs as proofs, we instead push the idea of offering the user the convenience of metavariables, so as to use the search for type inhabitants as a programming paradigm, in the tradition of logic programming (but in more expressive type theories). We would like to put forward this research direction as a suggestion for a focused TYPES workshop and/or subproject, and thus call for interest.
Automated Deduction

Andrew Polonsky. Translation from First Order Logic to Coherent Logic (automated deduction). KEYWORDS: automated deduction, coherent logic, interactive theorem proving
ABSTRACT: Coherent Logic (CL) is a subsystem of FOL with a simple, natural proof theory. It is therefore possible to write an automated prover for CL which is quick, yet allows recovery of high-quality proof objects. These properties would make CL an ideal candidate for automating reasoning in interactive proof assistants, if an efficient translation from arbitrary formulas to coherent logic can be found. A naive approach, based on the method of semantic tableaux, has proved to be inadequate for practical problems. We present recent efforts to improve the translator, and the latest results on the performance of the prover as measured on the problems translated from the TPTP database.
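For orientation, the standard definition from the coherent-logic literature (not quoted from the abstract) takes coherent formulas to be universally closed implications between a conjunction of atoms and a disjunction of existentially quantified conjunctions of atoms:

```latex
% Shape of a coherent formula: the A_i are atoms and each C_j is a
% conjunction of atoms; all free variables are universally quantified.
\forall \vec{x}\,\bigl( A_1 \wedge \dots \wedge A_n
  \;\rightarrow\;
  \exists \vec{y}_1\, C_1 \vee \dots \vee \exists \vec{y}_m\, C_m \bigr)
```

It is this restricted shape that admits the simple forward-reasoning proof theory mentioned above, and the translation problem is to bring an arbitrary FOL formula into this form without blowing up the proof search.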
Formalization works

Carsten Schürmann and Jeffrey Sarnat. Structural Logical Relations (machine-checked proofs: logical relations). KEYWORDS: Twelf, logical frameworks
ABSTRACT: Tait's method (a.k.a. proof by logical relations) is a powerful proof technique frequently used for showing foundational properties of languages based on typed lambda-calculi. Historically, these proofs have been extremely difficult to formalize in proof assistants with weak metalogics, such as Twelf, which allows only $\forall\exists$-sentences; some would even go as far as to say impossible. And yet they are often straightforward in proof assistants with stronger metalogics. In my talk I propose a technique called "structural logical relations", settling this debate once and for all. In support of our claims, we give a Twelf-checked proof of the completeness of an equality algorithm for the simply typed $\lambda$-calculus, a potential problem for the POPLmark challenge.
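The abstract does not spell the method out; for readers unfamiliar with it, the textbook logical-relations definition for the simply typed lambda-calculus (our rendering, not the authors') shows where the problematic $\forall\exists$-alternation comes from:

```latex
% A unary logical relation defined by induction on types: related at a
% base type b iff (say) the term halts; related at an arrow type iff
% every related argument is mapped to a related result.
\begin{aligned}
  R_{b}(t)       &\iff t \text{ halts} \\
  R_{A \to B}(t) &\iff \forall u.\; R_{A}(u) \Rightarrow R_{B}(t\,u)
\end{aligned}
```

The nested quantifier in the arrow case is what exceeds the $\forall\exists$-fragment of Twelf's metalogic and motivates the structural encoding proposed in the talk.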
Ralph Matthes. A representation of untyped lambda-calculus with explicit flattening through a nested datatype (machine-checked proofs: injectivity of explicit flattening). KEYWORDS: nested datatype, iteration in Mendler's style, termination guarantee, explicit flattening, Coq theorem prover
ABSTRACT: Lambda-calculus can be turned into a monad by a notion of parallel substitution that becomes the Kleisli extension operation. Following Bellegarde and Hook (1994), one can also study monad multiplication directly, and this amounts to an operation of flattening terms, whose free variables are themselves terms, into just terms. Nested datatypes (Bird & Meertens 1998) are a good means of representing untyped lambda-calculus (Altenkirch & Reus 1999, Bird & Paterson 1999). This representation can be extended by an explicit form of flattening, hence with a new datatype constructor with a nested call to the datatype, yielding a truly nested datatype. For truly nested datatypes, no direct support by the Calculus of Inductive Constructions (CIC), as implemented by the theorem prover Coq, is available. In previous work with Abel and Uustalu (2005), the author proposed Mendler-style (Mendler 1987) iteration schemes that can work on truly nested datatypes and guarantee termination. In previous work, the author also introduced an extension of the CIC, the Logic of Natural Mendler-style Iteration (LNMIt), that allows in particular to prove naturality of functions defined by these iteration schemes, and gave an implementation of LNMIt within Coq (assuming impredicative Set and propositional proof irrelevance). However, LNMIt makes essential use of non-canonical elements that prevent the proof of basic properties such as injectivity of the datatype constructors for application, abstraction and explicit flattening. Through a relativization to hereditarily canonical elements, the problems with non-canonical elements can be overcome, yielding a truly nested datatype for which exhaustivity and injectivity of the datatype constructors can be proven. The whole development is done in Coq and uses propositional proof irrelevance in order to discard the proofs of hereditary canonicity (impredicativity of Set is not needed for the development within LNMIt).
The proof of injectivity of explicit flattening is surprisingly complicated at the moment, and feedback would be very welcome.
Herman Geuvers. Formalizing Hybrid Systems in Coq (machine-assisted proofs: hybrid systems). KEYWORDS: Hybrid Systems, Formal Verification, Proof Assistant
ABSTRACT: We are in the process of formalizing hybrid systems in Coq, with the end goal of verifying safety properties of these systems. To do this we use the abstraction method, which maps the (uncountable) continuous state space to a finite state space. We will outline the general approach and indicate how safety properties can be decided for simple examples, where the continuous behavior is of a relatively simple form. In doing so, we have to solve inequalities over real number expressions, for which we resort to a decision algorithm for exact reals, as implemented in CoRN.
François Garillot. A small reflection on group automorphisms (machine-assisted proofs: group automorphisms). KEYWORDS: proof assistants, finite groups, Coq
ABSTRACT: A Small Scale Reflection extension for the Coq proof assistant has recently been developed in synergy with a long-term effort to formalize the Feit-Thompson theorem. Their combination features a tactic language as well as a significant library on finite groups, and makes multiple uses of the canonical structure mechanism of Coq. We describe our experience using this extension to reason about group automorphisms, and an application to the study of cyclic groups. Our work brings together various components of the library we contribute to, and thus allows an analysis of the costs and benefits (both significant) of their design and compositionality.
Constructive real analysis

Cezary Kaliszyk and Russell S. S. O'Connor. Computing with classical real numbers (constructive real analysis). KEYWORDS: real numbers, Coq, CoRN
ABSTRACT: There are two Coq libraries that have a theory of the real numbers. The Coq standard library gives an axiomatic treatment of classical reals, while the CoRN library from Nijmegen defines constructive reals. We present a way of making these two libraries compatible by showing that their real number structures are isomorphic, assuming the classical axioms. To do this we show that the axioms of the classical Coq reals imply decidability of Pi-0-1 sentences, namely:
(forall n : nat, {P n} + {~P n}) -> {forall n, P n} + {~forall n, P n}.
This shows that classical logic leaks into the computational part. We then show that this isomorphism preserves some basic constants (0, 1, e, pi), arithmetic operations, and the basic transcendental functions (exponential, logarithm, sine, cosine). This allows us to use O'Connor's decision procedure for solving ground inequalities present in CoRN with the reals from the Coq standard library, which will be used for formalizing hybrid systems within Coq. It also makes it possible to use some of the theorems from the Coq standard library with the CoRN reals, assuming classical logic.
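The development is in Coq; purely for orientation, the quoted principle can be restated in Lean 4 as follows (the name pi01_decide is ours, and it is stated as an axiom since, as the abstract explains, it follows from the classical real axioms rather than being provable constructively):

```lean
-- Decidability of each instance P n lifts to decidability of the
-- universal (Π⁰₁) statement. Hypothetical name; axiom, not a theorem.
axiom pi01_decide (P : Nat → Prop)
    (h : ∀ n, Decidable (P n)) : Decidable (∀ n, P n)
```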
Semantics

Paul Brauner and Lisa Allali. A semantic normalization proof for a system with recursors (semantics: cut elimination). KEYWORDS: recursors, cut elimination, deduction modulo, superconsistency
ABSTRACT: Semantic methods have long been used to prove cut elimination theorems. It is only recently that they have been extended to prove strong normalization results, in particular for theories in deduction modulo. However, such semantic methods did not apply to systems with recursors, like Gödel's system T. We show in this paper how supernatural deduction provides a bridge between superconsistency of arithmetic and strong normalization of system T. We then generalize this result to a family of inductive types before discussing the dependent case.
Yves Bertot and Vladimir Komendantsky. Fixed point semantics and partial recursion in Coq (semantics: fixed points in Coq). KEYWORDS: Knaster-Tarski least fixed point theorem, partial recursive functions, denotational semantics, non-termination, program extraction
ABSTRACT: We propose to use the Knaster-Tarski least fixed-point theorem as a basis to define recursive functions in the Calculus of Inductive Constructions. This widens the class of functions that can be modeled in type-theory-based theorem proving tools to potentially non-terminating functions. This is only possible if we extend the logical framework by adding some axioms of classical logic. We claim that the extended framework makes it possible to reason about terminating or non-terminating computations, and we show that extraction can also be extended to handle the new functions.
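The Coq construction is not reproduced here; as a small finite-lattice analogue of the underlying idea (names and example ours), the least fixed point guaranteed by Knaster-Tarski for a monotone function on a powerset lattice can be reached by iterating from the bottom element:

```python
# Least fixed point of a monotone function on the powerset of a finite
# set, computed by iteration from the bottom element (empty set). On a
# finite lattice this iteration reaches the least fixed point whose
# existence the Knaster-Tarski theorem guarantees.

def lfp(f, bottom=frozenset()):
    x = bottom
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

# Example: reachable states of a tiny transition system, as the least
# fixed point of  X |-> {start} ∪ successors(X).
edges = {'a': {'b'}, 'b': {'c'}, 'c': {'c'}, 'd': {'a'}}

def step(x):
    succ = set().union(*(edges[s] for s in x)) if x else set()
    return frozenset({'a'} | set(x) | succ)

reachable = lfp(step)  # states reachable from 'a'
```

The point of the paper is precisely that in Coq the lattice is not finite and the function need not terminate, which is why classical axioms enter; this sketch only illustrates the fixed-point view of recursion.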
Linguistics

Muhammad Humayoun. Software Specifications and Mathematical Proofs in Natural Languages (linguistics). KEYWORDS: natural languages, formal languages, mathematical proofs, software specifications, translation
ABSTRACT: This project is an attempt to make a connection between formal and natural languages. We are developing a controlled natural language with large coverage, which will be good enough for writing software specifications and mathematical proofs interactively.
Aarne Ranta, Björn Bringert and Krasimir Angelov. PGF: A Run-Time Format for Type-Theoretical Grammars (linguistics). KEYWORDS: grammatical framework, grammar formalism, natural language processing, type theory
ABSTRACT: PGF (Portable Grammar Format) is a run-time format for type-theoretical grammars which is designed for easy and efficient embedding of grammar-based processing in programs written in different programming languages. The PGF format allows parsing, linearization, and type checking to be performed by a simple interpreter, which can be seamlessly used as a module in a larger program, such as a translator or a dialogue system. PGF interpreters already exist for C, C++, Haskell, Java, and JavaScript. The main way to produce grammars in the PGF format is compilation from higher-level grammar formalisms. As an example, a compiler from GF (Grammatical Framework) to PGF is outlined. The main goal of the present paper is to give a complete description of PGF, both its syntax and semantics, to support the implementation of PGF interpreters in new programming languages, and the compilation of other grammar formalisms to PGF.
Zhaohui Luo and Serguei Soloviev. Coercive subtyping and multilingual semantics. KEYWORDS: coercive subtyping, multilingual semantics
ABSTRACT: In [1] the use of coercive subtyping in a dependent type framework was suggested to explain certain lexical phenomena, such as homonymy within a single language, for example English. Now we generalize this approach to the correspondence between different languages, with applications to the problems of translation.
[1] Z. Luo and P. Callaghan. Coercive subtyping and lexical semantics (extended abstract). Logical Aspects of Computational Linguistics (LACL'98), Grenoble, 1998.
Invited Talks

Infinity and Productivity. Jan Willem Klop, VU University Amsterdam.
Abstract: In the setting of infinitary term rewriting, both for the first-order case and for the lambda calculus, we discuss the problem of productivity, which is for infinitary rewriting what termination is for finitary rewriting. While undecidable in general, methods can be given for subclasses of infinitary specifications. In particular, we treat such a method for infinite stream specifications, but we will also sketch an approach for a wider context. We illustrate this with our pet guiding examples, such as the Thue-Morse sequence, the Toeplitz sequence (and, as an aside, their connection to the snowflake of von Koch), and tree ordinals. These topics stem from joint work within our Infinity project, in particular with Dimitri, Joerg, Ariya, Roel, and Clemens. (See http://infinity.few.vu.nl/)
About the formalization of some results by Chebyshev in number theory. Andrea Asperti, Università di Bologna.
Abstract: We discuss the formalization, with the Matita Interactive Theorem Prover, of a few famous results by Chebyshev concerning the distribution of prime numbers, including a purely arithmetical proof of Bertrand's postulate.
