
Chapter 6 Continuations and the CPS transformation

6.1 Notion of continuation

Continuation of a program point.  Consider a point in the execution of a program. The continuation of this program point in this program is the sequence of computations that remain to be performed once the execution reaches the given point in order to complete the execution of the whole program.

Often, the continuation can be represented within the programming language, as a command or a function. (See examples below.) The continuation can also be a semantic object, as shown in section 6.3 below.

Examples of continuations in a statement-based language.  Consider the following four programs, written in an Algol-style imperative language with structured control:

p1 = s1(a); s2
p2 = if be then (b)s1 else (c)s2;  s3
p3 = while be do (d)s(e)
p4 = for i := 1 to 10 do (f)s

Program points are written (a), (b), etc., and mark either the beginning of the execution of a statement s, as in (a)s, or the end of the execution of s, as in s(b).

In program p1, the continuation of (a) is the statement s2: once execution reaches the end of s1, the sequence s1;s2 continues by executing s2, reaching the end of the program.

In p2, the continuation of (b) is s1; s3, and the continuation of (c) is s2; s3. When the then branch s1 of the conditional has been executed, we still need to execute s3. The same is true for the else branch s2.

In p3, the continuation of (d) is s; while be do s, since we need to execute the body s of the loop, then re-execute the while loop until be is false. For the same reasons, the continuation of (e) is the loop while be do s.

In p4, the continuation of (f) is s; while i ≠ 10 do (i := i + 1; s). We need to execute the body s of the counted loop, then re-execute the body for increasing values of i until i reaches the upper bound of 10.

Continuation of a subexpression.  In functional languages, where programs are expressions, program points usually mark the end of the evaluation of a subexpression. Thus, we think of the continuation of a subexpression e of a program p as a function

value of e ↦ value of p

This function captures the computations that remain to be done once we have the value of e to complete the evaluation of p and produce its value.

Examples of continuations in an expression-based language.  Consider a language of arithmetic expressions, with left-to-right evaluation, and the program

p = (1 + 2) × (3 + 4)

The continuation of 1 in p is the function λ v. (v + 2) × (3 + 4). Intuitively, after evaluating 1 to a value v, we still need to evaluate “plus 2”, then “times 3 + 4”.

The continuation of 1+2 in p is the function λ v. v × (3 + 4). Once we have evaluated 1+2, we still have to compute “times 3 + 4”.

The continuation of 3+4 in p is the function λ v. 3 × v, and not λ v. (1+2) × v. Since evaluation proceeds from left to right, we evaluate 1+2 before we start evaluating 3+4. Therefore, by the time we have evaluated 3+4 to v, we have already evaluated 1+2 to 3, and all that remains to compute is 3 × v.

Note that continuations depend on the evaluation strategy. If we had chosen to evaluate from right to left, the continuation of 1+2 in p would be λ v. v × 7, and that of 3+4 would be λ v. (1 + 2) × v.
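These continuations can be written directly as functions of a programming language. Here is a small sketch in Python (used purely for illustration; the chapter itself works with mathematical notation):

```python
# Continuations of subexpressions of (1 + 2) * (3 + 4), as Python functions,
# under left-to-right evaluation.
k1 = lambda v: (v + 2) * (3 + 4)   # continuation of the subexpression 1
k2 = lambda v: v * (3 + 4)         # continuation of 1 + 2
k3 = lambda v: 3 * v               # continuation of 3 + 4 (1 + 2 is already 3)

# Applying each continuation to the value of its subexpression
# yields the value of the whole program, 21.
assert k1(1) == 21
assert k2(1 + 2) == 21
assert k3(3 + 4) == 21
```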

Continuations and jumps.  The notion of continuation can help us understand the difference between jumps (goto, break, return, throwing an exception, etc.) and other commands (assignments, conditionals, etc.).

Consider a command (a)s(b) with its two program points, (a) at the beginning and (b) at the end. If s does not contain any jump, the continuation of (a) is always s followed by the continuation of (b). This is not the case if s is a jump:

If s is …     the continuation of (a) is …
goto L        the continuation of the label L
break         the continuation of the enclosing loop
return        the continuation of the latest call to the current function
throw         the continuation of the catch clause of the nearest try

For example, the continuation of (a) in

while be do (if be′ then (a)break;  s1);  s2

is just s2 and not s1; while…; s2.

6.2 Denotational semantics reminder

Denotational semantics, introduced by Christopher Strachey, Dana Scott, Christopher Wadsworth and others in the late 1960s, is a way to define the meaning of programs with mathematical precision. A denotational semantics associates a mathematical object with each syntactic element of a programming language (expression, statement, function, …) in a compositional manner.

For example, the meaning of an integer arithmetic expression e involving variables can be defined as a function from stores to integers, with stores associating integers to variables:

⟦ e ⟧ : Store → Int    where    Store = Variable →fin Int

This denotation function is defined by induction and case analysis on the structure of e. For example:

     
⟦ c ⟧ σ = c                                    if c is a constant
⟦ x ⟧ σ = σ(x)                                 if x is a variable
⟦ e1 + e2 ⟧ σ = (⟦ e1 ⟧ σ + ⟦ e2 ⟧ σ) mod 2³²
⟦ e1 × e2 ⟧ σ = (⟦ e1 ⟧ σ × ⟦ e2 ⟧ σ) mod 2³²

This example uses 32-bit unsigned arithmetic, so all arithmetic operations are done modulo 2³².
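This denotation function is easy to render executable. The following Python sketch (an illustration, not part of the text) represents ⟦ e ⟧ as a function from stores to integers, with stores as dictionaries:

```python
MOD = 2 ** 32   # 32-bit unsigned arithmetic: all operations taken mod 2^32

# each constructor returns the denotation of the expression: Store -> Int
def const(c):    return lambda store: c % MOD
def var(x):      return lambda store: store[x]
def add(d1, d2): return lambda store: (d1(store) + d2(store)) % MOD
def mul(d1, d2): return lambda store: (d1(store) * d2(store)) % MOD

# denotation of (1 + x) * 2, applied to the store {x -> 4}
e = mul(add(const(1), var("x")), const(2))
assert e({"x": 4}) == 10
```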

The denotational semantics of a statement s is, to a first approximation, a store transformer: a function mapping the store “before” the execution of s to the store “after” the execution of s.

⟦ s ⟧ : Store → Store

For example, consider the assignment x := x + 1. Started with store σ, it terminates with store σ′ = σ [x ↦ (σ(x) + 1) mod 2³²], that is, the store that maps x to (σ(x) + 1) mod 2³² (the initial value of x plus one) and all other variables y to σ(y).

The semantics of statements must also take into account divergence: statements that do not terminate, such as infinite loops (e.g. while true do skip). We write ⊥ (read: “bottom”) to denote divergence. The form of the denotation function is, therefore,

⟦ s ⟧ : Store → Store + { ⊥ }

Arithmetic expressions:   e  ::=  0 ∣ 1 ∣ 2 ∣ …               constants
                                ∣ x                           variables
                                ∣ e1 + e2 ∣ e1 × e2           sums, products
Boolean expressions:      be ::=  true ∣ false                constants
                                ∣ e1 = e2 ∣ e1 < e2           comparisons
                                ∣ not be ∣ be1 and be2        negation, conjunction
Commands:                 s  ::=  skip                        do nothing
                                ∣ x := e                      assignment
                                ∣ s1; s2                      sequence
                                ∣ if be then s1 else s2       conditional
                                ∣ while be do s               general loop
                                ∣ begin s1; L: s2 end         block-scoped label declaration
                                ∣ goto L                      jump to a label

Figure 6.1: The IMP toy imperative language: abstract syntax.

Consider the IMP toy language shown in figure 6.1. Here is the denotational semantics for its structured statements:

     
⟦ skip ⟧ σ = σ
⟦ x := e ⟧ σ = σ [x ↦ ⟦ e ⟧ σ]
⟦ s1; s2 ⟧ σ = ⊥                      if ⟦ s1 ⟧ σ = ⊥
             = ⟦ s2 ⟧ (⟦ s1 ⟧ σ)      otherwise
⟦ if be then s1 else s2 ⟧ σ = ⟦ s1 ⟧ σ    if ⟦ be ⟧ σ = true
                            = ⟦ s2 ⟧ σ    if ⟦ be ⟧ σ = false
⟦ while be do s ⟧ = lfp (λ d. λ σ.  if ⟦ be ⟧ σ then d(⟦ s ⟧ σ) else σ)

In the while case, lfp stands for the least fixed point of the given operator, with the ordering ⊥ < σ for all σ. This definition ensures that the denotation of a while loop is ⊥ if and only if the loop does not terminate, and that it satisfies the loop unrolling equation

⟦ while be do s ⟧ = ⟦ if be then (s; while be do s) else skip ⟧
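Operationally, the least fixed point for while amounts to unrolling the loop until be becomes false. Here is a Python sketch of these store-transformer denotations (an illustration; divergence shows up as actual non-termination, playing the role of ⊥):

```python
# stores are dicts; each constructor returns a store transformer Store -> Store
def skip():          return lambda s: s
def assign(x, e):    return lambda s: {**s, x: e(s)}
def seq(s1, s2):     return lambda s: s2(s1(s))
def if_(be, s1, s2): return lambda s: s1(s) if be(s) else s2(s)

def while_(be, body):
    # operational reading of the least fixed point: unroll until be is false
    def d(s):
        while be(s):
            s = body(s)
        return s
    return d

# x := 0; while x < 3 do x := x + 1
p = seq(assign("x", lambda s: 0),
        while_(lambda s: s["x"] < 3,
               assign("x", lambda s: s["x"] + 1)))
assert p({}) == {"x": 3}
```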

6.3 Continuation-based denotational semantics

At the end of section 6.1, we mentioned that we can think of a goto L statement as invoking the continuation of the label L. As a first step in this direction, we now make continuations explicit in the denotational semantics of statements. Instead of writing the denotation function in the so-called direct style

⟦ s ⟧ : Store → Res

let us write it in the so-called continuation-passing style

⟦ s ⟧ : Store → (Store → Res) → Res

where the second argument, of type Store → Res, is the continuation.

Here, we write Res for Store + {⊥}, that is, the result of an execution: either the final state, or divergence.

The form of the semantics is ⟦ s ⟧ σ k, where σ is the state “before” the execution of s, and k is the denotation of the continuation of s, as a function taking the state “after” the execution of s as its argument and returning the state at the end of the program execution. In other words, ⟦ s ⟧ computes the state “after” the execution of s and passes it to k, which produces the state at the end of the execution of the whole program. The definition of ⟦ s ⟧ becomes:

     
⟦ skip ⟧ σ k = k(σ)
⟦ x := e ⟧ σ k = k(σ [x ↦ ⟦ e ⟧ σ])
⟦ s1; s2 ⟧ σ k = ⟦ s1 ⟧ σ (λ σ′. ⟦ s2 ⟧ σ′ k)
⟦ if be then s1 else s2 ⟧ σ k = ⟦ s1 ⟧ σ k    if ⟦ be ⟧ σ = true
                              = ⟦ s2 ⟧ σ k    if ⟦ be ⟧ σ = false
⟦ while be do s ⟧ σ k = k′(σ)
    where  k′ = lfp (λ k′. λ σ′. if ⟦ be ⟧ σ′ then ⟦ s ⟧ σ′ k′ else k(σ′))

For base commands such as skip and assignments, the denotation function simply computes the store after the statement and passes it to the continuation k.

For a sequence, ⟦ s1; s2 ⟧ σ k computes the denotation of s1 with the initial store σ and the continuation λ σ′. ⟦ s2 ⟧ σ′ k, which computes the denotation of s2 with the store at the end of s1 as the initial store and k as the continuation. Note that the case where s1 diverges when started in σ needs no special handling: in this case ⟦ s1 ⟧ σ k′ produces ⊥ for any continuation k′, without ever calling k′.

For an if-then-else conditional, we simply recurse on the “then” or “else” branch, as appropriate, giving them the same initial store and the same continuation.

For a while be do s loop, the denotation of the loop body s must be given a continuation that re-executes the loop until be is false. To do this, we use a least fixed point to define a continuation k′ such that

k′(σ′) = k(σ′)          if ⟦ be ⟧ σ′ = false
       = ⟦ s ⟧ σ′ k′    if ⟦ be ⟧ σ′ = true

We then apply k′ to the store σ at the beginning of the loop. If the loop does not terminate, the least fixed point produces ⊥. If the loop terminates, the final store σ′ is passed to the initial continuation k.
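The continuation-passing denotations can likewise be sketched in Python (an illustration; names are ours). Each statement denotation takes a store and a continuation, and the inner function kp plays the role of the fixed point k′:

```python
# each constructor returns a denotation: Store -> (Store -> Res) -> Res
def skip():       return lambda s, k: k(s)
def assign(x, e): return lambda s, k: k({**s, x: e(s)})
def seq(s1, s2):  return lambda s, k: s1(s, lambda s2_: s2(s2_, k))
def if_(be, s1, s2):
    return lambda s, k: s1(s, k) if be(s) else s2(s, k)

def while_(be, body):
    def w(s, k):
        def kp(sp):
            # the continuation k': re-enter the loop while be holds
            return body(sp, kp) if be(sp) else k(sp)
        return kp(s)
    return w

# x := 1; while x < 4 do x := x * 2, run with the identity continuation
p = seq(assign("x", lambda s: 1),
        while_(lambda s: s["x"] < 4,
               assign("x", lambda s: s["x"] * 2)))
assert p({}, lambda s: s) == {"x": 4}
```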

6.4 Denotational semantics for labels and jumps

Now that the denotational semantics for a statement has access to the continuation for that statement, we can extend it to handle labels and goto jumps. To do so, we add a parameter to the denotation function: an environment ρ that associates continuations to labels. The continuation ρ(L) is the one that goto L calls, instead of the current continuation k.

More precisely, the denotation function now has the type

⟦ s ⟧ : Env → Store → (Store → Res) → Res

where Res = Store + {⊥}, as before, Env = Label →fin (Store → Res), and the argument of type Store → Res is the continuation.

For base commands, sequences, conditionals and loops, the denotational semantics is essentially unchanged: the environment parameter ρ is added and propagated to substatements.

     
⟦ skip ⟧ ρ σ k = k(σ)
⟦ x := e ⟧ ρ σ k = k(σ [x ↦ ⟦ e ⟧ σ])
⟦ s1; s2 ⟧ ρ σ k = ⟦ s1 ⟧ ρ σ (λ σ′. ⟦ s2 ⟧ ρ σ′ k)
⟦ if be then s1 else s2 ⟧ ρ σ k = ⟦ s1 ⟧ ρ σ k    if ⟦ be ⟧ σ = true
                                = ⟦ s2 ⟧ ρ σ k    if ⟦ be ⟧ σ = false
⟦ while be do s ⟧ ρ σ k = k′(σ)
    where  k′ = lfp (λ k′. λ σ′. if ⟦ be ⟧ σ′ then ⟦ s ⟧ ρ σ′ k′ else k(σ′))

The goto statement ignores the current continuation k. Instead, it restarts the continuation associated with L in ρ.

     
⟦ goto L ⟧ ρ σ k = ρ(L) σ

A definition of a label L in a begin … end block associates the continuation of L with L in the environment, then computes the denotation of the block in this extended environment.

     
⟦ begin s1; L: s2 end ⟧ ρ σ k = ⟦ s1; s2 ⟧ ρ′ σ k
    where  ρ′ = ρ [L ↦ k′]
    and    k′ = λ σ′. ⟦ s2 ⟧ ρ′ σ′ k

Note the mutual recursion between the extended environment ρ′ and the associated continuation k′. (This can be written more formally as a least fixed point.) It reflects the fact that s2 can contain jumps to L, thus creating loops, whose denotations necessarily involve least fixed points.
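Here is a Python sketch of the environment-passing semantics, with env mapping labels to continuations. The mutual recursion between ρ′ and k′ is tied by mutating the environment before the block body runs (an implementation choice of this sketch, not from the text):

```python
# denotations take (env, store, continuation); stores and envs are dicts
def skip():        return lambda env, s, k: k(s)
def assign(x, e):  return lambda env, s, k: k({**s, x: e(s)})
def seq(s1, s2):   return lambda env, s, k: s1(env, s, lambda s2_: s2(env, s2_, k))
def if_(be, s1, s2):
    return lambda env, s, k: s1(env, s, k) if be(s) else s2(env, s, k)

def goto(label):
    # goto ignores the current continuation k and restarts env[label]
    return lambda env, s, k: env[label](s)

def block(s1, label, s2):
    # begin s1; label: s2 end -- env2 and kp are mutually recursive;
    # we tie the knot by installing kp in env2 before running the block
    def b(env, s, k):
        env2 = dict(env)
        def kp(store_at_L):
            return s2(env2, store_at_L, k)
        env2[label] = kp
        return seq(s1, s2)(env2, s, k)
    return b

# begin x := 0; L: x := x + 1; if x < 3 then goto L else skip end
p = block(assign("x", lambda s: 0), "L",
          seq(assign("x", lambda s: s["x"] + 1),
              if_(lambda s: s["x"] < 3, goto("L"), skip())))
assert p({}, {}, lambda s: s) == {"x": 3}
```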

6.5 The CPS transformation

In denotational semantics, continuations are represented as mathematical objects, such as functions (in set theory) for simple languages like IMP, or continuous functions in Scott domains for richer languages involving functions or delayed computations as first-class values. Alternatively, for programming languages that support functions with free variables as first-class values, continuations can be represented as functions of the programming language itself. This way of materializing continuations is called the CPS transformation, where CPS stands for continuation-passing style. The CPS transformation has many uses: to support advanced functional programming idioms, as shown in chapter 7; to give semantics to control operators, the counterpart of jumps in functional languages, as shown in chapter 8; but also to specify the evaluation strategy followed by a functional language, as we now explain.

Form of the CPS transformation.  The transform of an expression e is a function from continuations to the final value of the program:

 
 
(value of e → final value) → final value

where the argument, a function from the value of e to the final value of the program, is the continuation.

More specifically, the transform of e is a function λ k . … that takes a continuation k as argument, computes the value v of e, and finishes by applying k to v, producing the final value of the program.

For example, the transform of a constant c is λ k . k c. Since c is already a value, it can be passed directly to the continuation.

The transformation produces functions that are in continuation-passing style (CPS): functions that do not return a value to their callers, but always terminate by calling the continuation given as an extra argument. The only way to get the value of an expression after CPS transformation is to apply the transformed expression to λ x . x, the identity continuation.
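A minimal illustration in Python: a CPS term is a function waiting for a continuation, and applying it to the identity continuation extracts its value.

```python
# CPS transform of the constant 42: a function that passes 42 to its continuation
t = lambda k: k(42)

# the only way to get the value out: apply the identity continuation
assert t(lambda x: x) == 42
```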

The CPS transformation applies equally well to untyped and typed languages. However, it is interesting to consider its effect on types. For instance, if an expression e has base type int, its transform has type (int → R) → R, where R is the “result type”, i.e. the type of the whole program, and int → R is the type of continuations that expect a value of type int. (See section 13.2 for the general case.)

CPS transformation for call by value.  Consider the functional language FUN introduced in section 5.4. Assume a call-by-value evaluation strategy: function arguments are reduced to values before being passed to the functions. The corresponding CPS transformation, written 𝒞, is defined by recursion and case analysis on the expression:

     
𝒞(c) = λ k . k c
𝒞(x) = λ k . k x
𝒞(λ x . e) = λ k . k (λ x . 𝒞(e))
𝒞(e1 e2) = λ k . 𝒞(e1) (λ v1 . 𝒞(e2) (λ v2 . v1 v2 k))
𝒞(if e1 then e2 else e3) = λ k . 𝒞(e1) (λ v1 . if v1 then 𝒞(e2) k else 𝒞(e3) k)

In call by value, variables are bound to values. Therefore, variables are transformed like constants: the transform of a variable x is λ k . k x.

Function abstractions λ x . e are also values, hence they can be passed directly to the continuation k. However, the body e of the function must be transformed to get a function λ x . λ k … that takes two arguments, the parameter x and the continuation k for the call.

For a conditional expression if e1 then e2 else e3, we need the Boolean value of e1. Therefore, the transformed term calls 𝒞(e1), giving it a continuation λ v1 … that binds the value of e1 to variable v1, then calls 𝒞(e2) k if v1 = true or 𝒞(e3) k if v1 = false.

Function applications e1 e2 follow a similar pattern: we need both the value of e1 (to know which function to call) and the value of e2 (since we use call by value). Therefore, the transform calls 𝒞(e1) with a continuation λ v1 … that binds the value of e1 to v1, then calls 𝒞(e2) with a continuation λ v2 … that binds the value of e2 to v2, and finally performs the actual application v1 v2 k, passing the continuation k of the application to the function.
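The rules above can be transcribed directly over a small tuple-encoded AST. The Python sketch below is an illustration (the constructor names are ours, not from the text); note that, exactly as the next paragraph discusses, it produces many administrative redexes:

```python
import itertools

_counter = itertools.count()

def fresh(prefix):
    # fresh names for continuation and value variables (illustrative)
    return f"{prefix}{next(_counter)}"

def cps(e):
    """Naive call-by-value CPS transform over tuple-encoded terms:
    ("const", c) | ("var", x) | ("lam", x, body) | ("app", e1, e2)."""
    k = fresh("k")
    if e[0] in ("const", "var"):
        # C(c) = lam k. k c   and   C(x) = lam k. k x
        return ("lam", k, ("app", ("var", k), e))
    if e[0] == "lam":
        # C(lam x. e) = lam k. k (lam x. C(e))
        _, x, body = e
        return ("lam", k, ("app", ("var", k), ("lam", x, cps(body))))
    # C(e1 e2) = lam k. C(e1) (lam v1. C(e2) (lam v2. v1 v2 k))
    _, e1, e2 = e
    v1, v2 = fresh("v"), fresh("v")
    return ("lam", k,
            ("app", cps(e1),
             ("lam", v1,
              ("app", cps(e2),
               ("lam", v2,
                ("app", ("app", ("var", v1), ("var", v2)), ("var", k)))))))

demo = cps(("const", 1))     # lam k0. k0 1
assert demo == ("lam", "k0", ("app", ("var", "k0"), ("const", 1)))
```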

Administrative reductions.  The CPS transformation above produces terms that contain obviously redundant computations and are more verbose than the terms we would write by hand. For example, in the case of an application f x where f and x are variables, we get

𝒞(f x) = λ k. (λ k1. k1 f) (λ v1. (λ k2. k2 x) (λ v2. v1 v2 k))

instead of the expected transform λ k . f x k.

This verbosity can be avoided by performing so-called administrative reductions, written →adm, on the result of the CPS transformation. These reductions are β-reductions that remove the “administrative redexes” introduced by the transformation. In particular, we have

(λ k. k a) (λ x. e)  →adm  (λ x. e) a  →adm  e [x := a]

whenever a is an atom: a variable or a value (constant or λ-abstraction).

Alternatively, we can use the one-pass CPS transformation of Danvy et al. (2007), which produces terms free of administrative redexes. This one-pass transformation is defined by two functions, Ψ(a) for atoms a and 𝒞1(e, k) for general expressions e, where k is an atom representing the continuation. We write s for a “serious” expression, i.e. an expression that is not an atom.

     
Ψ(c) = c
Ψ(x) = x
Ψ(λ x . e) = λ x . λ k . 𝒞1(e, k)
𝒞1(a, k) = k Ψ(a)
𝒞1(a1 a2, k) = Ψ(a1) Ψ(a2) k
𝒞1(a1 s2, k) = 𝒞1(s2, λ v2 . Ψ(a1) v2 k)
𝒞1(s1 a2, k) = 𝒞1(s1, λ v1 . v1 Ψ(a2) k)
𝒞1(s1 s2, k) = 𝒞1(s1, λ v1 . 𝒞1(s2, λ v2 . v1 v2 k))
𝒞1(if a then e1 else e2, k) = if Ψ(a) then 𝒞1(e1, k) else 𝒞1(e2, k)
𝒞1(if s then e1 else e2, k) = 𝒞1(s, λ v . if v then 𝒞1(e1, k) else 𝒞1(e2, k))
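The one-pass transformation can be transcribed the same way. This Python sketch (application fragment only; tuple constructors and fresh-name generation are our assumptions) distinguishes atoms from serious expressions, as in the equations above:

```python
import itertools

_ctr = itertools.count()

def gensym(prefix="v"):
    # fresh variable names for continuation parameters (illustrative)
    return f"{prefix}{next(_ctr)}"

def atom(e):
    # atoms: constants, variables, lambda-abstractions
    return e[0] in ("const", "var", "lam")

def psi(a):
    # Psi: translate an atom; a lambda gets an extra continuation parameter
    if a[0] in ("const", "var"):
        return a
    _, x, body = a                       # ("lam", x, body)
    k = gensym("k")
    return ("lam2", x, k, c1(body, ("var", k)))

def c1(e, k):
    # C1(e, k): one-pass CPS transform; k is a term standing for an atom
    if atom(e):
        return ("app", k, psi(e))        # C1(a, k) = k Psi(a)
    _, e1, e2 = e                        # ("app", e1, e2)
    if atom(e1) and atom(e2):
        return ("app3", psi(e1), psi(e2), k)
    if atom(e1):
        v2 = gensym()
        return c1(e2, ("lam", v2, ("app3", psi(e1), ("var", v2), k)))
    if atom(e2):
        v1 = gensym()
        return c1(e1, ("lam", v1, ("app3", ("var", v1), psi(e2), k)))
    v1, v2 = gensym(), gensym()
    return c1(e1, ("lam", v1,
              c1(e2, ("lam", v2, ("app3", ("var", v1), ("var", v2), k)))))

# C1(f x, k) = f x k: no administrative redex is produced
r = c1(("app", ("var", "f"), ("var", "x")), ("var", "k"))
assert r == ("app3", ("var", "f"), ("var", "x"), ("var", "k"))
```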
           

Examples of call-by-value CPS transformations.  In the following examples, f, g, h, x, y are variables. The results are shown after administrative reductions, as indicated by the =adm sign. We have:

𝒞(f (g x)) =adm λ k. g x (λ v. f v k) 

Note how the CPS transformation makes the order of evaluation explicit: first, the argument g x is computed, and bound to variable v; then, f is applied to v. In the same vein,
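We can check this transform by hand in Python, taking CPS versions of f and g (the definitions are illustrative):

```python
# CPS versions of f and g: each takes a value and a continuation
g = lambda x, k: k(x + 1)      # g adds 1 (illustrative)
f = lambda v, k: k(v * 2)      # f doubles (illustrative)

# C(f (g x)) =adm lam k. g x (lam v. f v k), here applied with x = 10:
t = lambda k: g(10, lambda v: f(v, k))

assert t(lambda r: r) == 22    # g runs first (11), then f (22)
```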

𝒞(f (if y then g x else h x)) =adm  λ k.  (λ k′. if y then g x k′ else h x k′)   (λ v. f v k)

Here, the intermediate continuation λ v. f v k is bound to a variable k′ and used for the calls to g and h, depending on the Boolean y.

For a more realistic example, suppose the FUN language is extended with recursive function definitions, written let rec. Here is the factorial function and its CPS transformation:

𝒞(let rec fact = λ n. if n = 0 then 1 else n × fact(n−1))
  =adm  let rec fact = λ n. λ k.
          if n = 0 then k 1 else fact (n−1) (λ v. k (n × v))

Note that unlike the original function, the transformed function is tail-recursive: the else branch directly calls fact with the argument n−1 and a continuation that will multiply the result by n before passing it to the original continuation k.
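The transformed factorial runs as ordinary Python if we write the continuations as closures (a direct transcription, with Python's n * v for ×):

```python
def fact(n, k):
    # CPS factorial: tail-recursive; the pending multiplication by n
    # moves into the continuation instead of staying on the call stack
    if n == 0:
        return k(1)
    return fact(n - 1, lambda v: k(n * v))

assert fact(6, lambda x: x) == 720
```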

CPS transformation for call by name.  As an alternative to call by value, we can also evaluate FUN programs using call by name: the argument e to a function application is passed to the function unevaluated and bound to the function parameter x; only when the value of x is needed is the argument e evaluated; moreover, there is no sharing of evaluations, so e is re-evaluated every time x is needed. This change of evaluation strategy is reflected in the call-by-name CPS transformation 𝒩:

     
𝒩(c) = λ k . k c
𝒩(x) = x
𝒩(λ x . e) = λ k . k (λ x . 𝒩(e))
𝒩(e1 e2) = λ k . 𝒩(e1) (λ v1 . v1 𝒩(e2) k)
𝒩(if e1 then e2 else e3) = λ k . 𝒩(e1) (λ v1 . if v1 then 𝒩(e2) k else 𝒩(e3) k)

The cases are similar to those of the call-by-value transformation 𝒞, except for the cases of variables and function applications. For a function application e1 e2, the argument e2 is not evaluated; instead, its CPS transformation 𝒩(e2) is passed to the function. This delays the evaluation of e2, since the CPS transformation is a function λ k … that waits for a continuation k before proceeding with the evaluation of e2.

Since function arguments are passed as CPS-transformed terms, all variables are also bound to CPS-transformed terms. Therefore, the transform of a variable x is just x. Compare this to the call-by-value translation. In call by value, a variable x is bound to a value, so its CPS transformation λ k. k x takes a continuation k and immediately applies it to the value x. In call by name, a variable x is bound to a CPS-transformed term, so it must be applied to a continuation k in order to produce the value of x. Therefore, the CPS translation of x is λ k. x k, which can be simplified to x.
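The difference is observable. In the Python sketch below (names are ours), a call-by-name function receives its argument as a CPS term and forces it at each occurrence, so the argument is evaluated twice:

```python
evaluations = []

def arg(k):
    # N-transformed argument: a suspended computation, re-run at each use
    evaluations.append("forced")
    return k(21)

def double(x, k):
    # a call-by-name function: x is a CPS term, forced at each occurrence
    return x(lambda v1: x(lambda v2: k(v1 + v2)))

assert double(arg, lambda r: r) == 42
assert len(evaluations) == 2    # no sharing: the argument was evaluated twice
```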

Examples of call-by-name CPS transformations.  The function application f x, where f and x are variables, becomes

𝒩(f x) =adm λ k. f (λ v. v x k)

Even though f is a variable, it represents an arbitrary computation, so it must be evaluated to a value v before v can be applied. The argument x is passed as is, since it is already a CPS-transformed term.

Consider the more complex application f (g x). The CPS transformation is

𝒩(f (g x)) =adm λ k. f (λ v. v (λ k′. g (λ v′. v′ x k′)) k)

Again, f is applied to a continuation in order to recover its value v. The argument g x is passed unevaluated to v, in CPS-transformed form.

6.6 Semantic properties of the CPS transformation

Shape of CPS terms.  The terms produced by a CPS transformation have a specific shape, which is described by the following grammar:

    Atoms:            a  ::=  x ∣ c ∣ λ v . b ∣ λ x . λ k . b
    Function bodies:  b  ::=  a ∣ a1 a2 ∣ a1 a2 a3

We have functions with one argument (continuations) or two arguments (CPS-transformed functions), whose bodies consist of function applications to one or two arguments. In the applications, the function parts and the argument parts are atoms, i.e., variables, constants or function abstractions.

Note that function applications are always in tail position inside CPS functions. (Provided we treat an application to two arguments a1 a2 a3 as a single application, not as two curried applications (a1 a2) a3.) Consequently, CPS-transformed terms can be evaluated without using a call stack: the continuations passed to function calls play the role of stack frames in a stack-based evaluator.
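This observation underlies trampolined implementations: since every call in a CPS term is a tail call, an evaluator can return thunks instead of calling, and a small driver loop replaces the call stack. A Python sketch (Python does not optimize tail calls, so this is a common way to run deep CPS recursions):

```python
import math

def fact(n, k):
    # every call is a tail call; return a thunk instead of calling directly
    if n == 0:
        return lambda: k(1)
    return lambda: fact(n - 1, lambda v: (lambda: k(n * v)))

def run(t):
    # the trampoline: heap-allocated continuations replace stack frames;
    # this relies on the final value (an integer) not being callable
    while callable(t):
        t = t()
    return t

assert run(fact(5, lambda x: x)) == 120
# recursion depth stays bounded, well past Python's ~1000-frame limit:
assert run(fact(3000, lambda x: x)) == math.factorial(3000)
```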

For any source term e, the CPS transformations 𝒞(e) and 𝒩(e) are atoms, and their applications to the initial continuation 𝒞(e) (λ x . x) and 𝒩(e) (λ x . x) are bodies. With the one-pass, optimized CPS transformation, 𝒞1(e, k) is a body if k is an atom.

Indifference to the evaluation strategy.  One of the motivations of Plotkin (1975) and others to study CPS transformations formally was to make evaluation strategies such as call by name and call by value explicit in the transformed terms, instead of relying on an operational semantics to implement the desired evaluation strategy. Indeed, we have the following result, called the indifference property by Plotkin (1975):

Theorem 1 (Indifference) A CPS-transformed program evaluates identically in call by name, in call by value, and in any weak reduction strategy.

We show that this result holds for any closed function body b. It has the form a1 or a1 a2 or a1 a2 a3, where the ai are closed atoms, i.e., values. Therefore, no weak reductions can take place in the ai, and the only reductions that can take place are at the top of b:

     
   (λ v . b′) a2 → b′ [v := a2]          
(λ x . λ k . b′) a2 a3 → (λ k . b′ [x := a2]) a3 → b′ [x := a2, k := a3]          

Since the arguments are values, these beta reductions are valid both in call by name and in call by value. Since atoms are stable under substitution of atoms for variables, the result of the reduction is a closed function body.

This shows that only one weak reduction sequence is possible for a closed function body. Now, if e is a closed source term, its CPS transformation applied to the initial continuation, 𝒞(e) (λ x . x) or 𝒩(e) (λ x . x), is a closed function body. The expected result follows.

Semantic preservation.Another important result of Plotkin (1975) is that CPS transformations preserve the meaning of programs: applying the call-by-value CPS transformation 𝒞(e) of an expression e to the initial continuation λ x . x produces the same result as evaluating the original expression e using call by value semantics. This result also holds for call by name.

Theorem 2 (CPS transformation preserves semantics)  For all terms e and constants c,

  1. e →v* c if and only if 𝒞(e) (λ x . x) →* c;
  2. e →n* c if and only if 𝒩(e) (λ x . x) →* c.

In the statement above, →v is call-by-value reduction, →n is call-by-name reduction, and → is weak reduction without a fixed strategy.

We first outline a proof of semantic preservation for the one-pass, optimized call-by-value CPS transformation of Danvy et al. (2007), which avoids some difficulties with administrative reductions. We first show that source-level call-by-value reductions are simulated by weak reductions of the CPS transform: if e →v e′, then 𝒞1(e, k) →+ 𝒞1(e′, k) for any continuation k. This holds for a head reduction (λ x . e) v →v e [x := v], since

     
𝒞1((λ x . e) v, k) = (λ x . λ k′ . 𝒞1(e, k′)) Ψ(v) k
                   → (λ k′ . 𝒞1(e, k′) [x := Ψ(v)]) k
                   → 𝒞1(e, k′) [x := Ψ(v), k′ := k] = 𝒞1(e [x := v], k)

The result extends to reductions under a context C[e] →v C[e′] by induction and case analysis on the structure of C.

A similar result holds for source terms that are stuck, i.e. that are not values and cannot be reduced, such as c v (a constant applied as if it were a function): if e is stuck, then 𝒞1(e, k) is stuck.

Putting it all together, we have three cases:

  1. if e →v* v and v is a value, then 𝒞1(e, λ x. x) →* Ψ(v);
  2. if e diverges (reduces infinitely), then 𝒞1(e, λ x. x) diverges;
  3. if e goes wrong (reduces to a term that is stuck), 𝒞1(e, λ x. x) goes wrong.

Property (1) proves that e →v* c implies 𝒞1(e, λ x . x) →* c. For the converse implication, we use the indifference theorem: if 𝒞1(e, λ x. x) reduces to c, it cannot reduce to any other value, nor diverge, nor go wrong; hence, it must be the case that e →v* c, otherwise one of (1), (2) or (3) would be false.

The proof sketched above does not apply directly to the standard call-by-value CPS transformation 𝒞(e), since in general e →v e′ does not imply 𝒞(e) k →+ 𝒞(e′) k. To get around this problem, Plotkin (1975) defines e : k, the colon translation, which is 𝒞(e) k where some well-chosen administrative redexes have been reduced. We have 𝒞(e) k →* e : k, and e : k →+ e′ : k whenever e →v e′. Using these properties, we can show the analogues of properties (1), (2) and (3) above, and conclude.

6.7 Further reading

The textbook by Nielson and Nielson (2007) gives a nice introduction to operational and denotational semantics for first-order languages such as IMP. The textbook by Winskel (1993) goes further, including the elements of domain theory that are needed to give denotational semantics to higher-order languages such as FUN.

Reynolds (1993) gives a historical account of the notion of continuation, showing how it has been discovered several times, both to give semantics to Algol 60’s jumps and as an implementation technique for functional languages.

Chapters 5 and 6 of Friedman and Wand (2008) give a good introduction to continuations in functional languages and to the CPS transformation. Danvy and Filinski (1992) study CPS transformations in depth.
