A Scheme Timeline

Sometimes knowing the history behind a technology and its community makes the technology itself easier to understand. A few months ago I wanted to better understand the history of the various reports on Scheme, so I made some notes in the form of a timeline.

It includes everything up to R6RS, along with the IEEE standard and the publication of SICP and HtDP. Obviously a lot more could be added, but I wasn’t trying to compile an exhaustive history. All of the sources for the notes can be found in the dot-file below.

The timeline is here, and the source file is here.

What does Scheme do well?

What does Scheme do well? What does Scheme (not just RnRS Scheme, but Scheme more broadly) have that no one else has? It isn’t lexical scope – everyone but elisp has that, these days. It isn’t real first-class functions – lots of languages have that too. It isn’t proper tail calls – ML does that right. It’s not advanced compilers for functional languages – those are a dime a dozen. It’s not even first-class control, which a few other languages have. And the REPL – even Python has that.

It’s macros. Since 1986, Scheme has had a macro system that other languages can’t compete with, and haven’t succeeded in matching in the last 23 years. And over those 23 years, Scheme hasn’t stood still – Schemers have developed a vastly more expressive system in which huge numbers of new and powerful language extensions are possible.

So I say, press our advantage. Improve the macro system. Show the programming language world what the real power of “a very small number of rules for forming expressions” is.

That’s not to say that we should neglect the other things that make Scheme a high-quality programming language. They are important, and Scheme needs a community that cares about all her aspects. But this is not the tail wagging the dog – it’s knowing where our strengths lie.

– Sam T.H.

(via R6RS)
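To make the macro claim concrete, here is a minimal sketch (assuming any R6RS-style syntax-rules implementation) of defining a new control construct, while, as an ordinary user-level macro:

```scheme
;; A minimal sketch: a new control construct defined as a library
;; macro, with no changes to the compiler.
(define-syntax while
  (syntax-rules ()
    ((_ test body ...)
     (let loop ()
       (when test
         body ...
         (loop))))))

;; Usage: count down from 3.
(define n 3)
(while (> n 0)
  (display n)
  (set! n (- n 1)))
;; prints 321
```

This is the tame end of the spectrum; the point of the quoted passage is that the same mechanism scales up to whole language extensions.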

How letrec differs from letrec* in practice

I was asking about the difference in practice between letrec and letrec* here and got a bunch of great replies.

I didn’t understand why you would bother to use letrec at all, when you could only expect it to work predictably for binding mutually recursive lambda expressions, given that the init expressions are evaluated in some unspecified order. Thanks to everyone’s feedback I realized that the answer lay in my confusion: the value of letrec is precisely that it signals that evaluation order is not a concern. That is the whole point of providing both letrec and letrec*: the former tells the reader that the bindings do not depend on evaluation order, the latter that they do. Perhaps this is a big “d’oh!” on my part, but I am glad that I asked.

Reviewing the R6RS rationale here, one finds that this was indeed the intent:

9.1 Unspecified evaluation order

The order in which the subexpressions of an application are evaluated is unspecified, as is the order in which certain subexpressions of some other forms such as letrec are evaluated. While this causes occasional confusion, it encourages programmers to write programs that do not depend on a specific evaluation order, and thus may be easier to read. Moreover, it allows the programmer to express that the evaluation order really does not matter for the result. A secondary consideration is that some compilers are able to generate better code if they can choose evaluation order.
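To make the distinction concrete, a small sketch of both forms in R6RS:

```scheme
;; letrec: mutually recursive procedures. No right-hand side is
;; *called* at binding time, so evaluation order cannot matter.
(letrec ((my-even? (lambda (n) (if (zero? n) #t (my-odd? (- n 1)))))
         (my-odd?  (lambda (n) (if (zero? n) #f (my-even? (- n 1))))))
  (my-even? 10))  ; => #t

;; letrec*: the second init expression uses the *value* of the first,
;; so it needs the left-to-right evaluation that letrec* guarantees.
;; Under plain letrec the same program's behavior is unspecified.
(letrec* ((x 2)
          (y (* x x)))
  y)  ; => 4
```

Reading letrec thus tells you the bindings are order-independent; reading letrec* warns you they are not.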

We need an SRFI for Design by Contract

Two R6RS implementations are mentioned here, as is one for PLT; I am also pretty sure that one of the posters has an implementation that he uses only for his own code.

It doesn’t make sense for people to waste time reinventing the wheel. It would be great if everyone could converge on a single, portable, performant implementation.
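In the meantime, here is a rough sketch of what a procedure contract might look like using only plain higher-order procedures. The names here (contract, checked-sqrt) are hypothetical illustrations, not taken from any of the implementations mentioned above:

```scheme
;; A hypothetical contract combinator: wrap a procedure with a
;; precondition check on its arguments and a postcondition check
;; on its result.
(define (contract pre post proc)
  (lambda args
    (unless (apply pre args)
      (error 'contract "precondition violated" args))
    (let ((result (apply proc args)))
      (unless (post result)
        (error 'contract "postcondition violated" result))
      result)))

;; Example: a checked square root over nonnegative reals.
(define checked-sqrt
  (contract (lambda (x) (and (real? x) (>= x 0)))  ; precondition
            (lambda (r) (>= r 0))                  ; postcondition
            sqrt))

(checked-sqrt 16)  ; => 4
```

A real SRFI would of course also need blame assignment, contracts on higher-order arguments, and decent error reporting, which is exactly why a shared implementation would be valuable.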

Is it important to understand continuations conceptually?

Here I asked:


I get the feeling that it is important to understand continuations
conceptually, and specifically not in terms of their implementation.

The TSPL book, for example, explains them conceptually; and I often
see it quoted word for word in explanations of continuations.

However, articles that describe continuations in terms of the stack
far outnumber those that explain them conceptually.

Does describing them in terms of their implementation do the reader
a disservice? Will it be an impediment later on?

Here was one reply:


Grant, you're correct in that an understanding of one particular
implementation technique for a linguistic construct causes huge and
ubiquitous misunderstandings.

Procedures and procedure calls are the examples that come to mind.
Those things were explained via a stack-based implementation in the
1950s and 1960s although [&] abstract explanations in terms of 8th
grade algebra all the way to Lambda Calculus had been around for, oh,
a while. [*] As a result, procedure calls had been considered
expensive and a thing to be avoided. Steele pointed out our
misunderstanding of this issue AND YET, to this day, people don't
implement procedures and procedure calls properly and we are still
suffering from this perception. People still write huge procedures to
avoid another call, and people still want to see complete stack
traces in their debuggers for their function calls. So the sentence
labeled with [*] uses the incorrect tense. It should use "have been
and are" instead. It is one sad state of affairs. Of course, this
just refers back to the sentence with [&]: people who design and
implement programming languages do not wish to study mathematical
models of PLs, can't and won't. But they sure want credit on all
fronts. That's why the problem is pervasive.

Continuation objects in Scheme are special-purpose procedures. That
is, they are procedural representations of the 'rest of the
computation with respect to some expression evaluation.' So the story
is related but fortunately (or whatever) doesn't have as much of an
impact. Continuations aren't as useful as procedures. Yes, there are
kids out there who think that if you don't implement continuations
with fast code etc your Scheme implementation isn't worth much. But
those are just misled.

Continuations can be implemented with at least four basic techniques
that I can remember right now. Clinger et al (a nice scientific paper
from the 80s revised in the 90s) lays out a beautiful and well-
presented comparison of such techniques. I recommend reading it. And
of course Dybvig/Hieb's lazy stack copy technique in the original
paper. Of course in SML/NJ callcc = cons. So that's that.

Continuations can be understood as all kinds of abstract beasts, with
little more knowledge than 8th grade algebra or Lambda Calculus. But
that is just an abstract form of 'how'. I have spent a good deal of
time on this question.

Finally, continuations can be understood from a 'pragmatic'
perspective ('what are they useful for, and how are they used'). For
this question, I recommend two books and a paper:
  -- Shriram's PLAI
  -- Springer and Friedman's "Scheme and the Art of Programming"
  -- Friedman's POPL talk from 1988 on "Applications of Continuations"

Good luck -- Matthias
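As a small illustration of the pragmatic perspective, here is a sketch of "the rest of the computation" used for early exit. Nothing about stacks is needed to read it:

```scheme
;; The continuation captured by call/cc is "the rest of the
;; computation": here, whatever would be done with the value
;; returned by list-product. Invoking it abandons the remaining
;; multiplications.
(define (list-product lst)
  (call-with-current-continuation
    (lambda (return)
      (let loop ((l lst) (acc 1))
        (cond ((null? l) acc)
              ((zero? (car l)) (return 0))  ; early exit: answer is 0
              (else (loop (cdr l) (* acc (car l)))))))))

(list-product '(1 2 3 4))    ; => 24
(list-product '(1 2 0 3 4))  ; => 0, without multiplying past the 0
```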

A Guide to the R7RS Steering Committee Candidates

Will Clinger posted “a politically incorrect guide to the candidates” in hopes of helping registered voters here.

It is well written and worth a read by anyone interested in the future of Scheme.

The following is a copy of the entire message:


The Scheme Language Steering Committee's announcement
of the forthcoming election says

    When the nomination period ends, the complete list
    of candidates will be published on www.r6rs.org.

    Candidates may also post whatever messages they wish
    to comp.lang.scheme, the r6rs-discuss mailing list,
    or whatever other forums they feel appropriate, and
    voters should feel free to discuss the candidates
    and their positions on these fora.

After 12 candidates have been nominated, and 129 voters
registered, we now begin the campaign and/or endorsement
period.

Voters will be asked to rank the candidates, and three
candidates will be elected by "single transferable vote
proportional representation."

The top two votes on my ballot will be John Cowan and
Jonathan Rees (in some order).  My reasoning is simple:
In my opinion, any subset of the twelve candidates that
includes those two would make a fine Scheme Language
Steering Committee.

The Steering Committee is responsible for the process
and its direction.  As Rees and Cowan indicated in their
statements, they have experience with process issues and
also understand that the direction of this process needs
to change.

That does not make them unique among this group of twelve
candidates.  Indeed, I have not decided which of several
worthy candidates I should rank third.

The candidates can be aggregated along several dimensions.
Three voted to ratify the R6RS; five voted against
ratification; four did not vote.  Three candidates were
editors of the R6RS, and two more served on the editors'
committee but resigned before any documents were put to
a vote.  Eight candidates have implemented or are now
maintaining a popular implementation of Scheme.

There is much to be said for the neutrality that comes of
not being associated with any particular implementation;
all three members of the original Steering Committee were
neutral in that sense.  Of the twelve new candidates,
four are not associated with any major implementation:
John Cowan, Anton van Straaten, Ray Dillinger, and
Richard O'Keefe.

John Cowan is an expert on Unicode, and his comments
on drafts of the R6RS showed excellent judgment.

Anton van Straaten was the only R6RS editor who was not
associated with a particular implementation.  That gave
him a broader perspective that was sometimes difficult
for the other editors to appreciate.  He was the only
one of the four R6RS editors who abstained from the
vote on ratification.

As Ray Dillinger noted in his statement, he took the
initiative to renew IEEE Standard 1178, which is still
the only standard for Scheme that is recognized as a
national or international standard.  Dillinger did not
vote on R6RS ratification.

Richard O'Keefe is an accomplished Scheme programmer.
Many implementations of Scheme rely on his efficient
code for merge sorting of lists.  O'Keefe did not vote
on R6RS ratification.

Turning to the implementors, Will Clinger is the only
candidate who has implemented the R6RS.  Larceny was,
in fact, the first substantially complete implementation
of the R6RS.  If you think that's a good reason *not*
to vote for Clinger, then I have no argument with you.

If you'd like to vote for an implementor who has shown
less enthusiasm for the R6RS than Clinger, then you
have seven to choose from.

Marc Feeley polled implementors of the R3RS/R4RS/R5RS
and IEEE/ANSI Scheme to gauge their enthusiasm for the
draft R6RS that was put up for ratification; that was
something the editors themselves should have done.
Marc also served as the original chair of the R6RS
editors' committee.

Aubrey Jaffer, the implementor of SCM, has also been
the driving force behind SLIB, which was arguably the
first successful collection of portable libraries for
Scheme.

Chris Hanson, who has been maintaining MIT Scheme, is
a charter author of the R*RS documents, and was an
editor of the IEEE-1178 standard.  He also wrote much
of that standard's Appendixes B and C, which were
significant milestones during the standardization of
Scheme (and Lisp generally).

Jonathan Rees was one of the original implementors of
T and Scheme 48.  He too is a charter author of the
R*RS documents, and was editor of the much-beloved
R3RS.

Olin Shivers is responsible for scsh.  Among the
twelve candidates, he is the only implementor who
abstained from the R6RS ratification vote.

Kent Dybvig is responsible for Chez Scheme, and did
a good job of chairing the R6RS editors' committee
after Feeley resigned.  In his statement, Kent Dybvig
said he "will not use a position on the steering
committee as a mechanism to push" his opinions.

Mike Sperber, who has been maintaining Scheme 48,
edited the R6RS documents.  He has also served as an
editor for the SRFI process, which was arguably the
second successful collection of portable libraries
for Scheme.

In October 2007, Dybvig and Sperber announced their
intentions to implement the R6RS in Chez Scheme and
Scheme 48 (respectively) within a year.  There is
much to be said for their moderate approach to
implementing the R6RS, just as there is much that
could be said against the pioneering approach taken
by Ikarus, Larceny, PLT Scheme, Ypsilon, IronScheme,
and Mosh.

I wrote this guide to the candidates in hope it will
help some voters.

Someone nominated me.  As the only candidate who is
responsible for maintaining implementations of both
the R5RS and R6RS, I certainly have a stake in the
outcome of this process.  If elected, I will serve
to the best of my ability.  Urging you to vote for
me would have been the politically correct thing for
me to do.  Instead, I urge you to vote for the best
committee you can imagine.

Will

Returning multiple values in Scheme

In this thread in comp.lang.scheme, the means of returning multiple values are discussed. There are seemingly three solutions in R6RS:


(import (rnrs))

; let-values + values
(define (foo1)
  (values 1 2 3))

(let-values (((a b c) (foo1)))
  (display (list a b c))
  (newline))

; cps
(define (foo2 k)
  (k 1 2 3))

(foo2 (lambda (a b c)
        (display (list a b c))
        (newline)))

; list
(define (foo3)
  (list 1 2 3))
(let ((result (foo3)))
  (display result)
  (newline))

Per Aziz and Aaron’s point, you should use the approach that communicates the most information to the reader.

Compiling with Continuations

On the PLT list, the original poster asked about dynamic scope, static scope, and closures. Shriram replied:

My favorite reference for these kinds of questions is Andrew Appel’s Compiling with Continuations. If you ignore the slightly intimidating title, just about every question along these lines is answered – by the author of the Standard ML compiler. It is, I believe, a vast improvement over its successor, the Tiger book.

The book has been mentioned a few times before. It seems to be available here.

(via PLT)