Lisp as a crucible

Scheme and Lisp force you to *think* from the get-go. Most engineers and programmers hate to do that, and it makes them uncomfortable. Starting a program in Java or C is easy. There’s a pile of boilerplate you can type without thinking about it, and it ‘feels’ like you’re programming. Then you have to haul out the bag of tools like the compiler and the linker and makefiles or ant. There’s a lot of busy work needed just to get going, and you feel like you’ve accomplished something.
In Scheme or Lisp, you start it up and there you are at the REPL. You have to decide what your program is going to do. You can’t waste time writing boilerplate (it’s unnecessary), designing data structures (just cons one and specialize it later), or figuring out how to build complex inheritance hierarchies (do that once you know what you are doing); you have to dive into the problem right away. If you are willing to make mistakes and learn from them, then the REPL is a great place to play. If you prefer to plan ahead so you don’t make mistakes, a REPL is a very uncomfortable place to be.

— jrm

Having experienced the “discomfort” myself, I recognize now that this development approach acts as a mirror of your strengths and weaknesses. It reveals, very, very quickly, whether you really have a grasp of both the problem and how you plan to solve it. There is nowhere to hide! It is great.
(via R6RS)

A Scheme Timeline

Sometimes knowing the history behind a technology and its community makes both easier to understand. A few months ago I wanted to better understand the history of the various reports on Scheme, so I made some notes in the form of a timeline.

It includes everything up to R6RS, along with the IEEE standard and the publication of SICP and HtDP. Obviously a lot more could be added here, but I wasn’t trying to take notes for an exhaustive history. All of the sources for the notes can be found in the dot-file below.

The timeline is here, and the source file is here.

What does Scheme do well?

What does Scheme do well? What does Scheme (not just RnRS Scheme, but Scheme more broadly) have that no one else has? It isn’t lexical scope – everyone but elisp has that, these days. It isn’t real first-class functions – lots of languages have that too. It isn’t proper tail calls – ML does that right. It’s not advanced compilers for functional languages – those are a dime a dozen. It’s not even first-class control, which a few other languages have. And the REPL – even Python has that.

It’s macros. Since 1986, Scheme has had a macro system that other languages can’t compete with, and haven’t succeeded in matching in the last 23 years. And over those 23 years, Scheme hasn’t stood still – Schemers have developed a vastly more expressive system in which huge numbers of new and powerful language extensions are possible.

So I say, press our advantage. Improve the macro system. Show the programming language world what the real power of “a very small number of rules for forming expressions” is.

That’s not to say that we should neglect the other things that make Scheme a high-quality programming language. They are important, and Scheme needs a community that cares about all her aspects. But this is not the tail wagging the dog – it’s knowing where our strengths lie.

– Sam T.H.

(via R6RS)
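As one tiny illustration of the kind of language extension Sam has in mind, here is a looping construct that standard Scheme does not build in, added in a few lines (my sketch, not part of the quoted post); hygiene keeps the macro’s internal names from colliding with the user’s code:

;; `while' is not part of standard Scheme, but syntax-rules adds it;
;; the internal `loop' is renamed by hygiene, so the body may freely
;; use its own identifier called loop without interference
(define-syntax while
  (syntax-rules ()
    ((_ test body ...)
     (let loop ()
       (when test
         body ...
         (loop))))))

;; usage
(define i 0)
(while (< i 3)
  (display i)
  (set! i (+ i 1)))   ; prints 012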

Why should programmers care about currying in practice in Scheme?

I wanted to know what currying means in practice for programmers who are not themselves theoretical computer scientists, so I asked about it here. There were a lot of informative replies, and the most direct answers to the question seem to be the following (a short example of my own comes after them):
Anthony Cowley:

There are many cases where Curried functions can be convenient, but I’ll just pick one class of examples. In FP, one is often passing around bundles of state in the form of parameters to a function.

Joe Marshall:

Your best bet would be to look at existing Scheme code. I found this example in the MIT Scheme loader: [click link for code]

Richard Cleis:

Currying permits repeated use of intermediate functions as an alternative to overtly managing arguments to a core function.

David Van Horn [on how to use the state monad in Scheme]:

You might have a look at these:
http://groups.google.com/group/comp.lang.functional/msg/2fde5545c6657c81
http://okmij.org/ftp/Scheme/monad-in-Scheme.html
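
To make those points concrete, here is a minimal sketch of my own (not from the thread; the names make-logger, warn, and info are invented for illustration). Each application fixes one argument and returns an intermediate function that can be reused without re-supplying the arguments already given:

;; core function, curried by hand: each stage fixes one piece of the
;; "bundle of state" (the port, then the level)
(define (make-logger port)
  (lambda (level)
    (lambda (message)
      (display (string-append "[" level "] " message "\n") port))))

(define log-to-console (make-logger (current-output-port)))
(define warn (log-to-console "WARN"))   ; intermediate functions...
(define info (log-to-console "INFO"))

(warn "disk nearly full")               ; ...reused without re-passing
(info "backup complete")                ; the port or the level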

HtDP Makes SICP Easier

Recommendation: do the math-y sections in HtDP. This will give you a flavor of the kind of mathematics you get in SICP, even though the examples in each do not overlap. I would especially focus on the examples from calculus (numeric differentiation, integration, Taylor series, etc.), but the graph traversal (network flow) things are good for you too. Once you are at ease with those, you can tackle SICP and get through fast.

–Matthias
(via PLT)
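As a small taste of the “math-y” flavor Matthias mentions, here is the classic numeric-differentiation exercise as one might write it at the REPL (my sketch, not taken from either book):

;; approximate the derivative of f using a small interval h
(define (derivative f h)
  (lambda (x)
    (/ (- (f (+ x h)) (f x))
       h)))

((derivative (lambda (x) (* x x)) 0.0001) 3)  ; evaluates to about 6.0001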

Unhygienic macros inside of unhygienic macros are difficult

In this post, Ryan explains why unhygienic macros inside of unhygienic macros are often difficult.

;; the working if-it & when-it
(define-syntax (if-it stx)
  (syntax-case stx ()
    ((if-it test? then else)
     (with-syntax ((it (datum->syntax #'if-it 'it)))
       #'(let ((it test?))
           (if it then else))))))
(define-syntax (when-it stx)
  (syntax-case stx ()
    ((~ test? exp exp2 ...)
     (with-syntax ((it (datum->syntax #'~ 'it)))
       #'(let ((it test?)) (when it exp exp2 ...))))))
;; the non-working cond-it
(define-syntax (cond-it stx)
  (syntax-case stx (else)
    ((cond-it (else exp exp2 ...))
     #'(begin exp exp2 ...))
    ((cond-it (test? exp exp2 ...))
     #'(when-it test? exp exp2 ...))
    ((cond-it (test? exp exp2 ...) cond1 cond2 ...)
     ;; the `if-it' in this template carries the macro's lexical
     ;; context, so the `it' it introduces cannot capture the user's
     ;; `it' in the branches (see the explanation below)
     #'(if-it test? (begin exp exp2 ...)
              (cond-it cond1 cond2 ...)))))

When ‘cond-it’ expands and produces an ‘if-it’ expression, the ‘if-it’ is marked by the macro expander as coming from a macro. That means its lexical context is different from the ‘it’ variables in the branches. That means that the ‘it’ variable binding produced by ‘if-it’ does not capture the ‘it’ references in the branches.

— Ryan
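
One way around the problem (a sketch of my own, not from Ryan’s post) is to have ‘cond-it’ introduce ‘it’ itself, taking the lexical context from a subform the user actually wrote, such as the first body expression of the clause, instead of from an identifier introduced by a macro template:

;; a sketch of a working cond-it: it binds `it' directly, and
;; datum->syntax takes its context from #'exp, which comes from the
;; use site, so the binding captures the user's `it' references even
;; in clauses reached through the recursive expansion
(define-syntax (cond-it stx)
  (syntax-case stx (else)
    ((cond-it (else exp exp2 ...))
     #'(begin exp exp2 ...))
    ((cond-it (test? exp exp2 ...))
     (with-syntax ((it (datum->syntax #'exp 'it)))
       #'(let ((it test?)) (when it exp exp2 ...))))
    ((cond-it (test? exp exp2 ...) cond1 cond2 ...)
     (with-syntax ((it (datum->syntax #'exp 'it)))
       #'(let ((it test?))
           (if it
               (begin exp exp2 ...)
               (cond-it cond1 cond2 ...)))))))

The recursion still works because each expansion step re-derives ‘it’ from that clause’s own body. PLT Scheme’s syntax parameters (define-syntax-parameter with syntax-parameterize) offer a more robust way to handle this kind of anaphoric binding, but that is a topic of its own.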

How letrec differs from letrec* in practice

I was asking about the difference in practice between letrec and letrec* here and got a bunch of great replies.

I didn’t understand why you would bother to use letrec at all, when you could only expect it to work predictably when binding mutually recursive lambda expressions, since the order of evaluation is not guaranteed (it occurs in some unspecified order). Thanks to everyone’s feedback I realized that the answer lay in my confusion: the value of letrec is precisely to signal that evaluation order is not a concern. That is the whole point of providing both letrec and letrec*: the former tells the reader that the bindings do not depend on the order in which they are evaluated; the latter tells the reader that they may, and that left-to-right evaluation is guaranteed. Perhaps this is a big “doh!” on my part, but I am glad that I asked.
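A small example of my own (not from the thread) may make the distinction concrete; in the first definition let* would do just as well, the point is only that letrec* guarantees the ordering while letrec does not:

;; letrec*: later bindings may use the values of earlier ones, because
;; left-to-right evaluation is guaranteed
(define (roots a b c)
  (letrec* ((disc (- (* b b) (* 4 a c)))
            (sqrt-disc (sqrt disc)))          ; relies on disc's value
    (list (/ (+ (- b) sqrt-disc) (* 2 a))
          (/ (- (- b) sqrt-disc) (* 2 a)))))

;; letrec: the bindings are mutually recursive procedures, so the
;; order in which the init expressions are evaluated cannot matter
(define (my-even? n)
  (letrec ((ev? (lambda (k) (if (zero? k) #t (od? (- k 1)))))
           (od? (lambda (k) (if (zero? k) #f (ev? (- k 1))))))
    (ev? n)))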

On review of the R6RS rationale here, one finds that this was indeed the intent:

9.1 Unspecified evaluation order

The order in which the subexpressions of an application are evaluated is unspecified, as is the order in which certain subexpressions of some other forms such as letrec are evaluated. While this causes occasional confusion, it encourages programmers to write programs that do not depend on a specific evaluation order, and thus may be easier to read. Moreover, it allows the programmer to express that the evaluation order really does not matter for the result. A secondary consideration is that some compilers are able to generate better code if they can choose evaluation order.

SRFI 97: SRFI Libraries

Thanks to the efforts of David Van Horn and the ratification participants, SRFI 97 was produced. Here is the abstract:

Over the past ten years, numerous libraries have been specified via the Scheme Requests for Implementation process. Yet until the recent ratification of the Revised6 Report on the Algorithmic Language Scheme, there has been no standardized way of distributing or relying upon library code. Now that such a library system exists, there is a real need to organize these existing SRFI libraries so that they can be portably referenced.
This SRFI is designed to facilitate the writing and distribution of code that relies on SRFI libraries. It identifies a subset of existing SRFIs that specify features amenable to provision (and possibly implementation) as libraries (SRFI Libraries) and proposes a naming convention for this subset so that these libraries may be referred to by name or by number.

Basically, SRFI 97 makes it easier not only to reference SRFIs in R6RS but also to find out whether they are even likely to be available.
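
For instance, under the SRFI 97 naming convention an R6RS program can import SRFI 1 by number or by its mnemonic name. Here is a sketch (whether a particular implementation actually provides the library still varies):

#!r6rs
;; importing SRFI 1 (the list library) by its SRFI 97 name; the bare
;; numeric form (srfi :1) names the same library as (srfi :1 lists)
(import (rnrs)
        (only (srfi :1 lists) fold))

(display (fold + 0 '(1 2 3 4)))   ; prints 10
(newline)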