Tarski’s World

The package is intended as a supplement to any standard logic text, or for use by anyone who wants to learn the language of first-order logic. The main body of the book contains a collection of exercises which use the Tarski’s World software to teach the language and semantics of first-order logic. The Tarski’s World application allows the evaluation of first-order sentences within blocks worlds which users may construct using a simple editor. The worlds consist of collections of blocks of varying sizes and shapes placed on a checkerboard. An interpreted first-order language allows users to write sentences about these worlds and evaluate their truth. A Henkin-Hintikka game may be used to elucidate the evaluation procedure.
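For instance, a typical sentence in such a language uses predicates for shape and size, evaluated against a particular world:

 ∀x (Cube(x) → Small(x))    "every cube is small"

which is true in exactly those worlds containing no medium or large cubes.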

Via the PLT discussion list, where a few folks shared that this package is quite good.

Functional Objects

Functional Objects is a presentation by Matthias Felleisen from ECOOP 2004. It was mentioned more than a few times during the past month on the PLT discussion list.
Though it is 74 pages, it doesn’t feel very long, and there is a lot of good content in there. “Java people” will even be happy to see Joshua Bloch’s quotes scattered liberally about.
Basically it tells a story and makes an argument about how one might go about moving forward with programming, and it does well in both regards.

SRFI 97: SRFI Libraries

Thanks to the efforts of David Van Horn and the ratification participants, SRFI 97 was produced. Here is the abstract:

Over the past ten years, numerous libraries have been specified via the Scheme Requests for Implementation process. Yet until the recent ratification of the Revised6 Report on the Algorithmic Language Scheme, there has been no standardized way of distributing or relying upon library code. Now that such a library system exists, there is a real need to organize these existing SRFI libraries so that they can be portably referenced.
This SRFI is designed to facilitate the writing and distribution of code that relies on SRFI libraries. It identifies a subset of existing SRFIs that specify features amenable to provision (and possibly implementation) as libraries (SRFI Libraries) and proposes a naming convention for this subset so that these libraries may be referred to by name or by number.

Basically, SRFI 97 makes it easier not only to reference SRFIs in R6RS programs but also to find out whether they are even likely to be available.
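For example, under the SRFI 97 naming convention an R6RS program can import SRFI 1 portably, referencing it by number and name (a sketch, assuming your implementation ships the SRFI 97 library names):

 (import (rnrs)
         (only (srfi :1 lists) iota)) ; SRFI 1, the list library

 (display (iota 5)) ; => (0 1 2 3 4), iota comes from SRFI 1

The `only` clause keeps SRFI 1’s extended `map` and friends from clashing with the ones exported by (rnrs).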

Normal order and applicative order in Programming Languages

James had mentioned applicative and normal order in a post, on which Matthias commented and then elaborated here.

Normal order and applicative order are failed attempts to explain the nature of call-by-name programming languages and call-by-value programming languages as models of the lambda calculus. Each describes a so-called _reduction strategy_, which is an algorithm that picks the position of the next BETA redex that should be reduced. By 1972, it was clear that instead you want different kinds of calculi for different calling conventions and evaluation strategies (to the first outermost lambda, not inside). That is, you always reduce at the leftmost-outermost point in a program, but you use either BETA-NAME or BETA-VALUE. Non-PL people were confused (and still are) because BETA-NAME looks like BETA, but nearly 40 years later, everyone should figure this out. SICP was written when the majority of people were still confused. — Matthias
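A standard worked example makes the BETA-NAME/BETA-VALUE difference concrete. Let Ω be the usual self-applying term, which reduces only to itself:

 Ω = ((λy. y y) (λy. y y))

 ((λx. 5) Ω)
   BETA-NAME:  substitute Ω for x without evaluating it, yielding 5.
   BETA-VALUE: the argument must first be reduced to a value, but Ω only
               reduces to Ω, so reduction at the leftmost-outermost point
               never terminates.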

MOSS: A System for Detecting Software Plagiarism

Moss (for a Measure Of Software Similarity) is an automatic system for determining the similarity of C, C++, Java, Pascal, Ada, ML, Lisp, or Scheme programs. To date, the main application of Moss has been in detecting plagiarism in programming classes. Since its development in 1994, Moss has been very effective in this role. The algorithm behind moss is a significant improvement over other cheating detection algorithms (at least, over those known to us).
Moss can currently analyze code written in the following languages:
C, C++, Java, C#, Python, Visual Basic, JavaScript, FORTRAN, ML, Haskell, Lisp, Scheme, Pascal, Modula2, Ada, Perl, TCL, Matlab, VHDL, Verilog, Spice, MIPS assembly, a8086 assembly, HCL2.

Moss was created by Alex Aiken.
(via PLT)

What is all the fuss about how you can write DSLs in Lisp?

In this post on the PLT discussion list I asked:

What is all the fuss about how you can write DSLs in Lisp?
Everyone from thought leaders to blog posters to grandmas is talking about how Lisp is so great for DSLs.
What are these people talking about? None of said people actually elaborates on any of this, of course, which leads me to question their claims.

jrm provided the following excellent reply (mirrored here):

On 6/7/07, Grant Rettke  wrote:
That said, when I think of a DSL I think about letting folks write "programs" like:
"trade 100 shares(x) when (time < 20:00) and timingisright()"
When I think about syntax transformation in Lisp I think primarily about language features.
In order to talk about domain-specific languages you need a definition of what a language is.  Semi-formally, a computer language is a system of syntax and semantics that lets you describe a computational process.  It is characterised by these features:
 1.  Primitive constructs, provided ab-initio in the language.
 2.  Means of combination to create complex elements from the primitives.
 3.  Means of abstraction to control the resulting complexity.
So a domain-specific language would have primitives that are specific to the domain in question, means of combination that may model the natural combinations in the domain, and means of abstraction that model the natural abstractions in the domain.
To bring this back down to earth, let's consider your `trading language':
 "trade 100 shares(x) when (time < 20:00) and timingisright()"
One of the first things I notice about this is that it has some special syntax.  There are three approaches to designing programming language syntax.  The first is to develop a good understanding of programming language grammars and parsers and then carefully construct a grammar that can be parsed by an LALR(1) or LL(k) parser.  The second approach is to `wing it' and make up some ad-hoc syntax involving curly braces, semicolons, and other random punctuation.  The third approach is to `punt' and use the parser at hand.
I'm guessing that you took approach number two.  That's fine because you aren't actually proposing a language, but rather you are creating a topic of discussion.
I am continually amazed at the fact that many so-called language designers opt for option 2.  This leads to strange and bizarre syntax that can be very hard to parse and has ambiguities, `holes' (constructs that *ought* to be expressible but where the logical syntax does something different), and `nonsense' (constructs that *are* parseable, but have no logical meaning).  Languages such as C++ and Python have these sorts of problems.
Few people choose option 1 for a domain-specific language.  It hardly seems worth the effort for a `tiny' language.
Unfortunately, few people choose option 3.  Part of the reason is that the parser for the implementation language is not usually separable from the compiler.
For Lisp and Scheme hackers, though, option 3 is a no-brainer.  You call `read' and you get back something very close to the AST for your domain-specific language.  The `drawback' is that your DSL will have a fully parenthesized prefix syntax.  Of course Lisp and Scheme hackers don't consider this a drawback at all.
So let's change your DSL slightly to make life easier for us:
 (when (and (< time 20:00)
            (timing-is-right))
   (trade (make-shares 100 x)))
When implementing a DSL, you have several strategies: you could write an interpreter for it, you could compile it to machine code, or you could compile it to a different high-level language.  Compiling to machine code is unattractive because it is hard to debug, and you have to be concerned with linking, binary formats, stack layouts, etc., etc.
Interpretation is a popular choice, but there is an interesting drawback.  Almost all DSLs have generic language features.  You probably want integers and strings and vectors.  You probably want subroutines and variables.  A good chunk of your interpreter will be implementing these generic features, and only a small part will be doing the special DSL stuff.
Compiling to a different high-level language has a number of advantages.  It is easier to read and debug the compiled code, and you can make the `primitives' in your DSL be rather high-level constructs in your target language.
Lisp has a leg up on this process.  You can compile your DSL into Lisp and dynamically link it to the running Lisp image.  You can `steal' the bulk of the generic part of your DSL from the existing Lisp:  DSL variables become Lisp variables.  DSL expressions become Lisp expressions where possible.  Your `compiler' is mostly the identity function, and a handful of macros cover the rest.  You can use the means of combination and means of abstraction as provided by Lisp.  This saves you a lot of work in designing the language, and it saves the user a lot of work in learning your language (*if* he already knows Lisp, that is).
The real `lightweight' DSLs in Lisp look just like Lisp.  They *are* Lisp.  The `middleweight' DSLs do a bit more heavy processing in the macro expansion phase, but are *mostly* Lisp.  `Heavyweight' DSLs, where the language semantics are quite different from Lisp, can nonetheless benefit from Lisp syntax.  They'll *look* like Lisp, but act a bit differently (FrTime is a good example).
You might even say that nearly all Lisp programs are `flyweight' DSLs.
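To make jrm’s points about `read’ and the `identity compiler’ concrete, here is a minimal runnable sketch of the trading DSL in Scheme. All of the domain names are hypothetical stubs, `open-input-string’ is assumed from SRFI 6 / R7RS, and the time literal is written 20 rather than the non-Scheme 20:00:

 ; Option 3: `read' is the parser; it hands back the AST directly.
 (define ast
   (read (open-input-string
          "(when (and (< time 20) (timing-is-right)) (trade (make-shares 100 x)))")))
 (display (car ast)) ; => when

 ; The `compiler' is mostly the identity function: define the domain
 ; primitives as ordinary Scheme, and the DSL program runs as-is.
 (define time 19)              ; stand-in for a clock reading
 (define (timing-is-right) #t) ; domain predicate, stubbed out
 (define (make-shares n ticker) (list n ticker))
 (define (trade order)
   (display (list 'trading order))
   (newline))

 (when (and (< time 20) (timing-is-right))
   (trade (make-shares 100 'x))) ; prints (trading (100 x))

Everything generic here, the numbers, `and’, and the comparison, comes straight from the host language, exactly as jrm describes.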

The Joy programming language

A few weeks ago I was asking about stack-based languages in #scheme. I would like to learn more about them and was looking for a good language with which to start.
Joy was pointed out as a great entry point, and in particular this tutorial was recommended as the best introduction to it.
From what the Joyer in #scheme shared and from what I have read, Joy seems to be a very interesting language, as it is very Lisp-like in its simplicity and extensibility.

All Scheme Search

All Scheme Search is a Google-based search engine for “Everything about the Scheme Programming Language”.
If seven pages of search results for “keyword arguments” are any measure, this looks pretty interesting :).
Addendum: 01/05/08
Here are two more: Scheme from the Florida Keys (actively updated) and Search PLT Scheme (not actively updated).

Is Unicode in the code taboo?

MzScheme supports UTF-8 encoded files. Combine that with DrScheme, which makes it pretty easy to type in Unicode symbols, and suddenly, perhaps surprisingly, you have the opportunity to work with symbols in your code beyond the standard 95-character ASCII set that we all know and love. What are the implications for you as a programmer?
The simplest implication is that you now have the ability to work with 100,000+ characters. In case you felt limited by the inability to use the characters of your native tongue, be it Tamil or perhaps Braille, you are restricted no more. Scientific programmers may enjoy using Greek letters; and who among us wouldn’t like to use the letter π to represent pi? For Schemers, perhaps you would use → rather than ->.
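For instance, here is a trivial sketch that any Scheme reading UTF-8 source (MzScheme included) should accept:

 (define π 3.141592653589793) ; a Unicode character as an identifier

 (define (circle-area r)
   (* π r r))

 (display (circle-area 2)) ; => 12.566370614359172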
The common theme among these atypical examples is that they facilitate communication. Without getting into the deep theories and concepts behind the value of communication and how the limitations of a language affect it, I will share a quote relevant to us as programmers regarding how we communicate with each other through our code:

Let us change our traditional attitude to the construction of programs: Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do.

Donald Knuth
It seems like Unicode might facilitate that communication, but Unicode in code today is not at all common. Why might that be?
The usual suspects are that our tooling (IDEs and text-processing tools) doesn’t support Unicode well. Perhaps that is the case, but I am hard pressed to believe that people wouldn’t simply add Unicode support to their tooling if they thought it had any value. Fonts seem to be the biggest practical issue; lack of a supporting font usually results in an ugly box in place of the character. In the end I suspect that the worst culprit is simply that Unicode in code is taboo: people won’t give it a try until a thought leader or two sheds light on the power to facilitate communication that Unicode brings.
Until then, you will have to be happy with Emacs and DrScheme.