Anaphoric Macros

Does the convenience that anaphoric macros provide justify breaking hygiene? In the chapter of On Lisp on anaphoric macros, the author states:

This chapter will show that variable capture can also be used constructively. There are some useful macros which couldn’t be written without it.

My evaluation of that claim is that while the former is true, anaphoric macros are not evidence of it, since they only save you a variable binding. The latter claim is interesting because it raises the question of whether they should be written as macros at all. It made me wonder how anaphoric macros might look in Scheme, how they might look as functions, and whether one is clearly superior to the other.
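To make that concrete, here is a rough sketch of both versions (the names if-let and if-val are just placeholders I am making up for illustration, not anything standard):

;; Common Lisp's anaphoric aif captures the symbol IT:
;;   (aif (lookup key table) (use it) (complain))
;; A hygienic Scheme macro can do the same job by naming the binding:
(define-syntax if-let
  (syntax-rules ()
    ((_ (name expr) then else)
     (let ((name expr))
       (if name then else)))))

;; The same idea as a plain procedure, at the cost of wrapping the
;; branches in procedures:
(define (if-val expr then-proc else-thunk)
  (if expr (then-proc expr) (else-thunk)))

;; (if-let (v (assq 'a '((a . 1)))) (cdr v) 'missing)      => 1
;; (if-val (assq 'a '((a . 1))) cdr (lambda () 'missing))  => 1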

Lisp code is not a parse tree

One fact about Lisp is that its code can be visualized as a tree structure. Another fact is that syntactic extension (macros) can be applied to that code to change it. Taken together, these features so resemble a typical parser-to-compiler pipeline that it is very easy to assume Lisp code is in fact a parse tree. Lisp code, however, is not a parse tree.
Addendum 04/29/08:
Instead, Lisp’s “parse tree” is a plain old list. Lisp works by using macros to pattern match and transform s-expressions into other s-expressions. It is simplistic, sensible, and very powerful. It takes human-readable data structures and lets humans manipulate them. It is wonderful.
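A tiny made-up example of what I mean: the macro below is nothing more than a rule for rewriting one list into another before evaluation (my-unless is just a name for this sketch).

(define-syntax my-unless
  (syntax-rules ()
    ((_ test body ...)
     (if test #f (begin body ...)))))

;; (my-unless (pair? x) (display "empty"))
;; is rewritten, as a list, into
;; (if (pair? x) #f (begin (display "empty")))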
By definition, though, since Lisp’s parse trees are lists, Lisp code is a parse tree, thus seemingly negating the claim in the title of this post. The trouble with making a statement like this is that “parse tree” implies the parse trees that the other 99.999% of languages use, and “the rest of the bunch” uses parse trees that I suspect are not meant for human consumption or production.
In most programming languages, it is the parser's responsibility to transform the syntax (how the code looks to you) into a parse tree (how the code looks to the compiler). This is by design.
Syntax for humans is what matters most in a programming language, and it is why we all fall in love with different languages. We get a syntax we love, and so does the compiler. The parser is the translator between the two worlds. Take this example in Java.
If you take a look at the Java 1.5 ANTLR grammar, you’ll see that a class declaration maps to the following in the parse tree (with details not critical to this example omitted):

  1. packageDeclaration
  2. typeDeclaration
  3. classBody

Just looking at this you probably have a pretty good feel both for how this would look in Java and how the Java source code would map to its parse tree counterpart:

package alt.p.b.n.j; <-- packageDeclaration
public class Gnumerical <-- typeDeclaration
{ ... } <-- classBody

The compiler, though, won't care about whatever syntax you use to write your code; it just expects a tree that adheres to some grammar. You could define classes, then, in Java, Ruby, or Smalltalk syntax and ultimately produce a parse tree for the JVM. Suppose that ANTLR were to take your Java and build a Lisp-style decorated parse tree for the compiler (which it does not do); it might look like this:

(class
  (package 'alt.p.b.n.j)
  (name 'Gnumerical)
  (body '(body goes here)))

The problem (or solution) with Lisp is that, at its core, it has virtually no syntax. Don't get me wrong, it does have syntax. Take "if", for example.
"if" must be be evaluated in a certain order. If it weren't, you could never check if a value was not null before applying it!
Beyond conditionals, though, there just isn't much syntax to Lisp. You don't have to take the code the programmer writes and parse it according to some grammar so that the compiler can understand it; you are already writing in the language spoken by the compiler. So, by virtue of Lisp's syntax, or lack thereof, there really is no place for a parse tree as you would expect in any other language. Lisp's syntax is that of the compiler itself; virtually no translation is occurring.
Accordingly, Lisp code is not a parse tree; it is a plain old list (an s-expression).
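You can see this right at the REPL (exactly how eval takes its environment argument varies between Scheme implementations):

(define expr '(+ 1 2))                  ; quoted, so this is data: a list
(car expr)                              ; => the symbol +
(cdr expr)                              ; => (1 2)
(eval expr (interaction-environment))   ; => 3, the same list treated as code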
In writing this addendum I realized that I had made a pretty poor post. I knew what I thought, but not why I thought it (I thought I did, but I was wrong). That is the ultimate sin for a developer.
Thanks, Geoff, for so graciously pointing that out to me. I am a truly lucky person to have so many people in my life be so kind to me in light of me "being human".

Just 3 little words

On my previous project, we had just a bang-up bunch of guys on the team. Everyone was smart, thoughtful, and worked well together: it was ideal. Since there was no revision control system in place when we arrived for the project, we decided to use Subversion. Since I had championed Subversion, I became both the Subversion system and repository administrator.

After a few months, and a few thousand commits (a lot of them without any commit messages), I decided to add a commit hook script to prevent commits without comments. To be fair, I figured that no one would mind being required to write commit messages as long as the ones they had already been writing, so I wrote a script to get the mean number of words in the non-empty commit messages to date. The average was 7.

7 is a good number: enough words to convey the “why”, with enough brevity to keep the revision control system helpful. That said, I figured I would be even more accommodating of the users and require a mere 3 words in every commit message. I made the change, tested it out, and deployed it to the Subversion server.

Eager to view the informative commit messages that would surely result from this new “feature”, the next day I took a look at the first commit message that followed the change:

“#!@& YOU GRANT”

Thanks guys, you gave me my favorite Subversion story.

How to Learn Scheme (was How to Learn Programming)

  1. The Scheme Programming Language Third Edition by R. Kent Dybvig
  2. How to Design Programs by Matthias Felleisen, Robert Bruce Findler, Matthew Flatt, and Shriram Krishnamurthi
  3. Structure and Interpretation of Computer Programs by Harold Abelson and Gerald Jay Sussman with Julie Sussman

Addendum 6/26/08:
The difference between learning a programming language and learning how to program is now clear enough to me that I had to revise this post to clarify its intent and correct its content. As such, the title has been changed, and only one book has been recommended.

How do you find the perfect programming language?

For many years I sought “the perfect programming language”. Traveling aim-fully from language to language, I found that every language has aspects to enjoy, but I never found one that was “just right”.
During that journey, I started out as a slave to syntax: I had to “like” the syntax. Even when I found a syntax I liked, I would find that the language was too narrowly focused. Most general-purpose languages force you to shoehorn your solution to a problem into how the language creator wants you to solve it. Most programmers hate being forced to do anything, especially being told how to think.
Eventually I realized that there is no “perfect programming language”. Perhaps, though, there is a language that is “good enough”.
A lot of folks love the programming language Lisp. It provides all of the core language features you could ever need. On top of that, it lets you tailor the syntax to your pleasure.
From what I know about Lisp, it is “good enough”, and in the world of programming languages, that is probably just about the best compliment you could give any language.

Disciplined Thought: A Programmer's Friend

This post on the PLT discussion list reminded me of me. There was a point where my interest in programming languages was more about what you could do with a language, without much emphasis on why you would want to do such a thing. While you can do a lot of interesting things in different languages, there is often more value when there is a reason, or vision, behind why you would do those things. From my perspective, most languages have, at one time or another, had some guiding vision or force behind them.
Eiffel has Bertrand Meyer saying “Everything is an object, and be static about it”, C++ has Bjarne Stroustrup saying “Keep it fast, keep it generic”, and Scheme has its report authors saying “Programming languages should be designed not by piling feature on top of feature …”. Ruby has Yukihiro Matsumoto saying “It makes sense to me; if you don’t like it, see you later!”. Consequently there are a lot of really “neat” things you can do in Ruby, but it is not always obvious to me why you might want to do those things (I’m excluding the obvious ones, so give me a break on those). Matz knows, and if you don’t “get it”, oh well! The vision, or reason, extends all the way from the macrocosm of the entire language to the microcosm of its individual features.
Programming language features are there for a reason. While I will always find individual features interesting in terms of what you can do, one of my goals as a programmer is to always study with enough discipline to understand why such features exist.
As a programmer, it is your duty to understand those features before using them. Matthias’ reply to this post provides an excellent example of the conciseness and clarity that are the fruits of such discipline.

BarCamp in Wisconsin

Last year some of my friends and I both attended and presented at BarCamp Milwaukee. When you have a bunch of people coming together to discuss things about which they are really passionate, well, you can’t beat it. It was a lot of fun, I met a lot of wonderful people, and I even made some new friends. You can watch a video about that event here.
I just heard the news that BarCamp Madison #2 is getting lined up. Here are the details:
The Home Page
The Google Group