+Kim Krecht asked me what I thought about the Kernel Programming Language. Here's what I wrote in response:

You're right that it doesn't excite me, for at least two reasons.

First, Kernel is motivated by the desire to mingle the static and dynamic phases of a programming language. I know many people find this desirable, but it is nearly the opposite of my own goals wrt PL design. If anything, I would like to see an even clearer distinction between these phases so that I can 1) automatically obtain or verify interesting information (like invariants) about a program without having to run (deploy) it, and 2) express interesting information (like invariants) about a program before I have to express details about what specifically it computes. In other words, I want to be able to read and write blueprints before I start building something (separation of concerns).
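
To make the blueprint metaphor concrete, here's a throwaway sketch in Typed Racket (my choice of illustration here, nothing to do with Kernel): the signature is a claim the checker can verify before anything runs, and the body can come later.

```racket
#lang typed/racket

;; The blueprint: a claim I can state, and the checker can verify,
;; before (and independently of) writing or running the body.
(: sum (-> (Listof Integer) Integer))

;; The building: filled in afterward, and rejected at compile time
;; if it fails to satisfy the declared signature.
(define (sum xs)
  (cond [(null? xs) 0]
        [else (+ (car xs) (sum (cdr xs)))]))
```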

Second, I am more interested in languages guided by semantic concerns than syntactic ones. The design of Kernel (and fexprs) starts with the observation that it would be nice if we could leave some function arguments un-evaluated; it addresses that issue, then tries to make the result as palatable as possible. But I'm not interested in whether arguments are evaluated or not; I'm interested in the problems that might address, for example, how to support coinductive datatypes like streams or continued fractions. If Kernel came with a description of the problems it solves and a demonstration that fexprs are the only way to solve those problems, I would be more interested. As it is, on that front I am more interested in things like control categories, which exploit a duality between call-by-value and call-by-name via continuations.
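
For a sense of what I mean by coinductive data, here's a stream of naturals sketched with Racket's stream library (again just an illustration, not something Kernel addresses directly): the definition is corecursive, and only the demanded prefix is ever computed.

```racket
#lang racket
(require racket/stream)

;; An infinite (coinductive) stream: stream-cons delays the rest,
;; so the corecursive call is only made on demand.
(define (nats-from n)
  (stream-cons n (nats-from (add1 n))))

(stream->list (stream-take (nats-from 0) 5))  ; => '(0 1 2 3 4)
```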

Furthermore, I have the feeling that simply fiddling with evaluation itself, as Kernel does, is a brutal approach and probably destroys many nice properties and guarantees that could survive a more delicate treatment.

More generally, for me Kernel is in a sense a step backward. One of my favorite things about lambda-calculus is that everything is a function, so it lets me work more directly with the semantics. But Kernel introduces a distinction between functions (applicatives) and operatives, so now I am back to having to think about syntax. In practice, I have to think about evaluation order in lambda-calculus too, but since lambda-terms have a well-defined semantics independent of evaluation, I can think about that after I've written down a program. Here again Kernel forces me to confront a problem earlier than I would like, and thus collides with the principle of separation of concerns.
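
A toy Racket illustration of the distinction I mean (not Kernel syntax): `if` is a special form, so it isn't a value I can pass around, and wrapping it in a function changes when its branches are evaluated.

```racket
#lang racket
;; `if` is syntax, not a function; (apply if (list #t 1 2)) is a
;; syntax error. A wrapper restores first-class status...
(define (if-fn c t e) (if c t e))
(if-fn #t 1 2)  ; => 1
;; ...but under call-by-value every argument is evaluated first,
;; so the untaken branch runs anyway:
;; (if-fn #t 1 (error "boom"))  ; raises, despite the true test
```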

This is basically my attitude toward a lot of the dynamic typing stuff. Advocates are fond of claiming it provides more freedom; to me it provides less.
 
Well, that's a pleasant surprise. I'd expected to get at most one of two replies: "sorry, haven't looked at it" or a thoughtful assessment. Now I actually got both.

There are two obvious ways to have a new programming language attain more expressivity than existing languages. The one you ostensibly focus on is "doing more with the same (amount of cyclomatic and/or conceptual complexity)", while Kernel's approach seems to be "doing the same with less" by eponymously providing a minimalist "kernel" language that still facilitates abstraction and composability. I think that neither is "wrong"; I also think that neither is the full answer. That said, you're right in saying that the choices made by Kernel lack an argument for uniqueness. Well, it clearly says "experimental" on the tin, so I don't think one should be overly concerned about that. As Shutt gives a high-level overview of the differences from R5RS, one could consider it an extension rather than a whole new language; the interesting bits could apparently be refactored and retrofitted onto what's already there.

Can you explain why you're more interested in the continuation approach of controlling the evaluation strategy?

I agree that Kernel's bifurcation between operatives and applicatives is unfortunate; however, Shutt gives a rationale that one might agree with, or not. Introducing (quasi-)quotation would be an opposite (and familiar) choice, and it's not clear to me how fexprs are supposed to be strictly superior.
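
To be concrete about the quotation route, in plain Racket (nothing Kernel-specific): evaluation is suppressed by default under the quasiquote and re-enabled explicitly with unquote, so the boundary stays visible in the source.

```racket
#lang racket
;; Quasiquote (`) suppresses evaluation; unquote (,) re-enables it
;; at marked positions, so the un-evaluated argument is explicit:
`(1 ,(+ 1 1) (+ 2 2))  ; => '(1 2 (+ 2 2))
;; The suppressed piece can still be evaluated later, on demand:
(eval (caddr `(1 ,(+ 1 1) (+ 2 2))) (make-base-namespace))  ; => 4
```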

> In practice, I have to think about evaluation order in lambda-calculus too, but since lambda-terms have a well-defined semantics independent of evaluation, I can think about that after I've written down a program.

Interesting. So what's your opinion on ML? :) (Since eager evaluation may diverge.)
 
I like the continuation duality approach because it exploits classical logic, which is something everybody already knows, and because it's described in a way that connects it to other important concepts like premonoidal categories. It's situated in the mathematical universe, so I can see how it relates to other things, and that makes it easier for me to evaluate it, understand its limitations and applicability, see why it works, pick it apart, and maybe apply some of the ideas in other contexts.

In contrast, Kernel's approach is completely syntactic, and seems (to me) to spring out of a vacuum. I don't understand why it works, or how it relates to other things, or what its limitations are or what new class of programs it allows me to write. This might be because I'm not smart enough to see these things, or because I just need to wait for more research to be done. But to me it seems more like a solution in search of a problem than more "semantic-oriented" approaches.

As for ML, I like it. I understand the pros and cons of CBV compared with CBN, at least in the absence of side effects. I know what you're getting at: CBV diverges more often than CBN when you add recursion. But honestly it is more an annoyance than a critical failing. Writing down what I'm thinking is less straightforward than I'd like sometimes, and more straightforward other times.
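
To spell out the divergence point with a toy example (in Racket, which is eager like ML; the names are made up for illustration):

```racket
#lang racket
(define (loop) (loop))       ; diverges when applied
(define (ignore x) 0)        ; never uses its argument

;; Call-by-value evaluates the argument first, so this diverges:
;; (ignore (loop))
;; Thunking the argument recovers the call-by-name behavior:
(ignore (lambda () (loop)))  ; => 0
```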

I'm not married to CBV or CBN or call-by-need. I would like to see more flexibility there, but I doubt we will ever find a solution that does away with evaluation order entirely. When you design a new language, you can focus on fiddling with the evaluation order or you can focus on static aspects, such as more powerful forms of polymorphism. Or you can try to do both, which, at first blush, seems overly ambitious.
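
For what it's worth, promises already give a local simulation of call-by-need in Racket; memoization is what separates it from a plain call-by-name thunk:

```racket
#lang racket
;; delay/force give call-by-need locally: the body runs at most
;; once, and the result is cached for later forces.
(define p (delay (begin (displayln "computed") 42)))
(force p)  ; prints "computed", => 42
(force p)  ; => 42 again, with no second print
```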

But I don't think these two paths are entirely orthogonal. Look at the relation between continuations and classical logic: it affects both evaluation order and typing, dynamics and statics. I guess that's another reason I find it more interesting. My position has always been that typing is not just about decorating terms of untyped lambda-calculus. I'm not a fan of the type erasure paradigm.
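
The Curry-Howard reading I have in mind: the type of call/cc, ((A → B) → A) → A, is Peirce's law, which is precisely what separates classical from intuitionistic logic. A throwaway example in Racket:

```racket
#lang racket
;; call/cc inhabits Peirce's law ((A -> B) -> A) -> A; invoking
;; the captured continuation k is the "classical" jump:
(+ 1 (call/cc
      (lambda (k)
        (k 41)                 ; escape with 41
        (error "unreachable"))))
;; => 42
```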
 
+Kim Krecht, +Frank Atanassow: disclaimer: I was on John's thesis committee, so I am positively inclined towards the work.

What is interesting here is that someone took fexprs seriously and tried to make them make sense.  In the process, he offered one way in which core languages could be built.  Yeah, syntax isn't important, except it is, because it's the only mechanism we have for building the things we actually want to build.

Personally, I think fexprs are a misfeature, precisely for the reasons Frank states (they conflate what I am very happy to have as separated concerns).  Systems like Racket's macro facility let you have the conflation in a principled way, with proper phase separation and explicit piercing.  I use these daily, and they work well.  I wouldn't want to use Kernel.  But Kernel does offer one answer to the question, "what if <some core construct> really were a function-like thing?  How would that work?"
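
A tiny sketch of the principled version I mean (the macro name is made up, and Racket of course already provides delay): the transformer runs at expansion time, in its own phase, yet the argument still arrives un-evaluated.

```racket
#lang racket
;; A hygienic macro: the transformer runs at expansion time (its
;; own phase), and `expr` is spliced into the output un-evaluated,
;; to be run only when the thunk is applied at run time.
(define-syntax-rule (my-delay expr)
  (lambda () expr))

(define t (my-delay (begin (displayln "running") 42)))
(t)  ; prints "running" and returns 42, only now
```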

While I don't think anyone would build a real language this way, for the purposes of modeling certain things, it could be attractive.  Those of us who are in the mud wrestling with the alligator called eval, for instance, could use all the help we can get.

Also, sometimes science needs people willing to stand up and take the unpopular seriously, and explore it thoroughly.  Given the heavy groupthink that surrounds the "core" PL community, all the more reason.  In another post Frank talks about the Rule of Three and whether it's religion.  The same could be said for the things Frank holds dear (static types, lazy evaluation and, good heavens, even phase separation!). (-:
 
Hah, touché!

Seriously, though, I would argue that those things which I hold dear are worlds away from religion or the so-called Rule of Three. In fact, just in my reply above I wrote that I'm not married to CBV, CBN or CBNeed. Static typing I'm much more serious about, but largely because it's the most well-developed and successful approach to phase separation and because it works so nicely with categorical semantics.

I admit it would be hard to abandon phase separation itself. I don't feel too guilty about that, though, since I think both the workflow of programmers and the (configure-)install-run(-uninstall) lifecycle of an application exhibit phases, so I see phase separation as a fundamental feature of our problem domain. If we were talking about interactive programming of hot-swappable modules, I might reconsider phase separation... very begrudgingly.

To me what's more important than all of these things is simply the principle of formal reasoning about programs. If more dynamically typed languages were designed that way, I could get on board with more of them. That's why Scheme and Racket remain my favorites in that family and languages like Perl and PHP will always give me the heebie-jeebies.
 
+Frank Atanassow, I'm fundamentally with you.

However, the more I explore scripty things, the more I wonder if the things we hold dear are somehow despite reality. For instance, there's an awful lot of eval-like behavior going on; we just ignore it, or don't build such systems, or work around it. There's some amount of hot-swapping going on. In the most extreme cases there's self-modifying program surgery in progress.

Some of it is unjustified and done out of ignorance (give programmers a hacksaw and they will ignore the bread knife), but some of it is not. It is often because the language did not provide a more principled mechanism for doing what really was necessary.

By engaging in and understanding these problems, maybe we can (a) separate intent from expression, then (b) study the intents, and then (c) come up with better solutions. That's why, despite my training and instincts, I'm trying to stay open-minded to some of these topics.

Of course, sometimes you just have to do this:

http://wheningit.tumblr.com/post/32830565073/when-someone-submits-an-ie6-github-issue