Arnaud Spiwack's posts

Post has attachment
The very heavily revised version of our paper on extending GHC with linear types has been submitted.

We do believe this version makes its point with much more clarity, though we had to drop the discussion of some of the topics that got us interested in doing this in the first place.

You will find it here: https://github.com/tweag/linear-types/releases/tag/v2.0

The implementation I've been working on for the past few months is ready to play with as well. (It has known limitations and bugs, which I expect will be ironed out in the coming months, but there is already quite a bit of fun to be had; see the paper.) The sources, together with a README containing instructions for using the branch, can be found here: https://github.com/tweag/ghc/tree/linear-types

Post has attachment
I've mentioned before that there are things we don't know how to measure. That doesn't mean we should stop trying (quite the contrary), but it does mean we shouldn't expect too much out of measurement (and certainly shouldn't read too much into existing studies).

Here is the invaluable Dan Luu on the subject
- https://danluu.com/keyboard-v-mouse/
- https://danluu.com/empirical-pl/

[This last one is about static vs dynamic typing. He concludes with a call for more studies, and remarks that econometric studies manage to get interesting results with even worse data. Econometric methods are probably the first place to look for statistical approaches that could tame something like the qualities of programming languages. That being said, it is best to keep in mind that econometric results tend to be rather fragile.]

Post has shared content
I haven't read the accompanying article, but +Cody Roux's thoughts echo my own on the subject (the subject in question being formal verification: that it is hard, and that it is important).
Time for my bi-annual angry rant!

Here's something really interesting:

https://pron.github.io/posts/correctness-and-complexity

It's well written, contains cute and informative graphics, and a lot of correct theorems about computer science.

It also reaches a conclusion which I think is, in large parts, entirely incorrect.

That conclusion is roughly:
It is impossibly hard to try and mathematically verify software, and no amount of research or human effort is ever going to make verification feasible. Our only solution is to turn computer science away from its mathematical roots and into empirical studies of how "real programmers" program in the "real world", and to try to cope with that as best as possible. Liberal use of try/catch blocks is also recommended.

Presumably, PL researchers should ponder on this on the way to the unemployment office.



It's a bit bemusing to see such a reasonable overview of the mathematics of program verification accompanied by such a strong rejection of its conclusion.

I'll try to summarize some of the arguments as I understand them, and explain why I think that they are wrong or just plain nonsensical (or perhaps, agree with some of them).

Major disclaimer: obviously, my complaints can be perceived as a knee-jerk reaction to what is clearly an attack on my entire field of study; namely the pursuit of developing tools for absolute mathematical correctness of programs. This is absolutely true, but I would argue that my criticism of the arguments should nevertheless be taken seriously, as they are made by an insider.

The argument starts by taking umbrage at the claim, made by some, that mathematical software verification is an easy task. He gives a source for this claim, and adds that "it is far from the only one". Fair enough. I personally think that software verification is very hard, and my guess is that most of the PL research and software verification communities would tend to agree. But at any rate, the claim does exist somewhere.

He then gives an overview of computability theory, culminating in the famous Turing and Rice impossibility theorems, and of the open questions in complexity theory, which nevertheless give us a strong expectation that SAT and other verification problems are very hard indeed. QED, verification is mathematically impossible.

The rest of the article addresses the obvious rejoinder: "verifying actual programs is not as hard as solving general questions about computable functions".

He decomposes this claim into parts in order to refute them in turn. Ironically, the first part of the claim is that real-world programs are compositional: they are built up of much smaller parts put together in somewhat simple ways, so program verification is only as hard as verifying each individual component and verifying that they compose correctly. He asserts that verification cannot be decomposed in this manner, and that at any rate even tiny components cannot be verified.

His argument for this first point seems really weak. He bases it on an analogy with model-checking of a "succinct representation", which turns out to be an intractable mathematical problem for a precise definition of "succinct representation". While he cites actual no-go theorems from the actual program verification literature, I'm not sure why he equates compositional programs with programs that have succinct representations.

He also shows (a bit later) some huge and scary-looking state-space graphs from model checkers, which apparently come from relatively small programs. He doesn't really note that these programs are extremely non-trivial, or indeed that these diagrams likely come from success stories in verifying those programs, small as they may be.

Now comes the weirdest part of the argument: the argument against types. He reminds us of Rice's theorem, which states that no non-trivial property of general recursive functions is decidable, and contrasts this with the ability of a language like Haskell to assert something about certain Haskell functions, namely that they must return an Int. In his words:

"Yet, not only is this possible but it takes very little effort to ensure that, say, a C, or Java, or Haskell program returns an int. How is that not a contradiction with the theorem? [...]

The real answer must directly address Rice’s theorem: either the property is trivial, or what’s computing the partial function is not a TM.

In the case of an int return type, what it really means is that either the program never terminates or returns an integer. This is a trivial property, as every program either never terminates or returns something that can be interpreted as an integer. Not even a single bit of information is known about the result."

What? How a level-headed person who seems to understand non-trivial facts about computability theory could make such a claim mystifies me. He then decides to hedge his claim a little bit:

"Now, some of you would immediately recognize that this is not always the case. In Haskell, for example, you can relatively easily show that a program’s result is a string of numeric digits. This specifies lots of bits, restricting every byte of output to 10 possible outcomes out of 256. So how is this not a violation of Rice’s theorem? Well, because that property is not generated by a Turing machine, but by a finite state machine, as I’ll now show. But first, we must take a detour to explore some other verification methods."

What? Does he mean that if I returned a string of bits instead of decimal digits it would take me five times longer to type-check the program? I am at a complete loss to understand what he is claiming here. Later on there is an argument about how type-checking Haskell is hard, and even undecidable if your types are nasty enough. I guess so?

He wraps up by citing the No Silver Bullet article, and various studies about programmer errors, with the claim that many could be avoided with better try/catch blocks and just thinking harder about correctness. Sheesh.

He concludes: "Computer science is the very discipline that proved that essential complexity can arise even in the smallest of systems. Yet sometimes it is computer scientists and software developers who attempt to challenge the very foundation of their own discipline. Complexity is essential. It cannot be tamed, and there is no one big answer. The best we can do — in society as in computing — is apply lots of ad hoc techniques, and, of course, to try to learn about the particular nature of problems as much as possible."

This seems like a non-answer to me. That the bleeding edge of scientific research deals with difficult problems doesn't seem crazy. That designing error-free million-line programs is difficult seems equally unsurprising. But throwing up our hands in despair and saying something like "the world is just a complicated place, man" seems like foolishness.

There's a lot to be said about the compositionality of program verification. In many cases, verification of programs can be made compositional, and nowhere is this more apparent than in the field of PL. But I should save the positive stuff for a more positive post.

/rant




Milner's tactic & lenses

Here is a short tidbit for tonight. Remember the type of Milner's tactics:

tactic = goal -> (goal list * (proof list -> proof))

It is an instance of a lens. A curious one, granted, with strong update (the focus changes type: goals become proofs). It means we can give Milner's tactics the Van Laarhoven treatment [1]:

tactic = forall f. Functor f => ([goal] -> f [proof]) -> goal -> f proof

It makes it easier to subject tactics to all kinds of torturous manipulations.
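
To make the correspondence concrete, here is a small Haskell transcription. This is a sketch only: Goal and Proof are placeholder types, and MilnerTactic, Tactic, toVanLaarhoven and fromVanLaarhoven are names made up for the example. The conversions are exactly the usual lens derivations, using the Const and reader functors.

{-# LANGUAGE RankNTypes #-}

import Data.Functor.Const (Const (..))

data Goal  = Goal    -- placeholder
data Proof = Proof   -- placeholder

-- Milner style: split a goal into subgoals, plus a validation function
-- assembling subproofs into a proof of the original goal.
type MilnerTactic = Goal -> ([Goal], [Proof] -> Proof)

-- Van Laarhoven style: a (type-changing) lens from Goal to Proof whose
-- focus is the list of subgoals / subproofs.
type Tactic = forall f. Functor f => ([Goal] -> f [Proof]) -> Goal -> f Proof

toVanLaarhoven :: MilnerTactic -> Tactic
toVanLaarhoven tac k goal =
  let (subgoals, validate) = tac goal
  in validate <$> k subgoals

fromVanLaarhoven :: Tactic -> MilnerTactic
fromVanLaarhoven tac goal =
  ( getConst (tac Const goal)   -- the Const functor recovers the subgoals
  , tac (const id) goal         -- the reader functor recovers the validation
  )

The forall f is what buys the manipulations: instantiating f with Const gives back the subgoals, the reader functor gives back the validation function, and something like IO lets you run an effectful goal-solver over the subgoals and get a proof of the original goal out.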

Not sure if it's exploitable. It's kind of fun, though.

[1] https://www.stackage.org/package/lens

On the writer monad

Something came up the other day.

It has been folklore for a while that the writer monad is leaky. Part of the problem, in the case of Haskell, is laziness (if you are not careful, nothing ever forces the output value). But even without laziness, the memory issue is just as dire.

The issue is that, unlike most monads, the writer monad is more efficient when binds associate to the left [ http://blog.infinitenegativeutility.com/2016/7/writer-monads-and-space-leaks ]. But do-notation style programming (a bind followed by an explicit λ-abstraction) naturally associates to the right, which is bad. And even for the point-free minded among us, it is difficult to get binds to associate to the left, because most combinators (think iteration, like Haskell's forM) are written for monads that associate well to the right, and will exhibit the unpleasant behaviour.

But there is something less well known, though related: the writer monad doesn't preserve tail calls. That is, the call to f in x >>= f is not a tail call. In particular, there is no such thing as a tail-recursive function in the writer monad. That's rather bad.

So yes, if you need a writer monad, implement it with a state monad.
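
For concreteness, here is a minimal sketch of the state-based version in Haskell, assuming mtl/transformers' Control.Monad.State.Strict (the names tellS and runWriterAsState are made up for the example):

import Control.Monad.State.Strict

type WriterAsState w = State w

-- Log by appending to the accumulator; modify' keeps the state forced.
tellS :: Monoid w => w -> WriterAsState w ()
tellS w = modify' (<> w)

-- Run with an empty log and return the result together with the log.
runWriterAsState :: Monoid w => WriterAsState w a -> (a, w)
runWriterAsState m = runState m mempty

For a list log you would typically also accumulate a difference list or a builder rather than appending directly, but that is orthogonal to the point about the monad's shape.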

Or, if you, like me, like silly solutions, use the Church-encoded version:

type ('w, 'a) writer = { run : 'r. ('w -> 'a -> 'r) -> 'r }

Just make sure that the function space ('w -> …) is strict.
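
For the Haskell-inclined, the same encoding looks roughly like this (a sketch, not a drop-in replacement for Control.Monad.Writer; the bang patterns play the role of the strict function space, and the call to f in >>= ends up in tail position inside the continuation):

{-# LANGUAGE RankNTypes, BangPatterns #-}

newtype CWriter w a = CWriter { runCWriter :: forall r. (w -> a -> r) -> r }

instance Functor (CWriter w) where
  fmap g m = CWriter (\k -> runCWriter m (\w a -> k w (g a)))

instance Monoid w => Applicative (CWriter w) where
  pure a = CWriter (\k -> k mempty a)
  mf <*> ma = CWriter (\k ->
    runCWriter mf (\ !w g ->
      runCWriter ma (\ !w' a -> k (w <> w') (g a))))

instance Monoid w => Monad (CWriter w) where
  m >>= f = CWriter (\k ->
    runCWriter m (\ !w a ->
      -- the call to `f a` sits in tail position inside this continuation
      runCWriter (f a) (\ !w' b -> k (w <> w') b)))

tell :: w -> CWriter w ()
tell w = CWriter (\k -> k w ())

runWriter' :: CWriter w a -> (a, w)
runWriter' m = runCWriter m (\w a -> (a, w))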

Post has attachment
Where I discovered that Midi Art was a thing in the most pleasant of fashions.

Post has shared content
Academic publishing, as lived by someone who hasn't had time to normalise the weirdness.
Academic publishing for dummies

I don’t have much experience of academic publishing. I co-authored two papers with +Dan Christensen and +John Baez that appeared in Classical and Quantum Gravity, but my collaborators handled the submission process then, and it seemed to go fairly smoothly.

Recently, though, I wrote and submitted my first solo paper, and I’m finding the process rather strange and dispiriting.

Back in May 2015, The Astrophysical Journal published a paper which made some claims that would have been both extraordinary and exciting if they were true. I’m not going to recap all the problems with the paper here; anyone interested can read about them on this web page: http://www.gregegan.net/SCIENCE/NoCorkscrews/NoCorkscrews.html

After reading this paper carefully and corresponding with the author, I was fairly sure that the novel claims it made were incorrect, and that this could be shown with some quite elementary arguments. (The paper itself also contained several algebraic errors, which I raised with the author by email; he denied they existed, then corrected some of them in an erratum, but in the process introduced some fresh mistakes.)

So, in October 2015 I wrote a very short paper rebutting the claims of the original ApJ paper, placed it on the arXiv, and submitted it to ApJ.

The editor seemed receptive, replying:

When we have a submission which is entirely a criticism of a previous paper we are usually reluctant to publish. However, in this case we may have been responsible for introducing a mistake into the literature and I'm inclined to make an exception.

So far, so good! And in due course, in March 2016, I received the following referee’s report:

The paper in question, essentially a rebuttal of an earlier paper on a topic in the restricted three body problem, has been reviewed. From what I can gather I believe that the criticism is true and the analysis is correct - M is not conserved in the rotating problem. This in turn seems to throw a significant portion of the Oks paper into sincere doubt. In fact, after reviewing that paper I see that the Oks paper is, indeed, mistaken when it comes to analyzing these orbits in the rotating problem. That is a viable and well supported conclusion from this paper.

The final statement that the author makes in the final paragraph of the paper, however, is not true and should be revised or removed. For a particle started on an Oks' orbit about the gravitational balance point the motion will not remain fixed in space but will also experience torques that rotate the orbit in inertial space. Indeed, the orbital dynamics of a particle started in such a configuration will probably be quite chaotic and not even complete one or two orbits.

So, the referee believed that the original paper was wrong and that my arguments against it were sound. His or her only complaint was that I’d over-reached in the last paragraph of the paper, where it sounded as if I was making a claim for the long-term behaviour of the kind of orbit under discussion. That wasn’t my intended meaning, so I was happy to remove the sentence in question.

I resubmitted the paper, and waited.

After a month, I’d heard nothing, so I wondered if I’d somehow pushed a wrong button on the journal’s web submission form. I wrote to the editor, to check that nothing had fallen through the cracks. I received no reply.

After six months, it really did seem strange that the paper had neither been accepted nor any further problems raised with me. I wrote to the editor again. I received no reply.

After seven months, I wondered if instead of emailing the editor in the normal way, I ought to be using the “Send Manuscript Correspondence” link on the journal’s web site page for the submission. So I tried again, using that form. I received no reply.

On 6 April 2017, I wrote once more to the editor:

It has now been more than a year since I submitted a revised version of this manuscript, which I believe addressed the referee’s sole objection to the original version.

I must confess that I am completely mystified as to why the paper has not either been accepted, or rejected, or some feedback provided as to what further improvements are required.

I understand that you must be extremely busy and that this paper deals with a relatively minor matter, but if you could spare a moment to let me know what the prospects are for this paper ever being published in your journal, and what, if anything, is required from me to enable this to happen, I would be very grateful.

And so far, I have still received no reply. The page for the submission on the journal’s web site still shows its status as “Under Review.”

So, to anyone reading this with experience of the world of academic publishing: Is this normal? Should I just wait patiently for a decision? Or should I stop kidding myself that this journal is ever going to publish the paper, and consider submitting it elsewhere?

It doesn’t look as if anyone (beyond a few popular science journalists) has taken the original paper seriously: as of April 2017, the only citation it has on the NASA ADS database is my own arXiv paper rebutting it, though the author himself cites it in a follow-up paper (in a journal of atomic and molecular physics) that claims to extend the results to the relativistic domain. So it’s not as if any large community of astrophysicists has been led astray.

Still, you’d think ApJ would want to correct the record, and up to a point they seemed to be showing every indication that they were prepared to do so.


Post has attachment
On Haskell+linear types

Linear typing is cool; I guess I have been pursuing this particular quest for a while. I always assumed that adding linear types to an existing language was not possible: we would need to add the ! modality everywhere. There is a certain elegance to this which I must confess to liking, but it would probably not convince enough programmers to make a successful programming language. Making a new programming language is quite suboptimal anyway: all these sweet, sweet libraries of loveliness would need to be rewritten.

A few months ago +Jean-Philippe Bernardy convinced me that it is actually possible to extend an existing language with linear types with no disruption (linear types stay purely opt-in). This is a big deal. Really. And since linear types have many applications, a handful of which are directly relevant to some of the projects we have at Tweag I/O, we decided to work on actually extending GHC with linear types (first GHC, then the world? OCaml?).

We iterated through a few designs, wrote a prototype extension of GHC, then wrote a paper with Simon Peyton Jones and Ryan Newton. We summed all of this up in a blog post; check it out:

http://blog.tweag.io/posts/2017-03-13-linear-types.html


Post has attachment
A take on the STG

A few weeks ago, a friend asked me to explain the STG (Spineless Tagless G-machine), which used to be the basis of GHC's code emission. So I wrote a short OCaml implementation of an abstract machine in which I illustrate the basic mechanism.

I don't have much to add beyond the comments you can find there, because the hallmark of the STG is that it is very simple (in order to facilitate translation to machine code). Nowadays, GHC uses something much more elaborate, as performance bottlenecks were discovered and fixed.

A small STG-like lazy abstract machine. What I take from the STG is that constructors are responsible for choosing a branch in a pattern-matching (the T in STG) and for updating themselves (by pushing an update frame). Apart from that it's not very different from a simple KAM. Note that the current implementation in GHC is not actually tagless anymore, as that proved to be slower than a tagged mechanism; I stick to the tagless design here since it is simpler (and the difference doesn't matter much unless actual machine code is emitted).
To capture the essence of the abstract machine I use a λ-calculus with pairs and sums. I don't use a dedicated subset with [let] forms expressing sharing: like many of the details of the actual STG, it is mostly relevant when emitting machine code.
To represent sharing (a.k.a. the heap) I use OCaml's heap and update references.
References (the STG paper):
- https://www.cambridge.org/core/journals/journal-of-functional-programming/article/div-classtitleimplementing-lazy-functional-languages-on-stock-hardware-the-spineless-tagless-g-machinediv/354FFB29102309CCD2A3824F894A2799
- http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.53.3729&rep=rep1&type=pdf
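
For readers who would rather not click through, here is a much-simplified sketch of this kind of machine, written in Haskell rather than OCaml. It is not the code from the gist, every name in it is made up for the illustration, and it drops everything that matters for code emission. Mutable cells stand in for the heap: entering a suspended cell pushes an update frame, and a constructor returned to a pending case frame selects the branch.

{-# LANGUAGE LambdaCase #-}

import Data.IORef

-- λ-calculus with de Bruijn indices, plus a one-field binary constructor
-- and its case analysis, so that there is something for branch selection
-- to act on.
data Term = Var Int | Lam Term | App Term Term
          | Con Bool Term           -- tagged value with one (lazy) field
          | Case Term Term Term     -- scrutinee, False branch, True branch

data Value = VClosure Term Env | VCon Bool (IORef Thunk)
data Thunk = Evaluated Value | Suspended Term Env
type Env   = [IORef Thunk]

data Frame = Apply  (IORef Thunk)   -- pending argument
           | Select Term Term Env   -- pending case branches
           | Update (IORef Thunk)   -- cell to overwrite with the result

eval :: Term -> Env -> [Frame] -> IO Value
eval (App f a) env stack = do
  arg <- newIORef (Suspended a env)           -- arguments are heap-allocated thunks
  eval f env (Apply arg : stack)
eval (Case s l r) env stack = eval s env (Select l r env : stack)
eval (Lam b) env stack = ret (VClosure b env) stack
eval (Con t a) env stack = do
  field <- newIORef (Suspended a env)
  ret (VCon t field) stack
eval (Var i) env stack = do
  let cell = env !! i
  readIORef cell >>= \case
    Evaluated v      -> ret v stack                       -- already computed
    Suspended t env' -> eval t env' (Update cell : stack) -- enter, and remember to update

-- What a value does depends on the frame it returns to: this is where the
-- "choose a branch" and "update yourself" responsibilities live.
ret :: Value -> [Frame] -> IO Value
ret v [] = pure v
ret v (Update cell : stack) = do
  writeIORef cell (Evaluated v)               -- overwrite the thunk with its value
  ret v stack
ret (VClosure b env) (Apply arg : stack) = eval b (arg : env) stack
ret (VCon tag field) (Select l r env : stack) =
  -- the branch binds the constructor's field at de Bruijn index 0
  eval (if tag then r else l) (field : env) stack
ret _ _ = error "ill-typed program"

The dispatch in ret is exactly the part the real STG treats differently: there, the value's own code (or, nowadays, its tag) does that job, which is precisely what this sketch glosses over.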