Hi everyone! Hope you're all having a delightful spring (or autumn, for our followers down under)!

We wrapped up PFPL a few weeks ago with our traditional End Of Book ceremony, complete with a modicum of pomp, a hogshead of circumstance, and an immensely lovely conversation with the book's author, Robert Harper. We've rebooted the paper reading path of our journey through all things computery, and we'll be continuing that with Out of the Tar Pit this Friday. Keep up with our schedule here: https://github.com/CompSciCabal/SMRTYPRTY/wiki/Reading-Schedule

We're also super excited to announce that the SICP reading group is starting again! There are only 30 seats available, but if you don't make it into the first session don't despair -- keep trying and you'll definitely get in within the first few weeks. It's really going to be all that and a bag of chips. Ahoy, SICP! https://www.meetup.com/SICP-TO/

If you're on this list but not on our slack channel you should join us there, it's fun! There's like way more gifs and things. And if you're on our slack channel but not this list, then you should join us here too! Ping me for details on synchronizing in either direction.

Keep making neat things and thinking great thoughts!

Dann

Hey everyone!
Friday night we finished off the rest of PFPL 46: Equational Reasoning in System T, digging into the definitions some more and trying to understand how observational and logical equivalence complement each other.

Next week the group will continue with PFPL 47: Equational Reasoning in PCF, otherwise known as "Coming to Grips with Fix". The time and place will be the same as usual: Gamma Space at 6:30pm on Friday. Unfortunately I won't be making it for the remainder of the book as I'm away on business, but I'll be trying to keep up with the reading and with the discussion over slack.

I really enjoyed the discussion this week and feel like it helped me understand the concepts of observational and logical equivalence. In some sense, they are intuitive in that we want to classify expressions into equivalence classes where expressions that always give the same result can be considered "equal".

- Observational equivalence expresses this in a top-down kind of way: it says that two expressions are equivalent when, over all possible programs with holes in them, the two are indistinguishable from one another based on the program output.
- Logical equivalence comes from the other direction: we start with some simple equivalences at the base type and build up to full expressions, type by type. The surprising part is that the two coincide!

This led us to the following high-level interpretation of this chapter: observational equivalence is a highly desirable property of a language, but it's hard to work with directly -- how does one do computations over all possible programs? Logical equivalence at first sight seems less powerful, but is much easier to work with. Since the two happily coincide, we can establish results about observational equivalence while working only with logical equivalence relations.
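
To make that concrete, here is a rough sketch in OCaml rather than in System T proper; `iter` is my stand-in for T's primitive recursor, with `int` standing in for numerals.

```
(* two definitions of addition via explicit primitive recursion *)
let rec iter (z : int) (s : int -> int) (n : int) : int =
  if n = 0 then z else s (iter z s (n - 1))

let plus1 m n = iter n (fun r -> r + 1) m   (* recurs on the first argument *)
let plus2 m n = iter m (fun r -> r + 1) n   (* recurs on the second argument *)
```

`plus1` and `plus2` agree on every pair of numerals, which is all logical equivalence asks at this type; showing observational equivalence directly would mean reasoning about every program context that could surround them.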

This somewhat involved and technical work with the equivalences gives us, at the end of the chapter, our seemingly intuitive laws of equality in System T, but now on a sound footing.

Induction anyone?
Til next time,
Scott

T is for Types, not Sipping

Hey everyone,

Friday night we discussed PFPL 46: Equality in System T. For those just joining in, System T is Gödel's famous language, which he used in his Dialectica interpretation to prove the consistency of arithmetic (not to be confused with his yet more famous Incompleteness Theorems).

Next week we will be moving on to equality in richer climes in PFPL 47: Equality in PCF, meeting at the usual time and place, Friday at 6:30 at Gamma Space.

Equality, like most obvious things, is distressingly complex and difficult to define. I love this example from the opening of Girard's book, Proofs and Types:

> It is an abuse (and this is not cheap philosophy -- it is a concrete question) to say that `27 x 37` equals `999`, since if the two things we have were the same then we would never feel the need to state their equality. Concretely we ask a question, `27 x 37`, and get an answer, `999`.

This observation starts to unpack what it means when you write an equation on a blackboard. What rules are we implicitly applying in our minds to make sense of these lines and squiggles? When confronted with the requirement of a clear definition, pinning down precise rules becomes quite a challenge, and it leads to the kinds of equational equivalence presented in this chapter.

In the case of functions, we use logical equivalence: when two functions evaluate to equal results on equal inputs, we call them logically equivalent.

The opposing way of looking at this is from the perspective of a program with a hole in it, waiting for an expression. When, for all possible programs with holes (holy programs?), two expressions appear to be the same, we call them observationally equivalent.

It's remarkable that even in the context of System T, a total language where every program terminates, comparing two expressions seems to transform into something akin to a scientific process, where you have to perform experiments in order to validate your hypotheses.
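
As a loose illustration (OCaml again, and the names are mine): a "program with a hole" can be modelled as a function from the candidate expression to an observable answer, so each context is one experiment.

```
(* one experimental context: plug in a candidate, observe a boolean *)
let context (hole : int -> int) : bool = hole 3 = 6

let e1 n = n + n
let e2 n = 2 * n
(* context e1 and context e2 both yield true; observational equivalence
   is the claim that every such context agrees *)
```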

All for now, happy computing,
Scott

Hey everyone,
We were short a couple of regulars but still had enough to make quorum. We started on the modules chapter, PFPL 44, and Huma showed us an example dictionary implementation like the one in the text, built with C++ concepts. Very neat!

This was the last meeting of the year, so everybody polish your modules over the holidays and we will continue with chapter 44 again in the new year, Friday Jan 6th, 6:30pm at Gamma Space.

This chapter is all about the modules, and by modules I mean interfaces, and by interfaces I mean one of the hardest things to get right in software. Even though modules may not help you choose a good API for your users, they will give you a language for selectively hiding or exposing implementation details across software interfaces. `Signatures` are introduced as a new construct and they describe a module boundary. They can be fully abstract interfaces like in chapter 17 or they can act as partial views into an implementation, called type classes.

For instance, if you want to distribute a sorting function, `sort(keys, values)`, you might worry about:

1. how to make it work for different types of keys?
2. how to allow users to swap the compare function without hurting efficiency?
3. how to keep users from depending on details of your implementation?

The last one we saw earlier, in chapter 17: we can use type abstraction to hide all of the implementation details behind an abstract interface. Similarly, you could use type abstraction to address the first worry as well, by making your `sort` function depend on abstract `Key`, `Value`, and compare modules. This can work quite well; in fact, the `qsort` function in the C standard library has an interface like this, where you pass in pointers to your data and a custom compare function.
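
Here is a rough OCaml sketch of the fully abstract approach; the module names are illustrative, not from the text. The sort sees keys only through an opaque type and its compare function.

```
(* the signature: an abstract key type plus a comparison *)
module type KEY = sig
  type t
  val compare : t -> t -> int
end

(* a functor over any KEY; a tiny insertion sort keeps the example
   short, and only K.compare is ever applied to keys *)
module MakeSort (K : KEY) = struct
  let rec insert k = function
    | [] -> [k]
    | x :: xs ->
      if K.compare k x <= 0 then k :: x :: xs else x :: insert k xs
  let sort keys = List.fold_left (fun acc k -> insert k acc) [] keys
end

(* usage *)
module IntKey = struct type t = int let compare = compare end
module IntSort = MakeSort (IntKey)
let sorted = IntSort.sort [3; 1; 2]   (* [1; 2; 3] *)
```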

Where this approach can fall short is that, as the implementor of the `sort` function, you can't make any optimizations based on the type of the keys or the choice of compare function. For sorting performance this can be pretty important: if you are sorting integers, for instance, it is often better to use a linear-time radix sort than quicksort or something else. To do this you could use type classes, which would let you take advantage of some implementation aspects of the `Key` type and comparison function.
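
Continuing the sketch above, a partially transparent signature captures the "type class" flavour: because the key type is revealed to be `int`, the implementation may bypass `compare` entirely. The counting sort below assumes non-negative keys, purely for brevity.

```
(* K witnesses that keys really are ints, so a linear-time
   counting/radix-style sort is fair game *)
module MakeIntSort (K : KEY with type t = int) = struct
  let sort keys =
    match keys with
    | [] -> []
    | _ ->
      let hi = List.fold_left max 0 keys in
      let counts = Array.make (hi + 1) 0 in
      List.iter (fun k -> counts.(k) <- counts.(k) + 1) keys;
      let out = ref [] in
      for k = hi downto 0 do
        for _ = 1 to counts.(k) do out := k :: !out done
      done;
      !out
end
```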

Ok all for now, happy holidays, until next time!
Scott

Hey everyone!
On Friday, round 2 of PFPL 43 on Singleton and Dependent Kinds went off famously, and next we move on to chapter 44, for realz this time. Meeting at the usual time and place, 6:30pm at Gamma Space on Friday Dec 9th.

We had a longer than usual off-topic welcoming discussion that veered into Acadian history and the social organization of early North American colonies. With the arrival of Leo, though, we swung back on topic, into Singleton Kinds just after the equivalences in 43.2 but before 43.3, where we went through the illustrative examples. We discussed subkinding quite a bit, in particular the relation expressed in 43.3, and I'm not sure we ever landed on a consensus about how best to interpret the meaning of the subkinding relation. We are planning to revisit the question after seeing more applications in the next chapter.

Dependent Kinds at first seemed like a natural extension of the previous section; however, as we discussed them, we identified several differences that we hadn't noticed at first, for instance the covariance and contravariance relationships of the product and function kinds. We were able to put together some simple examples applying the different constructs, but I don't think we ever hit on one that truly needed the full power of Dependent Kinds, so we will have to keep working on coming up with examples.

Near the end, Ben brought up the paper [The five stages of accepting constructive mathematics](http://math.andrej.com/2016/10/10/five-stages-of-accepting-constructive-mathematics/), which is worth taking a look at; as for me, I think I'm out of denial and into bargaining...

Until next time,
Scott

Hello fellow Cabalers!

Thanks for an insightful intro to higher kinds on Friday, we were ostensibly reading chapter 43 on Singleton Kinds, but we spent a lot of time on Chapter 18 on Higher Kinds as well. The refresher was welcome since 25 chapters is a long time to wait for an encore.

Next week we continue with chapter 44, which is a big one, type abstractions and type classes. Meeting as usual on Friday at 6:30pm at Bento Miso/Gamma Space.

The terminology and the interplay between kinds and types provided us with a vast trove of confusion, but in spite of this I felt like I learned some things. The first was simply why we have kinds in the first place -- why not just more types?

This can be motivated by the desire to express a concept independent of the underlying data type. We can employ type constructors to make new types out of old; e.g. a list data structure `list<T>` should work for any element type `T`. This is a pretty common concept that people use quite frequently in many languages, but it raises the question: what is `list<>` before the `T` has been supplied? Since a type constructor is not a type, we need a new concept -- hence kinds.
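
In kind-speak, `list` on its own is a constructor of kind `Type -> Type`. As a loose OCaml sketch (the signature names are mine), a module type can abstract over any such one-argument constructor:

```
module type MAPPABLE = sig
  type 'a t                              (* a constructor: Type -> Type *)
  val map : ('a -> 'b) -> 'a t -> 'b t
end

module ListMappable : MAPPABLE with type 'a t = 'a list = struct
  type 'a t = 'a list
  let map = List.map
end
```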

Type constructors and type abstraction are important for providing polymorphic functions and data structures, like the list above, and also for modularity and packaging of code. We saw last week that an API is a type signature selected as a hard boundary between sections of otherwise independent code, and type abstraction matters here because it lets you supply implementations separately from the API proper.

Singleton Kinds, introduced in 43, are a bit harder to motivate, but what they provide is a means of giving an identity to a kind, such that we can carry information about a type with us to do type-checking even through type abstraction boundaries. The full import of this is still a bit hazy to me, but I think it is important for the packaging and module approaches we will look at in upcoming chapters.
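
A loose analogy that may help (OCaml, names mine): a manifest type in a signature plays roughly the role a singleton kind plays in PFPL, letting the identity of a type flow through an abstraction boundary.

```
module type ABSTRACT = sig type t       val zero : t end  (* t opaque *)
module type KNOWN    = sig type t = int val zero : t end  (* t's identity kept *)

module A : ABSTRACT = struct type t = int let zero = 0 end
module K : KNOWN    = struct type t = int let zero = 0 end

(* (A.zero : int) would be rejected: ABSTRACT hides what t is *)
let _ = (K.zero : int)   (* accepted: KNOWN carries t = int through *)
```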

Until next time,
Scott

p.s. It turns out you don't bind types to type constructors in the same way as variables, so

>Ben: You don't bind it, but kind it

>Leo: Be kind, don't bind

Hey all,
Friday night we covered chapter 42 of PFPL on Modularity and Linking, which divided up into several interesting but connected discussions.

Next Friday will be chapter 43 on Singleton Kinds and Subkinding in the usual time and place, 6:30pm at Bento Miso.

Modularity is that thing every developer says their design has, but in practice modular code can be as elusive as a jackalope. In this chapter, Harper boils away the unnecessary concepts, leaving just the bare essentials for modularity: substitution and a type signature for the API. The type signature specifies a clean, type-safe boundary between modules, and substitution is what lets you put the code back in the right place.
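
A minimal OCaml sketch of that boundary (names mine): the signature is the API, ascription seals the implementation behind it, and linking amounts to substituting the structure wherever clients refer to it by name.

```
module type COUNTER = sig
  type t
  val zero : t
  val tick : t -> t
end

(* clients type-check against COUNTER alone; Counter.t stays abstract *)
module Counter : COUNTER = struct
  type t = int
  let zero = 0
  let tick n = n + 1
end
```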

Harper uses the metaphor that APIs can then be used a bit like lemmas in a mathematical proof, providing convenient shorthand for an earlier derivation that gets used frequently. This metaphor gets really interesting, though, if you take into consideration the Propositions as Types isomorphism, under which an API isn't just like a lemma in mathematics -- it is a lemma!

Linking is the important implementation glue that lets you take advantage of all this modularity, and Harper outlines a number of different approaches along with some of their strengths and weaknesses. I found the Cardelli paper [Fragments, Linking, and Modularization](http://lucacardelli.name/Papers/Linking.A4.pdf) to be a good resource for reading more in depth about the linking process.

Until next week,
Scott

Hello fellow CS enthusiasts!

Don't worry! Despite the lack of updates, the PFPL readings have been going apace and we are now at Chapter 41 on Distributed Computing.

Next Friday, Nov 11 at 6:30pm at Bento Miso/Gamma Space we will be revisiting chapter 41 on distributed computation and sharing any projects people are working on.

This week saw us explore distributed computation, not to be confused with parallel computation or concurrent computation. This is really the big idea here: you can have computation at different distributed locations without it necessarily being concurrent. That's interesting because I'm not sure I've ever seen this distinction emphasized; usually you see parallel, concurrent, and distributed used almost as synonyms.

As an example of a distributed but not concurrent computation, imagine people sitting in a circle who mediate who is speaking by passing around an object: whoever holds the wooden spoon can speak, and everyone else must sit quietly and listen. If we think of each person as a 'site' in the distributed computation, then each site has a single process waiting for the spoon; once it receives the spoon it can say its bit and then pass the control spoon along, conveniently leaving no time or processing power for listening -- but that's not what was important here anyway.
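
Here is a tiny sketch of the spoon protocol (OCaml, names mine), just to emphasize that nothing is parallel: control is distributed across the sites, yet execution is entirely sequential.

```
type spoon = Spoon

(* each site is a process that may speak only while holding the spoon *)
let sites : (spoon -> spoon) list =
  [ (fun s -> print_endline "site A speaks"; s)
  ; (fun s -> print_endline "site B speaks"; s)
  ; (fun s -> print_endline "site C speaks"; s) ]

(* pass the spoon around the circle once *)
let () = ignore (List.fold_left (fun sp site -> site sp) Spoon sites)
```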

We still have lots of parts to try to understand -- looking forward to discussing it again next week.
See you
Scott

Greetings fellow CS Enthusiasts!
Thanks for another wonderful Friday night, filled with some mysterious Cobol, a first appearance of the latest Toliver, and some more mysterious Dynamic Classification from Chapter 33 of PFPL.

Next week at the usual time and place, Friday night at Gamma Space (aka Bento Miso) at 6:30pm, we will continue into the long-awaited Chapter 34 on Modernized Algol. For the keeners among us (and who is kidding whom, that's everyone), there is a Modernized Algol interpreter from Jon Sterling available at [Modernized-Algol](https://github.com/jonsterling/sml-modernized-algol). We should be able to work through more examples than usual, and it may help us get our many-flavours-of-PCF project off the ground.

The chapter on Dynamic Classification proved quite interesting, and the mechanism seems quite useful, especially once we hit on the use case of a server that executes callback functions submitted by users. In that situation, if an exception is raised by the server, it becomes entirely possible that the user code could improperly catch that exception and head off in a different direction. By defining exceptions dynamically, that type of improper exception handling becomes impossible. This application led us into some interesting discussions about what kinds of secrets and confidentiality we could expect to handle this way; our interpretation was that they are less secrets and more ways of keeping the programmer from messing up, since anyone who steps outside of the system can always access the raw data however they please.
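
OCaml's exceptions are dynamically classified in roughly this sense, so here is a sketch of the server scenario (function names are mine): a locally declared exception is a fresh class on each call, so untrusted callback code cannot name it and therefore cannot match it specifically. A catch-all `with _` in user code can still intercept it, so the analogy is loose.

```
(* run untrusted code, giving it an abort capability it cannot forge *)
let run_guarded (callback : (unit -> 'b) -> 'a) : 'a option =
  let exception Private_abort in           (* a fresh class per call *)
  let abort () = raise Private_abort in
  try Some (callback abort)
  with Private_abort -> None               (* only this scope can match it *)
```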

Apologies for missing the update last week on Fluid Binding -- it was a busy week. I felt like I learned a lot over the course of the evening, as the whiteboard examples and exceptionally precise yet confusing questions helped us separate the concepts of scope and binding. In particular, Ben's example demonstrating how scoping depends on lexical visibility settled a misunderstanding I had brought with me about how the scope-free dynamics work, and dann summarized the effect nicely by saying that scope-free doesn't mean a symbol can exist free of a context, but it does mean a symbol can be exported back into an existing context via a lambda.
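
For flavour, here is a rough fluid-let in OCaml (my own sketch, not PFPL's formal development): rebind a parameter for the dynamic extent of a thunk, restoring the old value afterwards even if the thunk raises.

```
let fluid_let (param : 'a ref) (value : 'a) (thunk : unit -> 'b) : 'b =
  let old = !param in
  param := value;
  (* restore the previous binding no matter how the thunk exits *)
  Fun.protect ~finally:(fun () -> param := old) thunk

(* code called from inside the thunk sees the new binding *)
let verbosity = ref 0
let () = fluid_let verbosity 3 (fun () -> assert (!verbosity = 3))
```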

Until next time,
Scott

Hey everyone, great to see you all! Hope you all had a good wizard nap, er summer break!

Wasting no time, we jumped right back into Chapter 31 "Symbols" of PFPL after a brief discussion concerning the overwhelming success rate of food-based Kickstarter projects. Next week will be Chapter 32 "Fluid Binding" at the usual time and place, Friday at 6:30pm at Gamma Space (Bento Miso).

Symbols provide us with a means of naming things and referencing them without placing any demands on their internal structure. This sounds straightforward, but as we found while working through the mathematics in this section, they also leave ample room for confusion.

We spent a lot of time working through examples exploring how symbol references work and the typing requirements of the "is" expressions. After that we returned to the start of the chapter and discussed mobility and the two different dynamics for symbols -- scoped to an expression, or scope-free -- which seem to correspond loosely with naming variables within a program or function, and with storing data in a permanent store to be accessed later, respectively.
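
As a loose sketch of the first kind of use (OCaml, names mine): a symbol is a fresh identity with no internal structure, and comparing identities -- roughly PFPL's `is` expression -- is the only operation available.

```
module Symbol : sig
  type t
  val fresh : unit -> t
  val equal : t -> t -> bool
end = struct
  type t = int                 (* hidden, so callers can't exploit it *)
  let counter = ref 0
  let fresh () = incr counter; !counter
  let equal = Int.equal
end
```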

We still have some open questions about the consequences of scope-free dynamics and whether shadowing of symbols is possible, and we are continuing to discuss them in the slack channel.

See you all next time!
Scott