### Dan Doel

Shared publicly - This weekend, for the Boston Haskell hackathon, a bunch of people who pay more attention to homotopy type theory than I do are in town. So I've been looking at the notes for cubical type theory to catch up some (on the parts that are relevant to my interests).

I can't really claim to understand them well. Some of it is written as typing rules, but some of the notation is unexplained and seems to expect you to understand it via intuition about categorical models that I don't have a good grasp of. But eventually I came to something that filled a big gap in my understanding. The question is: "if I know eq : T = U, and I have P(T), how do I get P(U), given that eq can have computational content and must apply functions at all the right places?" And the answer in cubicaltt is that some operations are defined by induction on types.
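For contrast, here is what transport looks like when equality has *no* computational content. A minimal Haskell sketch (the names `Equ`, `subst`, and `Box` are illustrative, not cubicaltt's): pattern matching on `Refl` unifies the two types, so the substitution does no work at runtime.

```haskell
{-# LANGUAGE GADTs #-}

-- Propositional equality: the only proof is Refl, at identical types.
data Equ a b where
  Refl :: Equ a a

-- Transport: given a proof that t = u, map p t to p u.
-- In the intensional setting this is computationally trivial:
-- matching on Refl makes t and u the same type, so x is returned as-is.
subst :: Equ t u -> p t -> p u
subst Refl x = x

-- A small usage example.
newtype Box a = Box a

example :: Box Int
example = subst Refl (Box 3)
```

The cubical question above is exactly what happens when `eq` is *not* always `Refl`, so `subst` can no longer be the identity.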

This isn't that surprising, because observational type theory did the same thing. However, a big difference occurs to me. Observational type theory went to great lengths to ensure that equality was computationally irrelevant. And this presumably ensures that the things defined by induction on types don't matter when you run your program. But the whole point of cubicaltt and the like is that equality is not only relevant, but has non-trivial content.

So the question is, can you erase types (or terms of type U if you prefer)? And the immediate answer that occurs to me is, "no." This isn't actually that novel either. It is related to the question of whether you can statically resolve all dictionary passing for type classes in Haskell. JHC, for instance, implements type classes by case analysis on (representations of) types, and so it cannot erase all of them statically. But it might not be immediately obvious that internalizing the (meta) fact that "everything respects weak equivalence" would have a similar effect.
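The JHC-style approach can be sketched in Haskell with a GADT of type representations (a simplified illustration, not JHC's actual internals). Class methods become case analysis on the representation, and when the representation is only known at runtime, that case analysis cannot be erased:

```haskell
{-# LANGUAGE GADTs #-}

-- A runtime representation of types, playing the role of a dictionary.
data Rep a where
  RInt  :: Rep Int
  RBool :: Rep Bool
  RList :: Rep a -> Rep [a]

-- A 'show'-like method implemented by case analysis on the
-- representation rather than by dictionary passing. If the Rep
-- argument can't be resolved statically, it must survive to runtime.
showR :: Rep a -> a -> String
showR RInt      n  = show n
showR RBool     b  = show b
showR (RList r) xs = "[" ++ concatMap (showR r) xs ++ "]"
```

The analogy to the post: a computationally relevant equality is like a `Rep` argument that transport must inspect, so it can't be erased in general.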
4 comments

Presumably the action on objects (i.e. types) is going to be as erasable as the type produced by said action, which in turn depends on whether it is used to act on morphisms.

This is normal. Just because you can't erase all types doesn't mean you can't erase any. JHC passes all types and runs an unused variable analysis to erase whatever types (and terms) it can.

However, it might be an example of where an erasure/parametricity tracking calculus would be useful for types. Typically it's redundant, because types are automatically computationally irrelevant, unless you add something like type case.

This also gives an alternate perspective on how homotopy type theory 'works.' In plain intensional type theory, the substitution operator works by doing nothing. In homotopy type theory it takes a computationally relevant explanation of what it has to do to map between the two types. You could imagine this in a Haskell-like language as the operator having a type class constraint on the type being substituted into. It just happens that the type class can be automatically derived for all valid types (and type functions).
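That type class perspective can be sketched in Haskell. Here a "path" is crudely modeled as an isomorphism (a deliberate simplification of what cubicaltt's equalities are; `Path`, `Transport`, and `From` are hypothetical names), and each instance explains how to push a path through one type constructor:

```haskell
-- A "path" with computational content, modeled as an isomorphism.
data Path a b = Path { fwd :: a -> b, bwd :: b -> a }

-- The substitution operator as a class constraint on the type
-- (function) being substituted into.
class Transport p where
  transport :: Path a b -> p a -> p b

-- Covariant position: map the forward direction over the list.
instance Transport [] where
  transport p = map (fwd p)

-- Contravariant position: a function *out of* the type, wrapped in a
-- newtype so the varying position is the class parameter.
newtype From a = From { runFrom :: a -> Int }

instance Transport From where
  transport p (From f) = From (f . bwd p)
```

The point of the post is that, unlike ordinary dictionaries, these instances carry the path's content, so they can't all be resolved away statically when paths are nontrivial.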