Gershom B
Gershom B's posts

Post has attachment
An assortment of photos from CT2017

Post has shared content
Brought to you by hubris and ignoring 40+ years of programming language theory.

The blame for this and the previous DAO hack can be laid squarely at the feet of the designers and implementers of the Solidity language, which is not fit for purpose.

Post has shared content
I've been chewing on this problem for many years: deriving optics from the Yoneda lemma. It turns out profunctor lenses and prisms are related to Tambara modules.

Post has shared content
Hacker's Delight [1] contains all kinds of neat tricks. One family of tricks that is useful for some image processing tasks (for example) uses binary carry propagation to quickly find contiguous strings of set bits.

More precisely, consider adding the number 1 to a positive integer. If the lowest digit in its binary representation is 1, then there will be a carry to the next digit. If that is also 1 there will be a further carry. This process will continue until the first 0 is hit, and then the carry propagation stops.

So if you add 1 to a number, the bits that are changed are the contiguous string of 1s starting with the lowest digit, plus an extra digit corresponding to the first 0.

I'll use the usual C names for bitwise binary operators: | for OR, & for AND, ~ for NOT, ^ for XOR.

Then p^(p+1) has ones corresponding to the first contiguous sequence of 1s in p, plus an extra 1.

Define a unary operator □p = p^(p+1). We can consider this to be an operator on all of ℤ by representing integers, using the twos complement system [2], as infinite sequences of binary digits. The elements of ℤ correspond precisely to the infinite bit sequences where there are either a finite number of 0s or a finite number of 1s.
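This is easy to play with directly. Here's a minimal sketch in Python (my choice for the examples: Python's arbitrary-precision integers behave exactly like these infinite twos-complement bit strings, so □ really is defined on all of ℤ):

```python
# Python ints are arbitrary precision, so they model the infinite
# twos-complement bit strings directly: nonnegative ints end in 000...,
# negative ints end in 111....
def box(p: int) -> int:
    """The operator □p = p ^ (p + 1)."""
    return p ^ (p + 1)

assert box(0b10011) == 0b111  # the low run of 1s, plus the first 0 above it
assert box(0b10100) == 0b001  # low bit is 0: only that single bit flips
assert box(-1) == -1          # ...111 + 1 = ...000, which differs everywhere
```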

Armed with the bitwise operations and □ we can perform all kinds of useful operations efficiently. For example we can use □ to find the contiguous sequence of set bits starting at the lowest bit: we just compute ⊡p, which I define to be □p&p. We can iterate this to find the nth contiguous sequence of 1s and perform other ‘topological’ operations.
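Here's a sketch of that iteration in Python (`runs` is my name, not a standard one; between extractions it shifts away trailing zeros so that ⊡ always sees a run starting at bit 0):

```python
def box(p: int) -> int:
    """□p = p ^ (p + 1)."""
    return p ^ (p + 1)

def lowest_run(p: int) -> int:
    """⊡p = □p & p: the run of 1s at the very bottom of p (0 if bit 0 is clear)."""
    return box(p) & p

def runs(p: int):
    """Yield the contiguous runs of 1s in a non-negative p, lowest first."""
    shift = 0
    while p:
        low = p & -p                  # isolate the lowest set bit
        tz = low.bit_length() - 1     # count the trailing zeros
        p >>= tz                      # strip them, so a run starts at bit 0
        shift += tz                   # remember the original position
        r = lowest_run(p)             # ⊡p: extract that run
        yield r << shift
        p &= ~r                       # clear it and continue upward

print([bin(r) for r in runs(0b1110011010)])
# ['0b10', '0b11000', '0b1110000000']
```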

Because the bitwise operations and addition correspond to single machine instructions on most computers we get lightning fast algorithms for working with contiguous bit sequences. I've made use of this sort of thing for image processing and to quickly evaluate positions in game AIs - for example in games where contiguous groups of pieces (or, in the case of Go, spaces) play an important role.

Let's add a convenient operator: → defined by a → b = ~a | b.

Here are some facts about □:

□(p → q) → □p → □q = -1 (which is 1111… in the twos complement representation)
□(□p → p) → □p = -1

If p → q = -1, and p = -1, then q = -1
If p = -1 then □p = -1.
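These facts can be spot-checked numerically. A Python sketch, testing the K-shaped axiom, the Löb-shaped fact, and necessitation over a window of integers (these identities in fact hold for all of ℤ; the finite window is just a check, not a proof):

```python
def box(p: int) -> int:
    """□p = p ^ (p + 1)."""
    return p ^ (p + 1)

def imp(a: int, b: int) -> int:
    """a → b = ~a | b."""
    return ~a | b

for p in range(-64, 64):
    for q in range(-64, 64):
        # □(p → q) → □p → □q = -1
        assert imp(box(imp(p, q)), imp(box(p), box(q))) == -1
    # □(□p → p) → □p = -1   (the Löb-shaped fact)
    assert imp(box(imp(box(p), p)), box(p)) == -1

# if p = -1 then □p = -1   (necessitation)
assert box(-1) == -1
print("all facts check out")
```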

These are close to the axioms and derivation rules of provability logic, aka GL [3]. The second fact above is essentially Löb's theorem.

(Very briefly: provability logic is the modal logic you get by introducing the operator □ which means "provability in PA (= Peano arithmetic)". So, for example ¬□⊥ is the claim that ⊥ is unprovable in PA, ie. it is the claim that PA is consistent. Gödel's second incompleteness theorem can be written ¬□⊥ → ¬□(¬□⊥), which is basically shorthand for "if PA is consistent then PA can't prove PA is consistent")

Let me tighten up what I mean.

Consider the fragment of GL with no variables, so-called “letterless GL”. We can define a valuation [.] like this:

[⊥] = 0
[⊤] = -1
[p→q] = [p] → [q]
[p∨q] = [p] | [q]
[p∧q] = [p] & [q]
[¬p] = ~[p]
[□p] = □[p] = [p]^([p]+1)

(Where 'p' and 'q' aren't variables in GL, they're variables I'm using to represent subexpressions of propositions. And I’ve overloaded □.)

Not only is this a valuation, it is an 'isomorphism' in the sense that if [p] = -1, then p is a theorem of GL, and so if [p] = [q] then p ↔ q is a theorem. (Where a↔b is defined to be a→b ∧ b→a.) I'm just stating these properties of [.] but the proof takes some work.

So any question about letterless GL is a question about twiddling with bitwise binary operations and adding one.
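For instance (a Python sketch using the valuation above): Gödel's second incompleteness theorem ¬□⊥ → ¬□¬□⊥ is a theorem of letterless GL, so its value must be -1, while the consistency statement ¬□⊥ itself is not a theorem, and indeed evaluates to -2:

```python
def box(p: int) -> int:
    """[□p] = [p] ^ ([p] + 1)."""
    return p ^ (p + 1)

def imp(a: int, b: int) -> int:
    """[p → q] = ~[p] | [q]."""
    return ~a | b

BOT = 0                             # [⊥] = 0
con = ~box(BOT)                     # [¬□⊥], "PA is consistent"
assert imp(con, ~box(con)) == -1    # Gödel II is a theorem: value -1
assert con == -2                    # but ¬□⊥ is not a theorem: value ≠ -1
```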

There's nothing new here really. This is all in Boolos' book on Provability Logic. But Boolos talks about finite and cofinite subsets of the naturals. I noticed that these are binary representations of integers, and that the expression Boolos uses to define [□p] reminded me of the days when I used to write bit-twiddling code.

Here’s an example of what this means: Mikhail Barasz et al. show how machines can achieve a stable state in Prisoner's Dilemma if they can examine each other's source code [4]. Ultimately the argument boils down to showing that the simultaneous equations

□a → b and □b → a

have a unique solution (in some sense) in GL.

Ie. that

~(a^(a+1))|b = -1 and ~(b^(b+1))|a = -1

have a unique solution a = b = -1 in the integers.
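We can at least confirm this numerically. A Python sketch (the brute-force search over a finite window is illustrative only, not a proof of uniqueness over all of ℤ):

```python
def box(p: int) -> int:
    """□p = p ^ (p + 1)."""
    return p ^ (p + 1)

def solves(a: int, b: int) -> bool:
    # □a → b = -1 and □b → a = -1, written out bitwise
    return (~box(a) | b) == -1 and (~box(b) | a) == -1

assert solves(-1, -1)
# search a window of candidates for any other solution
sols = [(a, b) for a in range(-64, 64) for b in range(-64, 64) if solves(a, b)]
assert sols == [(-1, -1)]
```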

And some Raymond Smullyan puzzles can be rephrased this way too.

BTW We can assign a meaning to these integers. They label possible worlds we might live in. Is PA consistent or not? If you think it is not consistent then we live in a world whose label has the lowest bit set to 1.
I.e. a world where □⊥→⊥ is false has the lowest bit set to one.
A world where □□⊥→□⊥ is false has the second lowest bit set to one.
A world where □□□⊥→□□⊥ is false has the third lowest bit set to one.
And so on.
The bit sequence of [p] tells you which worlds p holds in.
For example, theorems of GL evaluate to -1 = 1111…, so they hold in all of these worlds.

Note also that p+1 = p^□p, p-1 = p^□~p = p^~◇p, and -p = ~p^□~p, where ◇p = ~□~p, but I don’t think we can write p+p = 2p using just □ and bitwise ops.
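A quick numerical check of these identities (a Python sketch; note that the decrement and negation forms use ^ throughout: p-1 = p^□~p and -p = ~p^□~p):

```python
def box(p: int) -> int:
    """□p = p ^ (p + 1)."""
    return p ^ (p + 1)

for p in range(-1000, 1000):
    assert p + 1 == p ^ box(p)    # increment: p ^ □p
    assert p - 1 == p ^ box(~p)   # decrement: p ^ □~p
    assert -p == ~p ^ box(~p)     # negation: ~p ^ □~p
print("identities hold on the test range")
```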

And I'm bound to have made a mistake somewhere above as modal logic is a long way from my field...



Post has shared content
The unnatural desire for naturalness

There's a particle called the muon that's almost like the electron, except it's about 206.768 times heavier. Nobody knows why. The number 206.768 is something we measure experimentally, with no explanation so far. Theories of physics tend to involve a bunch of unexplained numbers like this. If you combine general relativity with the Standard Model of particle physics, there are about 25 of these constants.

Many particle physicists prefer theories where these constants are not incredibly huge and not incredibly tiny. They call such theories natural. Naturalness sounds good - just like whole wheat bread. But there's no solid evidence that this particular kind of naturalness is really a good thing. Why should the universe prefer numbers that aren't huge and aren't tiny? Nobody knows.

For example, many particle physicists get upset that the density of the vacuum, measured in Planck masses per Planck volume, is so fantastically tiny. They find this 'unnatural'. They think it requires 'fine-tuning', which is supposed to be bad.

I agree that it would be nice to explain this number. But it would also be nice to explain the mass of the muon. Is it really more urgent to explain a tiny number than a number like 206.768, which is neither tiny nor huge?

+Sabine Hossenfelder says no, and I tend to agree. More precisely: I see no a priori reason why naturalness should be a feature of fundamental physics. If for some mysterious reason the quest for naturalness always led to good discoveries, I would support it. In science, it makes sense to do things because they tend to work, even if we're not sure why. But in fact, the quest for naturalness has not always been fruitful. Sometimes it seems to lead us into dead ends.

Besides the cosmological constant, another thing physicists worry about is the Higgs mass. Avoiding the 'unnaturalness' of this mass is a popular argument for supersymmetry... but so far that's not working so well. Sabine writes:

Here is a different example for this idiocy. High energy physicists think it’s a problem that the mass of the Higgs is 15 orders of magnitude smaller than the Planck mass because that means you’d need two constants to cancel each other for 15 digits. That’s supposedly unlikely, but please don’t ask anyone according to which probability distribution it’s unlikely. Because they can’t answer that question. Indeed, depending on character, they’ll either walk off or talk down to you. Guess how I know.

Now consider for a moment that the mass of the Higgs was actually about as large as the Planck mass. To be precise, let’s say it’s 1.1370982612166126 times the Planck mass. Now you’d again have to explain how you get exactly those 16 digits. But that is, according to current lore, not a fine-tuning problem. So, erm, what was the problem again?

Sabine explains things in such down-to-earth terms, with so few of the esoteric technicalities that usually grace discussions of naturalness, that it may be worth reading a more typical discussion of naturalness just to imbibe some of the lore.

This one is quite good, because it includes a lot of lore but doesn't try too hard to intimidate you into believing in the virtue of naturalness:

• G.F. Giudice, Naturally speaking: the naturalness criterion and physics at the LHC


Post has attachment
I must have missed this depressing story the first time it went around :-/

Post has shared content
Charles Wells 1937-2017

Mike Barr writes on the categories mailing list: Charles Wells died on Saturday, June 17.
It was sudden and unexpected.

Vale +Charles Wells

Plenty of free goodies at his website, including the book Toposes, Triples and Theories, joint with Barr.

Post has shared content
Milner's tactic & lenses

Here is a short tidbit for tonight. Remember the type of Milner's tactics:

tactic = goal -> (goal list * (proof list -> proof))

It is an instance of a lens. A curious one, granted. With strong update. It means we can give Milner's tactics the Van Laarhoven treatment [1]:

tactic = forall f. Functor f => ([goal] -> f [proof]) -> goal -> f proof

It makes it easier to subject tactics to all kinds of torturous manipulations.

Not sure if it's exploitable. It's kind of fun, though.


Post has shared content
The "Cahiers" go Diamond OA!

Cahiers de Topologie et Géométrie Différentielle Catégoriques is the oldest category theory journal, having been started by Charles Ehresmann as a seminar in 1957-58. It has been a subscription journal, and more recently had a policy of delayed open access: articles would be free online after a time period.

But now the following was quietly put up on the journal website, and Andrée Ehresmann tells me it is soon to be announced:

The "Cahiers" are a quarterly Journal with one Volume a year (divided in 4 issues). They publish original papers in Mathematics, the main research subject being pure category theory, together with its applications in topology, differential geometry, algebraic geometry, universal algebra, homological algebra, algebraic topology.

From January 2018, the "Cahiers" (on their 60th birthday) will become a free (and no-fee) Open Access Journal. Each yearly volume will still consist of 4 quarterly pdf files (about 80-100 pages each), in the same format as before. At the beginning of each quarter, the corresponding issue will be freely downloadable from Recent Volumes. People may freely subscribe hereafter to receive a notice when a new issue is posted.

Manuscripts submitted for publication should be sent to one of the editors as a pdf file, with a copy to Andrée Ehresmann (...). There is no fee required from Authors.

#OpenAccess #OA

cc: +Mark C. Wilson, +Benoît R. Kloeckner, +Timothy Gowers