Dan Piponi
Worked at Google
Attended King's College London
2,216 followers|3,158,338 views


Dan Piponi

Shared publicly  - 
I don't know why they call C++ strict and Haskell lazy, it's the wrong way round.

It's easy to write a Haskell program to input a list and start doing work on it before the user has even finished typing it in. This is the default in Haskell. It's eager to get stuff done. The norm in C++ is to wait until the user has finished entering the string before doing any work. Haskell is great if you want to compose a sequence of operations on lists. You don't have to wait for the entire operation on the first list to finish before starting work on the next because Haskell just can't wait to start evaluating the final result.
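A rough analogy in Python (not Haskell semantics, just the incremental flavour): generators let a consumer pull results one at a time, so work on early elements happens before later elements even exist.

```python
# Rough Python analogy: generators let a pipeline start producing results
# before the whole input is available, much like lazy list evaluation.
# (Illustrative sketch only, not a model of Haskell's semantics.)

def numbers():
    """Pretend this is a user slowly typing in a list."""
    for i in range(1, 6):
        yield i  # each element becomes available one at a time

def doubled(xs):
    for x in xs:
        yield 2 * x  # work starts as soon as the first element arrives

pipeline = doubled(numbers())
first = next(pipeline)  # computed before the "input" is complete
rest = list(pipeline)
```

The point of the analogy: `doubled` produces its first output after seeing only the first input, which is the "eager to get stuff done" behaviour described above.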

The Haskell laziness page discusses how thunks are used to implement laziness. But really a thunk is a mechanism that allows Haskell to be eager. Rather than trying to evaluate code that could cause a computation to block, Haskell puts that stuff in a thunk so it can get on with evaluating the part that's going to be most productive from the point of view of any consumers further down the evaluation chain.

It's all a matter of perspective. If you're a consumer of output from a Haskell program it looks eager but if a Haskell program is the consumer of your input it looks lazy.

(I was motivated to write this because of the tweet: )

Update: Repost with different permissions.
The good old argument/result duality :)

Dan Piponi

Shared publicly  - 
I'm still waiting for someone to write a history of the British microcomputer industry but the nearest I've seen is the comedy Micro Men. But some online searching did turn up "The Sinclair Story" from about 1985: (G+ doesn't recognise ftp links, you'll have to copy and paste manually!)

It was a pretty entertaining read made more entertaining because I'm in the position of reading it while inhabiting a future far beyond anything envisaged in the book. We of course now have electric cars and digital watches and computers that can read the news (without us needing to OCR it from paper) and TVs that fit in our pockets.

I love the Sinclair philosophy that poor hardware + ingenuity = good hardware. But that's not an idea that scales well. So the book made me cringe every time we went through the whole "we can do this thing that everyone else thinks is impossible for next to no money" routine. Amazingly it seemed to work for the ZX Spectrum but my old Sinclair calculator went up in smoke, literally, like so much other Sinclair hardware.

There is a chapter called "Computers in Decline". I'm struggling to recall a time when this could have been an accurate description of the market. I don't remember home computers ever going into decline. I think everyone I knew who had a home computer then has had one continuously since then, and many more do now.

I also learnt that there is a piece of Sinclair Radionics still in existence though it was renamed Thandar and then became part of TTI:

By the way, the author, Rodney Dale, also wrote one of my favourite "ancient people had ultra-powerful technology from aliens or Atlantis" books. Much better worked out than anything by that fraud von Däniken:
+Matt McIrvin That's truly terrible. OTOH if I'd implemented all that in 320 instructions I'd have been pretty proud of myself.

Dan Piponi

Shared publicly  - 
I thought I'd have a go at doing some unicode mathematics in G+. Sorry if you don't have all of ∂, ᵗ, ᵢ, ⱼ, ∑ and others in your font.

I highly recommend Boyd and Vandenberghe's book on convex optimisation:

Many of the exercises involve finding the duals of optimizations and I have to admit I quite enjoyed doing them. So I thought I'd make one up to entertain myself on the bus home. It's a Google bus so a sizable fraction of passengers has code or equations on their screen. I'm posting it here as it might be useful.

Hamilton's principle of least action casts classical dynamics as an optimisation problem so I thought I'd look at the derivation of Hamiltonian dynamics from Lagrangian dynamics.

First I'm going to discretise time so it's xᵢ instead of x(t). And I'm going to assume the Lagrangian is convex so we can use standard convex optimisation methods.

So here is a discretised least action problem.

(1) min(x) ∑ᵢ L(xᵢ, (Ax)ᵢ)

(That means minimizing with respect to all of the xᵢ)

In the usual version the sum is an integral and the second argument to L is the derivative of x with respect to time. Instead I'm using a linear operator A acting on the vector of xᵢs. I'll replace A with the derivative at the end.

The matrix A messes things up. It mixes together xᵢ and xⱼ for different i and j. If it didn't do this then we could just minimise each term in the sum individually.

We can get rid of the mixing simply by rewriting the problem as:

min(x, y) ∑ᵢ L(xᵢ, yᵢ)
such that y = Ax

The catch is that we've pushed the problem into the constraints. The objective does look more symmetrical now, so that's a nice bonus. Still, it'd be nice to get rid of the y or the constraints somehow. Let's do a pretty standard optimisation thing. We'll look at the dual of the optimisation w.r.t. y, temporarily fixing x. Call the Lagrange multipliers pᵢ. That gives this unconstrained problem:

(2) min(y) ∑ᵢ L(xᵢ, yᵢ) + pᵢ((Ax)ᵢ-yᵢ)

We can pull a term out of the minimisation:

(min(y) ∑ᵢ L(xᵢ, yᵢ) - pᵢyᵢ) + p·Ax

The first part is a sum of completely independent terms. So we get

(∑ᵢ (min(yᵢ) L(xᵢ, yᵢ) - pᵢyᵢ)) + p·Ax

Let's define H(x, p) = max(y) (py - L(x, y)) (Notice the sign flip.)
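As a sanity check (my own example, not from the post): for the Lagrangian L(x, y) = y²/2, the maximum of py − L(x, y) is attained at y = p, giving H(x, p) = p²/2. A brute-force grid maximisation agrees:

```python
# Hedged sketch: numerically approximate H(x, p) = max over y of (p*y - L(x, y))
# for the convex Lagrangian L(x, y) = y^2 / 2. The exact answer is p^2 / 2.

def L(x, y):
    return 0.5 * y * y

def H(x, p, lo=-10.0, hi=10.0, steps=100001):
    ys = (lo + (hi - lo) * k / (steps - 1) for k in range(steps))
    return max(p * y - L(x, y) for y in ys)

approx = H(0.0, 3.0)  # exact value is 3**2 / 2 = 4.5
```

Because py − L(x, y) is concave in y, the grid maximum sits right at the analytic optimum y = p.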

So the solution to the dual problem is now:

(∑ᵢ -H(xᵢ, pᵢ)) + p·Ax

Strong duality tells us that the maximum, with respect to p, of the dual problem is the minimum of the original problem. So a solution to problem (2) is given by:

max(p) (∑ᵢ -H(xᵢ, pᵢ)) + p·Ax

and the solution to the original problem (1) is

min(x) max(p) (∑ᵢ -H(xᵢ, pᵢ)) + p·Ax

That's it. That is now the original problem in Hamiltonian rather than Lagrangian form.

We can use calculus to deduce that at the optimum:

∂H(xᵢ, pᵢ)/∂pᵢ - (Ax)ᵢ = 0
∂H(xᵢ, pᵢ)/∂xᵢ - (Aᵗp)ᵢ = 0

Now we go to the continuum. x and p become functions of time. The matrix A becomes the derivative operator d/dt. Aᵗ becomes the adjoint -d/dt.

So we get Hamilton's equations:

∂H(x, p)/∂p = dx/dt
∂H(x, p)/∂x = -dp/dt
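A quick numerical check (my own example, not from the post): for the harmonic oscillator H(x, p) = (x² + p²)/2, the equations above give dx/dt = p and dp/dt = -x, and a small symplectic integration approximately conserves H.

```python
# Hedged sketch: integrate Hamilton's equations dx/dt = dH/dp, dp/dt = -dH/dx
# for H(x, p) = (x^2 + p^2)/2 using semi-implicit (symplectic) Euler.

def integrate(x, p, dt=1e-3, steps=10000):
    for _ in range(steps):
        p -= dt * x   # dp/dt = -dH/dx = -x
        x += dt * p   # dx/dt =  dH/dp =  p  (using the updated p)
    return x, p

x0, p0 = 1.0, 0.0
x1, p1 = integrate(x0, p0)
energy_drift = abs((x1**2 + p1**2) / 2 - (x0**2 + p0**2) / 2)
```

Updating p before x is what makes this integrator symplectic, so the energy drift stays tiny even over many oscillation periods.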

One thing that I learnt from this is that the minus sign in the second of these equations is coming from the adjoint of the derivative.

The other thing I learnt was that the entire derivation is completely natural from the point of view of optimisation. Unlike the usual accounts, I don't feel like I plucked random expressions out of a hat at any stage. I just grouped together things that seemed to go together.

By the way, the substitution of integrals for minima, products for sums and exponentials for linear functions turns the above into a (sketch of a) derivation of the Hamiltonian path integral from the Lagrangian path integral in quantum mechanics.

PS I composed all this using my vim script
FWIW Pontryagin's maximum principle [1] can also be discretised and treated this way, again assuming convexity.

Although I didn't end up using it, I learnt about this stuff when I was trying to find optimal balloon trajectories for Loon. Instead I used the HJB equation [2].


Dan Piponi

Shared publicly  - 
My old colleague Oliver has now had his paper on rendering the Interstellar black hole published. This isn't just work on graphics, there are original and unexpected results in gravitational lensing here. It's awesome that someone at a visual effects company has taken the opportunity to do work like this.

And hey! I even get a mention. That's made my day.
+Jason Burn That sounds like an amazing project. Good luck!

Dan Piponi

Shared publicly  - 
Here's a simple model of a chicken: you have a neuron that fires in proportion to how densely populated the visual field is, and another that weighs up two inputs, outputting signals to go left or right depending on the relative strengths of the two inputs. There's also a loose connection from the visual-field-density neuron to the left-right neuron that can tip things one way or the other. If the chicken is trained so that the weights are normalised to give a 50/50 chance of going left or right for some density, then raising or lowering that density will make the chicken correspondingly turn right or left.
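The two-neuron model above can be sketched in a few lines (all weights and thresholds here are my own made-up illustrative numbers):

```python
# Hedged sketch of the two-neuron "chicken": a visual-field-density neuron
# feeds a loose connection into a left/right comparator that is balanced
# (50/50) at some reference density. All numbers are made up for illustration.

def turn(density, reference=5.0, loose_weight=0.1):
    left_input = 1.0                                   # baseline drive to "left"
    right_input = 1.0 + loose_weight * (density - reference)
    # the comparator outputs whichever side receives the stronger signal
    return "right" if right_input > left_input else "left"

low = turn(2.0)   # sparse visual field tips the comparator left
high = turn(8.0)  # dense visual field tips it right
```

The loose connection is just the `loose_weight` term: at the reference density it contributes nothing, and any deviation tips the comparator one way or the other.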

I don't really suggest this is how chickens work. I'm just pointing out you don't need fantasies about mental number lines to explain this kind of behaviour. I wonder if psychologists imagine bimetallic strip thermostats to have mental number lines?
A team of researchers working at the University of Padova in Italy has found that, like humans, baby chickens appear to have a left-to-right number space map in their brains. In their paper published in the journal Science, the team describes experiments they conducted that revealed the ...
... But how does this explain the associative asymmetry observed between 2 and 8? What is it that causes 2 to prompt chicks to go to the left, and 8 prompts them to go to the right? 

Dan Piponi

Shared publicly  - 
If you study the spherical harmonics without studying representation theory they are mysterious rabbits pulled out of hats. But if you study the representation theory of SO(3), their existence and many of their properties become obvious.

I've never managed to get my head around Bessel functions. To me they look like rabbits out of hats. Every time I read about them there's just some sequence of differential equations and Taylor series expansions that seem completely unmotivated. So what if they satisfy this or that differential equation? I could probably make up infinitely many other equations they satisfy. Who cares?

So it suddenly dawned on me that maybe Bessel functions carry a representation of some group and if I knew which group all would become clear. And then I thought this was pointless to think about because if they did carry a representation in a nice way it'd be in every course on special functions and I'd have heard about it long ago. Unless there was a conspiracy to hide this fact from me.

So I tried doing a web search on Bessel functions and representation theory. And damn it! There has been such a conspiracy! The Bessel functions carry a representation of the Lie algebra of the 2D Euclidean group in a natural way. But the only way to know about it is to read books that are overpriced and out of print. For some reason it's not taught much.

Oh...and there's this thesis I found:

Just briefly: it's not really the Bessel functions that are important but the functions f_n(r,θ)=exp(inθ)J_n(r), thought of as functions on the plane expressed in polar coordinates. I've attached a contour plot of the real part of f_3. Just like you have with spherical harmonics (and the Hermite polynomials if you know the quantum harmonic oscillator) there are ladder operators relating the different f_n. Many of the properties of Bessel functions follow trivially from this. Some of the arbitrary-looking differential equations follow when you look at how commutators and anticommutators of the ladder operators act on the representation.
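Concretely (my own reconstruction from the standard Bessel recurrences J_n'(r) ∓ (n/r)J_n(r) = ∓J_{n±1}(r), so take the overall signs with a grain of salt), the ladder operators can be written as:

```latex
% Ladder operators on f_n(r, \theta) = e^{i n \theta} J_n(r):
L_{\pm} = e^{\pm i\theta}\left(\partial_r \pm \frac{i}{r}\,\partial_\theta\right)

% Acting on f_n, using J_n'(r) \mp \frac{n}{r} J_n(r) = \mp J_{n\pm 1}(r):
L_{+} f_n = -f_{n+1}, \qquad L_{-} f_n = f_{n-1}
```

Together with the rotation generator ∂_θ, these are (up to normalisation) the translation generators of the plane, which is how the Lie algebra of the 2D Euclidean group shows up.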

What's more, there are many more special functions that carry representations of familiar groups. For example the hypergeometric functions are associated to SL(2,R).

I wish I'd known this many years ago.

I have to confess, it's still a bit rabbity to me. I've never studied the representation theory of the Euclidean groups so I wouldn't know, a priori, what to expect, whereas the "triangular" structure of the spherical harmonics is no surprise if you know SO(3).
Symmetry and its math is deeply rooted into physics and other scientific disciplines. Worth rethinking about this topic over time.

Dan Piponi

Shared publicly  - 
After a record-breaking 187 days aloft, we have recently landed the Project’s longest duration balloon in one of our Argentinian recovery zones.

That’s a long time! Enough time to hard-boil 33,660 eggs, or 134,640 if you like your yolk runny (doesn’t include eating time), or listen to Elton John’s “Rocket Man” just over 61,000 times. In the same time it took the Earth to complete half of its annual orbit of the sun, our record-breaker managed to circumnavigate the globe 9 times, enduring temperatures as low as -75°C (-103°F) and wind speeds as high as 291 km/h, soaring to a maximum height of 21km and drifting over more than a dozen countries across 4 continents.

Having been in the air for just over 3 months we decided to put the balloon through its paces, making a series of altitude changes on its last circumnavigation to test our ability to fly north out of southern latitude bands. The test was successful and we managed to turn up at the Northern tip of Australia, where we were able to access a much slower wind stream going in the opposite direction, sending our balloon lazily back over to South America. Finally, we brought it back into its original southern latitude band to swoop in and land in one of our Argentinian recovery zones for collection.

Recovery operations are now underway to bring the balloon back to the lab so the team can analyze this magnificent specimen and learn as much as possible about what makes such long durations possible, building these learnings into our future long-duration fleets before putting the record-breaker through our recycling process. We think that this balloon has definitely earned its retirement!

Dan Piponi

Shared publicly  - 
After my last post I did a web search on discretized optimal control and came across this paper:

It's a hard paper (rocket science!) so I haven't even tried to read it fully.

It deals with the problem of how to efficiently rotate a vehicle in space using zero propellant, relying instead on momentum storage devices, aka gyroscopes. It's a complicated version of the old brachistochrone problem, solved using a discretization technique.

The bit I liked was this:

"It is important to note that the zero-propellant maneuver was discovered, designed and implemented in orbit – all using pseudo spectral optimal control. When this fact is juxtaposed with the fact that the flight implementation of the maneuver was performed on a 100 billion-dollar asset with an international crew onboard, supreme confidence of technical success is essential. That PS optimal control passed this high threshold for a NASA flight is an indication of how far the theory has evolved as a space technology."

Not everyone gets to use calculus to control things that cost $100,000,000,000. And did I read that right? Some of the theoretical work was done in orbit?

Dan Piponi

Shared publicly  - 
Bloom et al. have just published a paper describing a clock that is accurate to six parts in 10^18.

I'm going to take a break for a moment while I let that sink in. Six parts in ten to the power of eighteen.


Here's what one part in 10^18 means: due to the curvature of spacetime resulting from the Earth's gravity, if you raise your hand by 1 cm your watch is now running one part in 10^18 faster than if you didn't raise it.

Everything you need to know about a spacetime can be determined by measuring proper times along paths through it. So a worldwide network of clocks like this could be used to map out the curvature of spacetime around the Earth. Among other things, it provides a nice way to do geodetic levelling: Of course when we have wristwatches this accurate we'll be able to crowdsource such data.
I know you didn't man. No one uses logten for components - gears.

Dan Piponi

Shared publicly  - 
I came across what I think is the first reference to Category Theory I've seen in a work of fiction. You'd expect it to be in something like an up-to-date hard science fiction story of the kind that Greg Egan writes. But no, it actually appears in a Sword and Sorcery book by Sam Delany dating from 1979: Tales of Nevèrÿon.

I'm not sure this book truly belongs to the Sword and Sorcery genre as there isn't much in the way of sorcery. But it does have dragons. They're "realistic" dragons which means they aren't quite the majestic beasts you expect in fantasy writing.

I highly recommend the book by the way. It is nothing like anything else in the genre. The characters are intelligent and self-aware and spend much of their time discussing topics such as economics, gender relations and power. There's a little bit of story too.

The Category Theory appears in the framing, the Tales themselves being the content of an ancient manuscript subjected to modern analysis described in an appendix called "Some Informal Remarks Toward the Modular Calculus, Part Three".
+John Cook - yes, the story was "Glory", and I quoted the passage I had in mind here, though your passage is also great:

Dan Piponi

Shared publicly  - 
Here are the winners of the SciTech Academy Awards for the year. And if you didn't catch it from the other social networks I'm on, I'm excited to say I'm on the list again!

Congratulations to quite a few of my old colleagues at ILM: Cary Phillips, Nico Popravka, Phil Peterson, Colette Mullenhoff, Brice Criswell and Ron Fedkiw.

I'm pleased to see DMD devices recognized. Cinematic projection was a major driver in this area but it's one area of movie technology where the benefits extend well beyond the movie industry.
The Academy of Motion Picture Arts and Sciences today announced that 21 scientific and technical achievements represented by 58 individual award recipients will be honored at its annual Scientific...
+Lawrence Kesteloot It's my third! But I have no plans to get any more. (Though I think I said that last year.)

Dan Piponi

Shared publicly  - 
I've been wondering for a while about programming languages for numerical work. What I'd like is something like a "strongly typed" compiled APL with type inference about array dimensions.

Let me use integers to represent the type of real-valued vectors. So 3, say, represents the type of 3-vectors.

You'd expect functions like Kronecker product and direct sum. We could give them types like:

(⊗)::∀m,n.(m,n) -> m*n
(⊕)::∀m,n.(m,n) -> m+n

(where + and * are type-level addition and multiplication).

If you wrote
let a = [1,2]⊗b+[1,2,3,4,5,6]
you'd like it to infer that b is of type 3.
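Here's a runtime analogue of that inference step, in Python rather than at the type level (a toy sketch with my own naming; the whole point of the post is to do this statically):

```python
# Toy runtime analogue of the type-level inference: given that a ⊗ b must
# match a vector of length 6 and len(a) = 2, solve for len(b) = 6 // 2 = 3.
# (Illustrative only; the post is about doing this in the type checker.)

def kron(a, b):
    """Kronecker product of two flat vectors: result has length len(a) * len(b)."""
    return [x * y for x in a for y in b]

def infer_b_len(a_len, result_len):
    if result_len % a_len != 0:
        raise TypeError("no integer dimension for b")
    return result_len // a_len

b_len = infer_b_len(2, 6)        # the inferred "type" of b
check = kron([1, 2], [1, 1, 1])  # length 2 * 3 = 6, as required
```

The `TypeError` branch is where the static story gets hard: once both dimensions are unknown, solving m*n = 6 is exactly the kind of arithmetic constraint discussed below.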

But you could see how this gets tricky. If you write a function like:

f (a,b) = a⊗b+[1,2,3,4,5,6]

then you'd have to infer a type something like:

f::∀m,n.(m*n=6) => (m,n) -> 6

Pretty soon you're asking your typechecker to prove the Goldbach conjecture.

You could weaken things so that, for example, you can only use ⊗ when either the left or right side has a known dimension. Then you can use Presburger arithmetic and guarantee you can make inferences, although the type checker might take a while. In practice, for useful numerical programs, I suspect it might terminate pretty quickly.

I feel like there probably is some subset of Peano Arithmetic that is strong enough to successfully type check the kinds of programs numerical analysts really write, maybe even allowing unrestricted ⊗. Maybe by putting a restriction on the quantifiers that might appear or some such thing.

The reason for wanting inference here is that the more you know about the dimensions of your arrays, the better you can optimise. For example, knowing one dimension or the other of your array might allow loop unrolling.

Is there some literature about this sort of thing?
I've got a slightly weaker version of some of these ideas in my wip array libraries on github. though i'm slightly skeptical that fully modelling the precise shapes has the right power weight ratio for end users
  • Google
Basic Information
Homo Sapiens, Hominini, Hominidae, Primates, Mammalia, Chordata, Animalia
Blog: A Neighborhood of Infinity
Code: Github
Twitter: sigfpe
Home page:
Bragging rights
I have two Academy Awards.
  • King's College London
  • Trinity College, Cambridge
Other profiles
Contributor to