Profile

Dan Piponi
Worked at Google
Attended King's College London
2,423 followers | 3,684,749 views

Stream

Dan Piponi

Shared publicly
 
I took my old formal power series library for combinatorics [2] (see also [3]) and tweaked it to work when the base ring isn't commutative. I can now use Haskell code to manipulate infinite series of powers of (one pair of) creation and annihilation operators.

I put the code at [1]. There are countless applications. Think of the things you can enumerate with commutative generating functions, and now allow the possibility of connecting those objects with wires.
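To give a flavour of why this works, here's a minimal sketch (written from scratch, not the code at [1]): the usual lazy power series representation in Haskell, with a Cauchy product that is careful never to commute coefficients past each other.

    -- A minimal sketch, not the code at [1]: formal power series as lazy
    -- coefficient lists over a possibly noncommutative ring. The product
    -- below never swaps the order of coefficient multiplications.
    newtype Series a = Series [a]

    instance Num a => Num (Series a) where
      Series f + Series g = Series (zipWith (+) f g)
      Series (f0:fs) * g@(Series gs) =
        -- (f0 + x·fs(x))·g(x) = f0·g(x) + x·(fs(x)·g(x)), keeping f0 on the left
        let Series rest = Series fs * g
        in  Series (zipWith (+) (map (f0 *) gs) (0 : rest))
      negate (Series f) = Series (map negate f)
      fromInteger n     = Series (fromInteger n : repeat 0)
      abs               = error "abs undefined for Series"
      signum            = error "signum undefined for Series"

The point is just that nothing in the definitions above needs the coefficients to commute.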

I included toy examples for:

0. basic stuff like counting non-capturing rook placements
1. using the Euler-Maclaurin formula to convert an integral to a sum
2. some polynomial manipulations related to umbral calculus
3. some quantum optics calculations
4. some "experimental" mathematics. See [4] for explanations.

Caveat: it's experimental, incomplete code. I just wanted to see if the idea worked out.

[1] https://github.com/dpiponi/formal-weyl
[2] http://blog.sigfpe.com/2007/11/small-combinatorial-library.html
[3] https://hackage.haskell.org/package/species
[4] http://arxiv.org/abs/1010.0354
4 comments
 
I like the "modular synth theorem". :)

Dan Piponi

Shared publicly
 
In quantum field theory (QFT), fields behave much like random variables and we do things like ask about the expected value of a field φ at a point x, commonly written as <φ(x)>.

Sometimes we need to know the expected value of the square of the field <φ(x)²>. As so often happens in QFT, when you try to calculate this you end up with something that diverges.

We can sometimes try to get this divergence under control by considering the limit as y→x of <φ(x)φ(y)>. Quite often the divergence is a straightforward pole, for example it might take the form <φ(x)φ(y)> = 1/(x-y)² + something finite. So we could just subtract off the pole and see what happens: replace <φ(x)²> with the limit as y→x of <φ(x)φ(y)> - 1/(x-y)².

Sometimes our physical system has a symmetry. For example, if you're doing physics in 2D (one space+one time) it's not unusual to have conformal invariance, ie. invariance under transformations of spacetime that preserve angle. So if we think of our physics taking place in the complex plane, this means that if φ is a good solution to the equations of motion in some region, so should φ○f be for any analytic bijection f, because analytic functions preserve angles. (Sometimes it's a little more complex than this because we might have some kind of covariance instead of invariance.)

But if we transform our underlying space using f, the pole we subtracted off will get replaced by 1/(f(x)-f(y))² instead of 1/(x-y)². In particular this means that if we change coordinates using an analytic function, which should have no effect, our kludge of subtracting off the pole subtracts something different off. This is known as the conformal anomaly. It's the extra term that pops up when you transform your spacetime with f, and it's the thing that makes string theories only work in certain dimensions like 10 or 26.

The exact details depend on the type of field, but for many physical systems the anomaly is proportional to the so-called Schwarzian derivative of f. Check out the Wikipedia page at https://en.wikipedia.org/wiki/Schwarzian_derivative to see how it is defined and why it pops up when you apply holomorphic functions to warp your spacetime in the vicinity of a pole (e.g. see the bit starting "Introducing the function of two complex variables"). Note also the surprising "chain" rule.
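For reference, the definition is

    (Sf)(z) = f'''(z)/f'(z) - (3/2)(f''(z)/f'(z))²

and the two-variable computation on that page boils down to the expansion, as y→x,

    f'(x)f'(y)/(f(x)-f(y))² = 1/(x-y)² + (Sf)(x)/6 + terms vanishing as y→x

so warping with f shifts the subtracted pole by a finite amount proportional to the Schwarzian. (The factor f'(x)f'(y) accounts for the field transforming with a weight, the covariance mentioned above.)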

The ordinary second derivative of a function f is a measure of how much it fails to take the form f(z)=az+b. The Schwarzian derivative measures how much f fails to fit the form f(z)=(az+b)/(cz+d).

The Schwarzian derivative pops up all over the place in mathematics from chaos theory and string theory to projective geometry and the theory of differential equations.

Anyway, the above is some of what I wrote about in my PhD thesis many years ago. I always found the Schwarzian derivative very weird and never really got to grips with it. I think if I'd carried on in academia I would have looked into the Schwarzian derivative some more.

This is the nearest thing I know of to a pop math article on this "derivative": http://math.univ-lyon1.fr/~ovsienko/Publis/what-is.pdf.
10 comments
 
+Dan Piponi"Creation/Annihilation" operators? Have you found a mathematical imperative that overrides 'constancy?' (by that I mean a 'QF/T' assumption of a given energy potential assigned to matter)
Or did I misread something? 

Dan Piponi

Shared publicly
 
A few times now I've seen people take for granted that the set of true mathematical propositions exists. After all, in Set Theory the set of all propositions exists (when propositions are suitably encoded as sets), and for the set of true propositions we merely want a subset.

If we take true mathematical propositions to be true propositions of ZF, and work in ZF, I don't think it makes any sense to talk of the set of true propositions. The obvious construction of this putative set is to use the axiom of separation to form the subset of true propositions from the set of all propositions. But Tarski showed us we don't have a truth predicate, so we can't follow this path. I don't think we can even formalise what it means for this set to exist.

But I'm pretty sure a lot of people think it's completely obvious this set exists.

You can construct this set if you're prepared to use a stronger axiom system than ZF to talk about the true statements of ZF. But as soon as you do that you're playing a different game, and I expect that the set (and whether or not it even exists) will depend on the choice of stronger system you use.

I think some people are using the intuitive argument that we don't need to know precisely what the set of true propositions is. The power set of the set of all propositions exists, and so do all of its elements, and therefore the set of true propositions exists. But I don't find this compelling.

Am I crazy for thinking people shouldn't be asserting the existence of this set without careful qualification? Or is everyone else crazy?
55 comments
 
I wish I knew this topic well enough to collect up what has been said into a coherent essay on this topic. Maybe one day someone qualified will write one. I can guarantee them one reader if it's not pitched at too high a level.

Dan Piponi

Shared publicly
 
One of the pillars of quantum mechanics is the photoelectric effect. Light below a certain frequency is unable to dislodge electrons from a material even when delivered by a high-power beam. The argument in almost every textbook, due to Einstein, says that the energy must be arriving in discrete packets, photons, and that you need a single packet on its own to have enough energy to kick out an electron.

If it's an advanced enough textbook then there will be a later chapter on time-dependent perturbation theory where it is shown how to calculate the rate at which electrons are kicked from one energy level to another as a function of the incoming electromagnetic wave. In almost every single textbook the argument treats the incoming wave as a classical field, demonstrating how you can in fact explain the photoelectric effect without recourse to photons.

Disappointingly these books don't simply self-annihilate in a cloud of contradiction.

(Of course the photoelectric effect is still good evidence for QM. Just not in the way textbooks claim.)
9 comments
 
hmm interesting.  Your thoughts on mutual resonance increasing vibration... 

Dan Piponi

Shared publicly
 
A few years ago I read a pop science article about "rogue planets", planets floating free of any star that might retain enough heat to support a habitat suitable for life. I thought it'd make a great scenario for a science fiction story so I was pleased when Chris Beckett's Eden stories (Dark Eden, then Mother of Eden) were published.

The main story starts with a growing community several generations after a small number of humans are stranded on a rogue planet heated by geothermal activity. They are equipped with the most basic technology, and memories of a distant home called Earth. There's some interesting world-building, though Beckett's main focus is on using the scenario as a fictional testbed for exploring various kinds of origin myth. Similarly to Delany's Tales of Nevèrÿon [1], we get characters explaining to us the origin of everything from money and patriarchy to social class and prohibitions against homosexuality. In fact, the books remind me a lot of a work of non-fiction, Graeber's analysis of debt [2].

The planet's ecosystem is interesting with plant-life providing a transport mechanism that pumps heat from the lower depths of the planet to the surface. There's also a novel (to me) variation on the idea of a deity, the notion of a Watcher who looks out from the same eyes as each individual and who can be perceived only fleetingly in moments of stillness. Humans have spent many thousands of years exploring the space of spiritual entities so it's surprising to find an author managing to find space for something new.

Although the world-building and analysis are interesting, the plot and characters are a little weak, feeling more like holes in a template that needed to be filled. Nonetheless I enjoyed both books a lot and look forward to the next book in the inevitable series.

BTW I think these would be great books for "young adults" to provoke classroom discussion about politics and power. An alternative to Lord of the Flies.

[1] https://plus.google.com/+DanPiponi/posts/YDoTxpHgeEZ
[2] http://en.wikipedia.org/wiki/Debt:_The_First_5000_Years

Dan Piponi

Shared publicly
 
 
Solar-powered Loon balloons provide Internet fast enough to stream YouTube videos #io15 
82 comments on original post
 
How many users worth of YouTube streaming per balloon?

Dan Piponi

Shared publicly
 
Anyone who's studied quantum mechanics knows that the subject is largely about pairs of linear operators, a and a⁺, such that:

aa⁺ = a⁺a+1

Solving physics problems often involves rearranging expressions in a and a⁺ so that all of the a⁺ factors are on the left of each monomial and the a factors are on the right. This sort of thing:

  aa⁺aa⁺a⁺a⁺aa⁺a
= (a⁺a+1)aa⁺a⁺a⁺aa⁺a
= a⁺aaa⁺a⁺a⁺aa⁺a+aa⁺a⁺a⁺aa⁺a
= ...
= a⁺⁵a⁴+10a⁺⁴a³+23a⁺³a²+9a⁺²a

Getting to the last line takes a substantial amount of work and of course it gets worse when you have infinite sums.

But now I've read http://arxiv.org/abs/0904.1506 I see that there's a much easier way of getting those coefficients: 1, 10, 23, 9.

You can translate a monomial in a and a⁺ into a path on a grid by drawing a⁺ as a horizontal line and a as a vertical line, as in the diagram. That defines a region under the path known as a Ferrers board.

1 is the number of ways of placing zero non-attacking rooks on this Ferrers board. 10 is the number of ways of placing 1 rook, 23 is the number of ways of placing 2 rooks and so on.

I can't believe I've gone all these years without coming across this simple interpretation of the coefficients before.

It's worth reading the proof in the paper. The expression aa⁺ = a⁺a+1 corresponds precisely to a single step in a recursive procedure for counting rook placements.
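As a sanity check, here's a throwaway Haskell sketch (mine, not the paper's code): normal order a word by folding in one operator at a time using aa⁺ = a⁺a+1, and independently count rook placements column by column on the Ferrers board.

    import Data.Map (Map)
    import qualified Data.Map as Map

    data Op = A | ADag deriving (Eq, Ord, Show)

    -- A sum of normally ordered monomials: a map from (number of a⁺'s,
    -- number of a's) to an integer coefficient.
    type Normal = Map (Int, Int) Integer

    -- Multiply on the right by one operator, using
    -- a⁺^p a^q a⁺ = a⁺^(p+1) a^q + q a⁺^p a^(q-1),
    -- which is just aa⁺ = a⁺a + 1 applied q times.
    mulOp :: Normal -> Op -> Normal
    mulOp s A    = Map.mapKeys (\(p, q) -> (p, q + 1)) s
    mulOp s ADag = Map.foldrWithKey step Map.empty s
      where
        step (p, q) c acc =
          Map.insertWith (+) (p + 1, q) c $
            if q > 0
              then Map.insertWith (+) (p, q - 1) (fromIntegral q * c) acc
              else acc

    normalOrder :: [Op] -> Normal
    normalOrder = foldl mulOp (Map.singleton (0, 0) 1)

    -- Rook counts for a Ferrers board given as nondecreasing column
    -- heights: a new column of height h, seen by k already-placed rooks,
    -- has h - k free rows.
    rookCounts :: [Int] -> [Integer]
    rookCounts = foldl addColumn [1]
      where
        addColumn ways h =
          zipWith (+)
            (ways ++ [0])
            (0 : zipWith (\k w -> fromIntegral (h - k) * w) [0 ..] ways)

Reading the word left to right, the column under each a⁺ has height equal to the number of a's seen so far, so the example above gives heights [1,2,2,2,3], and the two computations agree:

    ghci> normalOrder [A,ADag,A,ADag,ADag,ADag,A,ADag,A]
    fromList [((2,1),9),((3,2),23),((4,3),10),((5,4),1)]
    ghci> rookCounts [1,2,2,2,3]
    [1,10,23,9,0,0]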

This is just a tiny hint of the richness of the combinatorics of a and a⁺.
13 comments
 
+Urs Schreiber Thanks for the clarification and multi-symplectic algebra is something I can vaguely follow.

Dan Piponi

Shared publicly
 
+Satnam Singh recently posted a view of (roughly) where he lives. This is Redwood Peak, near where I live.

In the areas that get coastal fog it's still green in California.

Dan Piponi

Shared publicly
 
In many programming languages, including Haskell, you can freely copy values. For example, in Haskell you can write

    let b = a in ...

Haskell is based on the internal language of Cartesian closed categories (CCCs) [1]. The feature of these that allows us to copy values is the fact that a CCC has a diagonal morphism

    Δ : A → A×A

You can imagine making the copy more explicit by thinking of the above notation as shorthand for

    let (a,b) = Δ(a) in ...

The shorthand makes clear that in some sense we're reusing the variable a to continue labelling one of the copies. But that doesn't matter here because a has the same value before and after.

If you're writing code for linear algebra it's natural to think in terms of operations in the category of vector spaces. They are equipped with a copy morphism too:

    Δ : V → V⊕V

and so a vector space language should have a let clause to allow this.

But vector spaces also come equipped with a duality operation so any linear map f:A→B gives rise to another f*:B*→A*, usually called the adjoint or transpose of f. So we'd like to have

    Δ* : V*⊕V* → V*

This map is more commonly known as addition.
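To make the duality concrete, here's a toy sketch (dense matrices over Double, written just for this post): Δ on an n-dimensional space is I stacked on I, and its transpose really is the map that adds its two halves.

    type Vec = [Double]
    type Mat = [[Double]]  -- a matrix as a list of rows

    apply :: Mat -> Vec -> Vec
    apply m v = [sum (zipWith (*) row v) | row <- m]

    -- Δ : V → V⊕V duplicates a vector; as a matrix it is [I; I].
    delta :: Int -> Mat
    delta n = identity n ++ identity n
      where identity k = [[if i == j then 1 else 0 | j <- [1..k]] | i <- [1..k]]

    -- Its transpose [I I] maps (x, y) to x + y: "copy" dualises to "add".
    deltaStar :: Int -> Mat
    deltaStar n = [map (!! i) (delta n) | i <- [0 .. n - 1]]

    -- ghci> apply (delta 2) [1,2]
    -- [1.0,2.0,1.0,2.0]
    -- ghci> apply (deltaStar 2) [1,2,10,20]
    -- [11.0,22.0]

In this picture "let (a,b) = Δ(a)" is multiplication by the first matrix and "a += b" is multiplication by its transpose.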

As I mentioned on G+ recently [2], you can construct adjoints of computer programs by a process that involves writing your code backwards. It'd be nice to have your programming language reflect this. So what operation is dual to "let b = a"? In C it'd be written

    a += b

Because let is shorthand for reusing the variable name for one of the copies, the adjoint version also has a variable reuse. But this time a changes value so the corresponding operation is a mutation.

In CCCs you don't have a dual to Δ so it makes sense that Haskell outlaws these kinds of mutable updates. But in a language (or DSL) for linear algebra, += is as natural a statement as let.

This is a simpler formulation of something I mentioned many years ago [3].

This also serves as a reply to Twan van Laarhoven [4]. Curiously, since then van Laarhoven has played an important role in making a serviceable form of += available in Haskell [5].

[1] https://golem.ph.utexas.edu/category/2006/08/cartesian_closed_categories_an_1.html
[2] https://plus.google.com/+DanPiponi/posts/9uWDCeLPLpz
[3] http://blog.sigfpe.com/2007/01/monads-hidden-behind-every-zipper.html
[4] http://blog.sigfpe.com/2008/09/two-papers-and-presentation.html
[5] http://www.twanvl.nl/files/lenses-talk-2011-05-17.pdf
24 comments
 
+Bas Spitters - I don't know which * you were referring to.

The usual notation for the linear map from B* to A* coming from a linear map f from a vector space A to a vector space B is f*. If A and B are Hilbert spaces we also have a map

f† : B → A

but that's different.

Dan Piponi

Shared publicly
 
I enjoyed Jessica Augustsson's previous anthology so I bought this right away. Includes a story by +Lennart Augustsson so how could I refuse?
 
I'm happy and proud to announce my second anthology, Encounters, with a whole slew of new stories by talented authors. You can find it as a Kindle book or paperback from Amazon around the world! Please like and share! :)
http://www.amazon.co.uk/dp/B00ZJD1O26
2 comments
 
(Also, if you could perhaps consider doing a review, I'd be eternally grateful! No worries if not. Just thought I'd ask. :)

Dan Piponi

Shared publicly
 
With Satnam Singh, Simon Marlow and Phil Wadler on a beach in Kefalonia eating the best Greek food ever.
2 comments
 
I think Wadler just travels for a living!

Dan Piponi

Shared publicly
 
I've been playing a little bit with the programming language Julia recently. I'm very conflicted about this language. For small numerical experiments this is my #1 language of choice now. But it contains many things I am not a fan of.

Matlab is a kind of de facto standard in the scientific world for numerical computing. From a language theoretical perspective it is (IMO) a very poorly thought out language with frequently poor performance and a high degree of non-orthogonality. But as long as you stick with vectorised operations Matlab can perform tolerably well. And not every aspect of the language is poorly designed. It has a very compact notation for dealing with arrays and their slices. And it's easy to call C code when needed.

Julia is an attempt to fix many of the problems with Matlab. This means it does slavishly copy many features of Matlab. But it uses LLVM on the back end to generate high performance code. What's appealing is that you can mix imperative-style loops and array updates with vectorised operations like those in Matlab, and know you'll likely get good performance. This makes it very appealing to me.

Most programming languages I have learnt have given me a new perspective on the world inside a computer. This includes languages from Forth and assembly language to Python and Haskell. Just learning them expands your mind in some way. Julia is the first non-mind-expanding language I've learnt. If I hadn't used APL, numpy and Matlab before I might have found its vectorised operations revolutionary. But I had, and Julia doesn't seem to offer anything new. So in that respect it's a bit boring.

But it's not completely without some interesting aspects. Like Lisp it's homoiconic. That's slightly surprising for a language that looks superficially a lot like Fortran.

And although it's described as dynamically typed, in actuality it makes clear that the distinction between static and dynamic typing isn't so straightforward. Just like with a dynamically typed language like Python, just about every function you write is polymorphic in the sense that it's not an error to pass in arguments of just about any type. But Julia's one truly interesting feature is that once it knows the type of the arguments it attempts to monomorphise the function (ie. specialise it to the required types) and compile it just-in-time. This gives the best of both the dynamic and static worlds. Except sometimes it gives the worst of both worlds too.

For example some type related errors get uncovered during compilation before the code is run. This is cool for a dynamic language. But sometimes you still get the disadvantage of having to wait for code to run before a type error is discovered as well as the disadvantage of having to wait for your code to compile every time. Worst of both worlds!

Because it's JIT-compiled, Julia can be painfully slow to import libraries. This is why I like it only for small tasks. I hope this is eventually fixed as I don't think it's an inherent problem with the language.

Julia has many annoying niggles. For example, arrays start at 1. And within matrix literals, spaces are used as separators. So, for example, let's say f is a function mapping ints to ints. "f(x)" is a valid expression, and outside of an array expression "f (x)" is a perfectly good way to write the same thing. You can write a two-element matrix of ints as "[1 2]", with a space separating the 1 and the 2. But "[f (2)]" is not the matrix of integers containing f(2). It is in fact a two-element inhomogeneous matrix whose first element is a function and whose second element is an integer. Yikes!

But for doing things like testing the divide and concur circle packing I mentioned recently, it's hard to beat how fast I got that working in Julia.

And one more thing: functions are first class objects in Julia and forming closures is fairly easy.

http://en.wikipedia.org/wiki/Julia_%28programming_language%29
13 comments
 
Hiiiiii
People
Have him in circles
2,423 people
David Wakeham, Ievgen Varavva, Lê Hương, Michael Vidne, Nenem Duraes, Jay Reynolds, Anada Sak, Michael Hilding, Anatoly Karp
Work
Employment
  • Google
Basic Information
Gender
Male
Story
Tagline
Homo Sapiens, Hominini, Hominidae, Primates, Mammalia, Chordata, Animalia
Introduction
Blog: A Neighborhood of Infinity
Code: Github
Twitter: sigfpe
Home page: www.sigfpe.com
Bragging rights
I have two Academy Awards.
Education
  • King's College London
    Mathematics
  • Trinity College, Cambridge
    Mathematics
Links
YouTube