Profile

Verified name
Dan Piponi
Worked at Google
Attended King's College London
2,324 followers|3,410,025 views

Stream

Dan Piponi

Shared publicly  - 
 
I've been playing a little bit with the programming language Julia recently. I'm very conflicted about this language. For small numerical experiments this is my #1 language of choice now. But it contains many things I am not a fan of.

Matlab is a kind of de facto standard in the scientific world for numerical computing. From a language theoretical perspective it is (IMO) a very poorly thought out language with frequently poor performance and a high degree of non-orthogonality. But as long as you stick with vectorised operations Matlab can perform tolerably well. And not every aspect of the language is poorly designed. It has a very compact notation for dealing with arrays and their slices. And it's easy to call C code when needed.

Julia is an attempt to fix many of the problems with Matlab. To that end it slavishly copies many of Matlab's features. But it uses LLVM on the back end to generate high-performance code. What's appealing is that you can mix imperative-style loops and array updates with Matlab-style vectorised operations and know you'll likely get good performance either way. This makes it very appealing to me.

Most programming languages I have learnt have given me a new perspective on the world inside a computer. This includes languages from Forth and assembly language to Python and Haskell. Just learning them expands your mind in some way. Julia is the first non-mind-expanding language I've learnt. If I hadn't used APL, numpy and Matlab before I might have found its vectorised operations revolutionary. But I had, and Julia doesn't seem to offer anything new. So in that respect it's a bit boring.

But it's not completely without some interesting aspects. Like Lisp it's homoiconic. That's slightly surprising for a language that looks superficially a lot like Fortran.

And although it's described as dynamically typed, in actuality it makes clear that the distinction between static and dynamic typing isn't so straightforward. Just like with a dynamically typed language like Python, just about every function you write is polymorphic in the sense that it's not an error to pass in arguments of just about any type. But Julia's one truly interesting feature is that once it knows the type of the arguments it attempts to monomorphise the function (ie. specialise it to the required types) and compile it just-in-time. This gives the best of both the dynamic and static worlds. Except sometimes it gives the worst of both worlds too.

For example some type related errors get uncovered during compilation before the code is run. This is cool for a dynamic language. But sometimes you still get the disadvantage of having to wait for code to run before a type error is discovered as well as the disadvantage of having to wait for your code to compile every time. Worst of both worlds!

Because it's a JIT compiler Julia can be painfully slow to import libraries. This is why I like it only for small tasks. I hope this is eventually fixed as I don't think it's an inherent problem with the language.

Julia has many annoying niggles. For example arrays start at 1. And within matrices spaces are used as a separator. So, for example, let's say f is a function mapping ints to ints. "f(x)" is a valid expression. Outside of an array expression, "f (x)" is a perfectly good way to write the same thing. You can write a two element matrix of ints as "[1 2]" where there is a space separator between 1 and 2. But "[f (2)]" is not the matrix of integers containing f(2). It is in fact a two element inhomogeneous matrix whose first element is a function and whose second element is an integer. Yikes!

But for doing things like testing the divide and concur circle packing I mentioned recently, it's hard to beat how fast I got that working in Julia.

And one more thing: functions are first class objects in Julia and forming closures is fairly easy.

http://en.wikipedia.org/wiki/Julia_%28programming_language%29
9 comments
 
Once they have a good IDE comparable to MATLAB's I will make the switch.

Dan Piponi

Shared publicly  - 
 
I recently mentioned the alternating projection algorithm for finding a point in the intersection of two convex spaces. Today I thought I'd mention some variations.

(1) Sometimes you want to work with non-convex spaces. There's an easy method you can apply: just use the same method, blithely ignoring the non-convexity. That method has long been used by people working in X-ray crystallography and other fields where you want to recover the phase of a complex-valued field but can only measure its absolute value: http://en.wikipedia.org/wiki/Phase_retrieval There's also a more sophisticated variation called the difference map: http://en.wikipedia.org/wiki/Difference-map_algorithm

(2) Sometimes you want to find a point in the intersection of N convex spaces. You could try cycling through N projections, but by the time you've reached the Nth your iterate will likely have "forgotten" much of what was learnt on the first. Instead there's a trick: move to a "superspace" that's the N-fold Cartesian (and hence tensor) product of the original space. Now define two "superprojections": (a) each of the N projections is simultaneously applied to one of the copies of the original problem in the "superspace", and (b) each of the N copies is made equal to the average of all N copies. The first "superprojection" is called 'divide' and the second is called 'concur'. Now use your favourite variation on alternating projections.

(3) Now combine (2) and the difference map in (1) to get a general purpose algorithm to find intersections between N non-convex sets in a Euclidean space. This is the...wait for it..."Divide and Concur" algorithm as described here: http://arxiv.org/pdf/0801.0222v1.pdf

Astonishingly, the approach described in (3) can be used to solve many problems (possibly approximately) including Sudoku, 3SAT and packing problems, with a minimal amount of code. In some cases it performs as well as other state of the art methods.
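Here's a minimal sketch of the divide and concur iteration from (2), in Python with numpy, using three halfspaces in the plane as a deliberately easy convex example (my own toy, nothing like the scale of the packing problems):

```python
import numpy as np

def project_halfspace(p, a, b):
    # Project p onto the halfspace {x : a·x <= b}.
    s = a @ p - b
    return p if s <= 0 else p - s * a / (a @ a)

# Three constraints: x <= 1, y <= 1, x + y >= 0 (written as -x - y <= 0).
halfspaces = [(np.array([1.0, 0.0]), 1.0),
              (np.array([0.0, 1.0]), 1.0),
              (np.array([-1.0, -1.0]), 0.0)]

# The "superspace" iterate: one copy of the point per constraint.
copies = np.tile(np.array([5.0, -3.0]), (len(halfspaces), 1))

for _ in range(2000):
    # Divide: each projection acts on its own copy, simultaneously.
    copies = np.array([project_halfspace(c, a, b)
                       for c, (a, b) in zip(copies, halfspaces)])
    # Concur: every copy is replaced by the average of all copies.
    copies = np.tile(copies.mean(axis=0), (len(halfspaces), 1))

p = copies[0]  # approximately satisfies all three constraints at once
```

This is plain alternating projections in the superspace, which is enough in the convex case; the difference-map variant for non-convex sets follows the same divide/concur structure.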

The picture is the result of me implementing a circle packer using divide and concur in just a few lines of code.

Looks like a good tool to add to the utility belt.
6 comments
 
Thanks!

Dan Piponi

Shared publicly  - 
 
There are lots of ingenious algorithms in the world, but one of my favorites is stupidly simple.

If C is a convex set then the projection of a point p onto C is, by definition, the closest point in C to p. If C and D are convex sets, then here's a way to find a point in a non-empty C∩D: pick some point p. Project it onto C, then D, then C, and keep iterating. In a finite dimensional space the sequence converges to a point in the intersection. If C and D are affine then the limit is exactly the projection of the starting point onto C∩D.
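Here's the whole algorithm as a few lines of Python, with two toy sets of my own choosing: C is the x-axis and D is the line y = x, so C∩D is just the origin:

```python
import numpy as np

def project_C(p):
    # Closest point on the x-axis: zero the y coordinate.
    return np.array([p[0], 0.0])

def project_D(p):
    # Closest point on the line y = x: average the two coordinates.
    m = (p[0] + p[1]) / 2.0
    return np.array([m, m])

p = np.array([3.0, 5.0])
for _ in range(100):
    p = project_D(project_C(p))
# p is now (numerically) the point in C ∩ D, i.e. the origin.
```

Each round halves the distance to the origin in this example; in general the rate depends on the angle between the sets.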

I wrote part of ILM's GPU based fluid solver Plume including the part that handles interactions with solid objects. You can see it at work in the video.

In the approach to computational fluid dynamics we used there's a step where you use the fluid velocity to move the fluid around, the advection step. At any time the fluid needs to satisfy two conditions: (1) incompressibility and (2) matching the velocity of the solid objects at their boundaries. After advection these conditions are no longer satisfied. So you project back onto the space of flows that satisfy both conditions. Each condition corresponds to an affine subspace.

There are nice algorithms to project back onto the space of incompressible flows when you have trivial boundary conditions. For example you can use multigrid or FFT based methods. There's an easy algorithm to make velocities match along the boundaries: set the velocities along the boundaries. Many years ago I spent some time puzzling over how to solve both of these problems simultaneously and failed to spot the stupidly simple solution we eventually borrowed from [1]: just alternate these two methods.

The surprise is how quickly it works. The convex sets it works with live in something like a hundred million dimensional space. But convergence was often good enough, to the eye, after around five iterations.

This alternating projection method is just the tip of an iceberg of widely applicable alternating projection algorithms. You can read more about them in [2] and generalise even further using [3].

[1] http://jcohen.name/papers/Molemaker_Low_2008.pdf
[2] http://web.stanford.edu/class/ee392o/alt_proj.pdf
[3] https://web.stanford.edu/~boyd/papers/pdf/prox_algs.pdf
4 comments
Ed S
 
Great post, +Dan Piponi - reshared to Computer Science, Seriously
https://plus.google.com/u/0/communities/111381551813675174408

Dan Piponi

Shared publicly  - 
 
Some days, finite puzzles won't do.
 
Be the first to solve my new puzzle!
  Can you solve my challenge puzzle? Cheryl   Welcome, Albert and Bernard, to my birthday party, and I thank you for your gifts. To return the favor, as you entered my party, I privately made ...
8 comments on original post
Ed S
 
/cc +Alex Fink just in case this is up your street.

Dan Piponi

Shared publicly  - 
 
Our paper has been accepted for the 2015 Mathematics of Program Construction Conference:

Polynomial Functors Constrained by Regular Expressions
http://dept.cs.williams.edu/~byorgey/pub/type-matrices.pdf

It's based on some blog posts from a couple of years ago and shows how to define a tree type in a language like Haskell where the leaves of the tree are constrained to match a regular expression. A basic example that many are familiar with is the type of lists where the elements alternate between two types. The picture corresponds to an example of a binary tree with regular expression b*1a* where 1 is the type with just one inhabitant so it functions as a "hole". These types have the property that it's completely impossible to build a tree that fails to satisfy the constraint.
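The types in the paper make the constraint hold statically; as a purely dynamic analogue (my own sketch, with string tags standing in for the leaf types), you can flatten a tree's leaves left to right and check them against b*1a* with an ordinary regular expression:

```python
import re

def leaves(t):
    # A tree here is either a leaf tag (a string) or a pair of subtrees.
    if isinstance(t, str):
        return t
    left, right = t
    return leaves(left) + leaves(right)

tree = (('b', 'b'), ('1', ('a', 'a')))   # leaves read b b 1 a a
assert re.fullmatch(r'b*1a*', leaves(tree))
```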

As a side effect it also shows how you can think of Conor McBride's Jokers and Clowns paper as really being about divided differences (ie. (f(x)-f(y))/(x-y)) applied to types, even though you can't literally divide types: https://personal.cis.strath.ac.uk/conor.mcbride/Dissect.pdf

Sadly it's hard to find a programming language that can truly automate this process, especially as it involves matrices whose entries are types. Brent and I both tried independently with Agda, but even with dependent types we got stuck. But that doesn't stop you using the paper to hand craft the appropriate type. (I guess Template Haskell would work fine but that's cheating.)
10 comments
 
> Brent and I both tried independently with Agda, but even with dependent types we got stuck.

I'm not sure what it is that you were trying to do but after reading (and quite enjoying) your paper, I had a bit of fun using regular expressions to constrain the values or the types of the values stored in a tree structure.

Here are a few examples:
https://github.com/gallais/potpourri/blob/master/agda/poc/regexpfun/Data/Functor/Examples.agda

Here is the meat of the implementation:
https://github.com/gallais/potpourri/blob/master/agda/poc/regexpfun/Data/Functor.agda

It relies on the regexp library defined here:
https://github.com/gallais/aGdaREP

NB: my repo "potpourri" is a bit messy, so it's probably hard to assemble the right set of files to typecheck Examples.agda; I'd say "don't try this at home". If there's interest, I guess I could upload a self-contained version somewhere.

Dan Piponi

Shared publicly  - 
 
I just discovered the computational linguistics olympiad. These problems are a lot of fun. I tried some easy ones and they were easy. So I jumped to a hard one and now I'm stuck. Have fun!

(Don't post any spoilers, I haven't solved it myself yet. Brag about it if you have solved it though!)

http://www.ioling.org
9 comments
 
It's really not that hard. Try replacing words with symbols. Also, write down each catch.

Dan Piponi

Shared publicly  - 
 
Years back I had a paper published showing how any algorithm to compute a linear function can be run "backwards" to compute the transpose operation in the same amount of time: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.159.4990

It seemed obvious but I hadn't seen it presented as a general purpose algorithm transformation. Today I came across a clear statement. It's known as Tellegen's Principle and it turns out to be quite old. Here's a presentation on the subject: http://www4.ncsu.edu/~kaltofen/bibliography/05/montagnac.pdf A web search will turn up lots of applications. For example: http://specfun.inria.fr/bostan/publications/exposeJNCF.pdf

There's a twist on this I haven't seen mentioned anywhere on the web. You don't have to do this in rings. It works in semirings too. In particular it works in the max-plus semiring (http://en.wikipedia.org/wiki/Max-plus_algebra) which is the natural home for many well-known algorithms.
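Here's a toy illustration of the transformation in Python (my own example, not one from those references). A prefix sum is a linear map; reversing the order of its updates and transposing each one, so that "y[i] += y[i-1]" becomes "y[i-1] += y[i]", computes the transposed map (a suffix sum) in the same number of additions. And since only addition is used, the same transformation makes sense over a semiring:

```python
# Prefix sum: the linear map y = Ax with A lower-triangular all-ones.
def prefix_sum(x):
    y = list(x)
    for i in range(1, len(y)):
        y[i] += y[i - 1]          # elementary linear update
    return y

# Transposed algorithm: the same updates, transposed, in reverse order.
def suffix_sum(y):
    x = list(y)
    for i in reversed(range(1, len(x))):
        x[i - 1] += x[i]          # transpose of "y[i] += y[i-1]"
    return x

# Sanity check of the adjoint identity <Ax, y> = <x, A'y>.
x, y = [1, 2, 3, 4], [5, 6, 7, 8]
lhs = sum(a * b for a, b in zip(prefix_sum(x), y))
rhs = sum(a * b for a, b in zip(x, suffix_sum(y)))
assert lhs == rhs
```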
2 comments
 
Yes. As far as I can see Tellegen showed x.Ay=0 if transpose(A)x=0. Trivial except it's not quite so obvious in original context. I think he said nothing about algorithms. But that's the name that's stuck.

Dan Piponi

Shared publicly  - 
 
People keep comparing the number of votes cast with the number of seats won in Parliament and using this to illustrate a problem with the first-past-the-post system in the UK. I suspect this isn't the best comparison.

The number of votes cast throughout the UK is not a simple representation of the degree of support for each party. Assuming rationality, each voter voted in a way that maximised their gain on the assumption that they were voting in a first-past-the-post system. Arguing that the seat allocation would have been different with a different system is somewhat misguided because people voting under a different system would have cast different votes. In other words, I suspect these numbers are very heavily influenced by the observer effect.

I don't know how rationally people vote. (I personally think it's rational not to vote given the tiny expected gain.) Many people vote strategically but I don't know what proportion do. Some people deliberately vote for a larger party than the one they'd like to see in power, believing their vote is more likely to have an effect that way. Some people vote for a smaller party believing it can send a message without risk of that party coming into power. I'm pretty sure this isn't exaggerating the sophistication of voters: real voters do reason this way.

If I had to hazard a guess I'd propose that in a more proportional system the smaller parties would have received a significantly larger share of the vote, because voters would feel they weren't wasting votes. This means that looking at seats won in a first-past-the-post election is an even worse estimator of voter preferences than it seems.

BTW People often express surprise at election results compared to earlier polls. I suspect that this is because people aren't really motivated to reveal their true preferences or votes in a poll. Instead, a poll might be used strategically as a safe way to send a message.
13 comments
 
Scotland has a Proportional Representation style voting system for the Scottish Government. Despite this apparently making it virtually impossible to get a majority government, the SNP currently have a majority government at Holyrood, just. 

http://www.scottish.parliament.uk/visitandlearn/Education/16285.aspx

There is an election next year, 2016, and their majority could increase if they get a similar result to the UK election.

Dan Piponi

Shared publicly  - 
 
Hauntology is a peculiarly British artistic movement, largely in the medium of popular music, that draws on a mythology constructed from public information films, village horror movies and children's TV, all from Britain of the 60s, 70s and 80s. Musically there are influences ranging from the BBC Radiophonic Workshop and Tangerine Dream, to folk songs and jingles, employing a mixture of digital and analogue production methods. The aesthetic extends to record sleeves, web sites and various kinds of literature published by the main purveyor of hauntological recordings, Ghost Box.

Anyway, I like the stuff, so maybe you will too. Some links to get you started:
https://www.youtube.com/results?search_query=belbury+poly
https://www.youtube.com/results?search_query=advisory+circle

I think I'll add a link to Scarfolk too: http://scarfolk.blogspot.com
Toward the end of last year, I was clearing out some old boxes under the stairs when I happened across a huge pile of old record label mail outs and flyers. As I sifted through the sheaves of gloss...

Dan Piponi

Shared publicly  - 
 
I recently finished this book on the history of string theory. Although there is a lot of technical material, it's not essential to understand all of it in order to get something out of this book.

One of the most fascinating things about string theory is that it was originally discovered backwards, at least compared to any sensible development of the subject. When I was taught the subject (my lecturer was Peter Goddard, one of the discoverers of the no-ghost theorem) you started with generalising Hamilton's principle of least action from particles to strings. You derived some equations of motion. Did the standard quantum mechanics stuff that you'd do with any kind of extended oscillator. And then computed scattering amplitudes so you could predict what might happen in an experiment. (I left out the fun bit which is where it all goes wrong except in the critical 26 dimensions, because of the no-ghost theorem.) But historically what happened was that through the pure mathematical magic of complex analysis it was possible to guess what a possible scattering amplitude looks like globally from a few experimental clues. From that it was possible to reverse-engineer what kind of physical system could have given that answer. It was really interesting to read about the details of how this process took place and see the continuity with other ideas going around at the time.

Other interesting things this book made me appreciate:

I think of quarks as the traditional way to do things and string theory as much newer. But in fact quarks date back to 1964 and strings to 1969. Gell-Mann was involved in the original adoption of both ideas. String theory is so old that Heisenberg probably attended at least one lecture on string theory.

Again backwards from how things are taught: I always thought of superstrings as supersymmetry applied to string theory. But supersymmetry developed out of string theory in around 1971. So, historically at least, it's more accurate to think of supersymmetry as superstrings minus strings. (Though there were some supersymmetrical ideas as far back as 1966.)

Back when I was a student there were a few papers on Liouville field theory which gave a way to make string theory work in non-critical dimensions.  (http://en.wikipedia.org/wiki/Liouville_field_theory) This gets a brief mention in the book but I wonder what happened to that idea.

This book only really gives a history of the subject up to around 1994 although it does have a chapter on more recent work. It'll probably take a decade or two more before a good history of that can be written.
12 comments
 
+Dan Piponi thanks for the feedback. I have been trying to advertize this and other neglected facts in my "string theory FAQ" [1]-

[1] http://ncatlab.org/nlab/show/string+theory+FAQ#DoesSTPredictSupersymmetry

Dan Piponi

Shared publicly  - 
 
I think one of my earliest posts on Twitter linked to this Division by Three paper by Doyle and Conway: http://arxiv.org/abs/math/0605779

There is now a followup, Pan Galactic Division: http://arxiv.org/abs/1504.02179

It shows how for any finite set N, and any injection from the set A×N to B×N, we can construct an injection from A to B without making use of the axiom of choice, in effect allowing us to cancel by N on both sides of the map. To make things easier it views the set B as a set of possible numbers or pictures on cards, and N as the set of possible suits, and constructs the injection by induction on N using a card game. Hence the attached artwork.

This problem has a long history going back to a "lost" proof from 1926.
7 comments
 
+John Baez I want to write something about this, and perhaps look at addressing Doyle and Qiu's remark "Still weaker systems would suffice: It would be great to know just how strong a theory is needed."

Dan Piponi

Shared publicly  - 
 
Video gamers have casual games for when they don't have long blocks of time to devote to working their way through each level. I'm always on the lookout for casual mathematics. I found a nice example recently, Combinatorial Nullstellensatz. There's lots of deep and complex stuff you can do with it, but the statement is easy to understand, the proof is elementary, and once you know it you can start proving theorems in a couple of lines that until recently were considered tricky.

Let's start with this well known fact: if you have a polynomial of degree n in one variable, and a set of n+1 distinct numbers, the polynomial must be non-zero for some element of the set, because it has at most n roots.

By way of illustration, let me use this to prove something really trivial in a really stupid way. I have a set S of 4 distinct integers and I have the three properties {x≠1, x≠2, x≠3}. I want to show that some element of my set satisfies all of the properties. Yes, it's obvious, but here's an indirect way of doing it: each property can be represented as a (linear) polynomial, giving the set {x-1, x-2, x-3}. Each polynomial is non-zero precisely when the corresponding property is true. We can "and" together all of the properties by multiplying the polynomials to get p(x)=(x-1)(x-2)(x-3). This is cubic, so it can't be true that p gives zero for every element of S, which has size 4.
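Here's that silly argument checked mechanically in Python, just to make the encoding concrete:

```python
# p encodes "x≠1 and x≠2 and x≠3" as a product of linear polynomials.
def p(x):
    return (x - 1) * (x - 2) * (x - 3)

S = {1, 2, 3, 7}                       # any 4 distinct integers will do
witnesses = [x for x in S if p(x) != 0]
assert witnesses   # p has degree 3 and |S| = 4, so some element
                   # must satisfy all three properties at once
```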

Combinatorial Nullstellensatz (CN) is a newish (1999) generalisation of the fact that a non-trivial degree n polynomial must be non-zero somewhere on a set of size n+1. It applies to multivariate polynomials. Its usefulness comes from the fact that using schemes like I did above, you can encode propositions about variables as (possibly linear) polynomials. Unlike my silly example, for multivariate polynomials you can get non-trivial results.
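A deliberately trivial multivariate instance (my own example, not from the literature): p(x,y) = xy has total degree 2 with a non-zero coefficient on the monomial x¹y¹, so CN promises a non-zero of p on any grid S1×S2 with |S1| ≥ 2 and |S2| ≥ 2. Brute force agrees:

```python
from itertools import product

def p(x, y):
    return x * y                      # degree 1 in x plus degree 1 in y

S1, S2 = {0, 5}, {0, -3}              # |S1| > 1 and |S2| > 1, as CN requires
# CN guarantees some point of the grid where p is non-zero.
assert any(p(x, y) != 0 for x, y in product(S1, S2))
```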

For example, it's great for proving things about all kinds of graph colourings. You can encode, as polynomials, the properties that you want neighbouring vertices (or coincident edges, or whatever) to have, multiply them all together, and apply CN to show that your colouring scheme can be satisfied.

There are many problems that can be solved this way that aren't ostensibly about polynomials. So I find it surprising that this kind of algebra has anything to say about them.

CN is very useful for solving Mathematics Olympiad problems. (But I don't look at those things as they make me feel really stupid.)

An introductory document on the subject:
http://people.math.sfu.ca/~kya17/teaching/math800/Math800project.pdf

Another, this time with exercises:
http://mathcircle.berkeley.edu/archivedocs/2013_2014/lecture/1314lecturespdf/BMC-Advanced%20Sept.%2017th,%202013.pdf
(At least one of those exercises was a conjecture for 30 years but it just melts when you apply CN to it.)
7 comments
 
While CN only talks about the existence of a non-zero, there is another (often ignored) result by Alon and Füredi that gives a sharp estimate of at least how many non-zeroes there are. This was recently used by Pete Clark et al. to prove Warning's second theorem and its generalisations: http://arxiv.org/abs/1404.7793

The result is Theorem 5 from "Covering the Cube by Affine Hyperplanes", http://www.sciencedirect.com/science/article/pii/S0195669883710115

(If you are looking for more casual mathematics to read around polynomial methods then I can recommend some interesting reading material)
People
In his circles
188 people
Have him in circles
2,324 people
Work
Employment
  • Google
Basic Information
Gender
Male
Story
Tagline
Homo Sapiens, Hominini, Hominidae, Primates, Mammalia, Chordata, Animalia
Introduction
Blog: A Neighborhood of Infinity
Code: Github
Twitter: sigfpe
Home page: www.sigfpe.com
Bragging rights
I have two Academy Awards.
Education
  • King's College London
    Mathematics
  • Trinity College, Cambridge
    Mathematics
Links
YouTube