Profile

Dan Piponi
Worked at Google
Attended King's College London
2,557 followers | 4,011,054 views

Stream

Dan Piponi

Shared publicly  - 
 
Many people think of religions primarily as systems of belief. I think this may be a skewed view because of the predominance of Christianity and Islam, both of which make creeds prominent. For example, although Judaism does have something like a creed, it tends to place more emphasis on practice than belief.

This reflects my view of mathematics. I think that for many, mathematics is a matter of belief. For them, mathematics is a way to find out what is and isn't true. I tend to see mathematics as a set of practices. As a result, I find myself bemused by debates over whether 2 really exists, or whether infinite sets exist, whether the continuum really is an infinite collection of points, whether infinitesimals exist, whether the axiom of choice is true, and so on. I find some ultrafinitists particularly confusing. They seem to believe themselves to be expressing skepticism of some sort, whereas to me, expressing skepticism about mathematical constructions is a category error. So to me, these ultrafinitists are surprising because of what they believe, not because of what they don't. This doesn't just apply to ultrafinitists. In an essay by Boolos [1], he seems confident in the self-evident truth of the existence of integers, say, but expresses more and more doubt as he considers larger and larger cardinals. Many mathematicians seem to have a scale of believability, and ultrafinitists just draw the scale differently.

Conversations between people who view mathematics (or religion) as being about beliefs, and people who view mathematics (or religion) as being about practices, can often be at cross purposes. And members of one group can often find themselves dragged into debates that they don't care for because of the framing of questions. (I don't want to debate the existence of infinite sets, not because I can't justify my beliefs, but because I'm more interested in how to use such sets. I don't think belief is a precondition for use.)

Of course you can't completely separate belief and practice and I certainly do have some mathematical beliefs. For example I put a certain amount of trust in mathematics in my daily job because I believe certain practices will allow me to achieve certain goals.

[1] Must we believe in Set Theory? https://books.google.com/books/about/Logic_Logic_and_Logic.html?id=2BvlvetSrlgC (I hope I'm not mischaracterizing this essay, but even if I am, the point still stands.)
8 comments
 
I'm very much in your practice-oriented camp religiously, and secondarily mathematically, as you know: http://immanence.org/post/perspectivism-and-post-rationalism/

Dan Piponi

Shared publicly  - 
 
Everyone's into machine learning and big data these days, but I've been enjoying playing with miniature neural networks like those in my recent posts. Looks like I'm not the only one. A paper [1] appeared on arxiv today where the author trains a neural network to "discover" Strassen multiplication, i.e. the trick that allows you to multiply two 2x2 matrices using seven instead of eight multiplications.

You can view it as applying linear operations to the input and output to transform them into a space where matrix multiplication becomes pointwise multiplication. So it's a bit like doing diagonalisation. That reminds me of how autoencoders are related to PCA and diagonalisation [2].
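The trick itself is classical and compact enough to write out. Here's a Python sketch of Strassen's seven-multiplication formulas for reference (mine, not the paper's learned version, which arrives at the same structure via training):

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using Strassen's 7 scalar
    multiplications instead of the naive 8."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    # The seven products: linear combinations of inputs are multiplied
    # pointwise, then linearly recombined into the output.
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
print(strassen_2x2(A, B))  # matches A @ B
```

Note how the structure matches the description above: linear maps in, pointwise multiplication in the middle, a linear map out.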

The paper also has a novel training method that makes "conservative" updates.

BTW, the author is also an author on the "divide and concur" paper I mentioned a few months back [3].

[1] http://arxiv.org/pdf/1601.07227.pdf
[2] https://en.wikipedia.org/wiki/Autoencoder#Relationship_with_Other_Methods
[3] http://arxiv.org/abs/0801.0222
3 comments
 
+Māris Ozols I'm doubtful this approach could work with the kind of large scale matrix multiplications you need for some of the cleverer methods out there. However, I wonder if there are some nice small tricks you could discover. How about multiplying three 3x3 matrices? Or multiplying three quaternions? Both are things you might need in a graphics pipeline.

Dan Piponi

Shared publicly  - 
 
I'm fascinated by certain types of biological self-replicator that aren't strictly speaking self-contained organisms. Examples include transposons and prions. I recently learnt about another: inteins [1,4].

Recall Genetics 101: DNA in genes is transcribed to RNA, which in turn is translated into sequences of amino acids, i.e. proteins.

Frequently some editing happens after the DNA->RNA stage. For example, entire substrings of RNA called introns are edited out, leaving behind pieces called exons which are then stitched together. The editing is carried out by other units in the cell, and it's the exons that get translated into sequences of amino acids, i.e. proteins.

In some cases the editing happens after the translation from RNA to protein. For example a long protein may be cleaved into shorter subunits that have useful functions. Sometimes a protein (an autoprotease) can cleave itself into functional subunits.

Inteins (named by analogy with introns) are small pieces of protein that can edit themselves out of a larger protein and join the flanking pieces back together.

Nobody has found a useful [7] function served naturally by inteins. So they're a bit like parasites. But as they ultimately come from the host's DNA they are reproduced when the host reproduces just like any other gene. And as they edit themselves out cleanly they don't directly cause harm.

But some inteins have a way to cheat and reproduce faster than other genes. Going back to Genetics 101, remember that in many organisms many genes exist in multiple, possibly differing versions called alleles. For example, humans are diploid, meaning they have two versions of most genes. When humans reproduce, children typically receive one of the two versions of each gene that each parent has. For each gene you have a 50/50 chance of inheriting each of the two alleles from each of your two parents.

Some inteins contained in alleles also contain within themselves a region called a homing endonuclease (HED) [3]. The HED attacks and damages DNA specifically in the partner allele. When the host cell detects the allele is damaged it tries to repair it. To make the repair it needs a template, and it uses the undamaged allele, the one with the gene for the intein. So if the intein initially exists only on one of the alleles, it ends up on both. Using this "horizontal" reproduction method the intein can beat the standard Mendelian odds [5].

Note that I just used humans as an example to illustrate alleles, but I think there might not be any naturally occurring inteins in humans. Inteins were first found in yeast.

So inteins containing HEDs are able to proliferate without conferring benefits to their hosts, making them a nice example of what Dawkins calls "selfish" DNA [6].

[1] http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3949740/
[2] https://en.wikipedia.org/wiki/Allele
[3] https://en.wikipedia.org/wiki/Homing_endonuclease
[4] https://en.wikipedia.org/wiki/Intein
[5] https://en.wikipedia.org/wiki/Mendelian_inheritance
[6] https://en.wikipedia.org/wiki/Selfish_DNA
[7] I'm not 100% happy using a loaded word like "useful". Inteins have a different idea of what's useful. Inteins are also very useful to biologists in labs.
9 comments
 
See my obscure review The Utility of Prions http://www.sciencedirect.com/science/article/pii/S1534580702001181

Dan Piponi

Shared publicly  - 
 
I tried to make a list of reasons why deep neural networks work today, whereas a decade or so ago people weren't having much success with them.

A couple of obvious things:

(1) More data for training.
(2) Faster computers and big data centres.

Still, I think we'd have seen successes many years ago even without those. The lack of (2) just requires more patience and I'm sure some large enough databases have existed for a while for some applications. So I'm not sure the lack of these two would have been a showstopper.

There's also:

(3) Better training methods.

Though again, looking at papers, this isn't a showstopper. Better training methods make things go a bit faster, but you seem to get to the same place eventually.

(4) Better activation functions.

There seems to have been a shift from sigmoids to the function relu(x) = max(0, x). Again, the lack of (4) isn't a showstopper. Switching to the relu function mainly gives better performance.

But there's also:

(5) Initializing weights randomly from the correct distribution

If your weights are too big initially then forward propagation causes a numerical explosion. If they're too small, then back propagation causes the gradients to shrink to the point where they do nothing.

The important thing about this is that initializing deep networks badly causes training to stall completely. It's not about making things go a bit faster or a bit slower. It makes the difference between getting anywhere, and getting nowhere.
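A toy numpy experiment makes the effect visible (my own sketch, not from any of the papers; the `forward_norm` helper and the specific scales are made up for illustration):

```python
import numpy as np

def forward_norm(scale, depth=50, width=256, seed=0):
    """Push a random input through `depth` tanh layers whose weights
    have standard deviation `scale`; return the final activation norm."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(width)
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * scale
        x = np.tanh(W @ x)
    return np.linalg.norm(x)

# Weights far too small: the signal shrinks geometrically and vanishes.
print(forward_norm(0.01 / np.sqrt(256)))
# Roughly 1/sqrt(fan_in), Xavier/Glorot style: the signal survives 50 layers.
print(forward_norm(1.0 / np.sqrt(256)))
```

With weights much too large the story flips: pre-activations saturate the tanh and it's the back-propagated gradients that die instead.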

This isn't my area so I'm just going by second hand experience gleaned from papers. But it looks to me like (5) is the one whose lack would actually be a showstopper.
20 comments
 
+Fernando Pereira But the 90's is when most of the important advances were made. After that it was a matter of waiting for GPGPU/massively parallel architectures.

Dan Piponi

Shared publicly  - 
 
 
It’s that time of year again folks! The red-carpet is out and the tuxedos and cocktail dresses are back from the dry-cleaners just in time for the glitz and glamour of our second annual Golden Balloon Awards.

This year’s awards shine the spotlight on some of the behind-the-scenes progress the team has made toward launching a ring of connectivity around the globe in 2016. We’ve been pushing the limits in testing this year, getting balloons up in the air faster, traveling further, and providing connections over longer and longer distances. So without further ado, we give you the best of the best from 2015...

#1 The Speed Racer - It would take the average clown at a birthday party 128 hours to inflate one of our tennis court sized Loon balloons and release it into the stratosphere (assuming the clown doesn’t pass out first), and by that point all the cake would be gone! Thankfully, we use our gigantic auto-launcher, custom designed to get Project Loon balloons from the box to the stratosphere quickly, safely and consistently. This year the team racked up a launch record of just 29 minutes to fill, lift and launch a Loon balloon into the stratosphere.

#2 The Globetrotter - Project Loon balloons have now travelled over 17 million kilometers since the project began, and this globetrotting balloon covered 113,000 km of them in just one flight, our longest distance traveller of 2015. Launched in May, the Globetrotter embarked on a journey of epic proportions, drifting in the stratosphere above 17 different countries before being brought to land in our Chilean recovery zone for a well-deserved retirement.

#3 The Dynamic Duo - Balloon-to-balloon communication allows Project Loon to connect even the most remote areas by bouncing signal across multiple balloons in the sky and back down to users many, many kilometers away. But, this is no easy task - transmitting data between balloons requires an accuracy equivalent to pointing a signal at a can of soda - 20 km up in the air and swaying in the wind! This award recognizes the tag-team effort of two very special balloons in demonstrating balloon-to-balloon connectivity. Launched in June as a simultaneous launch, this adorable couple were far from inseparable, at one point drifting over 100 km apart while data was continuously transmitted between them, the longest distance over which we have demonstrated balloon-to-balloon connectivity in the stratosphere.
14 comments on original post

Dan Piponi

Shared publicly  - 
 
This is possibly the first ever electronic pop tune. Released in 1957, it was composed using a bank of oscillators, mono tape recorders, and tape splicing equipment. There were also a handful of electronically manipulated piano notes.

I learnt about it from the book The Sound of Tomorrow: How Electronic Music Was Smuggled into the Mainstream by Mark Brend. Many historians writing about electronic music from that period tend to concentrate on academic music. Brend's book is about the popular end of the spectrum.
7 comments
 
Nice!

Dan Piponi

Shared publicly  - 
 
I think it's interesting to think about minimal self-reproducing systems of various kinds. Examples are quines [2], self-hosting compilers [1], organisms [3], ecological systems [6] and maybe one day a descendant of the RepRap [4].
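Of these, the quine is the one small enough to fit in a post. A classic two-line Python example (my illustration, not from the text above): the program stores a template of itself and prints the template applied to its own representation.

```python
# s is a template for the whole program; s % s fills the template
# with its own repr, yielding the program's exact source.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints the two code lines (minus the comments), and running that output prints the same thing again: a fixed point of execution.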

Another class of examples is self-reproducing supply chains. As the famous Toaster Project shows [5], even the humble toaster is the product of the work of thousands, or maybe even millions of people. With some optimisation work, I wonder how small a self-sustaining system you'd need to churn out toasters.

Stephenson's novel Seveneves is a supply-chain novel. It asks what is the minimal system that can manufacture anything that a bunch of humans starting with a space station might need to build a civilization in the vacuum of space, assuming the availability of little more than some rocks, some ice, and some solar energy.

Overall I'm not convinced Seveneves is a success as a novel. But I think it's interesting to speculate about the questions it asks.

The biggest constraint seems, to me, to be energy, and if you have solar panels there is an abundance of that in the vicinity of Earth. But solar panels would probably not last well beyond a century and replacements would probably require a larger supply chain than could be established, in space, in a century. Anyway, it's interesting food for thought. I do think it'd be a great research project to try to establish how small a self-sustaining community in space could be.

BTW I'm pretty sure Stephenson is wrong to imagine it could be possible, with any technology whatsoever, to re-establish any of the flora and fauna on Earth from DNA sequences on a USB drive.

[1] https://en.wikipedia.org/wiki/Self-hosting
[2] https://en.wikipedia.org/wiki/Quine_(computing)
[3] https://en.wikipedia.org/wiki/Mycoplasma_laboratorium
[4] http://reprap.org
[5] http://www.thetoasterproject.org
[6] https://en.wikipedia.org/wiki/Biosphere_2
16 comments
 
In Seven Eves I believe they had essentially the entire productive capacity of the earth devoted to building up supplies to push them through the bottleneck.

The SE situation was extremely constrained in another way -- time -- that we don't need to accept.  If you take the unit of self-replication to be an O'Neill colony with its internal parasites (us), embedded in an ecology of O'Neill colonies, living off a substrate of mined asteroid and planetary material and sunshine, then the replication time is on the order of decades to maybe a century.  The individual cylinders are sort of analogous to bacteria, except they don't reproduce by fission.

Dan Piponi

Shared publicly  - 
 
I thought I'd follow up my previous "recursive" neural net to evaluate expression trees to train one to evaluate unparsed expressions. So I put together a kind of recurrent neural net to learn how to evaluate strings like "0+(1+1) * (1+1)+1".

In this case the problem is really hard. Symbols like '(' and '*' could mean anything, and in fact for this example I used non-standard binary operators for '+' and '*'. Correct evaluation requires something like a multi-state stack machine with a stack for both numbers and pending binary operations, and of course the net starts off knowing nothing about any of this. Amazingly, a suitable neural net manages to get the hang of things. I've attached a plot showing predicted vs. actual values for a neural net trained with 6-leaf expressions.
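For comparison, the conventional stack machinery the net has to invent for itself can be sketched in a few lines of Python (my illustration, not the linked code; the `plus`/`times` parameters stand in for the post's non-standard binary operators):

```python
def evaluate(expr, plus=lambda x, y: x + y, times=lambda x, y: x * y):
    """Evaluate strings like "0+(1+1)*(1+1)+1" with the classic
    two-stack method: one stack of values, one of pending operators."""
    prec = {'+': 1, '*': 2}
    ops = {'+': plus, '*': times}
    vals, pending = [], []

    def reduce_top():
        op = pending.pop()
        y, x = vals.pop(), vals.pop()
        vals.append(ops[op](x, y))

    for ch in expr:
        if ch.isdigit():
            vals.append(float(ch))
        elif ch == '(':
            pending.append(ch)
        elif ch == ')':
            while pending[-1] != '(':
                reduce_top()
            pending.pop()  # discard the '('
        else:  # a binary operator: reduce anything of higher precedence first
            while pending and pending[-1] != '(' and prec[pending[-1]] >= prec[ch]:
                reduce_top()
            pending.append(ch)
    while pending:
        reduce_top()
    return vals[0]

print(evaluate("0+(1+1)*(1+1)+1"))  # 5.0 with the standard interpretation
```

The net has to learn an approximation to all of this, including operator precedence and bracket matching, purely from input/output examples.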

See the code at: https://github.com/dpiponi/nn-fold/blob/master/test2.lhs

(Note that it works well with the standard interpretation for the symbols too. But I felt like some variety.)
5 comments
 
+Mike Stay A recognizer for those binary sequences should be easy to train. Maybe I'll make that my first experiment.

Dan Piponi

Shared publicly  - 
 
I thought I'd try a little experiment attempting to fuse a conventional style of programming (in this case recursive folding) with neural networks.

My goal was to take expressions like (1+0)*(1+0+(1*0))+(1*1) and figure out how to evaluate them based only on the values of examples. The catch is that we're told only that entire expressions take real values, and that expressions are computed recursively, and that's about it. We know that + and * are binary operators, but we don't know what they do. If we got to see examples of expressions like isolated 0's and 1's then the problem would be a lot easier. So to make things hard, all examples have at least 8 leaves. So there's never an easy case allowing you to quickly infer what's going on. This doesn't seem like an obvious candidate for a solution via a neural network, but that's what I tried.

The code is here: https://github.com/dpiponi/nn-fold/blob/master/test.lhs

The idea is that the neural network is built on an expression by expression basis and reflects the shape of the expression. So the neural network execution is a recursive fold, just like you'd use normally to evaluate an expression. But instead of a conventional evaluation it's a neural network feed-forward process with weights updated by back-propagation. I don't know if it's reasonable to still call that a neural network.
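The forward pass can be caricatured in a few lines of Python (my own toy sketch with made-up names like `fold` and `combine`; the linked Haskell code is the real thing, with training via automatic differentiation): each leaf symbol gets an embedding vector, and a shared layer per operator combines two child vectors into a parent vector, so the network's shape mirrors the expression tree.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 8  # size of the vector carried at each tree node
embed = {'0': rng.standard_normal(DIM), '1': rng.standard_normal(DIM)}
combine = {op: rng.standard_normal((DIM, 2 * DIM)) * 0.1 for op in '+*'}
readout = rng.standard_normal(DIM)

def fold(tree):
    """Recursively fold a tree ('0', '1', or (op, left, right)) into a
    vector: an ordinary fold, except the 'evaluation' at each node is a
    neural layer instead of arithmetic."""
    if isinstance(tree, str):
        return embed[tree]
    op, left, right = tree
    child = np.concatenate([fold(left), fold(right)])
    return np.tanh(combine[op] @ child)

def predict(tree):
    """Read a scalar prediction off the root vector."""
    return float(readout @ fold(tree))

# (1+0)*(1+1) as a tree; with untrained weights the output is arbitrary.
expr = ('*', ('+', '1', '0'), ('+', '1', '1'))
print(predict(expr))
```

Training would back-propagate through whatever tree shape each example happens to have, updating the shared `embed`/`combine`/`readout` parameters.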

As you'll see if you build and run the code it does work. It also works with entirely different semantics for 0, 1, + and * as long as they're not too wild.
10 comments
 
+Marco Devillers (There are fancy newer Haskell package managers but I'm behind the times.)

Make sure you have the cabal command in your path. You may need to install a fedora package with a name like cabal-install.

Then do "cabal install ad" to install automatic differentiation. You may also need "cabal install vector". If it's an older version of vector there's an extra line of code that might be needed...

Dan Piponi

Shared publicly  - 
 
It's 2016 and many predicted technologies still haven't arrived yet. But some, I think, could have arrived, had we made different choices, in a different timeline, over the last century:

Bases on the Moon: I'm pretty sure we've had the ability to do this for 50 years.
People on Mars: I think we've had the ability to do this for a while too.
Supersonic Airliners: wait a minute, I think we had them in our own timeline. Must have been bleedthrough from another timeline.
Flying Cars: They do exist. But convenient vehicles that can zip about the city in large numbers transporting the mass of a couple of humans? I think that's a maybe.
Hoverboards: it's just a matter of paving the world with copper. I think we made the smart choice here.
A Cure for Cancer: Cancer turned out to be a much more complex problem than people realised. But maybe if all of humanity's resources had been invested in it we'd have a cure by now.
Human Level AI: I think there's a long history of humans underestimating what humans are capable of. It's a hard problem and many organizations have already invested much effort in machine intelligence. So I doubt it exists in many timelines.
Teleportation: I don't imagine this having been invented in any timeline yet.
Fusion: I suspect that we could have had fusion power with enough willpower. We've had uncontrolled artificial fusion for a long time now.
Space elevators: I think these might be just on the edge of what's physically possible. But I think ramping up the manufacturing to build such enormous artifacts might require another century of effort. So not in any timeline yet.
Rockets that land: Woohoo! They exist now!
9 comments
 
+John Baez I think the reason science fiction traditionally looks the way it does, with so much emphasis on people riding around in spaceships and other exotic vehicles, is that when the genre was coming together, transportation technology was in a phase of exponential improvement. It was easy to just extrapolate that curve and imagine amazing things happening.

But transportation hit a plateau sometime around 1970. Improvements continued, but they're incremental, and have to do more with things like safety, efficiency and comfort than raw speed and power and ability to go to strange places.

The cyberpunk writers of the 1980s were trying to adjust to this world in which information technology was the thing that was still exponentiating. But it's interesting... if you look at the early cyberpunk novels they've still got these worlds in which there are things like massive space colonies.

Dan Piponi

Shared publicly  - 
 
Oakland. Atmospheric effects courtesy of the weather and Google Photos.
 
Wow. Maybe there's there there after all.

Dan Piponi

Shared publicly  - 
 
I got much of my introduction to computing as a kid messing about with Commodore PETs, BBC Micros, and the like with a friend of mine, Alex Selby. Among other things, he has a great talent for solving hard search problems, which is why the game AIs I wrote then had no chance against his. He and a friend won the million pound prize for solving the eternity puzzle [4], and now he's been quietly working on the problems that the D-wave machine is designed to solve [1,2]. I can't say I'm surprised that Alex's code outperforms the D-wave device [3].
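The problem class in question is QUBO: minimizing a quadratic form over binary vectors. A brute-force Python sketch to make the problem concrete (mine, not Alex's; his solver exploits the structure of the Chimera graph to handle far larger instances):

```python
import itertools

def qubo_min(Q):
    """Exhaustively minimize sum_ij Q[i][j]*x[i]*x[j] over binary
    vectors x.  Fine for a handful of variables; real solvers must
    exploit problem structure to scale."""
    n = len(Q)
    best_val, best_x = float('inf'), None
    for x in itertools.product((0, 1), repeat=n):
        val = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        if val < best_val:
            best_val, best_x = val, x
    return best_val, best_x

# Tiny instance: the diagonal rewards turning bits on, the off-diagonal
# terms penalize turning on adjacent pairs.
Q = [[-1, 2, 0],
     [0, -1, 2],
     [0, 0, -1]]
print(qubo_min(Q))  # → (-2, (1, 0, 1)): turn on the two non-adjacent bits
```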

[1] https://github.com/alex1770/QUBO-Chimera
[2] http://arxiv.org/abs/1409.3934
[3] http://www.scottaaronson.com/blog/?p=2555#comment-974407
[4] https://en.wikipedia.org/wiki/Eternity_puzzle
2 comments
 
+David Tweed Yes, the scaling is everything. At the very least, if the D-wave device doesn't show good scaling then if you have a few hundred million dollars spare you're better off investing in Alex than in D-wave :-)
People
Have him in circles
2,557 people
Work
Employment
  • Google
Basic Information
Gender
Male
Story
Tagline
Homo Sapiens, Hominini, Hominidae, Primates, Mammalia, Chordata, Animalia
Introduction
Blog: A Neighborhood of Infinity
Code: Github
Twitter: sigfpe
Home page: www.sigfpe.com
Bragging rights
I have two Academy Awards.
Education
  • King's College London
    Mathematics
  • Trinity College, Cambridge
    Mathematics
Links
YouTube
Other profiles