Profile

Verified name
Dan Piponi
Worked at Google
Attended King's College London
2,250 followers|3,234,028 views

Stream

Dan Piponi

Shared publicly  - 
 
Last and First Men is one of the most amazing works of science fiction ever written and it ought to be better known. It was written in the 1930s and describes the future of humanity over the next two billion years. If that isn't enough for you, the author also wrote a sequel Star Maker.

It's not what you'd conventionally call a good read. The first few chapters are a near future history but with hindsight from 2015 it's mostly annoying, though it does have a few insights. One edition recommends skipping the first few chapters, though that's probably to spare Americans from insult rather than anything else. But apart from that, much of the book is told with very broad brushstrokes. By "broad" I mean passages where Stapledon tells you that he'll skip over the next few hundred thousand years because not much happens, just a bunch of empires and tyrannies rising and falling across the Earth (or Mars, or Venus) or plagues almost wiping everyone out. Nothing important. There aren't many actual events as such in the book. It's more a series of low-resolution descriptions of the state of humanity over time. So don't expect anything like a plot or character development.

There are some neat things that I expect Stapledon got right. There isn't one humanity. There are Homo Sapiens versions 1 to 18 with a few side branches. In many cases version n+1 is the product of genetic engineering by version n. Version 4, for example, are basically giant brains constructed by version 3. I also enjoyed the way the Martians completely misunderstood the nature of intelligent life on Earth.

In many ways the book seems hopelessly wrong. Stapledon didn't anticipate modern information technology. At a certain level of abstraction life in 2×10⁹ A.D. doesn't seem all that different to life today despite the hive mind and the cannibalism. I think there's surprisingly little technological development over two billion years and I expect that much (but not all) of the technology Stapledon describes will actually exist within a few millennia. There's very little space travel, with humans only relocating within the solar system. But there is terraforming. (Remember, this was written in the 30s.)

I don't know of any other work of science fiction that has a scale anywhere near as ambitious as this. But if you know of one, I'd love to hear about it.

Anyway, I currently have the audiobook, narrated by Travis from Blake's Seven, on my iPod for listening to in the night. Works perfectly as a cure for insomnia. (Don't take that as a disrecommendation.)

https://en.wikipedia.org/wiki/Last_and_First_Men
21 comments
 
Thanks for the recommendation! I look forward to reading it.

As for similar works, I spent some time racking my brain to remember Pohl's The World at the End of Time, which I recall as also having a dotting-through-history-to-the-universe's-end ambitious scope. Blish's Cities in Flight doesn't go quite so far out but it's always exemplified that sort of storytelling approach to me, in terms of far-future world-building.

Dan Piponi

Shared publicly  - 
 
With his movie scores John Carpenter has played a major role in the history of electronic music. Many will instantly recognise the Halloween theme music (https://www.youtube.com/watch?v=nQWqRGVE1Zk) and many artists claim Carpenter as an influence. But he's never released a standalone album that's not intended as a movie soundtrack. Until last month.

What can I say? It sounds exactly how you'd expect music by John Carpenter to sound. Pretty excellent if you don't mind a bit of the old 80s analogue synth sound, except that actually it's composed with Logic Pro. Sensible man.

You can watch a montage of his movies set to the first track here: http://johncarpenter.sacredbonesrecords.com

Dan Piponi

Shared publicly  - 
 
If |x| < 1, -log(1-x) = x + x²/2 + x³/3 + x⁴/4 + …

So it seems reasonable to consider the function defined by:

Li(x) = x+x²/4+x³/9+x⁴/16+…

That converges for |x|<1 but you can analytically continue to the entire complex plane if you make a branch cut from 1 to +∞ or treat it as a multivalued function.

But until I read the first chapter of Zagier's introduction to the subject (http://people.mpim-bonn.mpg.de/zagier/files/doi/10.1007/978-3-540-30308-4_1/fulltext.pdf) I had no idea how many astonishing properties this function, known as the dilogarithm, has.

Simple closed-form expressions for the dilogarithm are known at only 8 points in the complex plane, four of them being -φ, -1/φ, 1/φ and 1/φ², where φ is the golden ratio.
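
Those special values are easy to poke at numerically. Here's a tiny Haskell sketch (my own throwaway check, not something from Zagier's chapter) that sums the defining series at 1/2, another of the eight points, and compares it with the closed form Li(1/2) = π²/12 - (log 2)²/2:

  -- Sum the series x + x²/4 + x³/9 + … at x = 1/2 and compare with the
  -- known closed form, π²/12 - (log 2)²/2 ≈ 0.58224.
  li :: Double -> Double
  li x = sum [x^k / fromIntegral (k*k) | k <- [1..1000 :: Int]]

  main :: IO ()
  main = do
    print (li 0.5)
    print (pi^2/12 - (log 2)^2/2)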

Li also satisfies the really bizarre property that for any polynomial f of degree n without a constant term

Li(z) = C(f) + ∑ Li(x/a)

where the sum is over all n roots x of f(x)=z and all n roots a of f(a)=1, i.e. it's a sum of n² terms. C(f) is some complicated thing that depends only on the polynomial.

The dilogarithm has amazing connections with projective geometry, hyperbolic geometry, quantum field theory and even K-theory. (I can't imagine how that last one works.)

Li is just one of an infinite family of polylogarithms, which themselves generalise to multiple polylogarithms. I wonder if they have ladder operators for some group representation like the way I recently learnt Bessel functions do.

Zagier says in the introduction: "Almost all of [the dilogarithm's] appearances in mathematics, and almost all the formulas relating to it, have something of the fantastical in them, as if this function alone among all others possessed a sense of humor."
5 comments
 
The fact that you get from Li_{-n} to Li_{-n-1} via applying z(d/dz) looks like it really ought to have some sort of Weyl algebra action, or an action of something derived from the Weyl algebra.
 
Here's a quote from the paper at: http://arxiv.org/abs/1411.6009

"[We predict] that the [supernova] will appear in the central image of the spiral host galaxy, at an approximate position of α = 11h49m36.01s, δ = +22◦23′48.13′′ (J2000.0) at a future time, within a year to a decade from now (2015 to 2025)."

So here's a puzzle. If the first time you know about a supernova is when you've already seen it happen, how can you possibly make such a prediction?

Scroll down for the answer.












As a result of gravitational lensing, this supernova, in the galaxy cluster MACS J1149.6+2223, is visible in multiple locations. But the path length for each image is different, so the image arrivals are staggered in time. If the astronomers have built their model correctly they can predict future image arrivals too.
6 comments
 
Thank you so much

Dan Piponi

Shared publicly  - 
 
I'm still waiting for someone to write a history of the British microcomputer industry but the nearest I've seen is the comedy Micro Men. But some online searching did turn up "The Sinclair Story" from about 1985:
ftp://ftp.worldofspectrum.org/pub/sinclair/books/SinclairStoryThe.pdf (G+ doesn't recognise ftp links, you'll have to copy and paste manually!)

It was a pretty entertaining read made more entertaining because I'm in the position of reading it while inhabiting a future far beyond anything envisaged in the book. We of course now have electric cars and digital watches and computers that can read the news (without us needing to OCR it from paper) and TVs that fit in our pockets.

I love the Sinclair philosophy that poor hardware + ingenuity = good hardware. But that's not an idea that scales well. So the book made me cringe every time we went through the whole "we can do this thing that everyone else thinks is impossible for next to no money" routine. Amazingly it seemed to work for the ZX Spectrum but my old Sinclair calculator went up in smoke, literally, like so much other Sinclair hardware.

There is a chapter called "Computers in Decline". I'm struggling to recall a time when this could have been an accurate description of the market. I don't remember home computers ever going into decline. I think everyone I knew who had a home computer then has had one continuously since then, and many more do now.

I also learnt that there is a piece of Sinclair Radionics still in existence though it was renamed Thandar and then became part of TTI: http://www.tti-test.com

By the way, the author, Rodney Dale, also wrote one of my favourite "ancient people had ultra-powerful technology from aliens or Atlantis" books. Much better worked out than anything by that fraud von Däniken: http://en.wikipedia.org/wiki/Manna_Machine
9 comments
 
+Matt McIrvin That's truly terrible. OTOH if I'd implemented all that in 320 instructions I'd have been pretty proud of myself.

Dan Piponi

Shared publicly  - 
 
I thought I'd have a go at doing some unicode mathematics in G+. Sorry if you don't have all of ∂, ᵗ, ᵢ, ⱼ, ∑ and others in your font.

I highly recommend Boyd and Vandenberghe's book on convex optimisation: http://stanford.edu/~boyd/cvxbook/

Many of the exercises involve finding the duals of optimisations and I have to admit I quite enjoyed doing them. So I thought I'd make one up to entertain myself on the bus home. It's a Google bus so a sizable fraction of the passengers have code or equations on their screens. I'm posting it here as it might be useful.

Hamilton's principle of least action casts classical dynamics as an optimisation problem so I thought I'd look at the derivation of Hamiltonian dynamics from Lagrangian dynamics.

First I'm going to discretise time so it's xᵢ instead of x(t). And I'm going to assume the Lagrangian is convex so we can use standard convex optimisation methods.

So here is a discretised least action problem.

(1) min(x) ∑ᵢ L(xᵢ, (Ax)ᵢ)

(That means minimizing with respect to all of the xᵢ)

In the usual version the sum is an integral and the second argument to L is the derivative of x with respect to time. Instead I'm using a linear operator A acting on the vector of xᵢs. I'll replace A with the derivative at the end.
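
For concreteness, here's one choice of A you could keep in mind (a toy Haskell sketch of my own, not part of the derivation): the forward difference with periodic boundary conditions. Its transpose is minus the backward difference, which is the discrete version of the fact, used at the end, that the adjoint of d/dt is -d/dt.

  import Data.List (transpose)

  -- Forward-difference matrix with periodic boundaries: (Ax)ᵢ = xᵢ₊₁ - xᵢ.
  fwdDiff :: Int -> [[Double]]
  fwdDiff n = [[entry i j | j <- [0 .. n-1]] | i <- [0 .. n-1]]
    where entry i j
            | j == (i + 1) `mod` n = 1
            | j == i               = -1
            | otherwise            = 0

  main :: IO ()
  main = do
    mapM_ print (fwdDiff 4)              -- rows of A
    putStrLn ""
    mapM_ print (transpose (fwdDiff 4))  -- rows of Aᵗ: (Aᵗp)ᵢ = pᵢ₋₁ - pᵢ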

The matrix A messes things up. It mixes together xᵢ and xⱼ for different i and j. If it didn't do this then we could just minimise each term in the sum individually.

We can get rid of the mixing simply by rewriting the problem as:

min(x, y) ∑ᵢ L(xᵢ, yᵢ)
such that y = Ax

The catch is that we've pushed the problem into the constraints. The objective does look more symmetrical now, so that's a nice bonus. Still, it'd be nice to get rid of the y or the constraints somehow. Let's do a pretty standard optimisation thing. We'll look at the dual of the optimisation w.r.t. y, temporarily fixing x. Call the Lagrange multipliers pᵢ. That gives this unconstrained problem:

(2) min(y) ∑ᵢ L(xᵢ, yᵢ) + pᵢ((Ax)ᵢ-yᵢ)

We can pull a term out of the minimisation:

(min(y) ∑ᵢ L(xᵢ, yᵢ) - pᵢyᵢ) + p·Ax

The first part is a sum of completely independent terms. So we get

(∑ᵢ (min(yᵢ) L(xᵢ, yᵢ) - pᵢyᵢ)) + p·Ax

Let's define H(x, p) = max(y) (py - L(x, y)) (Notice the sign flip.)
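
Here's a quick numerical check of that definition (a toy example of my own, not part of the derivation): take the convex Lagrangian L(x, v) = v²/2 + x²/2 and approximate the maximum over a grid of v values. Analytically H(x, p) = p²/2 - x²/2; the velocity term Legendre-transforms into p²/2 and the rest picks up the sign flip.

  -- Approximate H(x, p) = max over v of (p*v - L(x, v)) by brute force.
  lagrangian :: Double -> Double -> Double
  lagrangian x v = v*v/2 + x*x/2

  hamiltonian :: Double -> Double -> Double
  hamiltonian x p = maximum [p*v - lagrangian x v | v <- [-10, -9.99 .. 10]]

  main :: IO ()
  main = do
    print (hamiltonian 1 2)       -- ≈ 2²/2 - 1²/2 = 1.5
    print (hamiltonian 0.5 (-1))  -- ≈ (-1)²/2 - 0.5²/2 = 0.375

(For the physical oscillator Lagrangian v²/2 - x²/2 the same transform in v gives the familiar p²/2 + x²/2, though that L isn't jointly convex in (x, v), so it sits outside the convex setting I'm assuming here.)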

So the dual function (the value of that minimisation over y) is:

(∑ᵢ -H(xᵢ, pᵢ)) + p·Ax

Strong duality tells us that the maximum, with respect to p, of the dual function equals the minimum of the constrained problem over y (still with x fixed). So that minimum is given by:

max(p) (∑ᵢ -H(xᵢ, pᵢ)) + p·Ax

and the solution to the original problem (1) is

min(x) max(p) (∑ᵢ -H(xᵢ, pᵢ)) + p·Ax

That's it. That is now the original problem in Hamiltonian rather than Lagrangian form.

We can use calculus (differentiating with respect to pᵢ and xᵢ, and using ∂(p·Ax)/∂xᵢ = (Aᵗp)ᵢ) to deduce that at the optimum:

∂H(xᵢ, pᵢ)/∂pᵢ - (Ax)ᵢ = 0
∂H(xᵢ, pᵢ)/∂xᵢ - (Aᵗp)ᵢ = 0

Now we go to the continuum. x and p become functions of time. The matrix A becomes the derivative operator d/dt. Aᵗ becomes the adjoint -d/dt.

So we get Hamilton's equations:

∂H(x, p)/∂p = dx/dt
∂H(x, p)/∂x = -dp/dt

One thing that I learnt from this is that the minus sign in the second of these equations is coming from the adjoint of the derivative.

The other thing I learnt was that the entire derivation is completely natural from the point of view of optimisation. Unlike the usual accounts, I don't feel like I plucked random expressions out of a hat at any stage. I just grouped things together that seemed to go together.

By the way, the substitution of integrals for minima, products for sums and exponentials for linear functions turns the above into a (sketch of a) derivation of the Hamiltonian path integral from the Lagrangian path integral in quantum mechanics.

PS I composed all this using my vim script https://dl.dropboxusercontent.com/u/828035/math.vim
5 comments
 
FWIW Pontryagin's maximum principle [1] can also be discretised and treated this way, again assuming convexity.

Although I didn't end up using it, I learnt about this stuff when I was trying to find optimal balloon trajectories for Loon. Instead I used the HJB equation [2].

[1] https://en.wikipedia.org/wiki/Pontryagin's_minimum_principle
[2] https://en.wikipedia.org/wiki/Hamilton–Jacobi–Bellman_equation

Dan Piponi

Shared publicly  - 
 
I recently finished Kazuo Ishiguro's latest book The Buried Giant.

I've been much entertained by the "controversy" that this book seems to have stirred up (http://www.theguardian.com/books/2015/mar/08/kazuo-ishiguro-rebuffs-genre-snobbery). Despite having written science fiction before, Ishiguro is apparently considered a "serious" writer and his fans are freaking out that the book doesn't come with a plain brown cover to hide the fact that they are reading a fantasy book starring ogres, pixies and magic, as well as a dragon. And if snobbery from one side wasn't enough, Ursula Le Guin got all uppity about a "serious" writer having the temerity to invade her personal territory. Shocking stuff!

I could almost have believed that this book was written by Gene Wolfe. It has many of his signatures such as narrators with unreliable memories who speak with a highly affected style and enter into long expositions at the most inappropriate moments. In fact, it reminded me a lot of Wolfe's Wizard Knight duology, to the point of even sharing a character: Sir Gawain.

I recommend it.
 
Sounds good.

Dan Piponi

Shared publicly  - 
 
 
One of Project Loon’s earliest Eureka moments was the idea that we could provide continuous Internet connection not by keeping balloons stationary over a given location (which would require lots and lots of energy to work against the wind) but by coordinating a fleet of balloons to work with the wind, such that when one balloon leaves a location another moves into its place to continue providing connectivity. In theory, this means that any individual balloon would provide connection in one place and then, days later, provide connection at another location at the opposite end of the world. In our latest long distance LTE test this is exactly what we achieved!

Launched from New Zealand, our globe-connecting balloon made the first leg its journey travelling 9000 km over the Pacific Ocean. Approaching our test location in Chile at a speed of 80 km/h, a command was sent for the balloon to rise into a wind pattern that slowed it down to a quarter of its speed, allowing it to drift overhead members of the Loon operations team who were able to connect to the balloon via smartphones on our test-partner mobile network. 

Hanging around for half an hour to complete the connection testing, the balloon was then sent off on the winds over the South Atlantic ocean towards its next test location, over 10,000 km away in Australia! Our balloon completed this second leg of the journey in just 8 days, travelling over 1000 km per day and reaching a top speed of 140 km/h while whizzing over the ocean south of Africa. Once at the east coast of Australia the Loon Mission Control team implemented a series of altitude maneuvers to catch different winds and reverse the balloon path, lining it up to directly overfly our test location. Having travelled over 20,000 km around the world the balloon flew overhead at a ground distance of less than 500 meters away from our target (well within the 40,000 meter radius required for connection) to provide over 2 hours of Internet connection. That level of precision is like hitting a hole-in-one in golf from over 4 km away!

Tests like this give us real insight into how Project Loon can work at scale. With more balloons in the stratosphere and more Telco partners around the world capable of supporting Loon internet traffic, our ability to provide continuous connection in rural and remote areas will only increase. 
24 comments on original post
5 comments
 
And I assume you're not using helium, since that would be very irresponsible. 

Dan Piponi

Shared publicly  - 
 
I don't know why they call C++ strict and Haskell lazy; it's the wrong way round.

It's easy to write a Haskell program to input a list and start doing work on it before the user has even finished typing it in. This is the default in Haskell. It's eager to get stuff done. The norm in C++ is to wait until the user has finished entering the string before doing any work. Haskell is great if you want to compose a sequence of operations on lists. You don't have to wait for the entire operation on the first list to finish before starting work on the next because Haskell just can't wait to start evaluating the final result.
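
A minimal example of the kind of thing I mean, assuming you run it in an ordinary line-buffered terminal:

  import Data.Char (toUpper)

  -- Echoes each line back upper-cased as soon as you hit return, long
  -- before the input as a whole exists.  interact reads stdin lazily, so
  -- output is produced as eagerly as the consumer downstream wants it.
  main :: IO ()
  main = interact (map toUpper)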

The Haskell laziness page (http://en.wikibooks.org/wiki/Haskell/Laziness) discusses how thunks are used to implement laziness. But really a thunk is a mechanism that allows Haskell to be eager. Rather than try to evaluate code that could cause a computation to block, Haskell puts that stuff in a thunk so it can get on with evaluating the part that's going to be most productive from the point of view of any consumers further down the evaluation chain.

It's all a matter of perspective. If you're a consumer of output from a Haskell program it looks eager but if a Haskell program is the consumer of your input it looks lazy.

(I was motivated to write this because of the tweet: https://twitter.com/CompSciFact/status/572750815800238081 )

Update: Repost with different permissions.
10 comments
 
I think we have misleading and confusing terminology.  And that's pretty lazy of us.  And there is a problem that knowledge and education about these techniques, technologies and tools is not widespread.  Also pretty lazy of us.

;-)

Dan Piponi

Shared publicly  - 
 
 
After a record-breaking 187 days aloft, we have recently landed the Project’s longest duration balloon in one of our Argentinian recovery zones.

That’s a long time! Enough time to hard-boil 33,660 eggs, or 134,640 if you like your yolk runny (doesn’t include eating time), or listen to Elton John’s “Rocket Man” just over 61,000 times. In the same time it took the Earth to complete half of its annual orbit of the sun, our record-breaker managed to circumnavigate the globe 9 times, enduring temperatures as low as -75c (-103 F) and wind speeds as high as 291 km/h, soaring to a maximum height of 21km and drifting over more than a dozen countries across 4 continents.

Having been in the air for just over 3 months we decided to put the balloon through its paces, making a series of altitude changes on its last circumnavigation to test our ability to fly north out of southern latitude bands. The test was successful and we managed to turn up to the Northern tip of Australia where we were able to access a much slower wind stream going in the opposite direction and sending our balloon lazily back over to South America. Finally, we brought it back into its original southern latitude band to swoop in and land in one of our Argentinian recovery zones for collection.

Recovery operations are now underway to bring the balloon back to the lab so the team can analyze this magnificent specimen and learn as much as possible about what makes such long durations possible, building these learnings into our future long-duration fleets before putting the record-breaker through our recycling process. We think that this balloon has definitely earned its retirement!
61 comments on original post

Dan Piponi

Shared publicly  - 
 
After my last post I did a web search on discretized optimal control and came across this paper: https://xa.yimg.com/kq/groups/16539359/1311154123/name/RossKarpenko2012.pdf

It's a hard paper (rocket science!) so I haven't even tried to read it fully.

It deals with the problem of how to rotate a vehicle in space efficiently, using zero propellant, by means of momentum storage devices, aka gyroscopes. It's a complicated version of the old brachistochrone problem, solved with a discretization technique.

The bit I liked was this:

"It is important to note that the zero-propellant maneuver was discovered, designed and implemented in orbit – all using pseudo spectral optimal control. When this fact is juxtaposed with the fact that the flight implementation of the maneuver was performed on a 100 billion-dollar asset with an international crew onboard, supreme confidence of technical success is essential. That PS optimal control passed this high threshold for a NASA flight is an indication of how far the theory has evolved as a space technology."

Not everyone gets to use calculus to control things that cost $100,000,000,000. And did I read that right? Some of the theoretical work was done in orbit?
3 comments

Dan Piponi

Shared publicly  - 
 
Bloom et al. have just published a paper describing a clock that is accurate to six parts in 10^18.

I'm going to take a break for a moment, while I let that sink in. Six parts in ten to the power of eighteen.

...

Here's what one part in 10^18 means: due to the curvature of spacetime resulting from the Earth's gravity, if you raise your hand by 1 cm your watch is now running one part in 10^18 faster than if you didn't raise it.
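
If you want to check that figure, the back-of-the-envelope version (my own arithmetic, using the usual weak-field approximation for gravitational time dilation) is just gh/c²:

  -- Fractional frequency shift from raising a clock by h near the Earth's
  -- surface, to first order: g*h/c².  Prints roughly 1.1e-18.
  main :: IO ()
  main = print (g * h / c^2)
    where g = 9.81       -- surface gravity, m/s²
          h = 0.01       -- 1 cm, in metres
          c = 299792458  -- speed of light, m/s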

Everything you need to know about a spacetime can be determined by measuring proper times along paths through it. So a worldwide network of clocks like this could be used to map out the curvature of spacetime around the Earth. Among other things, it provides a nice way to do geodetic levelling: http://en.wikipedia.org/wiki/Levelling Of course when we have wristwatches this accurate we'll be able to crowdsource such data.

http://www.nature.com/nature/journal/vaop/ncurrent/abs/nature12941.html
10 comments
 
I know you didn't man. No one uses logten for components - gears.
People
In his circles
188 people
Have him in circles
2,250 people
Samuel Mimram's profile photo
Zac Slade's profile photo
Kevin Gamage's profile photo
Pintea Alexa's profile photo
Sandy Hilson's profile photo
Jay Reynolds's profile photo
gregory knapen's profile photo
Alex Fink's profile photo
Jennifer Schubert's profile photo
Work
Employment
  • Google
Basic Information
Gender
Male
Story
Tagline
Homo Sapiens, Hominini, Hominidae, Primates, Mammalia, Chordata, Animalia
Introduction
Blog: A Neighborhood of Infinity
Code: Github
Twitter: sigfpe
Home page: www.sigfpe.com
Bragging rights
I have two Academy Awards.
Education
  • King's College London
    Mathematics
  • Trinity College, Cambridge
    Mathematics
Links
YouTube
Other profiles
Contributor to