Dan Piponi
2,656 followers
Homo Sapiens, Hominini, Hominidae, Primates, Mammalia, Chordata, Animalia
Dan's posts

Post has attachment
I eventually figured out how to exactly compute, and implement in Haskell, the "functional" logarithm and exponential of (some) formal power series. So, for example, we can get the formal power series for arcsin as exp(-log(sin)) where log and exp here are the functional versions.

In fact, exp here is just the exponential map from differential geometry, and log is its inverse.

I wrote it up here: http://blog.sigfpe.com/2017/02/logarithms-and-exponentials-of-functions.html
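To make the idea concrete, here is a small self-contained sketch (not the code from the blog post) of the exponential map on truncated formal power series, computed as a Lie series: f = Σ_k (1/k!) (g·d/dx)^k x. Exponentiating the vector field x²·d/dx gives the flow x/(1-x), and negating the field gives the inverse function x/(1+x), which is the sense in which inverse = exp(-log). All names (expV, nTrunc, etc.) are made up for illustration.

```haskell
import Data.Ratio ((%))

-- Truncated formal power series as coefficient lists [a0, a1, a2, ...]
type PS = [Rational]

nTrunc :: Int
nTrunc = 8

-- product of two series, truncated at order nTrunc
mul :: PS -> PS -> PS
mul f g = [ sum [ f !! i * g !! (k - i) | i <- [0 .. k] ] | k <- [0 .. nTrunc - 1] ]

-- formal derivative
deriv :: PS -> PS
deriv f = take nTrunc (zipWith (*) (map fromIntegral [1 :: Int ..]) (tail f) ++ [0])

-- exp of the vector field g(x)·d/dx applied to the identity series x.
-- g must have valuation >= 2 so each Lie-derivative term raises the order
-- and the series truncates after nTrunc terms.
expV :: PS -> PS
expV g = foldl1 (zipWith (+)) (take nTrunc terms)
  where
    idS = 0 : 1 : replicate (nTrunc - 2) 0
    terms = idS : zipWith next [1 ..] terms
    -- term_k = (1/k) * g * (term_{k-1})'
    next k t = map (/ fromIntegral (k :: Int)) (mul g (deriv t))
```

For example, expV applied to x² yields the coefficients of x/(1-x) = x + x² + x³ + ..., and expV applied to -x² yields those of its functional inverse x/(1+x).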

Post has attachment
A little followup to my previous post on functional square roots.

If we have functional powers, that suggests the idea of functional logarithms and exponentials too. For example, if f is a function, we'd expect something like log(f^m) = m log f.

It's tempting to use the formal power series for log. We could interpret log (1+f) as f-f^2/2+f^3/3-... where 1 here is the identity function and ^ is functional power. But this fails. It took me a little while to figure out why.

Formal power series form a ring under addition and multiplication. For example, if f, g and h are power series, we have f*(g+h) = f*g+f*h, where * is power series multiplication. It looks like we might also get a ring structure by using the power series composition operator ○ as the multiplication. But we don't have distributivity on both sides. The formal power series of log(1+x) satisfies log(f^m) = m log f only because of distributivity. So if we want to compute a logarithm of a function we'll have to use a more direct definition of log that doesn't assume distributivity.
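Here's a concrete spot check of the failed distributivity, at the level of plain functions rather than series (illustrative only): (f+g)∘h = f∘h + g∘h always holds, but h∘(f+g) ≠ h∘f + h∘g in general.

```haskell
-- Take h = squaring; f and g the identity. Then h composed with (f+g)
-- disagrees with (h after f) + (h after g) at, say, the point 3.
h, f, g :: Double -> Double
h = (^ 2)
f = id
g = id

lhs, rhs :: Double
lhs = h (f 3 + g 3)      -- (h . (f + g)) at 3: h 6 = 36
rhs = h (f 3) + h (g 3)  -- (h.f + h.g) at 3: 9 + 9 = 18
```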

Here's an approach: define log (f) = lim (n->∞) n (f^(1/n) - 1). We can compute f^(1/n) by iterating the square root operator I previously defined. The catch is that it's problematic to compute this exactly because the coefficients of the resulting power series are all limits. So here's what I do: I compute a rational approximation to the limit with n = 2^256 (nice and big!) and then find rational numbers with small denominators that approximate the resulting coefficients. This makes the assumption that the exact answer has relatively small denominators. I put the code at [1].
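For the "small denominators" step, one standard technique (not necessarily what the gist does) is to truncate the continued-fraction expansion of the high-precision rational just before the first huge partial quotient, which is where the approximation error shows up. The helper name and the cutoff 1000000 are both made up for illustration.

```haskell
import Data.Ratio ((%))

-- Given a rational q that closely approximates an unknown rational with a
-- small denominator, recover the small-denominator rational by cutting the
-- continued-fraction expansion of q at the first very large partial quotient.
reconstruct :: Rational -> Rational
reconstruct q = fromCF (cut (cf q))
  where
    -- continued-fraction expansion (finite, since q is rational)
    cf x = let a = floor x
               r = x - fromInteger a
           in a : if r == 0 then [] else cf (recip r)
    -- keep the leading quotient, then stop before any huge quotient
    cut []       = []
    cut (a : as) = a : takeWhile (< 1000000) as
    -- rebuild a0 + 1/(a1 + 1/(a2 + ...))
    fromCF []       = 0
    fromCF [a]      = fromInteger a
    fromCF (a : as) = fromInteger a + recip (fromCF as)
```

For instance, reconstruct (1/3 + 10^-12) recovers exactly 1/3.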

When I run the code for f(x)=x+x^2, out comes a sequence of rationals. Sadly I can't find the sequence on OEIS. Nonetheless, a web search gets me to [2] and I see...oh...I'm just rediscovering some classical mathematics :-)

Anyway, you'd hope for a property like log f = -log(inverse f). Amazingly the code demonstrates that this seems to hold for the case log (x -> x+x^2) = - log (x -> (-1+√(1+4x))/2).

[1] https://gist.github.com/dpiponi/db72d898dea78e95579998b8c01befe0
[2] http://math.stackexchange.com/questions/208996/half-iterate-of-x2c

Post has attachment
Let me try to distract you briefly from current affairs with a square root of the sin function, i.e. a function f such that f(f(x)) = sin(x).

I wrote some Haskell code to compute the formal power series of such a function [2]. (In fact, it computes the unique functional square root of any formal power series starting x+...)
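Here's a minimal sketch of how such a computation can go (illustrative, not the gist itself): writing h = x + b₂x² + b₃x³ + ..., the x^k coefficient of h∘h is 2·b_k plus terms involving only b₂..b_{k-1}, so each b_k is determined in turn by a linear equation.

```haskell
import Data.Ratio ((%))

-- Truncated formal power series as coefficient lists [a0, a1, a2, ...]
type PS = [Rational]

nTrunc :: Int
nTrunc = 10

mul :: PS -> PS -> PS
mul f g = [ sum [ f !! i * g !! (k - i) | i <- [0 .. k] ] | k <- [0 .. nTrunc - 1] ]

add :: PS -> PS -> PS
add = zipWith (+)

-- f . g by Horner's rule; g must have zero constant term
compose :: PS -> PS -> PS
compose f g = foldr step (replicate nTrunc 0) f
  where step c acc = add (c : replicate (nTrunc - 1) 0) (mul g acc)

-- functional square root of f = x + a2 x^2 + ...: solve h(h(x)) = f(x)
-- one coefficient at a time
fsqrt :: PS -> PS
fsqrt f = go 2 (0 : 1 : replicate (nTrunc - 2) 0)
  where
    go k h
      | k >= nTrunc = h
      | otherwise =
          let c  = compose h h !! k      -- x^k coefficient with b_k still 0
              bk = (f !! k - c) / 2      -- so 2*b_k + c = a_k
          in go (k + 1) (take k h ++ [bk] ++ drop (k + 1) h)

sinPS :: PS
sinPS = [0, 1, 0, -1 % 6, 0, 1 % 120, 0, -1 % 5040, 0, 1 % 362880]
```

Applied to the series for sin, this gives the familiar x - x³/12 - ... , and composing the result with itself reproduces the sin series up to the truncation order.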

The coefficients grow large. Nonetheless, you can truncate the series and try evaluating the resulting polynomial. In the graph I've plotted:

In blue: the degree 38 truncation of the series. Let's call it h.
In green: the functional square of the truncation, i.e. x->h(h(x)).
In red: the sin function.

In most of the range [0,π/2] the approximation isn't bad, but you can see it dive-bombs soon after.

There's a trick to squeeze more out of this approximation though. Clearly sin^(1/2)(x) = sin^(-1)(sin^(1/2)(sin(x))). What's more, sin(x) is closer to zero than x. So if we replace sin^(1/2) with h in the middle of the sandwich we should get better results. In [1] Thomas Curtright speculates that we can repeat this trick ad infinitum, i.e. that sin^(1/2)(x) is the limit as n -> ∞ of sin^(-n)(h(sin^n(x))) where h is given by a truncated power series.
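The trick is easy to play with numerically. A sketch, using just the degree-3 truncation h(x) = x - x^3/12 of the functional square root of sin (the name refine is made up, and convergence of the scheme is exactly what's at issue in [3]):

```haskell
-- degree-3 truncation of the functional square root of sin
h :: Double -> Double
h x = x - x ** 3 / 12

-- refine k x = sin^(-k)(h(sin^k(x))): conjugate h by k iterations of sin,
-- pushing the argument closer to 0 where the truncation is accurate
refine :: Int -> Double -> Double
refine 0 x = h x
refine k x = asin (refine (k - 1) (sin x))
```

Applying refine 5 twice at x = 1 lands within a few thousandths of sin 1, noticeably better than applying the bare truncation h twice.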

Also see the answers at [3]. To be honest, my reading comprehension skills aren't very good. The question asks if the series converges but I can't figure out what the answer is despite there being a lot of writing there.

You can tweak the code to compute things like the square root of the logistic map x -> x(1-x) to find out how many rabbits there are after any dyadic rational number of years - assuming you can truncate the resulting series appropriately.

[1] http://www.physics.miami.edu/~curtright/TheRootsOfSin.pdf
[2] https://gist.github.com/dpiponi/aa7b713da7c822c04193095397b70bb4
[3] http://mathoverflow.net/questions/45608/does-the-formal-power-series-solution-to-ffx-sin-x-converge

Post has attachment
http://www.sciencedirect.com/science/article/pii/S2095254616000211

This paper has an interesting name, but it seems completely bogus to me.

I'm no fan of p-values, but the criticism in this paper seems entirely invalid. It points out that if we take some sample data and then create a new set of sample data by replicating it n times we typically get a new sample that shows more statistical significance. By replicating enough times we can get p as low as we like.

I think the author is claiming that replicating a sample n times is the same kind of thing as collecting a sample that is n times bigger, so we could make our data significant for any hypothesis simply by collecting enough data. But this is a fundamental error: the statistics of an n-times-replicated sample are entirely different from the statistics of a genuinely n-times-larger sample.
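The error is easy to see with a one-sample t-test: replicating the data k times multiplies the t-statistic by roughly √k without adding any evidence. A sketch (tStat is a hypothetical helper, not code from the paper):

```haskell
-- one-sample t-statistic against mean 0: mean / (sd / sqrt n)
tStat :: [Double] -> Double
tStat xs = mean / (sd / sqrt n)
  where
    n    = fromIntegral (length xs)
    mean = sum xs / n
    sd   = sqrt (sum [(x - mean) ^ 2 | x <- xs] / (n - 1))

sample :: [Double]
sample = [0.3, -0.1, 0.5, 0.2, 0.1]       -- t = 2.0

quadrupled :: [Double]
quadrupled = concat (replicate 4 sample)  -- same "evidence", t roughly doubles
```

The mean and the per-point spread are unchanged by replication; only the apparent sample size grows, which is why the p-value shrinks for free.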

Am I misunderstanding this paper?

Post has shared content
Some personal stuff about the good ol' days...

I've talked with people a couple of times recently about how we did anything with computers before Google. I've assembled a complete answer to what, today, might have been replaced by a single Google search.

As a kid I was fascinated by the internals of programming languages. But there was no way I could possibly afford a book on compiler writing. Until university (or later, I didn't do CS at uni), almost everything I knew on the subject came from three articles, two in magazines, one in a book. Amazingly, all three are now online.

1. "Adventure II" in Practical Computing magazine. http://www.nostalgia8.org/download/adv-creators/paw/kenreed.html This introduced to me the idea of a domain specific language. A few years later I wrote my own adventure game compiler for the BBC Micro. (Article also at http://vintagecomputers.site90.net/mags/praccomp/1980/80aug_Adventure_II.pdf)

2. An article on "Pascal in Forth" from the short-lived magazine Soft. http://tangentstorm.github.io/winfield-pascal-83.html This taught me the idea of embedding a language inside another. I think everyone should learn Forth! BTW I had no access to Forth apart from an implementation I wrote in BASIC.

3. A chapter on writing a compiler in BASIC(!) using the BBC Micro's inline assembler in a book called "Practical Programs for the BBC Computer..." http://www.bighole.nl/pub/mirror/homepage.ntlworld.com/kryten_droid/Acorn/Atom/pp/pp_compiler.htm It was a stroke of genius for Acorn to embed an assembler in their BASIC and the author of that compiler did a fantastic job of getting the most out of BBC Basic. Using the 6502 emulator I wrote for my Atari 2600 emulator I was able to get that compiler running again a couple of days ago (https://github.com/dpiponi/Bine).

I probably bought that book at one of the computer fairs at Olympia in London. These events were a major source of information for me.

I also spent a little time disassembling the BBC Basic ROM. Of course I had to write my own disassembler to do that. [1]

That was pretty much the sum total of knowledge I managed to accumulate on the subject from age 13 to 18. Five years of randomly acquired scraps to assemble a library that's a minuscule fraction of what I could get today in one second.

When I was writing the expression language for ILM's fire simulation system (embedded in Python, compiled to CUDA) that last article was there at the back of my mind.

[1] A book was published that was a complete commented disassembly of the ROM. Years later I knew the author, who was doing neural net research where I did my PhD, but I didn't know he wrote that book (or that the book existed) until last week.

Post has attachment
Back in the 90s I worked on a real-time 3D graphics API called BRender. Everyone takes 3D graphics hardware for granted now, but there were no 3D graphics cards then. Everything had to run on the CPU, and the core loops were all written in x86 (486 or Pentium) assembly language.

Here's a BRender demo I wrote back then which recently resurfaced on YouTube:

https://www.youtube.com/watch?v=FlmhZ5y_SZo
https://www.youtube.com/watch?v=82d7DFTGnWw

I had a lot of fun writing code for things like real-time bump mapping and shadow mapping.

Also check out the reflections that the API creator Sam Littlewood implemented:
https://www.youtube.com/watch?v=yHSVD1DLh08

I saw the first wave of 3D cards appear while working at Argonaut. They were barely faster than software (and in some cases slower). But with the second wave they could outperform CPUs.

Post has attachment
I'm making a slightly unusual literature request.

I think I've mentioned before that I've been a pretty bad sleeper in my time, but I've found that listening to audiobooks is, for me at least, comparable in effectiveness to medications. But it has to be the right kind of literature. Four books that have worked really well for me so far are:

1. Moby Dick
2. Don Quixote
3 & 4. Last and First Men and Star Maker

What they all share is that they are long, mostly engaging, but not tightly plotted.

What I mean by the last property is that you can listen to many extended passages from any of these books and they'll stand alone without reference to other parts of the book. That's important, as I expect to be asleep for most of what I'm listening to.

Being engaging is crucial. The book has to override my own thoughts; if it's not engaging enough I'll just revert to thinking my own thing. However, once you've listened to a book a few dozen times (possibly even hundreds at this point) it starts to become less engaging, which is why I need to find some more books.

What other books might fit? I've tried quite a few other things but they don't work anywhere near as well as these books.


Post has attachment
I’m going for some video game, programming, electronics and mathematics nerdiness combined into one today.

I’ve linked to a video of The Empire Strikes Back for the Atari VCS video game console. You may notice that along the left edge of the screen there are visual artifacts in the form of short horizontal bars at the horizon and the top left corner. I want to explain why this ultimately comes from making use of the isomorphism ((ℤ/2ℤ)[X]/(1+X^5+X^6))* ≡ ℤ/63ℤ. On the right side of this isomorphism is the additive group of integers modulo 63. On the left is the multiplicative group of non-zero polynomials modulo 1+X^5+X^6 with integers modulo 2 as coefficients.

I’ve been writing an Atari VCS emulator. The VCS is a very primitive device and it doesn’t have enough memory to store a full representation of what you see on the screen. Instead it generates the signals sent to the electron gun of the TV (remember, this was the 80s) "on the fly" as it scans across the screen. The CPU isn’t fast enough to generate such signals directly, so to make this practical the Atari VCS supports “sprites”, little blocks of graphics that the hardware renders independently of the CPU. Once the sprite-rendering unit receives the signal to render a sprite it starts outputting pixels to the TV one by one. So the hardware needs to count time in order to send that signal at the right moment on each scan line. And to count time, all it needs to do is add 1 until it reaches some threshold and then restart the counter.

But adding 1 is expensive. Or at least it was expensive in the 80s when you had to count at 3MHz, which is what’s needed to render 160 pixels on each row. In the worst case, adding 1 to an n-bit number causes carries that require updating n digits. That takes a lot of transistors and potentially quite a bit of time.

On the other hand, multiplying and dividing binary polynomials modulo another polynomial can be much cheaper than addition. If you multiply a polynomial by X you merely shift the coefficients along by one. And reduction modulo another polynomial is just subtraction of polynomials. Subtracting polynomials whose coefficients are in ℤ/2ℤ doesn’t require carrying. So multiplying by X modulo a polynomial requires nothing more than a shift and some additions modulo 2 (aka exclusive-or, or XOR). This is much cheaper than the usual implementation of addition. What’s more, if you work modulo 1+X^5+X^6, repeatedly multiplying by X generates all of the possible non-zero polynomials. These can each be stored straightforwardly as a sequence of 6 bits. There are 63 of these polynomials, so storing n as X^n modulo 1+X^5+X^6 gives a convenient way to represent the numbers from 0 to 62 if the only thing you care about is adding 1. This is close to what the Atari VCS does. (In fact it stores the coefficients in negated form and uses repeated division by X, which is equivalent to multiplication by X^4+X^5.)
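Here is that counter in miniature: a 6-bit word holds the coefficients of a polynomial over ℤ/2ℤ, and "add 1" is multiplication by X modulo 1+X^5+X^6, i.e. a shift plus, on overflow, an XOR (using X^6 ≡ 1+X^5 mod 2). This is a sketch of a Galois-style LFSR; the real TIA hardware differs in detail (negated bits, repeated division by X).

```haskell
import Data.Bits (shiftL, testBit, xor)

-- state bits b0..b5 encode the polynomial b0 + b1·X + ... + b5·X^5
stepX :: Int -> Int
stepX s =
  let s' = s `shiftL` 1
  in if testBit s' 6
       -- reduce with X^6 = 1 + X^5 (mod 2):
       -- xor 0x61 clears bit 6 and adds bits 0 and 5 mod 2
       then s' `xor` 0x61
       else s'

-- 1 + X^5 + X^6 is primitive, so powers of X visit every one of the
-- 63 non-zero polynomials before returning to 1: a free mod-63 counter
period :: Int
period = 1 + length (takeWhile (/= 1) (tail (iterate stepX 1)))
```

No carry chain anywhere: each step touches at most three bits, which is the whole point of counting this way.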

Atari hid this from the programmers. Instead of having them set a sprite's position by knowing the polynomial corresponding to each integer, they let programmers set the position of a sprite by writing to a certain register at the moment in time when they want the sprite to appear. The counter is reset at that moment (well, a microsecond and a bit later actually; I need to know this stuff to make the emulator work), allowing the VCS to count to the same position on the next scan line in time for the next row of the sprite.

Now, adding 1 in this representation is easy. But the hardware needs to move sprites left and right, and you’d expect this to need subtraction. It does this using some horrible trickery where it speeds up the clock at the beginning of a scan line. That way it can render the sprite earlier. But speeding up the clock messes up other parts of the video hardware that rely on it. To hide this, the Atari VCS simply switches off video output during the first few pixels of each row whenever a sprite is moved. The artifacts are the price for using a representation that made +1 efficient. It must have been a difficult trade-off for the Atari engineers to make.

I learnt most of this from [1]. I had to do a little work to figure out exactly which polynomials are used. These kinds of counters have many applications; see [2] for example. One example I’ve worked with is the spread-spectrum encoding of signals used by GPS satellites [3]. I recommend Racing the Beam [4], a book on the VCS console. It mentions the artifact I’m talking about, but its explanation is incomplete. If you’re feeling really geeky, you can find the +1 circuit among the images here [5]. There’s also some discussion at [6].

[1] http://www.atarihq.com/danb/files/TIA_HW_Notes.txt
[2] https://en.wikipedia.org/wiki/Linear-feedback_shift_register
[3] https://en.wikipedia.org/wiki/Direct-sequence_spread_spectrum
[4] https://mitpress.mit.edu/books/racing-beam
[5] https://atariage.com/2600/archives/schematics_tia/index.html
[6] http://atariage.com/forums/topic/18045-black-lines-on-left-side-of-screen/