Posts

Post has attachment

Public

A more maggoty version of the eigenvalue plot

Post has attachment

Public

Take A and B, two large square matrices with i.i.d. normally distributed complex entries.

Let F(t) = A+0.1*exp(2*pi*i*t)*B

Plot the eigenvalues of F(t) in the complex plane as t varies from 0 to 1 over 4 seconds (roughly).

Note that even though the animation has a period of 4 seconds, the individual points follow paths that often take longer than 4 seconds to close the loop.

Motivated by +Terence Tao and Vu: https://projecteuclid.org/euclid.acta/1485892530

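The frames of such an animation can be sketched in a few lines of NumPy. This is a minimal sketch; the matrix size n = 200 and the frame count of 64 are my assumptions, not from the post.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # assumed matrix size

def ginibre(n):
    """n x n matrix with i.i.d. standard complex normal entries."""
    return (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)

A, B = ginibre(n), ginibre(n)

def F(t):
    return A + 0.1 * np.exp(2j * np.pi * t) * B

# One set of eigenvalues per frame; scatter-plot the frames in sequence
# (e.g. with matplotlib.animation) to reproduce the animation.
frames = [np.linalg.eigvals(F(t)) for t in np.linspace(0, 1, 64, endpoint=False)]
```

Since exp(2πi·t) has period 1 in t, F(1) = F(0) and the animation loops, even though individual eigenvalue paths may not close after one period.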
Post has attachment

Public

Caustics in geometry are inspired by an effect in geometric optics where bundles of nearby light rays get concentrated on points. The classic example is the nephroid shape you get in coffee cups.

These "classical" caustics arise when you look at families of paths going from a light source to each point in the plane. Imagine parameterising such families of paths. Roughly speaking, when the derivative of path length with respect to the parameters goes to zero then by Fermat's principle [5] our paths correspond to light rays. When the second derivative has a zero eigenvalue then we get many rays arriving at a point and get caustics. As more eigenvalues go to zero, or higher derivatives also go to zero, we get various kinds of cusp. (Catastrophe theory classifies these events.)

In wave optics we look at integrals over families of curves. Roughly speaking, we want to look at the "phase" exp(2πiL(t,A,B)/λ) where L(t,A,B) is the length of the path, parameterised by t, from a light source A to the point B we're illuminating. The illumination at B is given by a (weighted) integral of this quantity over all t. In the limit as the wavelength goes to zero we should get a function concentrated around the classical caustics. (This is essentially just rendering with path tracing but tracking phases.)

The (overly compressed) video [3] is an example for single reflections, caused by a rotating ellipse, of a plane wave along (-1, 0). It's simulating the caustic you'd get in a rotating elliptical coffee cup if the cup were a few hundredths of a mm across. You can clearly see light concentrated along the classical nephroid caustic. Across this edge the intensity is modelled by an Airy Ai function [2]. There is also the highly concentrated cusp point that's locally modelled by the Pearcey integral [1].

You can run the code in your browser [4].

The code is easily modified for reflecting curves other than ellipses.

[1] https://en.wikipedia.org/wiki/Pearcey_integral

[2] https://en.wikipedia.org/wiki/Airy_function

[3] https://youtu.be/j4YNPhllDXU

[4] https://www.shadertoy.com/view/4dKczd

[5] https://en.wikipedia.org/wiki/Fermat%27s_principle

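The phase integral itself is simple to sketch in NumPy. The version below uses a unit-circle mirror rather than the post's ellipse, for brevity; the wavelength and the quadrature count are my assumptions, not values from the shader.

```python
import numpy as np

lam = 0.01  # assumed wavelength, in units of the mirror radius
t = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
P = np.stack([np.cos(t), np.sin(t)], axis=-1)  # points on the mirror

def intensity(B):
    """|integral over t of exp(2*pi*i*L(t,B)/lam)|^2 for a plane wave along (-1,0)."""
    # Path length up to an irrelevant additive constant: a wavefront moving in
    # the (-1, 0) direction reaches P after distance -P.x, then travels |B - P|.
    L = -P[:, 0] + np.linalg.norm(B - P, axis=-1)
    phase = np.exp(2j * np.pi * L / lam)
    return abs(phase.mean()) ** 2

# Evaluate intensity on a grid of points B inside the mirror to render the
# nephroid caustic; shrinking lam sharpens it toward the classical curve.
```

Because the integrand has unit modulus, the averaged intensity is bounded by 1; the caustic shows up as the locus where cancellation fails and the value stays large as λ shrinks.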
Post has attachment

Public

If you have a set of linearly independent functions on some domain you can (under reasonable conditions) form an orthogonal basis by applying Gram-Schmidt. For example, starting with the polynomials {1, x, x^2, ...} and the L2 inner product for functions on [-1,1] you get the Legendre polynomials. I thought I'd see if you can construct substitutes for the Fourier transform based on square and triangle waves.

Working in Mathematica with a finite-dimensional subspace as an approximation, you can consider the set of functions {square(x), square(x+1/4), square(2x), square(2x+1/4), ..., square(nx), square(nx+1/4)} and apply GS.

But by accident I applied GS starting not with the first function but with the last, working backwards. And out popped approximations to sine waves.

It sort of makes sense. Sine and cosine are the only functions f, g such that {f(nx), g(nx) : n in Z, n != 0} gives an orthonormal basis for the square integrable functions.

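The reversed Gram-Schmidt experiment is easy to reproduce in NumPy (the post used Mathematica; the frequency count n = 8 and the 4096 sample points below are my assumptions):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 4096, endpoint=False)
square = lambda u: np.sign(np.sin(2 * np.pi * u) + 1e-12)  # period-1 square wave

n = 8
cols = []
for k in range(1, n + 1):
    cols.append(square(k * x))          # "sine-like" square wave
    cols.append(square(k * x + 0.25))   # quarter-period shift, "cosine-like"
M = np.stack(cols, axis=-1)

# QR factorisation orthonormalises the columns in order, i.e. it performs
# Gram-Schmidt.  Reversing the columns starts GS from the highest frequency.
Q, _ = np.linalg.qr(M[:, ::-1])

# The last output column corresponds to square(x); after the higher-frequency
# content is projected out, it should look like sin(2*pi*x).
q = Q[:, -1]
sine = np.sin(2 * np.pi * x)
corr = abs(q @ sine) / np.linalg.norm(sine)
```

The reason this works: none of the other square waves contains any frequency-1 component, so the projection step can only strip harmonics from square(x), leaving something closer to a pure sine.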
3/23/18

Post has attachment

Public

There's a mathematical technique I call a "translation argument" that is quite common in mathematics. We reason in a non-rigorous way knowing that our steps can be mechanically translated into rigorous statements.

For example, physicists often argue using Dirac delta functions. These can be made rigorous by working with distributions. But physicists often don't consciously work with distributions. These arguments can often be made rigorous in a much simpler way. If a Dirac delta appears in an equality as δ(x-y), where x is a free variable, then we can multiply both sides by f(x), integrate with respect to x, use the sifting identity ∫ f(x) δ(x-y) dx = f(y), and we should get a respectable equality. We can pretend that this was what we meant all along and that the Dirac delta was shorthand. (This is sort of what distributions are about anyway but we don't need to explicitly construct anything.)

Similarly, when using infinitesimals like dx, we can make arguments respectable in a number of ways. For example if we claim f(x)dx = dy we could "divide" both sides by dx and claim this is just an alternative way to write f(x) = dy/dx. Or we could invoke the standard metatheorem from non-standard analysis which translates (some) propositions from non-standard analysis to standard analysis. (Another way might be to claim that dx is a differential form.)

Note that this is different from the usual type of hand-wavey argument where we omit details and steps and there's no mechanical translation of our argument to a correct one.

What are other "translation arguments" used in mathematics? I guess the whole area of toposes might be thought of like this. (Not that I know much about toposes.)

(I'm reminded of embedded DSLs in programming languages. When we embed X in Y we write code in what looks like language X but actually we're writing in language Y and some kind of translation or reinterpretation goes in the background.)

Update: I should mention one of the reasons I find this interesting. If I were working in academia, I think one area I'd like to study is making machine verification of proofs practical in more domains - for example physics. If you reason using Dirac deltas, say, then if you follow the usual approach it seems like you need to build a lot of functional analysis machinery to get started. This is hard work. I'm wondering if many real-world arguments can actually be handled by the method I sketch above.

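Here is the delta recipe carried out once, assuming the identity in question is the usual sifting property. Take the physicists' equality f(x)δ(x-y) = f(y)δ(x-y), multiply both sides by a test function g(x), and integrate over x:

```latex
% Left-hand side, via the sifting identity:
\int g(x)\, f(x)\, \delta(x-y)\, dx = g(y)\, f(y)
% Right-hand side, f(y) pulled out of the integral, then sifting again:
\int g(x)\, f(y)\, \delta(x-y)\, dx = f(y) \int g(x)\, \delta(x-y)\, dx = f(y)\, g(y)
```

Both translations agree for every test function g, so the delta shorthand was standing in for an honest equality all along, with no explicit construction of distributions required.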
Post has attachment

Public

A new shader toy

Go to https://www.shadertoy.com/view/ldcyz7

Works better full screen.

Tap on left.

Stare at black dot in centre of screen for 30s.

Tap on right and keep looking at dot.

After the second tap you should get 2-3 seconds of a strong effect.

Post has attachment

Public

I made a Shader Toy using Bessel functions.

Follow the link: https://www.shadertoy.com/view/4scyz7

Just drag around the image.

It's more fun full screen.

Try editing the order of the Bessel function. A comment shows where.

Post has shared content

Public

This social media stuff is beyond me. I don’t know if sharing to a special interest group (or whatever it’s called) means it doesn’t appear in my usual timeline. So I’m resharing it here. It’s even harder to figure this stuff out on Facebook. Twitter I can still kind of grasp.

I rendered some visualizations of the usage of every single bit of RAM in a couple of Atari VCS games...

1/23/18

Post has attachment

Public

I wrote a little JavaScript toy that lets you explore how the eigenvalues of random symmetric matrices repel each other: https://dpiponi.github.io/levels.html
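The repulsion is easy to see numerically too. The toy itself is JavaScript; here is a NumPy sketch of the same phenomenon (matrix size and trial count are my assumptions), looking at the spacings between consecutive eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_symmetric(n):
    """Real symmetric matrix with Gaussian entries (GOE-like)."""
    M = rng.standard_normal((n, n))
    return (M + M.T) / 2

spacings = []
for _ in range(50):
    ev = np.linalg.eigvalsh(random_symmetric(200))  # real, sorted ascending
    spacings.extend(np.diff(ev[75:126]))            # central part of the spectrum
s = np.array(spacings) / np.mean(spacings)          # normalise mean spacing to 1

# Repulsion: nearly coincident eigenvalues are rare, so the histogram of s
# vanishes at 0 (Wigner surmise: P(s) ~ (pi/2) s exp(-pi s^2 / 4)),
# unlike independent random points, whose spacing density peaks at 0.
```

Histogramming s against the Wigner surmise shows the suppression of small gaps that the interactive toy lets you watch happen live.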
