Massoud Saidi
Massoud's posts

Post has shared content
Spacetime crystals

You know about crystals in space.  What's a crystal in spacetime?    It's a repetitive pattern that has a lot of symmetries including reflections, translations, rotations and Lorentz transformations.   Rotations mix up directions in space.  Lorentz transformations mix up space and time directions.

We can study spacetime crystals mathematically - and the nicest ones are described by gadgets called hyperbolic Dynkin diagrams, which play a fascinating role in string theory.

How do these diagrams work?

Each dot stands for a reflection symmetry of our spacetime crystal.  Dots not connected by an edge are reflections along axes that are at right angles to each other.  Dots connected by various differently labelled edges are reflections at various other angles to each other.  To get a spacetime crystal, the diagram needs to obey some rules.

The number of dots in the diagram, called its rank, is the dimension of the spacetime the crystal lives in.  So, the picture here shows a bunch of crystals in 5-dimensional spacetime.

Victor Kac, the famous mathematician who helped invent these spacetime crystals, showed they can only exist in dimensions 10 or below.  He showed that:

there are 4 in dimension 10
there are 5 in dimension 9
there are 5 in dimension 8
there are 4 in dimension 7

In 1979, two well-known mathematicians named Lepowsky and Moody showed there were infinitely many spacetime crystals in 2 dimensions... but they classified all of them.

In 1989, Saclioglu tried to classify the spacetime crystals in dimensions 3 through 6.  He got a list of 118.

But he left a bunch out!  A more recent list, compiled very carefully by a big team of mathematicians, gives 220:

there are 22 in dimension 6
there are 22 in dimension 5
there are 53 in dimension 4
there are 123 in dimension 3

If they're right, there's a total of 238 spacetime crystals with dimensions between 3 and 10.  
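As a sanity check on that arithmetic, here's a quick Python tally of the counts listed above (plain stdlib, nothing assumed beyond the numbers in this post):

```python
# Hyperbolic Dynkin diagrams ("spacetime crystals") per spacetime dimension,
# as listed above: ranks 7-10 from Kac, ranks 3-6 from the Carbone et al. list.
counts = {10: 4, 9: 5, 8: 5, 7: 4, 6: 22, 5: 22, 4: 53, 3: 123}
print(sum(counts.values()))  # → 238
```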

I think it's really cool how 10 is the maximum allowed dimension, and the number of spacetime crystals explodes as we go to lower dimensions... becoming infinite in dimension 2.

String theory lives in 10d spacetime, so it's perhaps not very shocking that some 10-dimensional spacetime crystals are important in string theory - and also supergravity, the theory of gravity that pops out of superstring theory.    The lower-dimensional ones seem to appear when you take 10d supergravity and 'curl up' some of the space dimensions to get theories of gravity in lower dimensions.

Greg Egan and I have been playing around with these spacetime crystals.  I've spent years studying crystal-like patterns in space, so it's fun to start looking at them in spacetime.  I'd like to say a lot more about them - but my wife is waiting for me to cook breakfast, so not now!

Nobody calls them 'spacetime crystals', by the way - to sound smart, you gotta say 'hyperbolic Dynkin diagrams'.  Here's the paper by that big team:

• Lisa Carbone, Sjuvon Chung, Leigh Cobbs, Robert McRae, Debajyoti Nandi, Yusra Naqvi and Diego Penta, Classification of hyperbolic Dynkin diagrams, root lengths and Weyl group orbits, arXiv:1003.0564.

Someone called Jgmoxness created these nice pictures of all 238 hyperbolic Dynkin diagrams and put them on Wikicommons:

and that's where I got my picture here!

#spnetwork arXiv:1003.0564 #symmetry #KacMoody #Dynkin #geometry  

Post has shared content
point and grok teaching: Part 2: Multiplication in geometric algebra

While i still think my previous post on Clifford algebra is a nice point of first contact, i had to greatly shorten my notes. In this post i'll try and convey how these spaces feel, and show you some computations.

Part 1: "Into Clifford algebra", if you haven't, look here:

Legend: We have cliffs a, b where a has components named xᵢ, and b's are named yᵢ.

A 3d cliff has 8 components. A product a·b expands to 8·8 = 64 subfactors, mucho tedious! But we can look at grade-n subspaces (n-blades), and try to understand multiplying blades. Cl(3,0) has 4 blades:

a₀ = x₀
a₁ = x₁e₁ + x₂e₂ + x₃e₃
a₂ = x₁₂e₁₂ + x₂₃e₂₃ + x₃₁e₃₁
a₃ = x₁₂₃e₁₂₃

And multiplying bladewise is straightforward:

a·b = (a₀ + a₁ + a₂ + a₃)·(b₀ + b₁ + b₂ + b₃)

The result's just a big sum with every pairing of aᵢbⱼ. So it really is enough to understand multiplying blades. The product of anything with a 0-blade is simple stretching by x₀:

a₀·b₁ = x₀·(y₁e₁ + y₂e₂ + y₃e₃)

Multiplying 1-blades a₁·b₁ is equivalent to simultaneously calculating dot- and wedge product of the respective vectors, returning the dot product as scalar, and the wedge product as 2-blade:

c = a₁·b₁ consists of:
c₀ = a₁∙b₁
c₂ = a₁∧b₁

Nice as that may be, i didn't know how to wedge multiply before, and the above would have told me nothing. And it's also kind of a special case, so let's look at an inhomogeneous product a₁·b₂ instead. For a quick impression we can just expand one side, leaving the other as is:

a₁·b₂ = x₁e₁·b₂ + x₂e₂·b₂ + x₃e₃·b₂

The following will lead to a single parametrized summation expression, which is nicely short, and obscure. We're tracking here what happens to the inputs of a multiplication, but there's an easier way to compute a geometric product i'll show you further down.

Where was i? Here, the three summands seem similar, and we'll try to look at a single one first:

x₁e₁·b₂ = x₁e₁·(y₁₂e₁₂ + y₂₃e₂₃ + y₃₁e₃₁)
= x₁y₁₂e₁e₁₂ + x₁y₂₃e₁e₂₃ + x₁y₃₁e₁e₃₁
= x₁y₁₂e₂ + x₁y₂₃e₁₂₃ – x₁y₃₁e₃

By the way, in the last step i've been using these identities:

e₁e₁ = 1
e₁e₂₃ = e₁₂₃
e₁e₃₁ = e₁e₃e₁ = –e₁e₁e₃ = –e₃

You see, those xᵢ and yⱼ again appear in all possible pairings xᵢyⱼ. Here's a short and crisp summation form:

a·b = ∑ᵢ∑ⱼ xᵢ·yⱼ·eᵢ·eⱼ

eᵢ·eⱼ is the multiplication table i invited you to calculate yourself in the first post. I've seen people getting clever to encode it, but instead of computing where it all goes, we could also work in reverse...
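That summation form is easy to turn into code. Here's a minimal Python sketch (my own helper name, not a standard library) of the basis product eᵢ·eⱼ by the swap rule: concatenate the index tuples, bubble-sort while flipping the sign on each swap, then cancel doubled indices since eₖeₖ = 1:

```python
def blade_product(a, b):
    """Product of basis blades given as index tuples, e.g. e₁·e₂₃ = blade_product((1,), (2, 3))."""
    idx, sign = list(a) + list(b), 1
    changed = True
    while changed:                       # bubble sort, one sign flip per swap
        changed = False
        for k in range(len(idx) - 1):
            if idx[k] > idx[k + 1]:
                idx[k], idx[k + 1] = idx[k + 1], idx[k]
                sign, changed = -sign, True
    out = []
    for i in idx:                        # cancel adjacent doubles: eₖeₖ = 1
        if out and out[-1] == i:
            out.pop()
        else:
            out.append(i)
    return sign, tuple(out)

print(blade_product((1,), (1, 2)))   # e₁·e₁₂ = e₂   → (1, (2,))
print(blade_product((1,), (2, 3)))   # e₁·e₂₃ = e₁₂₃ → (1, (1, 2, 3))
print(blade_product((1,), (3, 1)))   # e₁·e₃₁ = –e₃  → (-1, (3,))
```

The three printed lines reproduce the identities used in the expansion above.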

a better way to multiply

So you do want to see the componentwise results. The scalar part is again simple: each component pairs only with its own counterpart. Just note that in Cl(3,0) the bivectors and the trivector square to –1, so those terms come with minus signs:

z₀ = (x₀y₀ + x₁y₁ + x₂y₂ + x₃y₃ – x₁₂y₁₂ – x₂₃y₂₃ – x₃₁y₃₁ – x₁₂₃y₁₂₃)·1

The next one, z₁, collects the pairs whose combined indices leave a single 1 after doubled indices cancel. I first listed all the xᵢ, and then matched up the yⱼ to get a single remaining index of 1.


I added minus signs whenever i'd need to swap indices an odd number of times to get doubles to cancel. Take for example x₂₃: We need to add a 1 and remove 2 and 3, so the other factor has to be y₁₂₃. To get the sign we only need to look at the indices:

e₂e₃e₁e₂e₃ = –e₂e₁e₃e₂e₃ = e₁e₂e₃e₂e₃ = –e₁e₂e₂e₃e₃ = –e₁

Three swaps in between, that's an odd number, so x₂₃y₁₂₃ gets a minus sign. Let me write down z₁ a bit more horizontally for you:

z₁ = (x₀y₁ + x₁y₀ – x₂y₁₂ + x₃y₃₁ + x₁₂y₂ – x₂₃y₁₂₃ – x₃₁y₃ – x₁₂₃y₂₃)·e₁

Note that for higher grades you further need to bring the indices into the right order, but that's all. This has been even easier, i'm sure you can now compute the other zᵢ! Have fun!
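If you'd rather let a machine do the bookkeeping, here's a brute-force Python sketch (helper names are my own) that runs over all 8·8 blade pairings of Cl(3,0) and collects exactly the signed terms that land on e₁:

```python
BLADES = [(), (1,), (2,), (3,), (1, 2), (2, 3), (3, 1), (1, 2, 3)]

def reduce_blade(idx):
    """Sort a concatenated index list, flipping the sign per swap, then cancel doubles (eₖeₖ = 1)."""
    idx, sign = list(idx), 1
    changed = True
    while changed:
        changed = False
        for k in range(len(idx) - 1):
            if idx[k] > idx[k + 1]:
                idx[k], idx[k + 1] = idx[k + 1], idx[k]
                sign, changed = -sign, True
    out = []
    for i in idx:
        if out and out[-1] == i:
            out.pop()
        else:
            out.append(i)
    return sign, tuple(out)

def name(b):
    return ''.join(map(str, b)) or '0'   # () prints as the scalar index 0

terms = []
for a in BLADES:
    for b in BLADES:
        sign, res = reduce_blade(a + b)
        if res == (1,):                  # keep only what lands on e₁
            terms.append(('-' if sign < 0 else '+') + f'x{name(a)}y{name(b)}')
print(' '.join(terms))
# → +x0y1 +x1y0 -x2y12 +x3y31 +x12y2 -x23y123 -x31y3 -x123y23
```

Swap the test `res == (1,)` for any other blade to get the remaining zᵢ.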


Cayley graph and multiplication table i found on Martin Baker's excellent introduction to geometric algebra. Note that the magenta products are commutative, the turquoise ones anticommute [update: fixed, had it the other way around before]:

The picture of David Hestenes is from this interesting article by Emily Hanford about Hestenes: "The Problem with Lecturing"

#sundayscience or #sundaymathematics rather:
#computing products in #geometric - or #clifford #algebra

Post has shared content
Into Clifford algebra

Let's say we're all familiar enough with vectors in 2d or 3d. In vector calculus there's the well known dot product,

a·b = a₁b₁ + a₂b₂ + a₃b₃

and in 3d, there's also the lesser known cross product a×b. It goes back to Graßmann's exterior algebra, that comes with a wedge product a∧b which, in 3d, happens to be the cross product. But there are problems with these products...

Geometric algebra, as William Kingdon Clifford originally named his result, combines those ideas into a single geometric product that algebraically behaves much better than the two vector products. Again, not quite as an historian might put it...

some history

While Carl Friedrich Gauss never bothered to publish his 1819 discovery of _quaternions_, they were finally brought to light by William Rowan Hamilton in 1843. Shortly after, in 1844, Hermann Günther Graßmann defined his wedge product.

When James Clerk Maxwell captured electromagnetism in 1865, he did so using quaternions in a mess of twenty equations. The race for the best formalism to do physics was on...

Clifford's geometric algebra entered the scene unnoticed in 1878, while Josiah Willard Gibbs published about vectors as late as 1880. The latter has led the pack ever since Oliver Heaviside reformulated Maxwell's equations using vectors in 1884, bringing them down to four and impressively outperforming quaternions!

Wolfgang Pauli and Paul Adrien Maurice Dirac described the electron spin in 1927/28 using matrices. But those spinors are unwieldy to describe with vectors: where should they live, and how do they generalize?

It wasn't until the 1980s that David Hestenes cast many physics problems in Clifford algebras, bringing them the attention they deserve. Using Clifford algebra, electromagnetism can intuitively be put down in a single equation!

Well, P. R. Girard refers in his 1984 essay "The quaternion group and modern physics" to a modern description using a quaternionic potential function of only one variable in a single differential equation, citing back to Ludwik Silberstein... To my regret my historical knowledge ends here, so let's get back to the math:

what are those problems with vector calculus?

For one, the dot product  does not yield a vector, but an object of different type - a number. And there is no useful way to add a number to a vector (independent of the chosen basis to represent it).

The same happens with the cross product: you really get something called a bivector, which just happens to look like a vector in 3d. In other dimensions you likewise get an object of a different type.

An n-dimensional vector lives in n-space and consists of n numbers (or components), each on equal footing, together giving the coordinate data of a point in n-space. Here's how a 3d vector is usually seen:

| 1 |
| 2 |  (a vector in R³)
| 3 |

Clifford elements (let's call them cliffs) come with much more information: For dimension n you get 2^n numbers. That's quite some room, certainly enough to accommodate vectors, isn't it? Indeed, they're thriving in there. But what's the other data for?

Before i show you a cliff i'd like to have a better notation because Clifford algebras, lacking type problems such as we've just seen, are much more fun in algebraic notation. Experts know to cautiously interpret the n numbers of a vector as multiples of n basis vectors, written like so:

1e₁ + 2e₂ + 3e₃ (a vector in R³)

Here it comes. To ease the eye, let's do a 2-dimensional cliff first. Here's a simple one called diagonal element because all components are equal to 1:

1 + e₁ + e₂ + e₁₂ (diagonal element of Cl(2,0))

Cl(2,0) is the span of four basis elements, and the first component is called the scalar. Note that i didn't write an extra unit symbol to place beside the scalar, as a pseudo basis element or somesuch.

Scalars are cliffs of grade 0, vectors are grade 1, bivectors are grade 2, in 3d one encounters trivectors, and so on... So e₁ and e₂ really are just vectors. And the last component can always be called the counit, but here e₁₂ is also a bivector.

You can get a bivector by multiplying two vectors. In general, any bivector can be written as linear combination of the 2-graded generators. In Cl(2,0) e₁₂ is the counit so the bivector subspace is 1-dimensional.

Whatever the dimension, you can calculate the extent of a bivector in its subspace just as you can compute the length of a vector in its space. But with a bivector one is supposed to associate an area.

finding e₁₂ with high school algebra

The following train of thought i daydreamed while watching part 1 of Eckhard Hitzer's lecture linked below. Not sure what to make of it, i hope you find it inspiring.

Algebraists like to start with some rules and elements to construct new elements. Let's begin with an orthonormal basis for R² given by unit vectors e₁ and e₂. On these, the scalar product (or dot product) works like this:

(0) e₁·e₂ = e₂·e₁ = 0  (orthonormal)
(1) e₁·e₁ = e₁² = 1 (unit length)
(2) same for e₂

What can we do to construct new elements? We're allowed to add vectors, so one simple thing is to try and compute (e₁+e₂)². Distributive laws hold, so we can make this look like an exercise in high school algebra:

(e₁+e₂)·(e₁+e₂) = e₁² + e₂² + e₁·e₂ + e₂·e₁ = 2

That's almost the definition of Clifford's associative geometric product! The geometric product keeps the rule that a vector squared is its squared length, and |e₁+e₂|² = 2. Because we know e₁² = e₂² = 1, the remaining term e₁·e₂ + e₂·e₁ has to be zero! That is, the following must hold:

(a) e₁·e₂ + e₂·e₁ = 0
which can also be written as:
(a') e₁·e₂ = –e₂·e₁

That's called anticommutative. It means, when swapping the order in a multiplication, we have to also flip the sign. That area can be negative! The geometric product of vectors a·b really is Graßmann's wedge product a∧b when a and b are orthogonal. But as (1) illustrates, it equals the dot product a·b for parallel vectors, and is commutative in this case.

Rule (a) is an alternative to rule (0) in the sense that it won't ruin what vectors can do. Okay, we have constructed a new value e₁·e₂ (which i called e₁₂ before), and can now compute the remaining products:

e₁·e₁₂ = e₂
e₂·e₁₂ = –e₁
and so on...

Look, there's a subspace of quarter rotations (90°), making e₁₂ look like the imaginary complex number i:

    1·e₁₂ = e₁₂
  e₁₂·e₁₂ = –1
   –1·e₁₂ = –e₁₂
 –e₁₂·e₁₂ = 1
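One way to check this with concrete numbers, assuming the standard representation of Cl(2,0) by real 2×2 matrices (e₁ as the diagonal matrix diag(1, –1), e₂ as the swap matrix):

```python
# In this representation, e₁² = e₂² = I, and e₁₂ = e₁e₂ comes out as the
# 90° rotation matrix, which squares to –I: exactly the behaviour of i.
e1 = [[1, 0], [0, -1]]
e2 = [[0, 1], [1, 0]]

def mul(a, b):
    """Plain 2×2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

e12 = mul(e1, e2)
print(e12)            # → [[0, 1], [-1, 0]]
print(mul(e12, e12))  # → [[-1, 0], [0, -1]]  (e₁₂² = –1)
# anticommutativity: e₁e₂ = –e₂e₁
print(mul(e1, e2) == [[-x for x in row] for row in mul(e2, e1)])  # → True
```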

That oriented area you get by multiplying two vectors a·b specifies how much the result will rotate a towards b. See the picture below for an illustration of the types of generators in Cl(3,0). Of these it has 1 scalar, 3 vectors, 3 bivectors, and 1 trivector. Now look at Pascal's triangle below...

There's much more delightful stuff to tell but this post is getting too long. Just put your questions in the comment section, or follow the fun references below.

fun references

Unfortunately there is background noise in Eckhard Hitzer's otherwise fascinating lecture. It's steady and in a quiet environment you might even forget it's there. Go and try, lots of information in here!
Tutorial 1 on Clifford's Geometric Algebra

Some of his work seems to be available on his homepage, but for some reason i couldn't access it. So here's an external link list instead:

+John Baez has highly interesting stuff to show at his lookouts around "The Octonions". If you find the beginning of the following page confusing, don't give up and try just a bit further down. Or the next page.

slehar's blog post (2014) has much material to offer, all explained in very basic terms. The author is also keen to interpret Clifford algebras to benefit the study of consciousness... The mathematics is certainly inspiring!
"Clifford Algebra: A Visual Introduction"

Wikipedia offers a nice heap of introductions to Cl(3,0) here:

There's a Clifford algebra master page for grown-ups:

And they have a "Comparison of vector algebra and geometric algebra":

Maxwell's original formulation appeared in his paper "A Dynamical Theory of the Electromagnetic Field":

The n-vector illustration i cut up is by User:Maschen, referenced here:

The picture of William Kingdon Clifford i cut from here:

Pascal's triangle by User:Drini, part of which you can see below, found here:

#scienceeveryday : #geometric and #clifford #algebra

Post has shared content
Why the brain sees mathematics as beauty

"To many of us mathematical formulae appear dry and inaccessible but to a mathematician an equation can embody the quintessence of beauty." (Prof Semir Zeki, UCL)

Brain scans of mathematicians, by researchers at University College London, showed that a complex string of numbers and letters in a mathematical formula can evoke the same sense of beauty as artistic masterpieces and music from the greatest composers.

The study in the journal Frontiers in Human Neuroscience gave 15 mathematicians 60 formulae to rate. The same emotional brain centres used to appreciate art were activated by "beautiful" maths.

Does it get any more beautiful than Euler's identity? :)
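For the fun of it, Euler's identity e^(iπ) + 1 = 0 can be checked numerically in a line of Python (floating point leaves a tiny imaginary remainder):

```python
import cmath
import math

# e^(iπ) + 1 should be 0; rounding leaves ~1e-16 of imaginary dust.
print(cmath.exp(1j * math.pi) + 1)
```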

Research paper:




Post has shared content
Diagram 18: rotations and spinors

In 1775 Euler proved that any rotation in 3d can be described using an axis and an angle.

Hey, does that mean we can use coordinates to represent 3-rotation? Maybe like this?:

=> A unit vector <x,y,z> (for the axis),
=> and an angle alpha (rotate that much around the axis).

That's almost right, but not quite: It turns out the space of 3-rotations, SO(3), is not a simply connected manifold. That sounds scary, so let's observe first: Take a rotation. You can describe the same rotation by pointing the vector in the opposite orientation and making the angle negative (let it turn in the opposite direction).
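That observation is easy to verify numerically. Here's a small Python sketch (my own code, using Rodrigues' rotation formula) showing that (axis, angle) and (–axis, –angle) build the very same rotation matrix:

```python
import math

def rot(axis, angle):
    """Rotation matrix about a (not necessarily unit) axis, via Rodrigues' formula."""
    x, y, z = axis
    n = math.sqrt(x*x + y*y + z*z)
    x, y, z = x/n, y/n, z/n
    K = [[0, -z, y], [z, 0, -x], [-y, x, 0]]       # cross-product matrix
    I = [[float(i == j) for j in range(3)] for i in range(3)]
    K2 = [[sum(K[i][k] * K[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
    s, c = math.sin(angle), math.cos(angle)
    # R = I + sin(α)·K + (1 − cos(α))·K²
    return [[I[i][j] + s*K[i][j] + (1 - c)*K2[i][j] for j in range(3)] for i in range(3)]

a = rot((1, 2, 3), 0.7)
b = rot((-1, -2, -3), -0.7)   # opposite axis, negative angle
print(all(abs(a[i][j] - b[i][j]) < 1e-12 for i in range(3) for j in range(3)))  # → True
```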

To make this more clear, forget about the angle for now. Unit vectors' tips generally live on unit spheres, each indicating an axis. Given one such vector, the opposite one points in the opposite direction. It means you only need half a sphere of vector tips! Crossing the equator gets you teleported to the other side (and the angle flipped, if you insist). 

What you just understood is a peculiar space known as RP², or the real projective plane. There's a gadget from topology, the cross cap, that could do the job when sewn to our hemisphere's equator. We also might have just formally declared each pair of opposite points on the equator of our hemisphere to be a single point, but let's not worry about that either.

Instead we should have some fun with RP². In the picture below you see a hemisphere (squashed flat) with a colored band going through the center. Note that when tracing one edge of the band with your finger, once you get to the equator, the rules require you to jump to the exact opposite point... to continue on the other edge! That's a Moebius band living in RP²!

Right on cue: A Moebius band demonstrates a space where, after 360° you end up in the same position, but on the other side of the paper. Let me tell you what a spinor is just after another observation:

Attach a ribbon to our ball (and to, say, the ground) and observe that after a 360° rotation (about any axis) the ribbon ends up twisted once. But if you do another 360° in the same direction (amounting to 720°), you can untwist the ribbon by moving it once around the ball. That's known as the Dirac belt trick:

Since after 360° the sphere has returned to its original position, the rotation of a ball isn't enough to track the band! You can adjoin {+1, –1} to the coordinate system to also track the ribbon. What you get is a spinor, albeit in an ugly dress!

=> Spin(3) additionally needs a bit (for its sign)

First discovered in full generality by Élie Joseph Cartan, spinors were later used to describe the electron spin by Paul Dirac. I should also mention that the Pauli matrices give a basis for the Lie algebra of Spin(3), and they're named after Wolfgang Pauli. The spin group Spin(n) then is the universal double cover of the rotation group SO(n).

Let's visualize that. Some facts first.

Spin(3) has room for two copies of SO(3). One can imagine Spin(3) folded into SO(3), and that's why it's characterized as a double cover of the latter. And the cool thing is, the space of Spin(3) is S³, the 3-dimensional hypersurface of a 4d ball. It's much simpler than RP³, the real projective 3-space! But i'm getting ahead of myself.

Remember, i told you to forget about the angle and showed you how to trace a Moebius band using only changes of the axis. That got us a 2-dimensional object but wasn't the full story. Putting the angle back in gets us a 3-dimensional object: RP³. Think of it as the inside of a ball. When you try to leave the ball through its surface you will be teleported to the wall behind you, but turned upside down! That's it, now you understand RP³ = SO(3) = the space of rotations in 3d!

What about Spin(3) = S³, the 3-surface of a 4-ball? Let's cut that 3-surface in half. You simply get two balls (say one white, one black, just like you get two hemispheres cutting a sphere in half). Those two balls' surfaces correspond to the equators (or borders) of the hemispheres; the interiors together form Spin(3).

Say the white ball is SO(3). Starting at the center (where the angle is zero) we go (turn the angle) until we get to 180°. That's somewhere on the border, about to leave the white ball. Turned upside down we enter the black ball until we get to its center at 360°. Orientation is back to zero angle, but we're now in the black ball, associated to a negative spinor and a twisted ribbon! Going further we get to –180°, entering the positive sphere again, getting flipped back, upside up.

Maybe you noticed that a spinor does not feel any twisting happening at the equator surface. It means we can forget it ever happened and appreciate the symmetry here. There is no real difference between the angle and the orientation, and the cut we did is totally artificial. That's why spinor representations are nice: They're smooth.

A final word of warning on SU(2). It's the space of all complex (linear-) transformations that:

a) Don't change the space's handedness (like a mirror would, having none is the special "S" thing), and
b) transform vectors of unit length to ones of the same length.

It just happens that all of SU(2) is somehow also in Spin(3). SU(2) consists of complex 2×2 matrices, described by four real parameters with a unit-length constraint, just like Spin(3).
They're identical for all theoretical purposes (isomorphic as Lie groups). In general, the special unitary groups SU(n) and the spin groups don't line up like this.

SU(2) ~ Spin(3) is just a nice coincidence, but won't help much to understand the other Spin(n) (except maybe up to n=8).

By the way, the unit quaternions (also known as versors) give another representation of Spin(3)! And they're suitable for spinors in hyperbolic space, too!
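A tiny Python sketch of that quaternion picture (my own helper, assuming the usual rotor convention cos(α/2) + sin(α/2)·axis): after a full 360° turn the rotation is back where it started, but the quaternion has become –1, the spinor sign flip from the belt trick:

```python
import math

def rotor(axis, angle):
    """Unit quaternion (w, i, j, k) for a rotation by `angle` about `axis`."""
    x, y, z = axis
    n = math.sqrt(x*x + y*y + z*z)
    h = angle / 2            # quaternions use the HALF angle: that's the double cover
    s = math.sin(h) / n
    return (math.cos(h), x*s, y*s, z*s)

q = rotor((0, 0, 1), 2 * math.pi)       # one full turn about the z axis
print([round(c, 12) for c in q])        # → [-1.0, 0.0, 0.0, 0.0]
```

Only after 720° (two full turns) does the quaternion return to +1.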

Fun References

There's a lecture featuring Sir Michael Atiyah on youtube (thanks +David Roberts!). He's talking fast and has a lot to tell, be prepared to take breaks:
Sir Michael Atiyah, What is a Spinor?

He says: "Spinors are the square root of geometry". Because there are spinors for any dimension n, but we only have complex numbers for 2d (or comparable in 4d and 8d?). So we can tackle problems usually approached using complex numbers with them. How exciting!

Can't resist telling you that Atiyah's thesis advisor was William Vallance Douglas Hodge, who had his Cambridge office next door to Paul Adrien Maurice Dirac's.

+John Baez' "This Week's Finds" have been an invaluable resource to check my thinking and pick up new connections! Here is a dive into Spin(8) and its relations to the octonions, and the Leech lattice:

Week 90 begins with SO(n), does lots of introducing Lie algebras and then closes in on this enchanted dimension 8 again:

You can topologically immerse RP² in 3-space R³ as Boy's surface; i commented on related posts by +Jeff Erickson and +Allen Knutson here:

Greg Egan has a nice intro explaining the geometry, and also some algebra:

When looking at Spin(n) as a group we can ask for quotient groups, discrete subgroups, and more. All pleasing stuff here:

Here's the introduction to spinors in general, including history, examples, and definitions:

Here's another post of mine about Spin(p,q) for mixed hyperbolic spaces SO(p,q), one of which models our spacetime.

#spin   #3drotation #diagram

Post has shared content
Ceva’s Theorem

When this was discovered by Giovanni Ceva in 1678, it was the first really new result of its kind in the almost 2000 years since Euclid.

The converse is also true: if the equality holds, then the lines necessarily meet at a common point.
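For reference, the equality in question (shown in the picture) is the classical statement: if D, E, F lie on sides BC, CA, AB of triangle ABC, and the cevians AD, BE, CF pass through a common point, then

```latex
\frac{BD}{DC} \cdot \frac{CE}{EA} \cdot \frac{AF}{FB} = 1
```

and, as noted above, the converse holds as well: the equality forces the three cevians to be concurrent.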

#mathematics #geometry

Post has shared content
Moon Saturn Occultation composite
Best I could do with the clouds interfering again 

#moon   #saturn   #occultation  

Post has shared content
Honda's Assisted Walking Device

Post has shared content
Good morning everybody, how is your day going 

Post has shared content