Richard Green
Works at University of Colorado Boulder
Attended University College, Oxford
Lives in Longmont, Colorado
114,825 followers | 42,244,273 views


Richard Green

Shared publicly  - 
Primes in the Gaussian integers

This picture by Oliver Knill shows the prime numbers in the Gaussian integers. A Gaussian integer is a complex number of the form a + bi, where a and b are integers and i is a square root of –1.

An important result about the ordinary integers is the Fundamental Theorem of Arithmetic, which states that every integer greater than 1 can be expressed as a product of prime numbers in an essentially unique way. For example, the integer 21 can be written as 3 x 7, or 7 x 3, or (–3)x(–7), or (–7)x(–3). However, these factorizations are essentially the same, in the sense that they differ only in the order of the factors and by multiplication by +1 or –1. The numbers +1 and –1 are called units, which means that they are the only integers z for which 1/z is also an integer.

An integer c can be defined to be prime if it is not zero or a unit, but the only way to factorize c = ab is for either a or b to be a unit. If c is an integer bigger than 1, then this is equivalent to the familiar definition of prime, namely that the number has no factors other than 1 and itself. The negatives of prime numbers (such as –3 and –7, as above) also satisfy the definition of prime. However, the units +1 and –1 are not defined to be prime, because this would cause problems; for example, the Fundamental Theorem of Arithmetic would be false as stated.

It turns out, somewhat remarkably, that the analogue of the Fundamental Theorem of Arithmetic also holds for the Gaussian integers, which means that the Gaussian integers have a unique factorization property. It can be shown that the only units in the Gaussian integers are the numbers 1, i, –1 and –i; in other words, these are the only Gaussian integers z for which 1/z is also a Gaussian integer. Some primes in the ordinary sense, like 7 and 11, are also prime in the Gaussian integers. However, other primes, like 2 and 5, are not prime when regarded as Gaussian integers: they can be factorized further as 2 = (1+i)(1–i) and 5 = (1+2i)(1–2i), and none of the factors is a unit. It is also possible to write 5 = (2–i)(2+i), but this is essentially the same as the previous factorization because the factors are unit multiples of the previous factors. (More precisely: we have 2–i = (–i)(1+2i) and 2+i = (i)(1–2i), where –i and i are both units.)

The picture plots the locations of the primes in the Gaussian integers. The prime numbers in the integers that remain prime in the Gaussian integers, such as 3, 7, 11 and 19, appear on the positive horizontal axis as the points (3,0), (7,0), (11,0) and (19,0). These are the prime numbers that are congruent to 3 modulo 4; in other words, they leave a remainder of 3 when divided by 4. The entire picture is symmetric under rotation by a right angle, because a unit multiple of a prime is always a prime, and multiplying by the complex number i rotates the complex plane anticlockwise by a right angle.
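The pattern in the picture can be checked computationally. The following Python sketch (the function names are my own, not from Knill's paper) tests whether a + bi is a Gaussian prime using the standard classification: if a and b are both nonzero, a + bi is prime exactly when its norm a^2 + b^2 is an ordinary prime; if it lies on an axis, it is prime exactly when it is a unit multiple of an ordinary prime congruent to 3 mod 4.

```python
def is_rational_prime(n):
    # Trial division; fine for small n.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_gaussian_prime(a, b):
    """Test whether the Gaussian integer a + bi is prime.

    Standard classification: a + bi is a Gaussian prime iff
    - a and b are both nonzero and the norm a^2 + b^2 is an ordinary
      prime, or
    - one of a, b is zero and the other is a unit multiple of an
      ordinary prime congruent to 3 mod 4.
    """
    if a == 0:
        return is_rational_prime(abs(b)) and abs(b) % 4 == 3
    if b == 0:
        return is_rational_prime(abs(a)) and abs(a) % 4 == 3
    return is_rational_prime(a * a + b * b)
```

For example, this confirms that 1 + i and 1 + 2i are Gaussian primes, while 2 and 5 are not.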

In light brown, near the origin, we have the primes 1+i, 1–i, –1+i, and –1–i. These are the four primes that are factors of 2 = (1+i)(1–i). A knight's move from the origin, we have the eight primes (2+i), (1+2i), (–1+2i), (–2+i) and their negatives; these are the primes that are factors of 5, and we find the same behaviour for factors of a prime integer p whenever p is congruent to 1 modulo 4.

The Gaussian primes are closely related to the problem of writing a number as a sum of two squares. Pierre de Fermat proved in 1640 that if a prime is congruent to 1 mod 4, then it can always be written as a sum of two squares; for example, 13 = 9 + 4 = 3^2 + 2^2, and 29 = 25 + 4 = 5^2 + 2^2. (As usual, Fermat did not bother to provide a proof of this result; the first rigorous proof was provided by Euler in the 1740s.) The problem of factorizing ordinary primes in the Gaussian integers is essentially the same: 13 = (3+2i)(3–2i) and 29 = (5+2i)(5–2i).
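For small primes, Fermat's theorem is easy to check by brute force. The sketch below (an illustration only; the function name is my own) tries each value of a until a matching b is found, returning None when no representation exists.

```python
def two_squares(p):
    """Return (a, b) with a^2 + b^2 == p, found by brute force,
    or None if p is not a sum of two squares.

    By Fermat's theorem, a solution exists whenever p is a prime
    congruent to 1 mod 4 (and also for p = 2)."""
    a = 0
    while a * a <= p:
        b2 = p - a * a
        b = int(b2 ** 0.5)
        if b * b == b2:
            return (a, b)
        a += 1
    return None
```

The Gaussian factorization then follows immediately: 13 = 2^2 + 3^2 corresponds to 13 = (3+2i)(3–2i), which can be verified with Python's complex arithmetic as (3+2j)*(3-2j) == 13.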

Relevant links

The picture appears in Knill's paper Goldbach for Gaussian, Hurwitz, Octavian and Eisenstein primes, which, among other things, formulates a version of the Goldbach conjecture for Gaussian integers. Chapter 4 of the paper considers some more surprising questions, such as the frogger problem, which asks what happens if we think of the picture as a scenario in the popular 1980s computer game Frogger.

Another paper by Knill on a similar topic is the 71-page Some experiments in number theory. Chapter 14 of that paper conjectures that if one runs Conway's Game of Life on the picture shown, then there is motion arbitrarily far from the origin. Knill has a website on these topics at

Wikipedia has more information about the Gaussian integers here:

Here's another post by me about the Gaussian integers in the work of my colleague Katherine Stange:

I thank +Owen Maresh for bringing some of these links to my attention in his post from June. I have been extremely busy for the last seven months, which is why I have barely posted, but this is no longer the case, so you can expect a decent level of output from me in the coming months.

#mathematics #scienceeveryday  
I'm finding I don't have as much time as I thought, mostly because I was so busy for so long that I've built up a rather nasty backlog of things to do. But it will still be better than it was!

Richard Green

Shared publicly  - 
For #floralfriday , here are some apple blossoms I saw while walking my dogs earlier this month. There are a lot more flowers than last year, which would be a good thing if anyone ever used the resulting apples.

This was taken with my iPhone 5 and edited with Snapseed. For contrast, my next post is likely to be a picture of one of the aforementioned dogs, taken with an Android phone.
apple blossoms ? WOW. Look very beautiful 

Richard Green

Shared publicly  - 
Here's a picture I took in a park near my house in Longmont, Colorado on January 10. Most of the snow is now gone. It was taken with my iPhone 5 using the HDR feature, but without any other editing.

#treetuesday #longmont   #colorado  
Super great colors +Richard Green ! That is a great shot!

Richard Green

Shared publicly  - 
The complexity of integers

The complexity of an integer n is defined to be the smallest number of 1s required to build the integer using parentheses, together with the operations of addition and multiplication. 

For example, the complexity of the integer 10 is 7, because we can write 10 = 1+(1+1+1)x(1+1+1) or 10 = (1+1+1+1+1)x(1+1), but there is no way to do this using only six occurrences of 1. You might think that the complexity of the number 11 would be 2, but it is not, because pasting together two 1s to make 11 is not an allowable operation. It turns out that the complexity of 11 is 8.
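The values f(n) can be computed by a simple dynamic program, since the top-level operation of an optimal expression splits n either as a sum a + (n - a) or as a product d x (n/d). Here is an illustrative Python sketch (my own; the function name is invented):

```python
def complexities(limit):
    """f[n] = least number of 1s needed to write n using +, * and
    parentheses, computed by straightforward dynamic programming."""
    INF = float("inf")
    f = [INF] * (limit + 1)
    if limit >= 1:
        f[1] = 1
    for n in range(2, limit + 1):
        # Best split as a sum a + (n - a) ...
        best = min(f[a] + f[n - a] for a in range(1, n // 2 + 1))
        # ... or as a product d * (n // d).
        d = 2
        while d * d <= n:
            if n % d == 0:
                best = min(best, f[d] + f[n // d])
            d += 1
        f[n] = best
    return f
```

Running this reproduces the values quoted in this post: f(10) = 7, f(11) = 8, f(81) = 12, and f(1439) = 26.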

The complexity, f(n), of an integer n was first defined by K. Mahler and J. Popken in 1953, and it has since been rediscovered by various other people. A natural problem that some mathematicians have considered is that of finding upper and lower bounds of f(n) in terms of n. 

John Selfridge found a lower bound for f(n) by proving that f(n) is always greater than or equal to 3 log_3(n), where log_3(n) denotes the base 3 logarithm of n. This lower bound is sharp: it cannot be improved because the bound is achieved when n is a power of 3. For example, if n=81, we have n=3^4, which (by the definition of logarithm) means that log_3(n)=4. Selfridge's lower bound for f(n) is then 3x4=12. We can write 81=(1+1+1)x(1+1+1)x(1+1+1)x(1+1+1), which uses twelve occurrences of 1, and Selfridge's result shows that there is no way to achieve this using eleven or fewer 1s. Note that we are only allowed to use addition and multiplication in our expressions; using exponentiation is not allowed.

The problem of finding an upper bound is more complicated. Richard K. Guy found an upper bound for f(n), showing that it is bounded above by 3 log_2(n), which works out at about 4.755 log_3(n). (The 4.755 is an approximation to 3log(3)/log(2).) Guy found this bound using Horner's method, which is explained in the appendix below. The worst case of Guy's bound is achieved in the case that n has no zeros when it is expressed in base 2. 

More generally, it turns out to be difficult to improve the upper bound for f(n) because numbers whose binary digits contain a very uneven balance of 0s and 1s tend to cause problems. An example of this is the number 1439, which is 10110011111 in binary. It turns out that f(1439)=26, and an optimal expression for 1439 is 1+2(1+2(1+1+3((2)(2)(2)(2)+1)((3)(2)+1))), where 2 and 3 are shorthands for (1+1) and (1+1+1), respectively. This works out as about 3.928 log_3(1439).

However, it is possible to replace this value of 3.928 by a lower number for “almost all” integers n; in particular, recent work of J. Arias de Reyna and J. van de Lune shows that it can be replaced by 3.655 for a set of natural numbers of density 1. The recent paper Applications of Markov Chain Analysis to Integer Complexity by Christopher E. Shriver improves this upper bound to 3.529 for suitably generic numbers. The paper mentions that extensive numerical computations by other authors suggest that it ought to be possible to improve the generic bound to 3.37.

Relevant links

The On-Line Encyclopedia of Integer Sequences lists the complexities f(n) of the first few natural numbers, and mentions that Guy conjectured that f(p) = f(p–1) + 1 whenever p is prime. Guy's conjecture turned out to be false, but the smallest counterexample, which was found by Martin Fuller in 2008, is surprisingly big: the smallest such prime is 353942783.

The OEIS is an extremely useful resource for researchers in discrete mathematics. They are currently running their annual appeal for donations, and you can donate on this page:

The “extensive numerical computations” mentioned above are discussed in the paper by Iraids et al, which you can find at

Appendix: proof of Guy's upper bound

Horner's method works by expressing a (nonzero) number n in base 2 as a sequence of binary digits a_0 a_1 ... a_k, where we may assume that the last digit a_k is equal to 1. It is not hard to show from this that log_2(n) is greater than or equal to k and less than k+1. It is also immediate that n can be expressed as the polynomial 
a_0 + x a_1 + x^2 a_2 + ... + x^k a_k
when x is replaced by 2. Rearranging, we find that
n = a_0 + 2(a_1 + 2(a_2 + ... + 2(a_{k-1} + 2)...)),
because we assumed that a_k was equal to 1. We then replace each occurrence of 2 with “1+1”, and replace each occurrence of a_i with either 0 or 1. Finally, we remove all the occurrences of “0+”. The number of 1s in the result is at most 3k, as required; the bound is achieved whenever n is one less than a power of 2.

For example, n=42 is 101010 in binary, which can be written as 0+2(1+2(0+2(1+2(0+2)))). This expands to (1+1)(1+(1+1)(1+1)(1+(1+1)(1+1))), which uses 12 ones. Since 42 lies between 2^5=32 and 2^6=64, it follows that log_2(42) is more than 5, so 3 log_2(42) is more than 15, comfortably above the twelve 1s used, as required.
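Horner's construction is easy to mechanize. The sketch below (my own illustration of the method described above; the function name is invented) builds an expression for n out of 1s, + and *, doubling and then adding one binary digit at a time; for 42 it produces an expression using twelve 1s, matching the count above.

```python
def ones_expression(n):
    """Write n > 0 using only 1s, + and *, via Horner's method on
    the binary digits of n (the construction behind Guy's
    3 log_2(n) upper bound)."""
    bits = bin(n)[2:]       # most significant digit first; it is always 1
    expr = "1"              # corresponds to the leading digit
    for bit in bits[1:]:
        # Double the value built so far ...
        expr = "(1+1)" if expr == "1" else "(1+1)*(%s)" % expr
        # ... then add the next binary digit if it is 1.
        if bit == "1":
            expr = "1+" + expr
    return expr
```

Each binary digit after the first contributes at most three 1s (two for the doubling, one for the digit), giving at most 3k ones in total; the worst case, such as n = 255 = 11111111 in binary, uses all of them.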

#mathematics #sciencesunday #spnetwork arXiv:1511.07842
Fabulous article......

Richard Green

Shared publicly  - 
For #treetuesday, here are two maple trees from near my house in Longmont, Colorado. They have a habit of dropping Canadian flags all over the ground.

This was taken with my iPhone 5 and edited with #snapseed.

Richard Green

Shared publicly  - 
Here's a picture I took yesterday near my house in Longmont, Colorado. I think this flower is a black-eyed susan (Rudbeckia hirta); does anyone know for sure? I took the picture with my iPhone 5 and edited it with Snapseed.

More information on this species is here:

I'm sorry I haven't posted much recently; I plan to do better.

#floralfriday #snapseed
I have the same dilemma with flowers. They are beautiful but I don't know many flower names.

Richard Green

Shared publicly  - 
I've been criticised in the past for not posting enough pictures of my dogs. Here's a recent picture of my dog Luna, which my stepson took with his Samsung Galaxy phone.

The camera on his phone is sufficiently good that I'm tempted to switch to an Android phone. The drawback would be that my magic watch wouldn't work with it.
+Richard Green If you do decide to switch to Android, ask for opinions. Too many variables to account for in Android. 

Richard Green

Shared publicly  - 
When does a random jigsaw puzzle have a unique solution?

Consider an n by n jigsaw puzzle, whose pieces have q different types of edges. If n is large, how big does q need to be so that the puzzle fits together in a unique way (up to rotation)? According to two recent papers, the answer is that q needs to be slightly greater than n.

A piece of a physical jigsaw puzzle has four edges, and two pieces match up if their edges lock together neatly. However, a mathematically equivalent way of thinking about this is to think of puzzle pieces as squares on which a picture has been drawn. From this point of view, two pieces lock together if it is possible to place the pieces next to each other along a common edge so that the picture matches up. The illustration shows what a picture might look like if cut up into squares and randomly rearranged. The question then becomes: how many different types of edges are necessary so that the picture can be uniquely reconstructed from a randomly rearranged version?

If there are too few edge types, it is clearly impossible to reconstruct the puzzle uniquely. One degenerate case involves an n by n square puzzle with square pieces and a blank picture, which is the case q = 1. This puzzle looks the same scrambled up as it does when solved, so it can be reconstructed by reassembling the puzzle into any arrangement of pieces with the correct overall shape, and the solution is highly non-unique.

At the other extreme, if every pair of interlocking edges in a jigsaw puzzle is different from every other pair, then clearly the puzzle can be solved uniquely. For an n by n puzzle with square pieces, there is a total of 2n^2 - 2n pairs of edges that fit together in the interior of the puzzle, and a total of 4n single edges along the outer boundary. This is a total of 2n^2 + 2n types of edges, which means that if q is at least 2n^2 + 2n, the puzzle can be reconstructed in a unique way.
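The counting argument above is easy to verify by direct enumeration. This small sketch (an illustration with hypothetical names, not from either paper) counts the interior edge-pairs and boundary edges of an n-by-n puzzle with square pieces:

```python
def edge_counts(n):
    """Count (interior edge-pairs, boundary edges) of an n-by-n
    puzzle with square pieces, by enumerating adjacencies."""
    interior = 0
    for r in range(n):
        for c in range(n):
            if c + 1 < n:   # piece shares an edge with its right neighbour
                interior += 1
            if r + 1 < n:   # piece shares an edge with its lower neighbour
                interior += 1
    boundary = 4 * n        # each side of the puzzle has n outer edges
    return interior, boundary
```

For every n this gives 2n^2 - 2n interior pairs and 4n boundary edges, totalling 2n^2 + 2n as claimed.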

We have glossed over a technicality here, which is that, in a real jigsaw puzzle, the edges on the outer boundary are all the same shape: perfectly straight. This does not affect the mathematical argument, because as n gets very large, the fraction of puzzle pieces that are boundary pieces tends to zero.

The two extreme cases above show that q has to be at least 1 and at most 2n^2 + 2n. It turns out that we can do better than a bound of about 2n^2. In order for a random puzzle to have a high probability of being uniquely solvable, we just need q to be greater than n to the power 1 + ε, where ε is any small positive number. This result was proven recently in the paper Unique reconstruction threshold for random jigsaw puzzles by Rajko Nenadov, Pascal Pfister, and Angelika Steger. The same result was proved independently (and on the same day!) in the paper Shotgun Assembly of Random Jigsaw Puzzles by Charles Bordenave, Uriel Feige, and Elchanan Mossel.

The paper by Nenadov et al contains the additional result that if q is considerably smaller than n, then there is a high probability that the puzzle does not have a unique solution. Here, “considerably smaller” means that q = o(n), i.e. that the ratio q/n approaches zero when n tends to infinity.

Relevant links

Both the papers above were written as an answer to a question posed in the 2015 paper Shotgun assembly of labeled graphs by Elchanan Mossel and Nathan Ross. The paper by Mossel and Ross considers a number of related questions, including the problem of reconstituting DNA from fragments, and the problem of reconstructing a big neural network from subnetworks that are observed in experiments.

The notation “q = o(n)” is called little-o notation. There are many variants of this concept, including big-O notation. More information on this can be found at

The illustration comes from the paper Jigsaw Puzzles with Pieces of Unknown Orientation by Andrew C. Gallagher of the Eastman Kodak Research Laboratories.

(I apologise for not having posted for months. I was very busy at work.)

#mathematics #scienceeveryday
I'm not busy any more and I've started posting again. Here's the next post in my Mathematics collection:

Richard Green

Shared publicly  - 
The onion decomposition of a network

The onion decomposition of a network is a recently introduced network fingerprinting technique. It can be used to identify structural properties of a network, such as “tree-like” structure, “loopy” structure, or the property of having an “interesting” sub-network.

Examples of networks that can be analysed in this way include purely mathematical networks; physical networks like transportation networks and power grids; and social networks such as collaboration graphs. The picture shows three mathematical networks: (a) the Cayley tree, in which all but the terminal nodes are connected to exactly three other nodes; (b) the Erdős–Rényi random graph; and (c) the square lattice, in which each node not on the boundary of the network is connected to exactly four others.

Under the picture of each network is its onion spectrum. This can be thought of as a summary of the onion decomposition of the network, which we describe below. The number and nature of the pieces of the onion spectrum can be used to gain insight into the structure of the original network.

The onion decomposition was defined a few months ago in the paper Network structure at multiple scales via a new network statistic: the onion decomposition by Laurent Hébert-Dufresne, Joshua A. Grochow, and Antoine Allard, and the picture shown here comes from that paper. The decomposition is a refinement of the previously known k-core decomposition, and it is called the onion decomposition because it is reminiscent of peeling an onion.

To explain how this works, we use concepts from graph theory. A graph is a mathematical object that can be used to model a network. A graph consists of a set of nodes, called vertices, and a set of edges, which connect pairs of vertices. We do not allow our graphs to contain more than one edge connecting the same pair of vertices, or to have any edges that connect a vertex to itself. The degree of a vertex is the number of edges emanating from it.

To decompose a graph, one first removes all vertices of degree zero (isolated points); these vertices form the 0-shell of the graph. To form the 1-shell, one removes all vertices of degree 1, then all vertices whose degree drops to 1 (or lower) as a result, iterating the procedure until no vertices of degree 0 or 1 remain. The vertices removed at each iteration of the construction of the 1-shell are called the layers of the 1-shell. The construction of the 2-shell (and higher shells) follows a similar procedure: at each iteration, one removes all vertices of degree 2 or lower, until this is no longer possible.
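The peeling procedure can be sketched in a few lines of Python. This is my own simplified illustration, not the authors' code; it ignores isolated vertices that never appear in the edge list, and returns the set of vertices removed at each (shell, layer) step.

```python
from collections import defaultdict

def onion_decomposition(edges):
    """Peel a graph into shells and layers.

    edges: iterable of (u, v) pairs.
    Returns a dict mapping (shell, layer) -> set of vertices removed
    at that step of the peeling."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    layers = {}
    k = 0
    while adj:                      # build the k-shell, layer by layer
        layer = 1
        while True:
            peel = {v for v, nbrs in adj.items() if len(nbrs) <= k}
            if not peel:
                break
            for v in peel:          # remove this layer from the graph
                for w in adj.pop(v):
                    if w in adj:
                        adj[w].discard(v)
            layers[(k, layer)] = peel
            layer += 1
        k += 1
    return layers
```

On a path with five vertices, for instance, everything lies in the 1-shell, which peels off in three layers (the two endpoints, then their neighbours, then the centre); on a cycle, everything is removed in a single layer of the 2-shell.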

The onion spectrum shown in the picture shows the relative abundance of vertices in each layer within each shell of the corresponding graph. Each connected piece of the picture is shown in a different colour and corresponds to a shell, and each dot corresponds to each successive layer of that shell. The vertical scale is logarithmic, which means that a straight line corresponds to exponential behaviour. The meaning of the colours in the pictures of the actual networks is different: the size of a vertex corresponds to its degree, and the colour of a vertex shows whether it is shallow or deep in the network. The original k-core decomposition gives less information, because it only keeps track of the k-shells, and not the layers within each shell.

The paper shows how to use the onion spectrum to read off properties of the original network. Exponential decay in the spectrum, like in the Cayley tree case, is indicative of the graph having a tree-like structure, whereas sub-exponential decay, like in the random graph case, suggests that the graph has a lot of circuits. It is a nice exercise to show that the Cayley tree consists of one big 1-shell, and the square lattice consists of one big 2-shell; it follows from this that the onion spectrum of each graph has only one connected component. 

Networks like these last two, whose onion spectrum has fewer components than a random graph, tend to be disassortative, which means that nodes tend to be connected to nodes that are somehow “unlike” themselves. One way to illustrate this is to colour the vertices of the square lattice in alternating colours, like a chess board; something analogous can be done for the Cayley tree. 

Conversely, a network with more components than a random graph will tend to be assortative, meaning that nodes tend to be connected to nodes that are somehow like themselves. A good example of this that is studied in the paper is the collaboration graph of condensed matter physics papers on the arXiv preprint server. Some other real world graphs that are studied are the graph of a power grid, which is tree-like and assortative, and the network of roads in Pennsylvania. The road network graph has one particular shell that exhibits a change in its decay rate, which is indicative of the network having an interesting sub-network. In fact, the road system consists of several essentially isolated components.

Relevant links

Here's another post by me about assortative and disassortative networks:
That post links to two other relevant posts by me, whose links I provide below for convenience.

A post by me about the mathematics of social networks:

A post by me about the random graph:

Wikipedia has more information about k-cores on its page about degeneracy in graph theory:

I'm glad that Google has fixed the discouraging bug that was corrupting the plus-one counts of posts. On the negative side, it looks like the site (which was associated with the spnetwork hashtag) is no more. Can anyone confirm or deny this?

#mathematics #sciencesunday  
very interesting lesson

Richard Green

Shared publicly  - 
The sandpile model with a billion grains of sand

This picture by Wesley Pegden shows an example of a stable configuration in the abelian sandpile model on a square lattice. It consists of a square array of a large number of pixels. Each pixel has one of four possible colours (blue, cyan, yellow and maroon) corresponding to the numbers 0, 1, 2 and 3 respectively. These numbers should be thought of as representing stacks of tokens, often called chips, which in this case might be grains of sand.

Despite its intricate fractal structure, this picture is generated by a simple iterative process, as follows. If a vertex of the grid (i.e., one of the pixels) holds at least 4 chips, it is allowed to fire, meaning that it transfers one chip to each of its neighbours to the north, south, east and west. The boundary of the grid can be thought of as the edge of a cliff, meaning that any chips that cross the boundary will fall off and be lost. If no vertices can fire in a particular chip configuration, then the configuration is called stable. For example, the configuration in the picture is stable, because no pixel holds 4 or more chips.

One of the key theorems about this particular sandpile model is that any chip configuration will become stable after firing various vertices a finite number of times. More surprisingly, the ultimate stable configuration obtained does not depend on the order in which the vertices were fired. The irrelevance of the order in which the vertices are fired is why the model is called “abelian”.

If we start with 2^{30} chips, all placed on the same pixel, and we then perform firings repeatedly until a stable configuration is reached, the resulting stable configuration is the one shown in the picture. (The number 2^{30} is just over a billion.) It is clear from the symmetrical nature of the firing rules that the resulting picture will be symmetric under rotation by a right angle or mirror-reflection, but the fractal-like structures are much more surprising.
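The firing rule is easy to simulate directly. Below is a minimal Python sketch (my own illustration, not Pegden's code) that stabilizes a grid of chip counts; by the abelian property, the order in which unstable cells are processed does not affect the final configuration. With only 16 chips at the centre, the avalanche stays well inside a 9-by-9 grid, so no chips fall off the edge.

```python
def stabilize(grid):
    """Repeatedly fire unstable cells (>= 4 chips): a firing cell
    loses 4 chips and gives 1 to each N/S/E/W neighbour; chips
    pushed over the boundary are lost.  Modifies grid in place."""
    n, m = len(grid), len(grid[0])
    todo = [(r, c) for r in range(n) for c in range(m) if grid[r][c] >= 4]
    while todo:
        r, c = todo.pop()
        if grid[r][c] < 4:      # may have been fired already
            continue
        grid[r][c] -= 4
        if grid[r][c] >= 4:     # still unstable after one firing
            todo.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < n and 0 <= cc < m:
                grid[rr][cc] += 1
                if grid[rr][cc] >= 4:
                    todo.append((rr, cc))
    return grid
```

Starting from 16 chips at the centre, the stable result has 0 at the centre, 2 on each diagonal neighbour, and 1 on each cell two steps away along an axis, symmetric under rotations and reflections as expected.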

Relevant links

Wesley Pegden is an Assistant Professor of Mathematics at Carnegie Mellon University. His home page includes an interactive zoomable version of this image:
The page also allows you to generate corresponding images on other lattices, including a triangular lattice with six colours, and a hexagonal lattice with three colours. You can also change the number of chips, like Dr Evil from Austin Powers. (One billion chips. No, one million chips.)

The article The Amazing, Autotuning Sandpile by Jordan Ellenberg appeared in Nautilus in April 2015:
The article discusses this picture, and also includes some details from it, including a close-up of the centre.

I have posted about the sandpile model before, here:
The other post includes more technical details, and describes how to construct a group out of certain sandpiles. A surprising feature of this is that the identity element of the group has a very complicated appearance.

David Perkinson is a Professor of Mathematics at Reed College. He has a gallery of images relating to abelian sandpiles:

(Found via Cliff Pickover (@pickover) on Twitter.)

#mathematics #scienceeveryday
Fascinating article.

Richard Green

Shared publicly  - 
Here's a sunrise shot from my walk yesterday morning near my house in Longmont, Colorado. As usual, this was taken with my iPhone 5 and processed in Snapseed.

I have had a lack of good material for posts recently, but I am still hoping this is about to change.


I want to live in Longmont, Colorado! It's gorgeous there.

Richard Green

Shared publicly  - 
Multiply-perfect numbers

A number n is said to be perfect if the sum of the divisors of n is 2n, and it is said to be k-perfect, or multiply-perfect, if the sum of the divisors of n is kn. For example, the number 120 is 3-perfect, because the divisors of 120 (which are 1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 20, 24, 30, 40, 60, and 120) add up to 360, which is 3 times 120.

A Mersenne prime is a prime number of the form 2^m – 1; for example, 7 = 2^3 – 1 is a Mersenne prime. It is a theorem that if 2^m – 1 is prime, then 2^{m–1}(2^m – 1) is an even perfect number, and furthermore, that every even perfect number arises in this way from a Mersenne prime. In the example of m=3, we see that 2^2(2^3 – 1) = 4x7 = 28, and 28 is a perfect number because the sum of its divisors, 1+2+4+7+14+28, is 56, which is twice 28. At the time of writing, the largest known prime number, 2^{57885161} − 1, is a Mersenne prime.
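The Euclid–Euler correspondence is straightforward to turn into code. The sketch below (illustrative only; the trial-division primality test is far too slow for exponents anywhere near 57885161) generates the even perfect numbers arising from small Mersenne primes.

```python
def even_perfects(m_max):
    """Generate the even perfect numbers 2^(m-1) * (2^m - 1) for
    each exponent m <= m_max with 2^m - 1 prime (Euclid-Euler)."""
    out = []
    for m in range(2, m_max + 1):
        mersenne = 2 ** m - 1
        # Trial division; adequate only for tiny exponents.
        if all(mersenne % d for d in range(2, int(mersenne ** 0.5) + 1)):
            out.append(2 ** (m - 1) * mersenne)
    return out
```

The first few even perfect numbers produced this way are 6, 28, 496 and 8128, coming from the Mersenne primes 3, 7, 31 and 127.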

Although the concept of a perfect number was known to Euclid around 300 BC, some basic questions about perfect numbers remain open. For example, it is not known whether or not there are any odd perfect numbers, although it is known that if there is one, it must be very large. It is also not known whether or not there are infinitely many even perfect numbers, because it is also not known whether there are infinitely many Mersenne primes.

Furthermore, it is not known whether there are infinitely many k-perfect numbers where k is 3 or more, although it is expected that for each such k, there will only be finitely many k-perfect numbers. For example, the known 3-perfect numbers are 120, 672, 523776, 459818240, 1476304896, and 51001180160, and it is expected that this list is complete. However, if an odd perfect number m were to be discovered, then the number 2m would be an example of an even 3-perfect number that is not in the list! 

The reason for this comes from properties of the function σ(n). If n is a positive integer, then σ(n) is defined to be the sum of the divisors of n; for example, we computed earlier that σ(28)=56 and σ(120)=360. A number n will then be perfect if and only if σ(n)=2n, and k-perfect if and only if σ(n)=kn. The function σ(n) is an example of a multiplicative function, which means that σ(mn)=σ(m)σ(n) whenever m and n have no factors in common. Since 120=8x15, and 8 and 15 share no common factors other than 1, it follows that σ(120)=σ(8)σ(15)=(1+2+4+8)(1+3+5+15)=15x24=360. This gives a much easier way to compute the sum of the divisors of a number without explicitly listing all the factors.
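The multiplicative rule gives an efficient way to compute σ(n) directly from the prime factorization, using σ(p^a) = 1 + p + ... + p^a for each prime power. Here is a minimal Python sketch (my own; the function name is invented):

```python
def sigma(n):
    """Sum of divisors of n, computed multiplicatively from the
    prime factorization: sigma(p^a) = 1 + p + ... + p^a."""
    total = 1
    p = 2
    while p * p <= n:
        if n % p == 0:
            power_sum = 1
            while n % p == 0:
                n //= p
                power_sum = power_sum * p + 1   # builds 1 + p + ... + p^a
            total *= power_sum
        p += 1
    if n > 1:               # leftover prime factor
        total *= 1 + n
    return total
```

Running it confirms σ(28) = 56 and σ(120) = σ(8)σ(15) = 360, and that 120, 672 and 523776 are all 3-perfect.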

If we now suppose that m is an odd perfect number, then σ(m)=2m. Since m is odd, it follows that σ(2m)=σ(2)σ(m)=(1+2)σ(m)=6m=3x2m, which is precisely the condition for 2m to be 3-perfect. This proves that twice an odd perfect number would have to be 3-perfect.

Another theorem about multiply-perfect numbers that is not too hard to prove is that every 3-perfect number has at least three distinct prime factors; for example, the number 120 has 2, 3, and 5 as its distinct prime factors. There are also lower bounds for the number of prime factors of a k-perfect number for k greater than 3. All of these were derived over 100 years ago in a two-page paper by Derrick N. Lehmer entitled Multiply Perfect Numbers.

Relevant links

The On-Line Encyclopedia of Integer Sequences has a very informative page about multiply-perfect numbers:

Wolfram MathWorld's page on multiperfect numbers contains a number of interesting links:

Wikipedia has detailed pages about Mersenne primes, perfect numbers and generalizations of the divisor function:

Here's a previous post by me with more details about Mersenne primes and perfect numbers:

My previous post linked to a video of Dave Gorman's stand-up comedy routine on perfect numbers. That link seems not to work any more, but this one seems to:

#mathematics #sciencesunday 
professor of mathematics
  • University of Colorado Boulder
    Professor of Mathematics, present
  • Vodafone
  • Oxford University
    Postdoctoral Research Assistant, 1995 - 1997
  • Lancaster University
    Lecturer in Pure Mathematics, 1997 - 2003
  • Colorado State University
    Visiting Assistant Professor of Mathematics, 2001 - 2001
Map of the places this user has lived
Longmont, Colorado
Winnipeg, Manitoba, Canada - Southampton, England - Middlesbrough, England - Nairobi, Kenya - Zomba, Malawi - Manzini, Swaziland - Oxford, England - Newbury, England - Leamington Spa, England - Coventry, England - Lancaster, England - Fort Collins, Colorado, USA
mathematician and father of twins
I'm a professor of mathematics at the University of Colorado Boulder, interested mainly in algebra and combinatorics. 

Some of the best posts in my Google+ stream are those with the #mathematics tag.  My aim in these is to explain both new and historically interesting mathematical ideas to a general audience, and to produce posts that can be enjoyed on various levels.  My posts with the most reshares are often posts of this type.

Some of my non-mathematical posts are commentary on other people's photography and works of art, or various photos I took.

I am unlikely to give +1s to the following material:

(1) extreme sports;

(2) pictures of spiders;

(3) pornography;

(4) posts that can't spell “its”.

My main aim on Google+ is to post about difficult topics and still get decent amounts of engagement. I try to keep the quality of my posts as high as I can, and I'm not interested in trying to get vast numbers of +1s. Sometimes I post pictures of cats and dogs, but they're usually photographs I took. I don't always post gifs, but when I do, they're often gifs I created, either from scratch or from someone else's video. I don't usually post poorly executed or out of focus pictures, however amusing they are. Whatever I post, I try to include interesting commentary on it, and I'm careful to try to assign credit. I never want someone to feel that I stole their work with my post.

In the 90s, I wrote a Multi-User Dungeon called "Island."  I was once described by Stephen Fry as "sick".

Some other topics I may post about on occasion include the Beatles, British comedy, chess, linguistics, music theory, science in general, Star Trek, and Tolkien.
Bragging rights
I'm the author of "Combinatorics of Minuscule Representations", a book published by Cambridge University Press. I have graduated seven PhD students.
  • University College, Oxford
    BA (MA), mathematics, 1989 - 1992
  • University of Warwick
    MSc/PhD, mathematics, 1992 - 1995
  • St Bernadette's RC Primary School, Middlesbrough
  • Kestrel Manor School, Nairobi
  • Sir Harry Johnston Primary School, Zomba
Basic Information
February 10
Other names
R.M. Green