Posts

Post is pinned. Post has attachment

Public

Here is a place for anyone to discuss or share projects and possibilities related to software tools for large-scale collaboration (like OPSN).

Post has attachment

Public

This is a fun "optical illusion" to figure out:

https://imgur.com/L8swkHh

Post has shared content

Public

They approximate dense astronomical "disks" (perhaps warped) in a continuous way; apparently the Schrödinger equation (for one particle orbiting a center in 3d) then emerges naturally. It would be really interesting to understand why. (I only read this overview, not the original paper.)

Massive Astronomical Objects Governed by Schrödinger Equation

http://www.sci-news.com/astronomy/astronomical-objects-schroedinger-equation-05786.html

#astronomy #physics #science

Post has shared content

Public

**Surprise: A virus-like protein is important for cognition and memory**

*A protein involved in cognition and storing long-term memories looks and acts like a protein from viruses. The protein, called Arc, has properties similar to those that viruses use for infecting host cells, and originated from a chance evolutionary event that occurred hundreds of millions of years ago.*

*The prospect that virus-like proteins could be the basis for a novel form of cell-to-cell communication in the brain could change our understanding of how memories are made, according to Jason Shepherd, Ph.D., a neuroscientist at University of Utah Health and senior author of the study publishing in Cell on Jan. 11.*

*Shepherd first suspected that something was different about Arc when his colleagues captured an image of the protein showing that Arc was assembling into large structures. With a shape that resembles a capsule from a lunar lander, these structures looked a lot like the retrovirus HIV. "At the time, we didn't know much about the molecular function or evolutionary history of Arc," says Shepherd, who has researched the protein for 15 years. "I had almost lost interest in the protein, to be honest. After seeing the capsids, we knew we were onto something interesting."*

*The gap in research was not for want of an interesting subject. Prior work had shown that mice lacking Arc forgot things they had learned a mere 24 hours earlier. Further, their brains lacked plasticity. There is a window of time early in life when the brain is like a sponge, easily soaking up new knowledge and skills. Without Arc, the window never opens.*

*Scientists had never considered that mechanisms responsible for acquiring knowledge could stem from foreign origins. Now, the work by Shepherd and his team has raised this intriguing possibility.*

Post has attachment

Public

This reports on a nearby star with seven Earth-sized planets which (despite probably being tidally locked) might have the conditions for liquid surface water at reasonable temperatures. And the scientists think they have a clear path to measuring the gases in the planets' atmospheres (and thus inferring whether there is evidence for life there) by seeing how the star's light passes through them.

Post has shared content

Public

If you like advanced math, you might like to read Greg Egan describe how to use "finite group representation theory" to come up with an efficient way to animate 1800 spherical "gears" arranged symmetrically in 4-dimensional space. (Even if you don't get the math, there's a cool picture at the end.)

**Three cheers for Schur and Frobenius!**

The **120-cell** is a four-dimensional polytope with 600 vertices, 1200 edges, 720 pentagonal faces and 120 dodecahedral cells. Suppose we place 1800 spheres, all of equal radius, at every vertex and every edge-centre of a 120-cell, and try to find a way for those 1800 spheres to rotate so that their surfaces roll against each other at every one of the 2400 points of contact between them.

The image below shows the projection down to three dimensions of 830 spheres out of the total of 1800: those that lie entirely on one side of a hyperplane through the origin.

Although these spheres are located in four-dimensional space, we want each of them to remain in the three-dimensional subspace tangent to a hypersphere that shares its centre with the 120-cell — just as if we had an arrangement of spinning discs on the surface of a globe, we would want them to remain tangent to the globe, not to wobble back and forth and lose contact with each other. So, we can characterise the angular velocity of each sphere with the same number of degrees of freedom, 3, that it would have in three dimensions. That means we have a total of 1800 × 3 = 5400 degrees of freedom.

Similarly, at each of the 2400 points of contact between the spheres, the linear velocity of each sphere’s surface is constrained to lie in a plane that is both tangent to the two spheres, and tangent to a hypersphere with the same centre as the 120-cell. So there are 2 degrees of freedom at each contact point, and a total of 2400 × 2 = 4800 degrees of freedom.
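The counting above is simple arithmetic; here is a quick sanity check (a trivial sketch, using only the incidence numbers of the 120-cell quoted earlier):

```python
# Degree-of-freedom counting, checked explicitly from the incidence
# numbers of the 120-cell.
vertices, edges = 600, 1200
spheres = vertices + edges        # one sphere per vertex and per edge-centre
assert spheres == 1800

# Each edge-centre sphere touches the spheres at its edge's two endpoints,
# so there is one contact point per (edge, endpoint) pair.
contacts = edges * 2
assert contacts == 2400

angular_dof = spheres * 3         # 3 rotational degrees of freedom per sphere
contact_dof = contacts * 2        # 2 tangential degrees of freedom per contact
assert (angular_dof, contact_dof) == (5400, 4800)
```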

This tells us that we can write the linear operator *T* that takes all possible angular velocities for the 1800 spheres, and spits out the difference in linear velocities of the spheres' surfaces at the 2400 contact points, as a 4800 × 5400 matrix. And what we are seeking is the space of all solutions to the linear equation:

*T* **ω** = 0

where **ω** is a 5400-component vector describing the angular velocities of the 1800 spheres.

Now, with computers it’s not impossible to solve a system of 4800 linear equations in 5400 variables by sheer brute force ... but it’s more efficient, more enlightening, and more enjoyable to exploit the *symmetry* of this problem to reduce it to something much simpler. And it turns out that with the judicious use of group theory, we can transform our original 4800 × 5400 matrix into a collection of vastly smaller matrices, the largest of which is just 18 × 16.
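For a sense of what the brute-force route involves, here is a minimal exact null-space routine (pure Python with rationals, illustrative only; the real *T* is a 4800 × 5400 matrix, which is exactly why it pays to shrink the problem first):

```python
from fractions import Fraction

def null_space(rows):
    """Basis for the null space of a matrix, by exact Gauss-Jordan elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    nrows, ncols = len(m), len(m[0])
    pivots, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, nrows) if m[i][c] != 0), None)
        if piv is None:
            continue                      # no pivot in this column: free variable
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(nrows):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    # one basis vector per free column, filled in by back-substitution
    basis = []
    for fc in (c for c in range(ncols) if c not in pivots):
        v = [Fraction(0)] * ncols
        v[fc] = Fraction(1)
        for ri, pc in enumerate(pivots):
            v[pc] = -m[ri][fc]
        basis.append(v)
    return basis

# Tiny example: x + y + z = 0 has a 2-dimensional solution space.
basis = null_space([[1, 1, 1]])
assert len(basis) == 2
assert all(sum(v) == 0 for v in basis)
```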

The 120-cell is a highly symmetric object, with a group of symmetries, known as H₄, with 14,400 elements, each of which is either a rotation or a combination of a rotation and a reflection. If we rotate and/or reflect the 120-cell with an element *g* of H₄, we will get a new vector of angular velocities in the 5400-dimensional domain of our linear operator *T*, and a new vector of contact velocities in the 4800-dimensional co-domain of *T*. Because everything about *T* comes from the geometry of the 120-cell, for *any* element *g* of H₄ we have:

ρ₄₈₀₀(*g*) *T* = *T* ρ₅₄₀₀(*g*)

By ρ₄₈₀₀ and ρ₅₄₀₀ I mean the **representations** of H₄ on the 4800-dimensional space of contact velocities (which we will call V₄₈₀₀) and the 5400-dimensional space of angular velocities (which we will call V₅₄₀₀). In general, a representation of a group on a vector space *V* is just a homomorphism from the group to a subgroup of all the invertible linear operators on *V*, i.e. we have:

ρ(*g*) ρ(*h*) = ρ(*gh*)

ρ(1) = *I*

for any elements *g* and *h* of the group, where in each case ρ gives us some invertible linear operator on *V*, and specifically it takes the identity of the group to the identity operator *I* on *V*.
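These axioms can be checked on a concrete instance (a small illustrative example, not part of the 120-cell computation): the permutation representation of S₃ on R³.

```python
from itertools import permutations

# S3 represented on R^3 by permutation matrices, with
# rho(g)[i][j] = 1 exactly when g sends j to i.
group = list(permutations(range(3)))

def rho(g):
    return [[1 if g[j] == i else 0 for j in range(3)] for i in range(3)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def compose(g, h):
    # (g h)(x) = g(h(x))
    return tuple(g[h[x]] for x in range(3))

# rho(1) = I ...
identity = (0, 1, 2)
assert rho(identity) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# ... and rho(g) rho(h) = rho(g h) for all g, h.
for g in group:
    for h in group:
        assert matmul(rho(g), rho(h)) == rho(compose(g, h))
```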

Given any representation ρ of a finite group *G* on a finite-dimensional vector space *V*, we can always “decompose” *V* into **invariant subspaces**. We say that a subspace *W* of *V* is **invariant** if for all group elements *g* and all vectors *w* in *W*, ρ(*g*)*w* also lies in *W*. In other words, the representation’s action never moves a vector from *W* out of *W*. This means that, if we like, we can actually ignore the rest of the larger vector space, *V*, and talk about ρ restricted to *W* as a representation in its own right: a **subrepresentation** of the original one. For example, consider the 6-dimensional vector space of functions on a circle that take the form *A* sin(*n*θ) + *B* cos(*n*θ) for *n* = 1, 2, 3, acted on by the group of rotations and reflections of the circle. Each of the 2-dimensional subspaces we get by fixing the value of *n* is invariant: rotating or reflecting the circle can’t change the frequency of the function.

An **irreducible** representation, or **irrep** for short, is a representation that contains no non-trivial subrepresentations. That is, if the representation acts on *V*, there are no invariant subspaces of *V* other than {0} and *V* itself.

The representations ρ₄₈₀₀ and ρ₅₄₀₀ are certainly not irreducible! But we can break V₅₄₀₀ and V₄₈₀₀ down into the smallest possible invariant subspaces. Within each such subspace, the “big” representation we started with will act just like an irrep of a much lower dimension.

If we completely reduce our two big vector spaces this way, and choose bases whose elements lie in the resulting subspaces, that will let us rewrite the matrix for *T* as a block matrix, made up of blocks that link the various irreducible subspaces of V₅₄₀₀ with those of V₄₈₀₀.

To see why this is helpful, our first three cheers go to Issai Schur, who was born in Russia in 1875, and spent most of his life in Germany. He is one of those mathematicians who discovered so many things that he gets a whole long list of them on Wikipedia:

https://en.wikipedia.org/wiki/List_of_things_named_after_Issai_Schur

The beautiful result called **Schur’s Lemma**, published in 1905, says that any linear operator that commutes with the action of a group (as our operator *T* does) and maps one irreducible representation into another (as those individual blocks in the new form for the matrix of *T* do) will be non-zero **only if** the two irreducible representations are **equivalent**. Two representations are said to be equivalent if they are really “doing the same thing”, even if they act on different vector spaces. Formally, that means we can find some isomorphism between the two spaces that lets us identify their vectors in such a way that however the group acts on one space, under the identification it acts in exactly the same way on the other.

So if we can carry out this decomposition, most of the blocks in our new matrix for *T* will turn out to be zero, and we will be left with a few much smaller matrices to deal with.

One way to approach this is to construct a set of **projection operators** that map the original vector space *V* into **isotypic subspaces**. An isotypic subspace isn’t quite an irreducible subspace; rather, it can consist of one or more copies of the same irrep. So the results we get from such a projection will depend on how many irreducible subspaces in *V* transform under the same irrep. This is still helpful, because it still lets us break the domain and co-domain of *T* into subspaces in such a manner that we know whether or not they can be coupled to each other in the new, block-matrix form of *T*.

To construct these projection operators, we need to know the linear operators that the original representation assigns to every element of the symmetry group, and also the **characters** of all the irreps. The “character” of a representation is the trace of the matrix that the representation assigns to each element of the group. There is then a relatively simple formula for each projection P₀ associated with an irrep ρ₀:

P₀ = [dim ρ₀ / |*G*|] Σ over all *g* ∈ *G* of χ₀(*g*⁻¹) ρ(*g*)

where χ₀ is the character of the irrep ρ₀, and ρ is the original representation on the whole of *V*.

Our symmetry group H₄ has 34 different irreps. No symmetry of the 120-cell can map a vertex into an edge-centre, of course, so we can start out with spaces of dimension 3 × 600 = 1800 for the vertex angular velocities, 3 × 1200 = 3600 for the edge-centre angular velocities, and 2 × 2400 = 4800 for the contact velocities. Constructing the projections we need would then involve summing matrices of dimensions 1800 × 1800, 3600 × 3600 and 4800 × 4800 over the 14,400 elements of H₄, and doing this for all 34 irreps. We can then find bases for the isotypic subspaces by taking linearly independent subsets of columns from the matrices for the projections, or various linear combinations of the columns.

Again, computers make this possible ... but it still seems hugely inefficient.

Fortunately, we have one more trick up our sleeve, thanks to Ferdinand Georg Frobenius. Frobenius was Schur’s doctoral advisor, and he too has a long list of things named after him:

https://en.wikipedia.org/wiki/List_of_things_named_after_Ferdinand_Georg_Frobenius

The particular result of Frobenius we will use concerns a special kind of representation, known as an **induced representation**. If we pick any vertex of the 120-cell, there will be a subgroup of H₄ that leaves that vertex fixed. Similarly, if we pick any edge-centre, or any contact point between a sphere at a vertex and one at an edge-centre, there will again be a subgroup of H₄ that leaves that point fixed.

If we restrict H₄ to one of these subgroups — let’s call the subgroup *H* — then our original representation of H₄ will give us a representation of *H* on a much smaller vector space: the space of angular velocities or contact velocities for whatever feature of the 120-cell it is that *H* keeps fixed.

Equally, though, we can work backwards: given a representation ρ of *H* on any vector space *V*, we can get a representation of the full group, H₄. To do this, we first identify each relevant feature of the 120-cell with one of the **left cosets** of the subgroup *H*: since all of *H* keeps the chosen feature fixed, the various cosets *gH* for different choices of *g* will map the chosen feature to all the others of the same kind. For example, there is a 24-element subgroup of H₄ that fixes your favourite vertex of the 120-cell, and if we use it to partition H₄ into left cosets, we get 600 of them, each of which consists of those elements of H₄ that map your favourite vertex to each of the 600 vertices of the 120-cell.
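The same coset bookkeeping can be seen in miniature (a toy stand-in: S₃ in place of H₄, with the stabiliser of the point 0 playing the role of the vertex stabiliser):

```python
from itertools import permutations

# In S3, the stabiliser H of the point 0 has 2 elements, so there are
# 6 / 2 = 3 left cosets, and each coset maps 0 to one fixed destination.
group = list(permutations(range(3)))
H = [g for g in group if g[0] == 0]

def compose(g, h):
    return tuple(g[h[x]] for x in range(3))

cosets = set(frozenset(compose(g, h) for h in H) for g in group)
assert len(cosets) == len(group) // len(H)          # 3 cosets

for coset in cosets:
    # every element of a given coset sends 0 to the same point
    assert len(set(g[0] for g in coset)) == 1
```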

If we give each of the relevant features a label, say, *f*, then we can pick an element of H₄, say *x*(*f*), such that all the elements of the coset *x*(*f*) *H* map the chosen feature to the one with the label *f*. Given any element *g* of H₄, *g* *x*(*f*) must belong to some coset that we will call *x*(*f*, *g*) *H*, and so our choice of an *x* for each *f* gives us a unique element of *H* for each *f* and *g*:

*h*(*f*, *g*) = *x*(*f*, *g*)⁻¹ *g* *x*(*f*)

We can then build an **induced representation** of H₄ on the vector space we get by associating a separate copy of *V* with each of the relevant features of the 120-cell:

ρ(Induced)(*g*)(v₁, v₂, ...) = (ρ(*h*(1, *g*)) v₁, ρ(*h*(2, *g*)) v₂, ...)

Our representations of H₄ have actually been of this form all along! That might sound odd, because up until now we haven’t explicitly discussed doing anything that corresponds to a choice of coset elements *x*(*f*), which is an essential ingredient of such a representation. But in fact, such a choice has been implicit from the start in the need to choose specific bases for the angular velocities and contact velocities at each point: we have said all along that we want to narrow these things down from the ambient 4-dimensional space to the 3- or 2-dimensional subspaces in which those velocities live, but these are different subspaces at each point, and to actually calculate anything we need to choose bases for all of them. The implicit choices of the *x*(*f*) are just the elements of H₄ that map the basis at one particular feature into the bases at all the other relevant ones.

Now, suppose we have *any* representation ρ₁ of a group *G* on some vector space *W*, along with an *induced representation* ρ(Induced) of *G* that we obtain by the construction described above, starting with a representation ρ₂ of the subgroup *H* of *G* on some vector space *V*. Then the **Frobenius reciprocity theorem** says that if we restrict the representation ρ₁ of *G* to obtain a representation of the subgroup *H*, then the space of all linear maps between *W* and the induced representation of *G* that commute with the actions of *G* (namely ρ₁ and ρ(Induced)) is isomorphic to the space of all linear maps between *W* and *V* that commute with the actions of *H* (namely ρ₁ restricted to *H*, and ρ₂).

To unpack this a bit, suppose *S* is a linear map from *W* to *V* that commutes with the actions of *H*, i.e.:

*S* ρ₁(*h*) = ρ₂(*h*) *S*

Then we can construct a linear map *U* from *W* to the induced representation (the direct sum of a whole lot of copies of *V*, one for each coset of *H* in *G*):

*U*(*w*) = (*S* ρ₁(*x*(1))⁻¹ *w*, *S* ρ₁(*x*(2))⁻¹ *w*, ... )

It’s not too hard to check that for any *g* in *G*:

*U* ρ₁(*g*) = ρ(Induced)(*g*) *U*

Why is this useful? If we choose ρ₁ to be an irrep of *G*, we can start with suitable linear maps like *S* from *W* to *V*, and then use them to build maps like *U* from *W* to the induced representation — giving us a way to find bases for each copy of the irrep ρ₁ within the induced representation.

How do we get maps like *S*, which need to commute with the actions of *H*? We can take *any* linear map *M* from *W* to *V*, and then “average” it over *H*:

*S* = [1 / |*H*|] Σ over all *h* ∈ *H* of ρ₂(*h*) *M* ρ₁(*h*)⁻¹

So, starting from a basis of all linear maps from *W* to *V*, we can generate a basis of all linear maps from *W* to however many copies of the irrep ρ₁ there are in the induced representation.

The basis we get from each map *U* will span a single, irreducible subspace in our original huge vector space, so there is no more work needed to split the isotypic subspaces. What’s more, since we obtain our bases for all these irreducible subspaces by applying maps to a single basis of *W*, the matrices that describe the restriction of our linear operator to each isotypic subspace will always be composed of irrep-sized blocks that are multiples of the identity. A matrix composed of multiples of the identity can be manipulated almost as easily as an equivalent matrix of scalars. And in the end, the largest matrix that arises from our 120-cell problem has dimensions of just 18 × 16, and is due to one irrep of H₄ that occurs 16 times in the space of angular velocities for the spheres, and 18 times in the space of linear velocity differences at the contact points.

More details at:

http://www.gregegan.net/SCIENCE/Bearings/Bearings.html

Post has attachment

Public

I needed a bit of hope, to balance my fears around this election (and to celebrate its upcoming over-ness). And I wanted to try to do something useful. So I wrote this Medium article on how "overlaid personal semantic networks" might possibly help.

https://medium.com/@BruceSmith1/better-discussion-systems-could-counteract-polarization-da8644ee6773#.2jxqlapgo

Post has attachment

Public

ZeroMQ is an asynchronous messaging framework. It sounds very good, according to its own guide: http://zguide.zeromq.org . Does anyone with experience in these areas have an opinion on it? If you already want to use asynchronous messaging as the basis of a distributed application, is ZeroMQ a good way? (Are there alternatives that should be considered, if you still want generality, speed, reliability, and ease of use?)

Post has shared content

Public

I haven't read the book, but I share the concern about the issue. (But the solution is not to try to restore "gatekeepers"; it's to introduce better rating of knowledge.)

The War on Science is against all knowledge and fact-based professions. As clearly shown in **The Death of Expertise: The Campaign Against Established Knowledge and Why it Matters**, by Tom Nichols, a professor at the Naval War College.

Mind you, some parts of the War on Science are waged by some dark-insipid corners of the far left. Though nothing like the all-out war on ALL science and all other knowledge professions by all corners of the entire US right.

No one is more terrified of the possible return of a Republican Administration than our senior military officer corps.

https://www.amazon.com/gp/product/0190469412/?_encoding=UTF8&tag=contbrin-20

Post has attachment

Public

This article has a theory of procrastination that strikes me as reasonably likely (at least compared to my own experience). Briefly, it's a fear of creating poor-quality work, measured by comparison with the best finished work of the best other people. They suggest that there are personality types or beliefs more or less subject to this problem, based on whether one feels intuitively that talent is learned or inborn. Unfortunately, they give no guidance about overcoming this defect if you have it. Even so, it might be worth reading and thinking about.
