Post has attachment
Explore Density with your kids!
#Science #ScienceSunday #ScienceExperiment #Homeschool #Learning #Education
Post has shared content
So cool
Some Microbial Art for your #ScienceSunday amusement!
[curated by +Allison Sekuler and +Robby Bowles]
The artist who created them is Professor +Eshel Ben-Jacob of Tel Aviv University. His website* offers the following statement about the meaning and importance of his research and artwork:
THE SCIENCE BEHIND THE ART
These images are part of a series of remarkable patterns that bacteria form when grown in a petri dish. While the colors and shading are artistic additions, the image templates are actual colonies of tens of billions of these microorganisms. The colony structures form as adaptive responses to laboratory-imposed stresses that mimic hostile environments faced in nature. They illustrate the coping strategies that bacteria have learned to employ, strategies that involve cooperation through communication. These selfsame strategies are used by the bacteria in their struggle to defeat our best antibiotics. Thus, if we understand the mechanisms behind the patterns, we can learn how to outsmart the bacteria - for example, by tampering with their communication - in our ongoing battle for our health.
The images come from the laboratory of Prof. Eshel Ben-Jacob of Tel Aviv University (http://star.tau.ac.il/~inon/baccyber0.html) as part of a collaboration with Prof. Herbert Levine of UCSD's National Science Foundation Frontier Center for Theoretical Biological Physics (http://ctbp.ucsd.edu). The goal of this research is to unravel the adaptation secrets enabling bacterial survival against all odds. Their efforts build upon progress in two disparate fields: pattern formation in complex dynamical systems, and the molecular biology and biophysics of bacteria.
In a sense, the strikingly beautiful organization of the pattern reflects the underlying social intelligence of the bacteria. The once controversial idea that bacteria cooperate to solve challenges has become commonplace, with the discovery of specific channels of communication between the cells and specific mechanisms facilitating the exchange of genetic information.
Retrospectively, these capabilities should not have been seen as so surprising, as bacteria set the stage for all life on Earth and indeed invented most of the processes of biology. As we try to stay ahead of the disease-causing varieties of these versatile creatures, we must use our own intelligence to understand them.
These images remind us never to underestimate our opponent.
*here: http://star.tau.ac.il/~eshel/gallery.html
Images taken from here: http://www.microbialart.com/galleries/ben-jacob/#full-screen
2012-02-05 (11 photos)
Post has shared content
The Expanding Universe... and some embarrassing calculations! This is a beautifully written Ars Technica article explaining the role of the cosmological constant, and in particular how physicists are at a loss to explain the absurd results that arise when the vacuum energy is treated as this constant. It is also an excellent exposition of the basics.
Article Extract: The vacuum of space isn't actually "empty"; it teems with particles that pop in and out of existence, giving the vacuum an energy of its own. But here's an embarrassing fact about that energy: it predicts that the cosmological constant (which provides a measure of the rate of the expansion of the Universe) should be 10^120 times larger than we think it actually is.
When Einstein was first formulating a new theory of gravity, his solutions predicted that the Universe was expanding. At the time, the Universe was widely regarded to be static, so Einstein added a constant that counteracted the expansion and kept the Universe unchanging. Everyone rejoiced—electromagnetism, space, time, and gravity could all live together in harmony. Later, Edwin Hubble took advantage of a new generation of telescopes to measure the speed at which distant galaxies were moving. He found that the further away a galaxy was, the faster away from us it was moving. The conclusion was inescapable: the Universe was expanding. Everyone chuckled over Einstein's big goof.
Scientists now can measure the rate at which the Universe expands. Turns out it's not a constant; every day, the Universe expands a bit faster than it did the day before. Inflation, it seems, is a physical as well as an economic universal, and Einstein's cosmological constant was back (albeit in altered form).
Funnily enough, it wouldn't have mattered whether the new cosmological constant was positive, negative, or zero—problems were going to arise. This is because Einstein's work had also established that mass and energy are two sides of the same coin. Since mass causes space and time to warp, so too should energy. So why doesn't the vacuum energy bend space and time? When physicists bolt the quantum vacuum energy on to general relativity, they get absurd results unless some kind of correction factor (to the tune of 10^120) is carefully added to counteract the vacuum. This fine-tuning bothers people because there is simply no way to obtain these numbers naturally.
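To make the size of the mismatch concrete, here is a back-of-the-envelope sketch (my own summary of the standard argument, not from the article, in units where c = ħ = 1; the exact exponent depends on conventions, which is why quoted values range from roughly 10^60 to 10^124):

```latex
% Vacuum energy adds to any bare cosmological constant in
% Einstein's field equations:
\Lambda_{\mathrm{eff}} \;=\; \Lambda_{\mathrm{bare}} \;+\; 8\pi G\,\rho_{\mathrm{vac}}

% A naive quantum field theory estimate sums zero-point energies
% up to the Planck scale, while observation puts the dark-energy
% scale near a milli-electron-volt:
\rho_{\mathrm{vac}}^{\mathrm{QFT}} \;\sim\; M_{\mathrm{Pl}}^{4}
  \;\approx\; \bigl(10^{19}\,\mathrm{GeV}\bigr)^{4},
\qquad
\rho_{\mathrm{vac}}^{\mathrm{obs}} \;\sim\; \bigl(10^{-3}\,\mathrm{eV}\bigr)^{4}

% The ratio of the two is the enormous factor quoted in the article:
\frac{\rho_{\mathrm{vac}}^{\mathrm{QFT}}}{\rho_{\mathrm{vac}}^{\mathrm{obs}}}
  \;\sim\; 10^{120}
```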
Enter the new work by Nemanja Kaloper (UC-Davis) and Antonio Padilla (University of Nottingham), who have proposed a modification to general relativity that naturally generates a small cosmological constant. According to the researchers, the cosmological constant should be treated as the average of the vacuum contribution over all space and time. When this happens, the local vacuum energy contributions appear twice in the equations with opposite signs. No matter what energy the vacuum has right now, it can't bend space and time—think of it as pushing with one hand and pulling with the other.
Article Link: http://arstechnica.com/science/2014/03/getting-the-math-of-the-universe-to-cancel-out/
Research paper: http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.112.091304
More about Vacuum state: http://en.wikipedia.org/wiki/Vacuum_state
The cosmological constant: http://www.space.com/9593-einstein-biggest-blunder-turns.html
Wikipedia link on the Cosmological constant: http://en.wikipedia.org/wiki/Cosmological_constant
NASA link: http://map.gsfc.nasa.gov/universe/uni_accel.html
Pics courtesy and detail: Right-hand pic: from http://complex.elte.hu/astro.html (the best three-dimensional map of the Universe; the animation is based on the ca. 5 million galaxies in the SDSS Early Data Release. There is no visible structure in the distribution of the most distant quasars (white dots), but galaxies (yellow and green dots) are clustered on a foam-like structure. The slices are not physical; they are caused by the survey geometry). Left-hand pic: from the main article at http://arstechnica.com (Jim Brau, University of Oregon).
#science #sciencesunday #scienceeveryday #inflation


2014-03-22
Post has shared content
Fractals, Fibonacci, and factorizations
The rule for generating the famous Fibonacci sequence 1, 1, 2, 3, 5, 8, 13, 21, ... is that each number (after the first two) is the sum of the previous two numbers. The Fibonacci word is an infinite string of zeros and ones with properties reminiscent of the Fibonacci sequence, and the Fibonacci fractal, shown in the picture, is a way to represent the Fibonacci word in the form of a fractal.
One way to generate the Fibonacci word is to define strings of zeros and ones by the rules S(0)=0, S(1)=01 and S(n)=S(n–1)S(n–2) when n is at least 2. This gives rise to the sequence of strings 0, 01, 010, 01001, 01001010, 0100101001001, ..., whose limit, as n tends to infinity, is the Fibonacci word. There are other equivalent, but superficially very different, ways to generate this word, including (a) using an explicit formula for each digit given in terms of the golden ratio; (b) using a substitution rule; and (c) using the Zeckendorf representation of integers in terms of Fibonacci numbers.
By suitably interpreting the digits of the Fibonacci word as turtle graphics instructions in a Logo-like programming language, it is possible to represent the word as a fractal. More precisely, if one reads the digits in order, then the n-th digit corresponds to the following sequence of instructions:
1. draw a segment forwards;
2. if the digit is 0, then turn left 90 degrees if n is even, and turn right 90 degrees if n is odd.
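The construction is easy to sketch in a few lines of Python (the function names here are my own, not from any of the linked papers): the first helper builds the strings S(n) by the concatenation rule described earlier, and the second translates each digit of the word into the turtle instructions just listed.

```python
def fib_string(n):
    """Return S(n), where S(0)='0', S(1)='01', S(n)=S(n-1)+S(n-2)."""
    a, b = "0", "01"
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, b + a   # shift the pair along the recursion
    return b

def turtle_moves(word):
    """Translate digits into moves: F = draw forward; after a 0,
    turn L(eft) if its 1-based position is even, R(ight) if odd."""
    moves = []
    for pos, digit in enumerate(word, start=1):
        moves.append("F")
        if digit == "0":
            moves.append("L" if pos % 2 == 0 else "R")
    return "".join(moves)

print(fib_string(5))      # prints 0100101001001, a prefix of the word
print(turtle_moves("01")) # prints FRF
```

Feeding `turtle_moves(fib_string(20))` to any turtle-graphics interpreter reproduces the curve in the picture.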
The picture shows the result of this procedure after many iterations. The resulting curve has various interesting mathematical properties, some of which concern the square-shaped gaps. By inspection, we count one large square gap (in the middle, at the bottom); five smaller square gaps, and 21 square gaps of the next size down. The numbers of these gaps, sorted by size, turn out to be given by every third Fibonacci number starting with the second 1 (1, 5, 21, 89...) which means that there are 89 squares of the next size down. Furthermore, each square has a side length that is 1+√2 times the side length of the square of the next size down; the number 1+√2 is known as the silver ratio.
The recent paper Factorizations of the Fibonacci Infinite Word by Gabriele Fici (http://arxiv.org/abs/1508.06754) surveys some factorizations of the Fibonacci word and shows how to derive these factorizations using elementary properties of the Fibonacci numbers. In some cases, this gives easier derivations of the results than were previously known. An example of such a factorization involves the sequences S(n) from earlier. Proposition 1 of the paper proves that the Fibonacci word can be factorized as the infinite product 0.1.S(0).S(1).S(2)..., where the symbol . is used to separate the factors.
One of the most surprising factorizations in the paper is Proposition 9, which involves the reversals, T(n), of the strings S(n). The strings T(0), T(1) and so on are then given by the sequence 0, 10, 010, 10010, 01010010, ... Remarkably, the concatenation of the strings T(n) also gives the Fibonacci word, even though the ingredients being used to construct it are backwards and generally not palindromic. Another way to say this is that the Fibonacci word can be factorized as the infinite product T(0).T(1).T(2)...
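Both factorizations are easy to check empirically on a long finite prefix. Here is a small self-contained sketch (my own code; it re-derives the strings S(n) from the recursion given earlier in the post):

```python
def fib_string(n):
    # S(0)='0', S(1)='01', S(n) = S(n-1) + S(n-2)
    a, b = "0", "01"
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, b + a
    return b

# Each S(n) is a prefix of S(n+1), so S(20) is a 17,711-digit
# prefix of the infinite Fibonacci word.
word = fib_string(20)

# Proposition 1: the word factorizes as 0 . 1 . S(0) . S(1) . S(2) ...
prop1 = "0" + "1" + "".join(fib_string(k) for k in range(10))
assert word.startswith(prop1)

# Proposition 9: it also factorizes as T(0) . T(1) . T(2) ...,
# where T(n) is the reversal of S(n).
prop9 = "".join(fib_string(k)[::-1] for k in range(10))
assert word.startswith(prop9)

print("both factorizations agree with the first", len(prop9), "digits")
```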
Relevant links
The 2009 paper The Fibonacci Word fractal by Alexis Monnerot-Dumaine is an excellent guide to the mathematical properties of the fractal, and the picture of the fractal here comes from that paper. You can download the paper for free at https://hal.archives-ouvertes.fr/hal-00367972/document
Monnerot-Dumaine's paper explains how to construct the Fibonacci word using a substitution rule, and explores what the fractal looks like if one makes turns at angles other than a right angle.
Fici's paper explains how to construct the word using the Zeckendorf representation of natural numbers. It is a theorem that any positive integer can be expressed uniquely as the sum of one or more distinct non-consecutive Fibonacci numbers. This is called Zeckendorf's Theorem, even though Zeckendorf was not the first to prove it: https://en.wikipedia.org/wiki/Zeckendorf's_theorem
Wikipedia's article on the Fibonacci word gives an explicit formula for the n-th digit of the word and mentions many other interesting properties. For example, the Fibonacci word is often cited as the worst case for algorithms detecting repetitions in a string. https://en.wikipedia.org/wiki/Fibonacci_word
The On-Line Encyclopedia of Integer Sequences on the Fibonacci word: https://oeis.org/A003849
Wikipedia on turtle graphics: https://en.wikipedia.org/wiki/Turtle_graphics
I have posted about the Fibonacci word twice before, although not recently.
My post from March 2013 discusses the word in the context of self-shuffling words: https://plus.google.com/101584889282878921052/posts/YnUkZ986LMM
My post from December 2012 discusses Fibonacci snowflakes and some generalizations of the Fibonacci word: https://plus.google.com/101584889282878921052/posts/KSuUFJV6tyv
If you're disappointed that I didn't talk about the golden ratio, have a look at the aspect ratio of the accompanying picture.
#mathematics #sciencesunday #spnetwork arXiv:1508.06754

Post has shared content
Art or Science?
Are these pastel fractals the creation of an avant-garde artist from some postmodern cubist movement? You may be surprised to learn that these are high-resolution images of bacterial populations growing on a petri dish!
◈ Bacterial Art: First, the familiar E. coli bacteria were genetically marked with differently colored fluorescent proteins before mixing together on an agar plate. Each rod-shaped bacterium grows by division to give a single file of cells that is sensitive to small mechanical forces from neighboring cells pushing and jostling against each other. The line of cells buckles in a way that is predicted by fractal mathematics. As the bacteria grow to form a confluent film, jagged boundaries emerge between differently colored clonal lines. Zooming in, the patterns are self-similar, repeating at scales from millimeters to micrometers! Mutant bacteria that form spherical cells don't produce these fractal patterns.
◈ Form and Function: What do these beautiful images teach us? They help us understand how patterning happens on a nanoscale. In synthetic biology the goal is to engineer populations of cells to produce spatial patterns, synchronized signals and predictable behavior that can be simulated using simple, mathematically coded rules.
◈ Life Imitates Art? Oscar Wilde reversed the conventional view when he claimed that life imitates art far more than art imitates life. What do you think he meant by this? It seems to me that this bacterial fractal "art" perfectly illustrates John Berger's definition of Cubism: "The metaphorical model of Cubism is the diagram: The diagram being a visible symbolic representation of invisible processes, forces, structures."
Reference (and more beautiful images): http://data.plantsci.cam.ac.uk/Haseloff/resources/LabPapers/Rudge2013.pdf
#ScienceSunday
Post has shared content
I’m over here pondering any connection between biomimicry & psychology. Mhm.
The power of the swarm
While the article does not address it specifically, swarm intelligence is a fascinating field that studies how bees collectively decide which tasks to carry out.
Ants and bees aren’t just informing energy grids. “Some of the early successes in biomimicry already have come from millions of dollars saved by mimicking how an ant communicates information and translating that into how you send server packets over the Web or how you pick a route for your trucks to drive or something like that,” McGee says.
There’s plenty more to learn; researchers at Pacific Northwest National Laboratory have developed a computer network security system based on the swarm intelligence ants use to defend their hills, and going all the way back to 2007 researchers inspired by honeybee communications built a system that lets networks optimize performance by taking advantage of idle servers during periods of high demand. But McGee thinks we’ve just scratched the surface of what biology can do for IT.
“We’ve already seen an explosion in the relationship between understanding biology using information sciences and then developing ideas in information sciences based on biological insight,” he says. “I think there’s still a lot of room there to play with computer science and biology by learning from biological systems.”
#biomimicry #science #scienceeveryday #sciencesunday
Post has shared content
The complexity of integers
The complexity of an integer n is defined to be the smallest number of 1s required to build the integer using parentheses, together with the operations of addition and multiplication.
For example, the complexity of the integer 10 is 7, because we can write 10=1+(1+1+1)x(1+1+1), or as (1+1+1+1+1)x(1+1), but there is no way to do this using only six occurrences of 1. You might think that the complexity of the number 11 would be 2, but it is not, because pasting together two 1s to make 11 is not an allowable operation. It turns out that the complexity of 11 is 8.
The complexity, f(n), of an integer n was first defined by K. Mahler and J. Popken in 1953, and it has since been rediscovered by various other people. A natural problem that some mathematicians have considered is that of finding upper and lower bounds of f(n) in terms of n.
John Selfridge found a lower bound for f(n) by proving that f(n) is always greater than or equal to 3 log_3(n), where log_3(n) denotes the base 3 logarithm of n. This lower bound is sharp: it cannot be improved because the bound is achieved when n is a power of 3. For example, if n=81, we have n=3^4, which (by the definition of logarithm) means that log_3(n)=4. Selfridge's lower bound for f(n) is then 3x4=12. We can write 81=(1+1+1)x(1+1+1)x(1+1+1)x(1+1+1), which uses twelve occurrences of 1, and Selfridge's result shows that there is no way to achieve this using eleven or fewer 1s. Note that we are only allowed to use addition and multiplication in our expressions; using exponentiation is not allowed.
The problem of finding an upper bound is more complicated. Richard K. Guy found an upper bound for f(n), showing that it is bounded above by 3 log_2(n), which works out to about 4.755 log_3(n). (The 4.755 is an approximation to 3log(3)/log(2).) Guy found this bound using Horner's method, which is explained in the appendix below. The worst case of Guy's bound occurs when n has no zeros in its base 2 expansion.
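The conversion constant is a one-line arithmetic check:

```python
import math

# Rewriting Guy's bound in base 3: 3*log_2(n) = c * log_3(n), where
# c = 3*log(3)/log(2), the constant approximated by 4.755 above.
c = 3 * math.log(3) / math.log(2)
print(round(c, 3))  # 4.755
```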
More generally, it turns out to be difficult to improve the upper bound for f(n) because numbers whose binary digits contain a very uneven balance of 0s and 1s tend to cause problems. An example of this is the number 1439, which is 10110011111 in binary. It turns out that f(1439)=26, and an optimal expression for 1439 is 1+2(1+2(1+1+3((2)(2)(2)(2)+1)((3)(2)+1))), where 2 and 3 are shorthands for (1+1) and (1+1+1), respectively. This works out as about 3.928 log_3(1439).
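The quoted expression for 1439 can be sanity-checked directly by expanding the shorthands 2 and 3 into (1+1) and (1+1+1) and letting Python evaluate the result:

```python
import math

# The optimal expression for 1439 quoted above, with the shorthands expanded
expr = "1+(1+1)*(1+(1+1)*(1+1+(1+1+1)*((1+1)*(1+1)*(1+1)*(1+1)+1)*((1+1+1)*(1+1)+1)))"
print(eval(expr), expr.count("1"))                    # 1439 26
print(round(26 / (math.log(1439) / math.log(3)), 3))  # f(1439)/log_3(1439) = 3.928
```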
However, it is possible to replace this value of 3.928 by a lower number for “almost all” integers n; in particular, recent work of J. Arias de Reyna and J. van de Lune shows that it can be replaced by 3.655 for a set of natural numbers of density 1. The recent paper Applications of Markov Chain Analysis to Integer Complexity (http://arxiv.org/abs/1511.07842) by Christopher E. Shriver improves this upper bound to 3.529 for suitably generic numbers. The paper mentions that extensive numerical computations by other authors suggest that it ought to be possible to improve the generic bound to 3.37.
Relevant links
The On-Line Encyclopedia of Integer Sequences lists the complexities f(n) of the first few natural numbers (https://oeis.org/A005245) and mentions that Guy conjectured that f(p) = f(p-1) + 1 whenever p is prime. Guy's conjecture turned out to be false, but the smallest counterexample, found by Martin Fuller in 2008, is surprisingly big: the smallest such prime is 353942783.
The OEIS is an extremely useful resource for researchers in discrete mathematics. They are currently running their annual appeal for donations, and you can donate on this page: http://oeisf.org/
The “extensive numerical computations” mentioned above are discussed in the paper by Iraids et al, which you can find at http://arxiv.org/abs/1203.6462
Appendix: proof of Guy's upper bound
Horner's method works by expressing a (nonzero) number n in base 2 as a sequence of binary digits a_0 a_1 ... a_k, where we may assume that the last digit a_k is equal to 1. It is not hard to show from this that log_2(n) is greater than or equal to k and less than k+1. It is also immediate that n can be expressed as the polynomial
a_0 + x a_1 + x^2 a_2 + ... + x^k a_k
when x is replaced by 2. Rearranging, we find that
n = a_0 + 2(a_1 + 2(a_2 + ... + 2(a_{k-1} + 2)...)),
because we assumed that a_k was equal to 1. We then replace each occurrence of 2 with “1+1”, and replace each occurrence of a_i with either 0 or 1. Finally, we remove all the occurrences of “0+”. The number of 1s in the result is at most 3k, as required; the bound is achieved whenever n is one less than a power of 2.
For example, n=42 is 101010 in binary, which can be written as 0+2(1+2(0+2(1+2(0+2)))). This expands to (1+1)(1+(1+1)(1+1)(1+(1+1)(1+1))), which uses 12 ones. Since 42 lies between 2^5=32 and 2^6=64, log_2(42) is more than 5, so 3 log_2(42) is more than 15, comfortably above the 12 ones used, as required.
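The whole construction fits in a few lines of Python (the helper name horner_expr is ours); it builds the expression from the inside out, starting with the innermost a_{k-1} + 2 and wrapping one binary digit at a time:

```python
def horner_expr(n):
    # Guy's Horner expansion n = a_0 + 2(a_1 + ... + 2(a_{k-1} + 2)...),
    # written using only 1s, +, and *. Requires n >= 2.
    bits = bin(n)[2:]                            # digits a_k a_{k-1} ... a_0, a_k = 1
    acc = "1+1+1" if bits[1] == "1" else "1+1"   # innermost term a_{k-1} + 2
    for b in bits[2:]:                           # a_{k-2} down to a_0
        acc = ("1+" if b == "1" else "") + "(1+1)*(" + acc + ")"
    return acc

e = horner_expr(42)
print(eval(e), e.count("1"))  # 42 12
```

Counting 1s in the output confirms the bound: each of the k copies of (1+1) costs two 1s and each nonzero low-order digit costs one more, for a total of at most 3k.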
#mathematics #sciencesunday #spnetwork arXiv:1511.07842