Gary Gauthier
3,359 followers
Posts

Post has attachment
Artificial Intelligence is being developed to replace hedge fund managers.

Ray Dalio, founder and president of the world's largest hedge fund, believes that his firm can do without its human managers. He has commissioned a team of software engineers to create artificial intelligence that automates the operation of his company according to a master plan. The firm, Bridgewater Associates, manages $160 billion in assets.

Many of the firm's remaining employees would be cast in the role of caretakers: they would make sure that things are running smoothly and be in charge of ringing the alarm bells if something goes wrong with the computer systems—or, "intervening when something isn’t working.”

The team of engineers working on the project is spearheaded by IBM veteran David Ferrucci, who led the development of the Watson supercomputer. Watson excels at understanding and speaking in natural language. Its artificial intelligence engine beat world champion Jeopardy players at their own game.

Dalio wants the new AI system, which will be in charge of the day-to-day management of the firm, "to make three-quarters of all management decisions within five years." This includes selecting future hires for the firm and ranking the recommendations and performance of employees.

The benefits of having AI machines run a business operation are many and varied. They are more efficient, less costly, and excel at following detailed instructions. Compared to humans, machines have relatively few bad days, and they remove emotion as a variable that might affect their goals or a preferred outcome.

This again raises the question of how the job market for employees displaced by AI will play out—across different industries—in the long run.

Post has shared content
Why are barns painted red? Because red paint is the cheapest. Why is red paint cheap? Because the elements that make up the red dye are very plentiful.

What are these elements? Iron and oxygen. The Earth’s crust is 6% iron and 30% oxygen—no shortage here.

Where do the elements come from? From stars that exploded. Each element exists on Earth in roughly the same ratio in which it occurs in exploding stars.

What are some colors associated with other elements, aside from the red of iron? Copper: blues and greens—Cobalt: deep blues—Chromium: yellow.

What's the connection between iron and red blood cells? Blood is red because it contains iron. Red blood cells contain a protein (hemoglobin); the protein consists of subunits, each built around a heme group that binds iron, and the iron in turn binds oxygen. Blood has a deep red hue because of how the chemical bonds between the iron and the oxygen reflect light.
How the price of paint is set in the hearts of dying stars

Today I’m going to try to explain the real reason that barns are painted red: nuclear fusion. And yes, this is an excuse to take a mad ride around some of the stranger corners of physics and chemistry in order to give you the real, this-is-not-BS, answer to a simple question.

This question got stuck in my head as a result of an episode of a long-forgotten sitcom called Head of the Class, about a high school class full of smart kids. (Sort of like Welcome Back, Kotter in reverse) This being an American show, it’s obligatory to occasionally emphasize the superiority of the ordinary virtue of “plain folk,” so in one episode the protagonists face off in some kind of academic contest with kids from a rural school, and end up losing because their city-slicker knowledge can’t answer the question “why are barns red?” (And this episode appears to have annoyed me enough that, several decades later when I have only the haziest memory of the show’s existence, I still remember it) The answer the show gives is “because red paint is cheaper,” which is absolutely true, but it doesn’t really tell you why red paint is cheaper. It clearly isn’t because the Central Committee for the Pricing of Paints has decreed that red shall be in vogue this century, or because of the secret Communist sympathies of early American farmers. In fact, to answer this we have to go all the way to the formation of matter itself.

Paints & Pigments & The Sun

First of all, let’s think about what paint is. At a minimum, paint is a combination of a binder (some material that dries to form a film, like acrylic or oil) and a pigment, some material which gives it a color. A pigment is a material which absorbs some colors of light and reflects others; most pigments are minerals. (There are also organic pigments, such as the Imperial Tyrian purple made from the snot of the Murex snail, but not as many, and they tend to be much more expensive for the simple reason that there are a lot more rocks than there are animals and plants.) So for something to make a cheap paint, it has to be a good pigment, and it has to be cheap. So let’s figure out what makes each of these happen.

To be a good pigment, first and foremost, something has to have a nice, bright color. The way pigments produce color is that light shines on them, and they absorb some, but not all, of the colors of light. (Remember that white light is a mixture of many colors of light) For example, red ochre, a.k.a. hematite, a.k.a. anhydrous iron oxide (Fe2O3), absorbs yellow, green and blue light, so the light that reflects off of it is reddish-orange. (This happens to be the pigment that’s used in barn paint, so we’re going to come back to it.) Light is absorbed when a photon (a particle of light) strikes an electron in the pigment and is absorbed, transferring its energy to the electron. But quantum mechanics tells us that an electron can’t absorb just any amount of energy: the particular energies (and therefore colors) that it can absorb depend on the layout of the electrons in the material, which in turn depends on its chemistry.

The detailed calculations, or even the not-so-detailed calculations, are way beyond the scope of this post. (There are even whole books about it, like Nassau’s The Physics and Chemistry of Color) But there’s one important pattern which I can at least tell you about, which is that if you look at the various atoms which form a pigment, and you look at their outermost electrons (not the inner electrons, which are so tightly bound to their atom that they don’t participate in chemistry; all of chemistry is determined by the behavior of the outermost electrons around an atom) then it turns out that certain kinds of outermost electrons form pigments, and certain ones don’t.

The magic property is what’s called “angular momentum,” which basically measures how fast these outermost electrons are rotating around the nucleus. Electrons in atoms get angular momentum only in fixed increments (there’s that quantum mechanics again, only fixed increments allowed) and for historical reasons, the first few increments are named “s,” “p,” “d,” and “f.” On the periodic table, (http://www.webelements.com/) the elements whose outer electrons are “s” form the two tall leftmost columns; the “p” elements are the big square on the right; the “d” elements are the big block in the middle; and the “f” elements are the two rows off at the bottom. (If we ever make element 121, it would be the first “g” element) 

Electrons with less angular momentum spin in more spherical (rather than deformed) orbits, and when multiple electrons are trying to fly in the same spherical orbit, they repel each other pretty strongly. The result of this is that two “s” electrons meeting will have very different energies -- and it turns out that, in quantum mechanics, the amount of energy an electron can absorb is exactly the difference between these energy levels. So “s” means a big gap, “p” a slightly smaller one, and so on. And it turns out that “d” electrons are right at the sweet spot where that gap corresponds to visible light. 

Well, why are those particular colors of light visible? It’s because of the temperature of the Sun: our eyes didn’t evolve to see X-rays because there aren’t many X-rays to see around here. Instead, they’re very sensitive in the range of colors that the Sun produces, from red (around 780nm wavelength) through a peak brightness in the green-yellow (around 500-600nm) all the way up to violet (around 400nm). Those colors correspond to energy gaps of about 1.6 to 3.1 electron volts (eV, a good unit of energy for studying atoms), which are right around the energies of chemical bonds involving d electrons. S- and p- bonds involve bigger gaps, of several eV and up, corresponding to wavelengths of 300nm and shorter, in the ultraviolet range.
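
To put numbers on that: photon energy and wavelength are tied together by E = hc/λ, with hc ≈ 1240 eV·nm. Here’s a minimal sketch of the arithmetic in Python (the constant and the sample wavelengths are mine, for illustration):

HC_EV_NM = 1239.84  # Planck's constant times the speed of light, in eV*nm

def photon_energy_ev(wavelength_nm):
    # E = hc / wavelength: shorter wavelength means a more energetic photon.
    return HC_EV_NM / wavelength_nm

for nm in (780, 550, 400):  # red, green, violet
    print(f"{nm} nm -> {photon_energy_ev(nm):.2f} eV")
# prints 1.59 eV, 2.25 eV, 3.10 eV: the visible band spans about 1.6-3.1 eV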

Did we just get lucky that the Sun is yellow, and if we lived orbiting another star might the useful pigments come from p bonds? Surprisingly, the answer is no. The Sun’s color comes pretty directly from its temperature: it’s literally glowing yellow-hot, with a surface temperature of about 5,800K. The coolest stars, red dwarfs, are about 2,800K and glow red. The hottest stars, the type O stars, go up to about 40,000K, with a peak wavelength of only 72nm; but it turns out that when a star gets any hotter than class F (about 7,000K, peaking around 400nm -- blue) its lifespan starts to decrease precipitously. This is because the temperature of stars is actually fixed by the kinds of fusion reaction going on in their core, which I’ll get back to in a moment, and those hotter reactions burn through their fuel a lot faster. The net result is that any star that’s going to last long enough to have planets with life on them might be a bit redder or a bit bluer than our sun, but not radically so: and it’s those d-orbitals that are going to make the best pigments for anyone whose eyeballs evolved there.
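
Those temperature-to-wavelength figures all come from one formula, Wien’s displacement law (not named above, but it’s the standard tool): peak wavelength = b/T, with b ≈ 2.898 × 10^6 nm·K. A quick sanity check in Python:

WIEN_B_NM_K = 2.898e6  # Wien's displacement constant, in nm*K

def peak_wavelength_nm(temp_k):
    # A hotter surface glows with a shorter (bluer) peak wavelength.
    return WIEN_B_NM_K / temp_k

for temp_k in (2800, 5800, 7000, 40000):  # red dwarf, Sun, class F, class O
    print(f"{temp_k} K -> peak near {peak_wavelength_nm(temp_k):.0f} nm")
# 2800 K peaks at ~1035 nm (infrared, so it looks red to the eye), the Sun
# at ~500 nm, class F at ~414 nm, class O at ~72 nm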

How the price of iron is determined in the centers of stars

So now we know what makes a good pigment. What makes a cheap pigment? Obviously, that it’s plentiful. The red pigment that makes cheap paint is red ochre, which is just iron and oxygen. These are incredibly plentiful: the Earth’s crust is 6% iron and 30% oxygen. Oxygen is plentiful and affects the color of compounds it’s in by shaping them, but the real color is determined by the d-electrons of whatever attaches to it: red from iron, blues and greens from copper, a beautiful deep blue from cobalt, and so on.

So if we know that good pigments will all come from elements in that big d-block in the middle, the real question is, why is one of these elements, iron, so much more common than all of the others? Why isn’t our world made mostly of, say, copper, or vanadium?

The answer, again, is nuclear fusion. 

To explain this, we need to think about how fusion actually works. The basic principle is that two small atomic nuclei combine to form a bigger nucleus. Now, there are two forces at work here: there’s an electromagnetic force, which makes the positively-charged nuclei repel each other, and repel each other more and more as they get closer. And there’s the strong nuclear force, which is what holds nuclei together: it’s powerfully attractive, much stronger than the electromagnetic force, but it has the interesting property that it simply shuts off at distances of much more than about 1fm. (10^-15 m, the size of a medium nucleus) So to make fusion happen, you need to somehow push two nuclei together with enough force (generally in the form of heat and pressure) to overcome their repulsion until they get within range of the strong force, at which point it will yoink them together with spectacular force and release a good deal of energy in the process.
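
How big is that repulsion? Here’s a back-of-the-envelope estimate (mine, treating the nuclei as point charges, with k·e² ≈ 1.44 MeV·fm); exact barrier heights depend on nuclear radii, but the scaling is the point:

KE2_MEV_FM = 1.44  # Coulomb constant times elementary charge squared, MeV*fm

def coulomb_barrier_mev(z1, z2, separation_fm):
    # Electrostatic energy of two point-charge nuclei at a given separation.
    return KE2_MEV_FM * z1 * z2 / separation_fm

print(coulomb_barrier_mev(1, 1, 1.0))    # two protons in strong-force range: ~1.4 MeV
print(coulomb_barrier_mev(26, 26, 9.2))  # two iron nuclei at touching distance: ~106 MeV

That factor of seventy-odd in barrier height is rule 1 in action: the bigger the nuclei, the more violently you have to squeeze them together.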

This gives us two rules of thumb. As the nuclei involved get bigger, the amount of energy (heat and pressure, in particular) required to set fusion off gets higher, because you have more repulsion that you have to overcome before fusion can start. And second, as the nuclei get bigger, the amount of energy you get back from the fusion gets smaller: in the bigger nucleus that you would form, you still have all of this repulsion, but the strong force can only bind together the nucleons that are close to each other, so as the nucleus gets bigger you keep adding repulsion but you don’t keep adding attraction. 

This means that fusion of really small elements is very efficient; combining two hydrogen atoms is just great. (For various technical reasons, the slightly heavier isotopes of hydrogen -- deuterium (a proton with a neutron) and tritium (a proton with two neutrons) -- do better than bare protons. That’s where the “D-T” of D-T fusion comes from; it’s the kind used in H-bombs and most fusion experiments, while the Sun mostly fuses bare protons, making deuterium as an intermediate step along the way.)

In fact, once the atoms get too big, you no longer get back any net energy from fusion: the last reactions which turn out to be net-positive are the ones that form atoms with 56 total neutrons and protons in them. Beyond that, fusion starts consuming more energy than it produces, and won’t light up anything. (If you go far enough beyond that, to 232 nucleons or more, you start to see nuclei that are so unstable that a swift kick will make them separate enough that repulsion takes over, and they explode with a bang: that’s nuclear fission, a subject for another time)
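
You can actually see that 56-nucleon sweet spot in a standard textbook model, the semi-empirical mass formula. This sketch (mine, with common textbook coefficients, not anything from the original post) computes binding energy per nucleon, the energy you get back per particle by assembling the nucleus:

# Semi-empirical mass formula, coefficients in MeV (standard textbook fit).
aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18

def binding_energy_mev(Z, A):
    N = A - Z  # neutron count
    B = (aV * A                             # volume: strong-force attraction
         - aS * A ** (2 / 3)                # surface: edge nucleons bind less
         - aC * Z * (Z - 1) / A ** (1 / 3)  # Coulomb: proton repulsion
         - aA * (N - Z) ** 2 / A)           # asymmetry: neutron/proton balance
    if Z % 2 == 0 and N % 2 == 0:
        B += aP / A ** 0.5                  # even-even pairing bonus
    elif Z % 2 == 1 and N % 2 == 1:
        B -= aP / A ** 0.5                  # odd-odd pairing penalty
    return B

for name, Z, A in (("carbon-12", 6, 12), ("iron-56", 26, 56), ("uranium-238", 92, 238)):
    print(f"{name}: {binding_energy_mev(Z, A) / A:.2f} MeV per nucleon")
# carbon-12: ~7.5, iron-56: ~8.8, uranium-238: ~7.6 -- the curve crests right
# around the A = 56-62 neighborhood, which is why fusion stops paying there.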

Now imagine a star. It starts out its life as a giant ball of primordial hydrogen from the formation of the universe, and under the tremendous pressure of gravity, it starts to fuse. As it fuses, it starts to form heavier elements like helium: but (rule 1) it takes higher temperatures than mere hydrogen-fusion temperatures to make helium do any fusing, so the helium basically acts as a pollutant and just gums up the works. Ultimately, it reduces the efficiency of fusion so much that power levels start to go down.

But the only thing holding the star up was the energy of the fusion reactions, so as power levels go down, the star starts to shrink. And as it shrinks, the pressure goes up, and the temperature goes up, until suddenly it hits a temperature where a new reaction can get started. These new reactions give it a big burst of energy, but start to form heavier elements still, and so the cycle gradually repeats, with the star reacting further and further up the periodic table, producing more and more heavy elements as it goes.

Until it hits 56. At that point, the reactions simply stop producing energy at all; the star shuts down and collapses without stopping. This collapse raises the pressure even more, and sets off various nuclear reactions which will produce even heavier elements, but they don’t produce any energy: just stuff. These reactions only happen briefly, for a few centuries (or for some reactions, just a few hours!) while the star is collapsing, so they don’t produce very much stuff that’s heavier than 56. 

If the star is small, it will end up as a slowly-cooling cinder, or as a white dwarf. But if it’s big enough, then this collapse will send shock waves through the body of the star which bounce off the star’s core, pushing the collapsing wall of matter outward with more than enough energy to escape its gravity: the star explodes in a supernova, carrying off a good ⅓ of its total mass, and seeding the rest of the universe with elements heavier than the simple hydrogen we started with. Those elements, in turn, will join the mix for the next generation of stars, as well as the accretion clouds of stuff around them which turns into clumps rather than falling into those stars: that is, the planets. And this is how all of the chemical elements in the universe were formed.

How do we know that this is really where the elements came from? There’s a whole field of science around this, but the classic paper is commonly known as “B2FH” for its authors -- Burbidge, Burbidge, Fowler, and Hoyle. Using only the physics and the computational resources available to them in 1957, they calculated all of the various processes by which elements would be formed in stars, in enough detail to predict the ratios of elements which would be formed, and to predict the abundance ratios of chemical elements in our solar system. Amazingly enough, they made a pretty damned good and thorough prediction, enough that even then it was clear that this was a smoking gun -- and it’s been refined considerably since. 

So how does this tie in to red paint? Well, I told you before that the magic cutoff for ordinary fusion is at 56 nucleons. Because it’s the last point in the normal reaction chain, a lot of the fusion products tend to “build up” there before the star explodes, and so you get a lot more of isotope 56 than you do of anything except for the really light elements that didn’t fuse at all, or didn’t fuse much. (Check out the first figure in the B2FH paper, linked below) And what has 56 nucleons in it and is stable? A mixture of 26 protons and 30 neutrons -- that is, iron.

So it’s because of the details of nuclear fusion -- the particular size at which nuclei stop producing energy -- that iron is the most common element heavier than neon. And as we saw before, you have to be a d-block element to make a decent pigment, which means that iron is going to be, by far, the most plentiful pigment for any species which lives on a star that isn’t about to blow up. And it’s going to bond to oxygen, the most plentiful thing around in planetary crusts for it to bond to (only hydrogen and helium are more common, and they tend to evaporate), to form iron oxides: those rich, red ochres that we mix with oils to form a cheap, stable, red paint.

And that’s why barns are painted red.


To learn more:
Something a lot more interesting than you would guess: http://en.wikipedia.org/wiki/Paint
http://en.wikipedia.org/wiki/Pigment
The color of the Sun: http://en.wikipedia.org/wiki/Sunlight
Colors of stars, and a place to start about how they get them: http://en.wikipedia.org/wiki/Main_sequence
The abundance of elements in the universe, the Earth, the human body, and other places:
http://en.wikipedia.org/wiki/Abundance_of_the_chemical_elements
A nice diagram of how much energy you get from fusion and fission for various elements, thanks to +Jas Strong: http://hyperphysics.phy-astr.gsu.edu/hbase/nucene/imgnuk/bcurv.gif
The 1980s sitcom that inspired this: http://www.imdb.com/title/tt0090444/

To learn a lot more about color:
http://www.amazon.com/Physics-Chemistry-Color-2nd/dp/0471391069
To learn a lot more about how the elements are formed, the original B2FH paper: http://rmp.aps.org/pdf/RMP/v29/i4/p547_1

Photo by John Christopher: http://www.flickr.com/photos/67382043@N06/6153955066/

Post has attachment
AI will transform many industries. But it’s not magic, says the founding lead of the Google Brain team and former director of the Stanford Artificial Intelligence Laboratory.

According to Andrew Ng, despite AI’s breadth of impact, the types of AI being deployed are still extremely limited. So what can today's AI do?

The rule of thumb is: "if a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future."

A lot of valuable work currently done by humans — examining security video to detect suspicious behaviors, deciding if a car is about to hit a pedestrian, finding and eliminating abusive online posts — can be done in less than one second. These tasks are ripe for automation. The important part is figuring out where such individual tasks fit into a larger business context.

Post has attachment
"Our machines are starting to speak a different language now." The best coders can’t fully reduce this new language to lines of defined variables, conditional statements and looping instructions.

"Over the past several years, the biggest tech companies in Silicon Valley have aggressively pursued" and deployed an approach to computing called machine learning.

With machine learning, coders aren't in control the way they once were, when they fed machines explicit coded instructions. Now they have to train machines endowed with a neural network. If you want to teach a neural network to recognize cats, for instance, you show it a large number of photos and videos of cats—and eventually it figures things out by itself.
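
Here’s a toy sketch of what "training" means, in Python with numpy (my illustration, nowhere near the scale of the systems the article describes): a single sigmoid "neuron" learns to separate two clusters of feature vectors, stand-ins for cat and non-cat images, purely from labeled examples. Nobody writes a rule saying where the boundary goes:

import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+1.0, 1.0, (100, 2)),   # features of "cat" examples
               rng.normal(-1.0, 1.0, (100, 2))])  # features of "not cat"
y = np.array([1.0] * 100 + [0.0] * 100)           # the labels we supply

w, b = np.zeros(2), 0.0
for _ in range(500):                        # gradient descent on log loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # the neuron's current predictions
    w -= 0.1 * X.T @ (p - y) / len(y)       # nudge weights to reduce errors
    b -= 0.1 * np.mean(p - y)

print(f"training accuracy: {np.mean((p > 0.5) == y):.0%}")  # around 90%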

"With machine learning, the engineer never knows precisely how the computer learns or accomplishes its tasks. The neural network’s operations are largely opaque and inscrutable." It's very much like a black box.

The author describes the way programmers interact with neural networks through the metaphor of the "mysterious relationship" between parent and child, or between a dog trainer and a pet.

According to Silicon Valley veteran Andy Rubin, “After a neural network learns how to do speech recognition, a programmer can’t go in and look at it and see how that happened. It’s just like your brain. You can’t cut your head off and see what you’re thinking.”

In Rubin's view, the world of coding "is coming to an end."

Post has attachment
Changes in transportation and communication that started in 1920 became fundamental parts of daily life half a century later.

Air travel was a perilous, uncomfortable endeavor in 1920. Charles Lindbergh did not cross the Atlantic until 1927 and many died attempting similar feats. By 1970 jumbo jets connected major cities around the world and were quite safe. Indeed, in many ways flight in 1970 was more pleasant than today, with no security lines and larger, more comfortable seats in coach class — albeit at a much higher price than today.

Traveling from the West Coast to the East Coast went from being a multi-day affair by train to a trip made in less than a single day, for those who could afford it.

By 1970, cars were comfortable, with options like radios and air-conditioning. They were driven on smooth, safe surfaces on the interstate highway system, most of which had been built by 1972.

Some of the biggest changes to everyday life since 1970 have been around information and entertainment. The cliché about TV going from three channels a generation ago to hundreds actually understates it and doesn't even take into account the internet.

Post has attachment
A study published in the latest issue of the journal Journalism takes a look at computer-generated news articles.

Researchers at LMU, a university in Munich, Germany, designed an experiment in which news articles were rated in categories such as readability, credibility and journalistic expertise. Subjects were asked to read selected articles and to give them a grade.

Each article for review came with a note that indicated whether the item was written by a journalist or a computer program. Many prominent media outlets regularly publish articles written by computer software. In a twist built into the experimental design, the subjects were sometimes misled as to who the actual author was (human or machine). The study found that readers (the experimental subjects) rated texts generated by algorithms as more credible than texts written by journalists. This was true no matter who the readers believed the author to be.

The finding that computer-generated texts were consistently rated as more trustworthy surprised the LMU researchers. As a possible explanation, one of the researchers said "The automatically generated texts are full of facts and figures—and the figures are listed to two decimal places. We believe that this impression of precision strongly contributes to the perception that they are more trustworthy."

The subjects were found to have some bias in favor of articles attributed to journalists. "Articles which the participants believed to have been written by journalists were consistently given higher marks […] than those that were flagged as computer-generated—even in cases where the real 'author' was in fact a computer."

One of the LMU researchers took a stab at why readers "always rated articles attributed to journalists more favorably" even when the attribution was false. "Readers' expectations differ depending on whether they believe the text to have been written by a person or a machine, and […] this preconception influences their perception of the text[.]"

Post has attachment
There are those who like to point out that artificial intelligence is not a technology. But it might be on its way to becoming one. Modern operating systems respond appropriately to many voice commands including "schedule an appointment." Google increasingly delivers built-in AI services on its platform. Facebook is jumping on the bandwagon. Amazon meanwhile sells AI like electricity, a metered utility service.

Investment in AI startups is booming: it "reached $310 million in 2015, almost a seven-fold increase in five years."

"Machine intelligence is also evolving to the point where it can be used by more people to do more things." Deep learning machines allow a small team to "devise complex applications with little expertise in a given field. The hard part may be figuring out how to make money." David Malkin's company "intends to help Japanese schools grade papers—a prosaic exercise that may change the game in a country where tests are still handwritten."

Unlike typical software programs built around rigid rules (algorithms), deep-learning AI is modeled on how humans process information. Both humans and the new AI machines can figure out the context of new information and arrive at decisions based on stored information. The traditional approach to software can't handle certain kinds of tasks, like recognizing spoken language or interpreting images.

Things are changing with rapid advances in machine learning. According to David Malkin, "Now you can be a reasonably smart guy and make useful stuff. Going forward, it will be more about using imagination to apply this to real business situations."

David Malkin has a Ph.D. in machine learning. He is one of a team of four engineers with almost zero knowledge of Japanese who created software, in just a few months, that can decipher handwriting in the Japanese language.

The Japanese writing system is generally considered to be the most complicated in use anywhere in the world. It consists of three character sets (or scripts), one of which is used for foreign words. Almost all Japanese sentences contain a mixture of the two main scripts, and in a few instances a single word contains all three.

Most Japanese words are written in a script called Kanji, which has several thousand characters. Each can have a range of meanings, and most have more than one pronunciation; it all depends on context. There are no spaces between words in written Japanese. School students need to learn over 2,000 Kanji characters, which make up about 95% of the characters used in written text.
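
For a feel of what the software is up against, here’s a small Python illustration (mine, using Unicode ranges; it is not the team’s actual method) of how a single short sentence mixes all three scripts:

def script_of(ch):
    # Classify a character by Unicode block. Rough: ignores punctuation,
    # Latin letters, and the rarer ideograph extension blocks.
    code = ord(ch)
    if 0x3040 <= code <= 0x309F:
        return "hiragana"
    if 0x30A0 <= code <= 0x30FF:
        return "katakana"
    if 0x4E00 <= code <= 0x9FFF:
        return "kanji"
    return "other"

# "The cat is cute": a kanji noun, hiragana grammar, a katakana loanword.
print([script_of(c) for c in "猫はキュートだ"])
# ['kanji', 'hiragana', 'katakana', 'katakana', 'katakana', 'katakana', 'hiragana']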

On a related note, a project director at Fujitsu Laboratories says "Deep learning for handwritten Chinese character recognition is already catching up to human capabilities and will probably eclipse them."

Post has attachment
They're coming for the rest of us. At first, it was the world champions.

The author, a former world champion, commiserates and welcomes another former champion to an elite club.

It’s the Future, and they face the reality that their own individual talent, the thing that’s made them special, is no longer so special. The best human talent is being replaced by machine intelligence. Lee Sedol, the latest victim, described himself as “very surprised,” and then “in shock” and “quite speechless.”

This new opponent, unlike all the other competitors the former champions faced in the past, can never be overconfident or become intimidated. There’s "a disorienting, airless vibe" to facing this type of challenge. There’s "no way to play it psychologically," because it has no feelings, no id, no ego.

The well-financed tech "labs full of anonymous nerds" are arrayed against us. After they're done making examples of the world champions, they're coming for the rest of us.

Post has attachment
The article identifies four economic trends that drive businesses to develop better and cheaper products and services.

1. Downward price pressures

2. Entrepreneurs can use the platforms of digital service companies to sell products and services

3. Platform operators leverage their own platform to create new services

4. The rise of zero marginal cost products and services

The authors cite convincing examples of free and cheap services that disrupted companies to the point of extinction.

The article concludes with the prescription in the title. If you want to succeed, make products that are better and cheaper.

Post has attachment
If this isn’t artificial intelligence, what is?

A neural network is a way of "structuring a computer so that it looks like a cartoon of the brain, comprised of neuron-like nodes connected together in a web." Each node performs a very basic function, but collectively they can tackle difficult problems. More importantly, with the right algorithms, they can be taught.
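
A minimal sketch of that cartoon, in Python with numpy (mine, for illustration): each node weights its inputs, sums them, and squashes the result, and the "web" is just layers of these feeding one another.

import numpy as np

def layer(inputs, weights, biases):
    # Every node in the layer computes a weighted sum of its inputs and
    # squashes it to between 0 and 1 -- the "very basic function."
    return 1.0 / (1.0 + np.exp(-(weights @ inputs + biases)))

x = np.array([0.5, -1.2, 3.0])                        # three input values
hidden = layer(x, np.full((4, 3), 0.1), np.zeros(4))  # four hidden nodes
output = layer(hidden, np.full((1, 4), 0.1), np.zeros(1))  # one output node
print(output)  # a single number between 0 and 1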

One of the difficulties of using the term artificial intelligence is how tricky it is to define. As soon as machines have conquered a task that previously only humans could do — whether that’s playing chess or recognizing faces — then it’s no longer considered to be a mark of intelligence. As one computer scientist put it: "Intelligence is whatever machines haven't done yet."

Computers aren’t replicating human intelligence. "When we say the neural network is like the brain it’s not true." "It’s not true in the same way that airplanes aren’t like birds. They don’t flap their wings, they don’t have feathers or muscles."

If we do create intelligence, it "won’t be like human intelligence or animal intelligence." It’s very difficult for us to imagine, for example, an intelligent entity that does not have the impulse towards self-preservation.