// Fiction will only become more satisfying as technological progress continues.
> The explosion of information available to scientists has made specialization more necessary than ever, but this comes at the risk of losing sight of relevant research in other disciplines. Psychologists have taken countermeasures by engaging in studies that bridge related disciplines, such as neuroscience, linguistics, artificial intelligence, philosophy, and anthropology. Our point here, however, is that disciplines that ostensibly have little or nothing to do with either psychology or psychiatry—like parasitology, virology, gastroenterology, immunology, and embryology—can and do produce research that is extraordinarily relevant to both. We thus call for more collaboration among scientists who currently appear to live on different planets. We also call for greater exchange of information, or straight collaboration, among health-care professionals. It might even be desirable if some of us become specialists not in any particular profession or research area but in integrating information from different disciplines. We have shown that each of us is a superorganism; but to function at its best, our scientific and professional community should perhaps become a bit more like one, too.
Full text: https://goo.gl/FmL4ND
It appears that the cause of the SpaceShipTwo crash was precisely of this sort: the designers never considered the possibility that a particular switch might be flipped at an incorrect time. In this case, it was flipped only a few seconds too soon, at a speed of Mach 0.8 instead of Mach 1.4. (This was under rocket power, when acceleration is rapid.) That caused the tail system to unlock too soon, be ripped free by aerodynamic forces, and destroy the spacecraft, killing the co-pilot and severely injuring the pilot.
Scaled Composites' design philosophy of "relying on human skill instead of computers" here reeks of test-pilot overconfidence: the pilots are so good that they would never make a mistake. But at these speeds, under these g-forces and stresses, over repeated tests, it is all too easy for an error to happen.
There are a few design principles which apply here.
(1) It should not be easy to do something catastrophic. There are only a few circumstances under which it is safe for the feathers to unlock, for example, and those are easy to detect based on the flight profile; at any other time, the system should refuse to unlock them unless the operator gives a confirmatory "yes, I really mean that" signal.
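Principle (1) amounts to an interlock: the system knows from the flight profile when the action is safe, and refuses it otherwise unless the operator explicitly overrides. Here is a minimal Python sketch of that idea; the `FlightState` class, the Mach 1.4 threshold, and the function names are illustrative assumptions drawn from the numbers above, not Scaled Composites' actual logic.

```python
from dataclasses import dataclass

@dataclass
class FlightState:
    mach: float
    altitude_m: float

def feather_unlock_permitted(state: FlightState) -> bool:
    """Unlocking is aerodynamically safe only in a narrow window;
    here, a (hypothetical) rule of 'past the transonic region'."""
    return state.mach >= 1.4

def request_feather_unlock(state: FlightState, operator_confirms: bool = False) -> bool:
    """Refuse a potentially catastrophic action outside the safe window,
    unless the operator gives a 'yes, I really mean that' signal."""
    if feather_unlock_permitted(state):
        return True
    # Outside the safe window: only an explicit override proceeds.
    # A real system would log and alarm on this path.
    return operator_confirms

# At Mach 0.8 (the accident condition), a plain unlock request is refused:
assert request_feather_unlock(FlightState(mach=0.8, altitude_m=15000)) is False
# An explicit override proceeds; past Mach 1.4, unlock is allowed directly:
assert request_feather_unlock(FlightState(mach=0.8, altitude_m=15000), operator_confirms=True)
assert request_feather_unlock(FlightState(mach=1.5, altitude_m=20000))
```

The key design choice is that the default answer outside the safe window is "no": a mistimed flip becomes a non-event rather than a disaster.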
(2) Mechanical tasks that can lead to disaster are a bad idea. Humans have limited bandwidth for processing: while our brain's vision center is enormously powerful, our conscious mind's ability to think through things works at language speed, a few ideas per second. Here, that scarce attention was spent on the basically mechanical task of flipping an unlock switch at a particular, precise moment. This requires the human to pay attention, time something accurately, and flip a switch, at a time when they should simply be watching for emergencies. Since the time of unlock is already known long before takeoff, a better design would have the unlock happen automatically at the right time -- unless the risks of an automatic unlocker (perhaps a reliability issue, or a complex part prone to failure) exceed the benefits of removing the human from the loop.
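The automatic-unlock idea can be sketched in a few lines: poll the telemetry and fire the action when the known-safe point is reached, so the pilot only monitors. The `mach_profile` generator and the 1.4 threshold are hypothetical stand-ins for real boost-phase telemetry, chosen to match the numbers in the accident description.

```python
import itertools

def mach_profile(start=0.5, step=0.1):
    """Hypothetical boost-phase telemetry: Mach number climbing each tick."""
    m = start
    while True:
        yield round(m, 2)
        m += step

def auto_unlock(profile, threshold=1.4, max_ticks=100):
    """Command the unlock automatically once the known-safe point is
    reached, instead of asking a pilot to time it by hand. Returns the
    Mach number at which the unlock fired, or None if it never did."""
    for mach in itertools.islice(profile, max_ticks):
        if mach >= threshold:
            return mach  # the unlock command would be issued here
    return None

fired_at = auto_unlock(mach_profile())
# The controller waits through Mach 0.8 and fires exactly at 1.4.
```

Note that the human is not removed from the loop entirely; they are removed from the *timing* task, which is the part machines do better.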
What's important to learn from this accident is that this error isn't specific to that one mechanism: this is an approach which needs to be taken across the entire design of the system. Every single potential or scheduled human action needs to be reviewed in this way.
An excellent perspective on this comes from James Mahaffey's book Atomic Accidents, a catalogue of things that have gone horribly wrong. In his analysis, you see repeatedly that once designs progressed beyond the initial experimental "you're doing WHAT?!" stage, almost all accidents came from humans pushing the wrong button at the wrong time.
Generally, good practice looks like:
(A) Have clear status indicators so that a human can tell, at a glance, the current status of the system, and if anything is in an anomalous state.
(B) Have "deep status" indicators that let a human understand the full state of some part of the system, so that if something is registering an anomaly, they can figure out what it is.
(C) Have a system of manual controls for the components. Then look at the flows of operation, and when there is a sequence which can be automated, build an automation system on top of those manual controls. (So that if automation fails or is incorrect for any reason, you can fall back to manual operation.)
(D) The system's general behavior should be "run yourself on an autonomous schedule. When it looks like the situation may be going beyond the system's abilities to deal with on its own -- e.g., an anomaly whose mitigation isn't something that's been automated -- alert a human."
The job of humans is then to sit there and pay attention, both for any time when the system calls for help, and for any sign that the system may need to call for help and not realize it.
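Principles (C) and (D) together can be sketched as a tiny layered design: direct manual controls at the bottom, a scripted automation on top, and an escalation path to a human when something falls outside the automated playbook. All the class and method names here are invented for illustration.

```python
class ManualControls:
    """Principle (C): every component has a direct manual control,
    so a human can always fall back to hand operation."""
    def __init__(self):
        self.actions = []
    def unlock_feather(self):
        self.actions.append("unlock_feather")

class Automation:
    """An automated sequence layered on top of the manual controls (C),
    which escalates to a human instead of improvising (D)."""
    def __init__(self, controls, alert_human):
        self.controls = controls
        self.alert_human = alert_human
    def step(self, mach, anomaly=None):
        if anomaly is not None:
            # Beyond the automated playbook: call for help (principle D).
            self.alert_human(f"anomaly: {anomaly}")
        elif mach >= 1.4:
            self.controls.unlock_feather()

alerts = []
controls = ManualControls()
auto = Automation(controls, alerts.append)
auto.step(mach=0.8)                          # too early: nothing happens
auto.step(mach=1.5)                          # safe window: automation acts
auto.step(mach=1.6, anomaly="tail flutter")  # out of scope: human alerted
```

Because the automation only ever calls the same manual controls a human would use, disabling it degrades the system to manual operation rather than to nothing.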
This wasn't about the lack of a backup system: this was about a fundamentally improper view of humans as a component of a critical system.
// I'm pretty skeptical of the so-called 'new vitalists'. I think emergent complexity can be given a thoroughly mechanistic treatment, in the style of William Bechtel and other "new mechanists", which I take to be the primary opposition to the new vitalists.
Still, this book looks interesting, and I'll probably read it despite my initial skepticism. Even if they draw the wrong lessons from these examples, they are definitely working on the right problems.
> You are shrewd, skeptical and tranquil.
You are philosophical: you are open to and intrigued by new ideas and love to explore them. You are calm under pressure: you handle unexpected events calmly and effectively. And you are unstructured: you do not make a lot of time for organization in your daily life.
You are motivated to seek out experiences that provide a strong feeling of prestige.
You are relatively unconcerned with both tradition and achieving success. You care more about making your own path than following what others have done. And you make decisions with little regard for how they show off your talents.
// Basically dead on.
The numeric breakdown invites a separate criticism: does this actually work? The numbers might, for example, be insufficiently granular (does everyone get > 90% on most, or even just some set, of desirable traits?). Watson adds another layer of indirection by attempting to learn weights for a predictive model, so it is not directly comparable to industry-standard methods. But even then, the state of the art is not counting words, or even well-crafted self-answered surveys, but rather well-crafted surveys answered by those who know the individual well (sorry to repeat myself).
Burn, media, burn! Why we destroy comics, disco records, and TVs
Americans love their media, but they also love to bash it—and not just figuratively. Inside the modern history of disco demolition nights.
Using Smiles (and Frowns) to Teach Robots How to Behave - IEEE Spectrum
Japanese researchers are using a wireless headband that detects smiles and frowns to coach robots how to do tasks
DVICE: The Internet weighs as much as a largish strawberry
philosophy bites: Adina Roskies on Neuroscience and Free Will
Recent research in neuroscience following on from the pioneering work of Benjamin Libet seems to point to a disconcerting conclusion.
Kickstarter Expects To Provide More Funding To The Arts Than NEA
NEW YORK — Kickstarter is having an amazing year, even by the standards of other white hot Web startup companies, and more is yet to come.
How IBM's Deep Thunder delivers "hyper-local" forecasts 3-1/2 days out
IBM's "hyperlocal" weather forecasting system aims to give government agencies and companies an 84-hour view into the future.
NYT: Google to sell Android-based heads-up display glasses this year
It's not the first time that rumors have surfaced of Google working on some heads-up display glasses.