We've had a rash of health issues and injuries hit our family at a bad time, and we need some help. I'm asking that people buy my books, so you can help out and still get something in return.
If you like original premises in Science Fiction, I think you'll enjoy the story. These are the first two books in a three-book series, and the third is currently being written.
The Kindle versions are available at:
The Print versions are available at:
If you can't buy one and still want to help, share this post, write a review on the site if you've read it, or tell a friend about it. Anything you do will be much appreciated.
Signed copies are also available for local pick up and shipping from here in the hills of Arkansas.
Thanks for your support!
Open Hangouts → Settings → SMS → disable Auto Retrieve MMS. h/t
Definitely need to get Craig Ferguson on the show now. ;-)
Thanks for this!
It appears that the cause of the SpaceShipTwo crash was precisely of this sort: the designers never considered the possibility that a particular switch might be flipped at the wrong time. In this case, it was flipped only a few seconds too soon, at Mach 0.8 instead of Mach 1.4. (This was under rocket power, when acceleration builds quickly.) That caused the tail system to unlock too soon, be ripped free by the acceleration, and destroy the spacecraft, killing the co-pilot and severely injuring the pilot.
Scaled Composites' design philosophy of "relying on human skill instead of computers" here reeks of test pilots' overconfidence: the pilots are so good that they would never make a mistake. But at these speeds, under these g-forces, under these stresses, and tested repeatedly, it's all too easy for an error to happen.
There are a few design principles which apply here.
(1) It should not be easy to do something catastrophic. There are only a few circumstances under which it is safe for the feathers to unlock, for example, and those are easy to detect based on the flight profile; at any other time, the system should refuse to unlock them unless the operator gives a confirmatory "yes, I really mean that" signal.
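Principle (1) can be sketched in a few lines. This is a minimal illustration, not the actual SpaceShipTwo avionics: the function name, the confirmation flag, and the use of Mach number alone as the safety check are all assumptions; only the Mach 0.8 / 1.4 figures come from the accident account above.

```python
SAFE_UNLOCK_MACH = 1.4  # speed above which unlocking is safe, per the flight profile

def request_feather_unlock(mach, operator_confirmed=False):
    """Return True if the unlock is performed, False if the system refuses.

    The hazardous action goes through only when the flight profile says it
    is safe, or when the operator gives an explicit "yes, I really mean that".
    """
    if mach >= SAFE_UNLOCK_MACH:
        return True  # flight profile says unlock is safe
    if operator_confirmed:
        return True  # explicit confirmatory override from the operator
    return False  # refuse: unlocking now could be catastrophic
```

With a check like this, flipping the switch at Mach 0.8 does nothing unless the pilot deliberately overrides it, turning a momentary slip into a non-event.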
(2) Mechanical tasks that can lead to disaster are a bad idea. Humans have limited bandwidth to process things: while our brain's vision center is enormously powerful, our conscious mind's ability to think things through works at language speed, a few ideas per second. Here, a human was occupied with an essentially mechanical task: unlocking a switch at a particular, precise time. That requires the pilot to pay attention, time something accurately, and flip a switch, at a moment when he should simply be watching for emergencies. Since the time of unlock is already known long before takeoff, a better design would be for the unlock to happen automatically at the right time -- unless the risks of having an automatic unlocker (perhaps a reliability issue, or a complex part prone to failure) exceed the benefits of removing the manual task.
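The automatic-unlock alternative can be sketched as one tick of a control loop. Again this is an illustration under assumed names, not the real vehicle's software; the only figure taken from the text is the Mach 1.4 unlock point.

```python
PLANNED_UNLOCK_MACH = 1.4  # known long before takeoff

def autopilot_step(mach, feathers_locked):
    """One control-loop tick: return the new lock state.

    The pre-planned, purely mechanical action happens automatically at the
    right condition, leaving the human free to watch for emergencies.
    """
    if feathers_locked and mach >= PLANNED_UNLOCK_MACH:
        return False  # unlock automatically at the planned point
    return feathers_locked  # otherwise leave the state unchanged
```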
What's important to learn from this accident is that the error isn't specific to that one mechanism: this kind of review needs to be applied across the entire design of the system. Every single potential or scheduled human action needs to be examined in this way.
An excellent perspective on this comes from James Mahaffey's book Atomic Accidents, a catalogue of things that have gone horribly wrong. In the analysis, you see repeatedly that once designs progressed beyond the initial experimental "you're doing WHAT?!" stage, almost all accidents come from humans pushing the wrong button at the wrong time.
Generally, good practice looks like:
(A) Have clear status indicators so that a human can tell, at a glance, the current status of the system, and if anything is in an anomalous state.
(B) Have "deep status" indicators that let a human understand the full state of some part of the system, so that if something is registering an anomaly, they can figure out what it is.
(C) Have a system of manual controls for the components. Then look at the flows of operation, and when there is a sequence which can be automated, build an automation system on top of those manual controls. (That way, if the automation fails or is wrong for any reason, you can fall back to manual operation.)
(D) The system's general behavior should be "run yourself on an autonomous schedule. When it looks like the situation may be going beyond the system's abilities to deal with on its own -- e.g., an anomaly whose mitigation isn't something that's been automated -- alert a human."
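Points (C) and (D) together can be sketched as automation layered on manual primitives, with anything the automation can't resolve escalated to a human. Everything here (valves, a purge sequence, the alert list) is a made-up example of the pattern, not any real system.

```python
class Valve:
    """Manual control primitive: a human or the automation can open it."""
    def __init__(self, stuck=False):
        self.stuck = stuck  # simulate a hardware fault
        self.open = False

    def set_open(self, state):
        if not self.stuck:  # a stuck valve ignores commands
            self.open = state

def automated_purge(valves, alerts):
    """Automated sequence built on top of the same manual controls (C).

    Per (D), an anomaly the automation has no mitigation for is not
    handled silently -- it is escalated to a human.
    """
    for v in valves:
        v.set_open(True)
    stuck = [i for i, v in enumerate(valves) if not v.open]
    if stuck:
        alerts.append(f"valves {stuck} did not open: human attention needed")
```

Because the automation drives the same `set_open` calls a human would use, an operator can always drop back to manual operation if the sequence misbehaves.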
The job of humans is then to sit there and pay attention, both for any time when the system calls for help, and for any sign that the system may need to call for help and not realize it.
This wasn't about a lack of a backup system: it was about a fundamentally improper view of humans as a component of a critical system.
- Nerdy/Geeky things in general
- Red Pandas (and all things so fluffeh you wanna die!)
- Women's Rights
- Human Rights
- ...and miscellaneous random stuff I feel the world needs to know!
- Monument Valley
William Shatner steals Space Shuttle to search for reborn Leonard Nimoy
An arrest warrant has been issued for Star Trek actor William Shatner, who is reported to have stolen the space shuttle Enterprise.
Dads Who Have Parenting Totally Figured Out ( pics) | Lost Assets
We unearthed a new and fascinating way to save money each winter on cough drops for you DIYers out there. With a little honey, some sugar,..
Bride hacks into website, decodes proprietary 3D model format to save we...
First off, welcome back to blogging, me! So glad you/I could join us, can't wait to see what you/I write and share!! I got married in July t
Computer Engineer Barbie Needs Men To Write Code, Can't Reboot Computer
It's taken the media at large many months to catch on, but Amazon reviewers have been up in arms over a book from Mattel's "I Can Be" Barbie