This article is both a great and a terrible example of writing about technical issues for the public.
The body of the article is the great part. What it's explaining is an idea called "formal verification," where you can construct a mathematical correctness proof that verifies that a piece of code does exactly
what it's supposed to, and nothing else. Formal verification has been around as an idea for decades, but until recently it was nearly useless, because the only specification of "exactly what it's supposed to do" was the entire program itself, which was just as prone to subtle errors. But the past decade has brought major advances, and there are now meaningful ways to specify what a small
piece of code is supposed to do in a concise fashion, and then formally verify that the code does so.
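To make that concrete, here is a toy example of my own (not from the article), written in Lean 4 and assuming a recent toolchain with the `omega` tactic. The specification is the two theorems: they state concisely what the function must do, and the proof checker verifies that the code does it for every possible input, not just the inputs a test suite happens to try.

```lean
-- A tiny function we want to trust:
def myMax (a b : Nat) : Nat := if a ≤ b then b else a

-- Its specification: the result is at least as large as each input.
-- These proofs are checked mechanically and cover all inputs at once.
theorem myMax_ge_left (a b : Nat) : a ≤ myMax a b := by
  unfold myMax; split <;> omega

theorem myMax_ge_right (a b : Nat) : b ≤ myMax a b := by
  unfold myMax; split <;> omega
```

Real verification efforts, like the one in the article, apply the same idea to far bigger specifications: "this kernel never lets one process read another's memory," say, rather than "this function returns the larger number."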
If you apply this technique to small but critical pieces of code, like the network protocols which underpin the Internet, the central control logic of an armed drone, or the control code of a cardiac pacemaker, you can get a tremendous improvement in system reliability and security, precisely in the systems where it matters most.
The terrible part is what was stapled onto this article by some overly-excited editor: a headline reading "Hacker-Proof Code Confirmed." To anyone who works in computer science, this headline is approximately as reassuring as "Iceberg-Proof Ships Confirmed" would have been in 1913. It is, in fact, palpable nonsense, and misrepresents what happened here.
The experiment here was a very important one: a drone was built whose central control logic was verified using these methods, and then a very skilled attacking team was given network access to the system and left free to use any number of methods to compromise it. Their failure is of great practical significance, because it means that the security of the system has indeed proved robust so far.
But the headline might lead you to believe that this system is somehow "hacker-proof," or worse yet, that this technique might make the Internet as a whole "hacker-proof."
The first rule of security is that a chain is only as strong as its weakest link, and a wall is only as strong as its weakest point. You can have a perfectly unpickable lock, but if someone can simply kick in the door or climb in the window, that doesn't make your system secure. Formally verified code is a component
of a secure system, but it isn't a secure system in its own right.
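This is worth seeing concretely. The sketch below is entirely hypothetical (invented names, invented protocol, nothing from the article): the "verified" function really does satisfy its proved property, and the system still does exactly what the attacker wants, because the property never covered authentication.

```python
def apply_command(alt_m: float) -> float:
    """Pretend this is the formally verified core: it provably clamps
    any commanded altitude into the safe envelope [0, 120] meters."""
    return min(max(alt_m, 0.0), 120.0)

def handle_packet(packet: dict) -> float:
    """Unverified glue code: nothing here checks WHO sent the packet.
    An attacker's "descend to 0" command is perfectly in-spec."""
    return apply_command(float(packet["target_alt"]))

spoofed = {"target_alt": 0.0}   # forged packet; no signature to check
print(handle_packet(spoofed))   # 0.0 -- every proved property held,
                                # and the drone lands anyway
```

The proof did its job: no input can push the altitude outside the envelope. The hole is in everything the specification never mentioned.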
Here are some examples of things which could nonetheless make an armed drone with this code vulnerable:
* The part of the code which isn't
formally verified could be vulnerable, and someone could use it to compromise the behavior of the vehicle in some other important way -- say, to cause a gradual oil leak which makes the system seize up. This especially includes the connections between the formally verified pieces and the rest of the system.
* The hardware itself could be compromised, with backdoors in the behavior of the chips. And before you start asking whether chip designs could also be formally verified, take a look at this attack that I posted about a few months ago: https://plus.google.com/+YonatanZunger/posts/ayXVWrFpQus
In that attack, every single thing about the chip's behavior was unchanged -- unless you caused a particular wire in it to flip on and off rapidly enough, at which point for physics reasons which have nothing at all to do with the way the circuit is wired, another wire would magically flip on. You would have to model not the circuit, but the actual physics of the device, down to atomic resolution, to detect this -- and to do that, you would need to input not the physical model you started with, but the actual as-built physics of the thing.
* The signals leading into the system could be compromised. This is how flares distract a heat-seeking missile, but an attack could be either more or less subtle than that. It could range from compromising the drone's control channels over the radio (which means breaking some crypto), to confusing its IFF (friend/foe) detection by emitting false signals yourself, to simply hiding under a tarp where it can't see you.
* The people could be compromised. Nine times out of ten, this is the biggest vulnerability in any system, and it can range from infiltration by hostile actors, to "social engineering" that fools people into doing the wrong thing (look up "CEO email scams" for an example), to the highly effective cryptographic technique known in the field as "rubber-hose cryptanalysis."
(This technique works by finding the person who has the decryption key, and beating them with a rubber hose until they tell it to you. Surprisingly effective, and it has variants which are even more so.)
So whenever you read an article like this, read it with a careful eye. In this case, the meat of the article is quite accurate and interesting -- but the headline is meant to lead you astray. In general, if someone is promising you something that sounds too good to be true, it probably is: when you hear that, stop, think, and talk to people who might know.
h/t +Don McArthur