I'm not a fan of OpenSSL myself, but the author's conclusions don't follow from his premise. Having 300 bugs in a 300 KLOC program works out to roughly 1 bug per KLOC, an exceptionally good figure of merit that most other software products can only aspire to.
Moreover, it's not clear to me why killing OpenSSL would benefit anyone. If we do, doesn't it follow that all-new code will result, which implies the same 1-bug-per-KLOC figure of merit will persist, and that's the absolute best case?
I'm a Quality Engineer by trade; I study this stuff in "the real world." I've used all manner of development processes to help ensure software is built with the highest possible quality. The most effective tool I've used is called "Cleanroom Software Engineering," and I can say unequivocally that it works as advertised. Even so, it isn't a perfect solution: Cleanroom relies on formal or semi-formal proofs of code, which is itself an error-prone process when humans conduct it, and its QA testing is probabilistic, with weights adjusted toward the most frequent uses in the field.
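For readers unfamiliar with Cleanroom's statistical testing step, here is a minimal sketch in Python of usage-model-weighted test selection. The operation names and weights are entirely hypothetical, chosen only to illustrate the idea.

```python
import random

# Hypothetical usage model: weights approximate how often each
# operation occurs in the field, not how "important" it seems.
USAGE_MODEL = {
    "read_record": 0.70,    # most frequent field operation
    "write_record": 0.25,
    "rebuild_index": 0.05,  # rare, so it gets few test runs
}

def draw_test_cases(n, rng=random):
    """Sample n operations to test, weighted by field usage."""
    ops = list(USAGE_MODEL)
    weights = [USAGE_MODEL[op] for op in ops]
    return rng.choices(ops, weights=weights, k=n)

cases = draw_test_cases(1000, random.Random(0))
# Rarely-exercised paths receive proportionally little coverage,
# which is precisely why statistical testing can't find every bug.
```

The point of the sketch: test effort tracks expected field usage, so defects in rare code paths can survive even a rigorous Cleanroom campaign.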
That means, frankly, that the very development process used to keep astronauts alive aboard the Space Shuttle through lift-off, orbital maneuvers, AND landing alike STILL cannot cover every possible opportunity for bugs to exist.
The fact is, if you want provably bug-free code, you need formally proven software checked against formally proven specifications. (Surprisingly, it's rather easy to write semi-formally proven code that is about as good as formally proven code checked by an automated tool.) And the specifications are what kill you every time -- they are very often as much in error as anything else in the delivery platform.
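To make the spec-error point concrete, here is a small, purely hypothetical Python example (nothing to do with OpenSSL's actual code): the function below verifiably satisfies its written specification, yet is still wrong, because the real requirement was never written into the spec.

```python
def unique_ids(ids):
    """Return the ids in ascending order.

    Written spec: output is sorted and contains the same elements
    as the input. Unwritten real requirement: output must also be
    duplicate-free. The code is "correct" against the written spec.
    """
    return sorted(ids)

def meets_spec(inp, out):
    """Check exactly what the written spec says, and nothing more."""
    return out == sorted(out) and sorted(inp) == sorted(out)

result = unique_ids([3, 1, 3, 2])
assert meets_spec([3, 1, 3, 2], result)  # verification passes
# But the unstated requirement is violated:
# result is [1, 2, 3, 3], not the intended [1, 2, 3].
```

No amount of proof machinery catches this, because the proof obligation itself (the spec) is the thing that's wrong.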
So even if we do decide to abandon OpenSSL, can the authors of its replacement mathematically prove the correctness of its requirements?
I don't think so.