Abstract: In the midst of feeling insecure about our electronic communications, we can take solace in the advice that we can "trust the math" behind cryptography. Faced with concerns over the safety of fly-by-wire planes and self-driving cars, we should likewise be able to find solace in the fact that some parts of our safety-critical systems have been proved formally correct: that is, we should be able to "trust the proof". The way formal proofs are built today, however, makes trusting them difficult. While proof assistants are used to build formal proofs, those proofs are often locked to a particular assistant. A formal proof produced by one proof assistant cannot usually be checked and trusted by another proof assistant, nor even by a future version of the same assistant. Thus, one of the components of the scientific method that ensures trust---reproducibility---is missing from proof checking.
The field of proof theory, as initiated by Gentzen in the 1930s, has identified a great deal of structure in formal proofs. Given recent developments in this field, it is now possible to apply what we know about proofs in theory to what we would like to have for proofs in practice. To that end, I will present the Foundational Proof Certificates framework for defining the semantics of proof evidence. Since such definitions are executable, they can be used to build proof checkers capable of checking a wide range of formal proofs. In this scheme, the logic engine that executes such definitions is the only component that needs to be trusted. Since such a logic engine is based on well-known topics in computational logic, anyone can write a logic engine in their choice of programming language and hardware, and thereby build a checker they can trust. I will also survey some of the consequences of defining proofs in this manner: provers can become networked far more richly than they are today, and both libraries and marketplaces of proofs are enabled in a technology-independent fashion.