This post is 100% speculative. I am struck by the apparent contradiction between two facts.

1) In software circles we consider it a truism that any sizable, complex software system can never be 100% debugged. Even with infinite time and effort, it seems preposterous to claim that all the bugs could be found. Think of Linux, a word processor, an avionics app, a web browser, a health care exchange; anything non-trivial. If Microsoft had adopted a policy in 1981 of perfecting MS-DOS before moving ahead, we would still be stuck there.

2) In hardware circles we consider apparent perfection the norm. I'm thinking of devices such as CPUs. I remember a time in the 90s when bugs in Pentium chips made lots of headlines. To me, the fact that the discovery of a bug makes headlines proves how rare such bugs are. Yet devices like CPUs are essentially software written in an alphabet of resistors, transistors, wires, capacitors, and so on. They are apparently debugged completely, using simulations that act as interpreters for programs written in this circuit-alphabet language.

So here's the speculation. If we restricted ourselves to a very simple "machine language" alphabet analogous to resistors, transistors, wires, capacitors, and so on, could we design large, complex software systems that are much more bug-free than traditional software? In other words, I'm suggesting that the richness of expression in our programming languages is what leads to our inability to perfect our applications. It is interesting to note that a language of resistors, transistors, and so on has no jump operation, no arrays, no labels, although that language can be used to create a CPU that has all of these things. See what I'm saying? We can perfect the software that implements a CPU (or some other chip), but we can't perfect the software that the CPU (or chip) executes. If it isn't the programming paradigm that separates the two, then what is?
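
As a concrete (if toy) illustration of what programming in such a restricted alphabet might look like, here is a minimal sketch in Python. It is purely my own illustration of the idea, not anything standard: every "operation" is a gate built from NAND, the whole thing is pure dataflow with no jumps, no arrays, and no labels, and it is small enough to verify by exhaustive simulation, which is roughly how hardware approaches apparent perfection.

    from itertools import product

    # The entire "alphabet": a single primitive gate.
    def nand(a, b):
        return not (a and b)

    # Every other gate is composed from NAND, the way richer chips
    # are built up from a tiny set of primitives.
    def inv(a):
        return nand(a, a)

    def and_(a, b):
        return inv(nand(a, b))

    def or_(a, b):
        return nand(inv(a), inv(b))

    def xor(a, b):
        return and_(or_(a, b), nand(a, b))

    # A half adder and full adder: pure dataflow, no control flow at all.
    def half_adder(a, b):
        return xor(a, b), and_(a, b)      # (sum, carry)

    def full_adder(a, b, carry_in):
        s1, c1 = half_adder(a, b)
        s2, c2 = half_adder(s1, carry_in)
        return s2, or_(c1, c2)            # (sum, carry_out)

    # Because the "program" is finite combinational logic, we can simulate
    # every possible input, the way chip designers exhaustively test small blocks.
    for a, b, c in product([False, True], repeat=3):
        s, carry = full_adder(a, b, c)
        assert s + 2 * carry == a + b + c  # booleans count as 0/1 here

    print("full adder verified over all 8 input combinations")

Of course, real chips are designed in hardware description languages and are far too large to check input by input, but the flavor is the point: when the language gives you nothing but wiring, the program is a fixed structure you can simulate, rather than an open-ended behavior you can only sample.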