
Computer programming is a form of applied mathematics, and so there's a strong yearning to prove the components of programs correct; in terms of proving, most software is at the stage of existence proofs only. Unfortunately, there's no justification for not testing programs, even if they be formally proven, at least under current systems. I've experienced the horror of finding a small, yet still serious, flaw in a development tool of my design. The only way to trust a program written using an untrusted tool is to get multiple perspectives, and seeing the program operate constitutes one such view. Even were the tool and its results proven and trusted, there are no guarantees for the system over which they stand; unreliable bases and unreliable hardware are one more cause for required testing.

Even disregarding these issues, there's still no justification for not testing, especially when testing be cheap. I've written programs I knew to be correct, only to have them fail upon testing, whereupon I noticed I'd skillfully overlooked an error in inputting them; one example was writing trivial machine code, watching it fail, and noticing on closer inspection that an incorrect variant of an instruction had been used, its replacement giving the correct and expected result. Such a base mistake is haunting.

Testing which has been made particularly easy and cheap can serve as a shortcut to careful thought. It occasionally works to mindlessly manipulate a program until it seems to give proper results, attempting only then to think through what it does more carefully, although this usually fails.

Even in purer mathematics, finding a counterexample is a particularly easy method of disproof. I know of schools of mathematics which had erred for years, using incorrect proofs, and more testing would've revealed the egregious mistakes earlier. Perhaps testing is practically required for every human endeavour, although it goes against ideals to test that which is thought to be known.
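To make this concrete, here's a minimal sketch of my own, not drawn from any particular school mentioned above: Fermat conjectured that every number of the form F(n) = 2^(2^n) + 1 is prime, and Euler disproved it nearly a century later by factoring F(5). A mechanical test of the small cases finds the counterexample in moments.

```python
def fermat(n):
    """The nth Fermat number, 2**(2**n) + 1."""
    return 2 ** (2 ** n) + 1

def smallest_factor(m):
    """Smallest factor of m greater than one, by trial division."""
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m

# Fermat's conjecture survives n = 0 through 4, then fails:
# F(5) = 4294967297 = 641 * 6700417, Euler's counterexample.
for n in range(6):
    f = fermat(n)
    p = smallest_factor(f)
    print(f"F({n}) = {f}:", "prime" if p == f else f"factor {p}")
```

The search is mindless, yet it settles in seconds a claim that stood untested for decades; that is the whole argument for cheap testing in miniature.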

Ultimately, I'd prefer to be able to trust my life to my results from the machine, and may tell myself I should be able to, before testing, but I remain wary. It's an affront to human achievement that the wonderful techniques for building machines so precisely are wasted on terrible machine designs, and on even worse software, and that others have been tricked into thinking this acceptable. Note that I'm not an advocate for automated testing as is common; manual testing after careful review is my message.

The only reason systems work as well as they currently do is that they accrue over many years or decades. I don't use what currently passes for ``version control'' software, and the design of treating software as insertions or deletions of lines is queer to me; the common method is to have automated testing run after such modifications, with failure resulting in rejection. That this would be considered acceptable is poor, as it's entirely inadequate for finding major flaws that are revealed only in obscure circumstances. One of my later articles will concern the value of entirely rewriting software in pursuit of quality; the inability of other programmers to do anything but poke at or modify work so large no lone person will ever bother understanding it is opposed to this.