Suppose our program is doing something in a particular way. It works OK, or it works most of the time, but we've figured out a way to make it work better, or work more of the time. It seems like the standard thing to do is to replace:
try { old implementation }
except { fall flat on our face }

with:

try { new implementation }
except { fall flat on our face }

What occurs to me instead is:
try { new implementation }
except
{
log that there's apparently a problem with the new implementation;
try { old implementation } // we know this used to work!
except { NOW fall flat on our face! }
}

What am I missing? Why is code that used to be considered important and reliable constantly scrapped and relegated to somewhere deep in a heap of old dead code that can only be resurrected by human intervention, just because we thought of something a little faster or broader or newer or cleverer?
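A minimal sketch of this fallback pattern in Python; the names fetch_data_v1 and fetch_data_v2 are hypothetical stand-ins for the old and new implementations:

```python
import logging

def fetch_data_v1(key):
    # old implementation: slower, but known to work
    return {"key": key, "source": "v1"}

def fetch_data_v2(key):
    # new implementation: faster, but less battle-tested
    if not isinstance(key, str):
        raise TypeError("v2 only handles string keys")
    return {"key": key, "source": "v2"}

def fetch_data(key):
    try:
        return fetch_data_v2(key)      # try the new code first
    except Exception:
        # log that there's apparently a problem with the new implementation
        logging.exception("new implementation failed; falling back to old")
        # old code: we know this used to work!
        # if THIS also raises, NOW we fall flat on our face
        return fetch_data_v1(key)
```

The old code stays live as a safety net rather than being deleted, and the log tells you the new code is misbehaving without taking the program down.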
Interesting question. My first thought is that most code does not look like that at all. That is, there is no try/catch structure to it, so you would have to introduce it artificially, which may work in some cases, but probably not most. But that's just my first thought; there may be something to your suggestion.
Thanks for your comment, Daniel. I'll have to think about how to express my idea more clearly.
I've thought of a few different ways of keeping old code around in a latent state. The try/except pattern I described here is one example. Here's another: an expensive but one-time initialization, where various strategies are tested for validity and performance, and the one that actually works and runs best at the moment is selected as the one that will be used.
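That selection-at-startup idea might be sketched like this in Python; the candidate strategies are toy stand-ins, and timing each one against a trusted reference answer is just one possible way to pick a winner:

```python
import time

def strategy_sum_loop(xs):
    # old, trusted strategy: used as the reference for correctness
    total = 0
    for x in xs:
        total += x
    return total

def strategy_sum_builtin(xs):
    # alternative strategy being auditioned
    return sum(xs)

CANDIDATES = [strategy_sum_builtin, strategy_sum_loop]

def pick_best(candidates, sample):
    """One-time initialization: keep the fastest candidate that gives
    the right answer; failing candidates are skipped, not deleted."""
    expected = strategy_sum_loop(sample)
    best, best_time = None, float("inf")
    for fn in candidates:
        try:
            start = time.perf_counter()
            result = fn(sample)
            elapsed = time.perf_counter() - start
        except Exception:
            continue  # this strategy doesn't work right now; leave it latent
        if result == expected and elapsed < best_time:
            best, best_time = fn, elapsed
    return best

chosen = pick_best(CANDIDATES, list(range(1000)))
```

The losing strategies stay in the candidate list, so a later run (or a re-initialization) can select them again if circumstances change.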
I've also thought, for instance, about artificial life and other adaptive systems, where different versions of a piece of code are the alleles that are selected between. I've had a lot of crazy ideas, and I'm looking forward to trying them out and finding out why they don't actually work how I imagined they would! :D