Interesting little article in the New York Times today about ‘Why Old Technologies are Still Kicking’. Having started out shredding 80-column cards on an IBM 360 a very long time ago, then watched the rise of Unix-based machines and the flood of PCs (of all sizes), I have heard the cry ‘that’s just obsolete, any right-thinking business needs to do…’ (fill in the blank with whatever technology or other practice you like) many times. The article makes the point that this technology has survived simply because it did its job: business decisions about throughput and reliability (and existing investments in working solutions) trumped fashion. Makes sense to me; glad somebody thinks along these lines.
The problem with all computing, even the stuff that hums away in my home office, is that it needs to fulfill some purpose. When I started writing code (Fortran and Cobol), pretty much everything was focused on the job at hand, and creature comforts for the developers were a side issue. In fact, if I recall my history, that is why Unix was developed at Bell Labs: to provide a better working environment for writing code that would run someplace else. It was never designed as a production platform, and even today performance analysis and troubleshooting on it remain pretty much a black art. In working on problems on a number of *nx platforms, my experience is that one needs to understand the whole application stack, from the surface of the disk drive to the user presentation, to have a chance of sorting things out. My oldest, after a short career writing internals at M*sft, is working with Linux-based commercial servers and has muttered about how little critical information is available for debugging production problems. Unix and Linux have been pushed, for economic reasons, into areas they were never intended to fill. It was the fashionable choice… and some businesses will limp along, and most likely pay higher costs overall, as a result of their adherence to fashion. (The latest IBM mainframes, with their commodity hardware core hosting multiple Linux machines, are an interesting exploitation of available technologies; someday I need to dig into this further.)
(As an aside, there was one other casualty of the rise of RISC architectures and non-task-oriented OS designs that most folks have probably missed: synchronous error reporting. At one point it was possible to examine a crash dump or error traceback and point, with some assurance, to a specific place as the source of a fatal error. But as processors got faster by splitting execution into multiple pipeline stages and running more execution streams in parallel, the point of failure became lost; problems might be reported late or not signaled at all. And as the number of software layers grew, the source of problems became still more obscure. If an application messes up now, in all likelihood no one can tell me why or what needs to be done to fix it. The most common ‘solution’ seems to be to remove it (and everything else on the machine) and put stuff back until the problem reappears or doesn’t… do I do this under the full moon, or with my forehead painted blue? In an application generator I wrote a very long time ago, intelligent condition handling was one of the largest and most complicated parts of the code. No wonder it tends to get left out… and no wonder, with computers more ubiquitous than ever, that jokes about their reliability and stability are commonplace.)
Most of the machines running downstairs are Windows, which is another interesting story. Back in the early days of PCs, IBM and Microsoft worked together to develop an improved operating system. Microsoft hired David Cutler and some of the other DEC folks who had developed VMS (my personal favorite as an OS), and, surprise, surprise, the result came out looking a lot like VMS. When the partners fell out, IBM had OS/2 and Microsoft had Windows NT, and a lot of the internals had names and purposes similar to their ancestor’s. My first NT machine was wonderful: applications could misbehave without affecting things running alongside. There was one small difference, though: until recently, Intel processors did not have the process-isolation features of the VAX processors, so isolating different parts of memory was very much sleight of hand and not as absolute as on VMS. This is most exposed in what has been called DLL-hell, where segments of application code are loaded into a common memory pool and can clobber each other and user applications. Like much of VMS, the original isolation benefits of loadable code segments have been lost. But I digress (again). The original design objective was a desktop operating system to serve the needs of one person, but it has been hammered, stretched and prodded into an all-encompassing OS servicing (?) the needs of multiple business users. The single-user origins are still exposed (perhaps the subject of a future rant) in the multiplicity of boxes most solutions entail. And the advertising tells us that all businesses must embrace it to succeed.
In the end it all comes down to the eternal question: what are we trying to accomplish? (Or, what was that point again?) The technologies that have survived have withstood the test of time and use… they just do their job (which is, after all, what we are paying for). Mainframes and Cobol just do their jobs. It may not be fashionable, but it sure is successful. The stuff that continues to change (and sometimes evolve, we hope) is still trying to fit in and become the ‘fittest’. (Can anyone tell me the real reasons there are thousands of computer languages, with more every day?) But as has been said here before, when one hears the challenge of ‘why are you wasting your time/money on that obsolete … when you should be using …’, one has to apply the test of ‘follow the money’. Precisely who benefits from bending to the pressures of fashion when business profitability is at stake? And how much will be sacrificed to be fashionable?