System scalability — a classic fail (again)

A very long time ago a firm I worked at decided to deploy a new, distributed application to the traders. And in keeping with all the latest ideas the application code was loaded from a common server connected to the trade floor. The idea was simple — code changes would be done to the one copy on the server so anytime anyone signed in they would get the latest and of course greatest…

There was, of course, a small catch in this idea. While loading an application like this from the server is fairly quick for one user, 100 more-or-less simultaneous logons are a different matter. It took hours, and there was serious talk of having some poor clerk come in at 4am to sign on all the workstations so that by the time everyone got in their machine was ready to go. It was, I confess, fun to watch the very smart development team confront the fact (with some help) that their clever idea was actually pretty costly.

From talking to my brother-in-law over the weekend it sounds like the medical records system he is required to use for charting was designed by some of the same folk. Logon is 15-20 minutes (this is a doctor sitting there waiting for the system to respond, remember). Record changes take minutes. The day starts early and ends late due to the ponderousness of the whole application and the multiple layers of signon security built in. Didn’t ask if a chart could be shared — I am almost afraid of the answer.

Problem is that computers are really poor at sharing anything but very good at faking it. Easy to forget, especially in these days of applications with little bits spread all over the landscape. And on local networks, no matter how fast, transfers happen one bit at a time.
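A back-of-envelope calculation shows the shape of the problem. The figures below are assumptions for illustration (the real ones are long gone), but the arithmetic is the point: one logon is quick, a hundred serialized over a shared wire is not.

```python
# Why 100 simultaneous logons from one server take hours while one logon
# is quick. All numbers here are assumed for illustration.
app_size_mb = 30          # assumed size of the application loaded at signon
shared_mbit_s = 4         # assumed effective throughput of a contended 10 Mbit LAN
workstations = 100

one_logon_s = app_size_mb * 8 / shared_mbit_s    # alone on the wire: ~1 minute
all_logons_s = one_logon_s * workstations        # serialized across the floor

print(f"One logon:  {one_logon_s / 60:.0f} min")
print(f"{workstations} logons: {all_logons_s / 3600:.1f} h")
```

Change the assumptions as you like; the total scales linearly with the number of machines, which is exactly what a shared server, like a shared wire, does to you.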

The old and much maligned mainframe systems did one thing right — their design sacrificed pretty much everything that slowed down processing and allowed applications to shovel data through at great rates. But these systems required a real understanding of what was going on to design and operate. Not quite as light and fluffy as today’s GUI (gooey?) development tools, which emphasize simplicity and cute effects over down-to-the-bone functionality.

So we save money on building the code and pat ourselves on the back for what a contemporary, glossy system we have built. And force the highly skilled and not inexpensive people who have to use it to fiddle while waiting for someone’s cleverness to work. Guess this is another case of cost-cutting at the wrong end — somehow it seems to me that lifecycle costs for all those high-priced users would have been a better optimization target than development. But then they did have a few massive fails getting it off the ground to begin with. Glad the sales folk and the well-connected consulting house management made their commissions. Looks like the rest of us will be paying for this for years.

It’s all just Surface anyhow

Like many folks in IT, I have had a long love/hate relationship with Microsoft products. But I have also used many other computing products over the decades. A bit of background — I was introduced to computing when my stepmother, a librarian, facilitated my access to the adult book collections in the CPL. Among the many things my explorations took me to were the early books on computing. Those were the days of computers designed for business that used base-10 fixed-point BCD math and scientific computers that used binary floating point. Memory was spots of charge on storage ‘scopes’, acoustic pulses in mercury delay lines and of course magnetic cores and drums. Hard drives came along later.

My first hands-on was a night school course in Fortran (WATFIV, I recall), submitted to a remote IBM 360 via our school’s card reader/punch and delivered by line printer. There was an online manual, but as it took two boxes of paper to print out, access by students was forbidden. Besides the paper, the line usage would probably have been costly.

Over the years I have used HP 3000 and HP-UX, DECSYSTEM 2060, Tandem, Sun machines of all sizes, Vaxen, PDP-11s, IBM 360, 370, System 38, AS-400, etc. My personal computer for many years was a PDP-11/03, then an 11/23, built originally from a Heathkit and grown from electronic salvage parts into quite the beast. To some extent I miss the old OS build from source — edit the assembler files, then run the build. On the 11/23 updating the OS was an all-day process — you did not want to make mistakes and did not change things very often.

And as for Windows — well, I first used it under PC DOS, when Windows was loaded as an application. When I worked in financial services I had to switch back and forth between an IBM network (AS/400 & System/38), DECnet and Novell. Did this with a command file that rewrote CONFIG.SYS and AUTOEXEC.BAT to change drivers and memory allocations — after a reboot. And had to move applications between machines, lacking the install disks, by copying files and editing the registry subtrees. Liked the idea that NT was modelled on VMS — traces still show in the internals, which I learned about from writing device drivers and solving multi-user lock management problems.

As part of my consulting practice I have had a server-based personal network for many years. My first was BackOffice Server 4.5, quickly replaced by SBS 2003. It was nice running my own Exchange server, local SharePoint and file sharing. I migrated to 2008, then 2011 Essentials and currently 2012 Essentials. At each step some aspects worked better — the server runs central backup for all my machines and itself. And it can FINALLY use and back up volumes larger than 2 TB. That lets me keep my personal media on the server — music, videos and images take a lot of room.

But a lot of stuff has gone downhill. I just finished removing a number of features from the server that I thought would be helpful. Windows Server Update Services has been a feature of my server install since it became available — internet access is always a precious resource. It just made sense to download one copy of the updates and push them out to the half dozen machines here. And it pretty much always worked the same way — enable it, wait for the software to inventory the network and change group policy, start using it. But no more. After a few weeks of trying to get it to work I removed the feature — it was there and active but uninterested in my connected machines. Lord knows why….

Lots of things are like this now. Remote admin tools find and connect to the server but don’t do anything. I get error messages about missing drivers for non-existent devices, and new virtual drives that need to be backed up but don’t exist. Best-practice scans flag critical problems tied to configuration issues that quite frankly only Microsoft internals people would have access to (TCP_Phase_Offset not having the right value is one I recall). Some of these go away as mysteriously as they came. Some do not. Makes me think of a neurotic friend who complains incessantly about this or that but it’s really all about them — and they are not there for you. Be nice if stuff just worked — looks good on paper I am sure but the reality is a bit different. Superficially it is more turnkey than ever but fixing the broken bits is more like editing the old macro source — except there are no comments and what documentation is available reads increasingly like marketing fluff.

Perhaps the disastrous migration experience from Server 2011 Essentials to 2012 Essentials should have been a clue as to what was coming. With an AD forest of five machines I expected the migration to be fairly quick. And yes, I read and followed all the migration prep documentation. But after a day migrating I ended up with a target server that was unusable. What to do? Restore backups on the source server, wipe the target, rinse and repeat. After a few tries I contacted MS Partner support, who ended up connecting me to a group in India. After a few weeks (!) of babysitting their remote dialin and being able to do nothing I threw in the towel and redefined the AD from scratch in the new install. Never found out what the problem was. Odd, because I prefer to stay pretty vanilla in house. If I had been doing this for a client I probably would have been kicked off the job and maybe sued.

Windows 8 and Server 2012 are pretty much the same. I was attracted to the technical features but repelled by the user interface. And for the life of me I cannot comprehend why my server needs an OS directory for custom ring-tones. Or why, with two 17″ screens, the idea of a ‘modern’ application is a full screen with six lines of upper-case 40-point text. What a waste of screen real estate — Windows in my mind has always been about increasing information density, not decreasing it. But the ponderousness of the UI was just too much, 3 steps to do everything — at least PowerShell brings back the old command line, but with far too little usable online help.

Then Microsoft released 8.1. I tried it in a virtual machine and decided that it was pretty slick. So I bought a license and converted my principal PC to dual boot between Win 7 and Win 8.1. Dual screens are a bit weird — they are treated independently rather than as a single desktop spread across two monitors. And there is a duplicate task bar on each screen. I decided in the end that the start menu was not too bad, really, as I had already begun to use command group folders for desktop icons to declutter my Win7 screens. So topologically pretty much the same. And 8.1 finally resumes cleanly — so I can do things, wander away and let my machine go to sleep, come back and spin the trackball to wake it up, and do something else. Much better.

So like a fool I decided to buy a Surface to play with. A Pro was too expensive and the Surface 2 with 8.1 RT and Office was an attractive bundle. With the folding, detachable keyboard cover…

Well, it has turned out much like the 2012 server. Looks pretty and sometimes is very nice. But sleep doesn’t seem to work — if I suspend it the battery consumption acts like I did nothing. And all too often when I try to wake it up it won’t, unless I connect it to external power. The support website gives a half dozen recovery processes for this one. I have tried pretty much all of them. Sometimes they work. Shutting it down and rebooting is more reliable — but not always. External power is sometimes the only way to wake it up — even if the battery says it’s full.

Same weird problems with the keyboard typing cover. Good key feel, built-in illumination, magnetically attached to the device so it can be removed or folded under. But it is erratic and can stop working in mid-sentence. Remove the keyboard and reattach it and sometimes it will start working. But not always. A hard reboot works… so far.

Swipes that should close an application don’t — the form moves, starts to shrink and then pops back. On the PC implementation I am always in ‘desktop’ with START a keyclick away. The Surface is a bit different. One has both ‘modern’ applications that consume the whole screen and ‘desktop’ applications that have the traditional ‘close’ button. There is no way to actually exit a ‘modern’ application — START takes you back to the menu but the application lingers on. And I gather that in future the desktop is going to be removed, so the user aggravation level will be even higher. Meanwhile, my Android tablet (and phone) have a back key that lets me escape from what I am doing.

And the walled-garden approach of only allowing applications bought from the Microsoft store is a nice idea in a corporate environment where control is everything. But for a personal productivity tool it adds cost and complexity for every application developer — and one suspects some hidden costs. Being highly technical, I have looked at the process, but it is just too much hassle and cost for moving a small number of tools to a dubious platform. Essentially every application I use regularly on desktop 8.1 is unavailable for the RT. And the vendors I have asked have no plans to go there, because unlike Android, the potential market is very small. Now I understand why…

But the Surface does do remote desktop very well indeed. So it lives in my briefcase when I go to do support at the local radio station. But as a general purpose tool I am increasingly regretting having spent the money. Flirted with selling it on eBay but there are lots already there and the discounts are deep. And I hardly ever use it for other things as it is just too erratic. I use my phone, Nexus 7 and of course paper to supplement the desktop. My advice to others is to simply avoid it, especially if they need a device to do real work. It could have been great, but between the designed-in constraints and the flaky implementation it is, in my judgement, pretty broken. Too bad…

Just to add to the mirth and merriment, the MAIL, CONTACTS and CALENDAR applications collapsed on both my Surface RT and desktop 8.1 within a few days of each other. The Microsloth online support is full of advice that suggests it’s really your problem — firewall, network stack, antivirus and so forth. The fix is of course ultimately remove and reinstall. Did that on the desktop — it worked for a couple of hours. The problem has been reported since early 2013; no fixes, no acknowledgement. The Windows error log suggests that some common database process has problems — but searching turns up nothing on these errors. Typical. I fell back to Thunderbird on the desktop, then made Win7 the default boot, not 8.1. Too bad I cannot do anything like that on the Surface. At least Outlook.com on Android works.

And who do you complain to? The support line says many words but in the end won’t do anything. And unlike a retail store or normal online merchant, when you buy something from the Microsoft Store you are stuck with it forever. There is no return or exchange mechanism for folks who must, by reason of geography, shop through the Internet. When I bought this thing I got the wrong keycover ($128) and thought, after a few hours on the phone with store support, that I could send it back for the other — but they transferred me to another person who after a few minutes shifted from English to French and stopped understanding me. I suspect it was deliberate.

I have concluded that building a quality product is no longer important. The ‘Surface’ is aptly named in that it is all appearances with spotty substance. That mirrors my experience with Server 2012 Essentials. Some stuff works, some doesn’t. No one really cares anymore — it’s all just posturing, appearances and marketing. Things should be in the Cloud anyhow, they seem to say… regardless of whether that is appropriate, economically viable or even possible. I looked at putting my server backup in the Cloud and concluded that almost $1,000/month (according to the calculator) was not affordable — even when I was making the big consulting bucks, let alone as a thinly financed retiree. And my Internet service has limits, with steep charges for going over my monthly quota of data transfers. And don’t even mention my cell phone data costs.
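A rough sanity check on that quote, with assumed figures (my actual backup size and the calculator’s rates are not in front of me), shows how quickly the number gets there:

```python
# Rough sanity check on the cloud backup quote. Every figure here is an
# assumption for illustration, not the calculator's actual numbers.
backup_tb = 10                   # assumed size of the server backup set
price_per_gb_month = 0.095       # typical object-storage price circa 2013, USD

storage = backup_tb * 1024 * price_per_gb_month
print(f"Storage alone: ${storage:,.0f}/month")   # ~$970/month, before
# bandwidth charges on the cloud side and ISP overage fees on mine.
```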

Problem is that I still make decisions based on function, benefit and affordability — the classical capital budgeting model. After years of working with Microsoft I have concluded that they are drifting towards playing at making products for an increasingly imaginary customer base. At least around here there is still lots of need for the old Small Business Server that provided a lot of functionality in a small, reliable package. Cloud services are great if you have cheap, reliable network service — not here, not really, maybe not ever.

What I have works in a minimal way. I have another server running the free version of VMware ESXi to play with future solutions to my needs. Sure, Microsoft VMs were almost nicer, but they really only want to run Microsoft stuff. If I wanted to load Solaris or Linux I was mostly out of luck. Fortunately there were alternatives.

I guess what I am grumbling about is that computing is becoming more like Facebook — all puffery and superficialities. Hard-core business processing is just not as important as the appearance of doing something. And shiny new stuff is always more important than making the core product reliable. Looks like the end of the road for some vendors as they run out of ideas and try to create new cash cows to sustain their bonuses into the future. From where I sit this is more of an illusion than ever. But then it’s only the ‘Surface’…

Green Lies

Was reading the latest expression of dismay by the IPCC regarding Canada’s climate change un-policy. And the excellent columns by Parker Gallant in Wind Concerns about the Ontario curriculum changes and the educational indoctrination supporting the current ‘green’ viewpoints. Reminds me of something attributed to the Jesuits about getting a young mind and owning it for life… a pity that education hereabouts is more about job training and ideology than clear thinking. It is part of what convinces me that climate change in the catastrophic mode is inevitable.

Entropy increases — unstable systems oscillate back and forth until a stable but chaotic equilibrium is reached. The planet’s climate has been swinging back and forth for a long time — if people have influenced it then most likely we have just hurried it up. Rolling it back to some historic ‘golden’ age is likely impossible even if we were gods, rather than just thinking we are. The sun adds heat to the system continually and global circulation distributes it. Some energy is used to drive our ecosystem and some radiates off into space, more when the globe is covered in glaciers, right now not so much. But as I understand it, warming is essential to get the conditions we need to increase precipitation and start the glacier cycle again. Your climate models may be different — and reality, as usual, has the only correct model.

Making technical and cultural changes to ‘roll back’ or ‘slow down’ climate change means going against basic human greed. As long as making money while being environmentally callous has the enthusiastic support of governments (Canada on tar sands, pipelines and occasionally asbestos), doing the environmentally responsible thing is unlikely. And let’s face it, conservation and higher prices for less are just not popular ideas. And from reading the tripe being passed out by Ontario to children, it really seems that rather than saving the planet the real goal is to roll back the clock to some sort of 18th or 19th century status, the last time we tried to run a society entirely on ‘renewable’ energy — wind, sun (and lots of manure).

Well, we have part of it — people working for less and less money; rollback of voter and worker rights, environmental rules, journalism. Politicians who break laws and lie to the electorate (who increasingly work for the government rather than the other way around). More one-man rule, less freedom in many directions, justified by ‘I want’ rather than provable evidence. More corruption in public office rather than less. It goes on and on.

As I have mentioned before, all energy sources are renewable — but some on much longer timescales than others. Solar comes up every day unless there are clouds. Wind blows as long as there are temperature and pressure differentials — these shift daily and seasonally (and as a result of climate change). Burnable materials — biofuels, wood, coal, oil and gas — all come from carbon reactions of living things but require various geological processes to transform into the familiar forms over time (sometimes a lot of it). Fissionable materials, like just about every other part of the material world, come from supernova explosions — so if we ‘blow’ through what we have there may be a bit of a delay till the next lot comes through. But we can be a lot smarter with what we have — unlike coal or oil, nuclear ‘burning’ is a transformation that produces other ‘burnable’ materials that we currently ignore. Or we could master fusion — the only real generative process, the one that takes hydrogen and squeezes it into everything else, even black holes. Neat trick if we ever figure that one out. And is the universe as a whole renewable or recyclable?

But on a more realistic note: since we discovered fire, leaps in civilization have been driven in part by finding increasingly powerful sources of energy. Fire gave us cooking, ceramics and metals. Steam gave us mechanical muscles to lift, shape and carry. Electricity brought day to the night and enabled revolutions in countless other fields. High-energy liquid fuels gave us mobility and flight, even beyond our atmosphere. Nuclear fission, besides really big bombs, offered a reasonably clean and safe form of power generation — but fear mongering and a couple of really big screwups seem to have crashed that effort. A pity, because it has real promise but needs a lot of engineering to work out the problems. And as for radiation, the fearmongers conveniently ignore the effects of the hundreds of nuclear bombs exploded world-wide on the general populace. The US war on Nevada and Utah dumped fallout all across the US over and over — Chicago, where I grew up, was right in the path. One head, two eyes, other parts in the usual numbers and places. The reality is that the epidemic of horrors they tell us a small leak would cause didn’t happen.

In my darker moments I think that the ‘greenies’, some of whom have gotten really rich on this loose collection of ideas, long to take us back to a simpler time and assume that they will be the lords of the manor while the rest of us become serfs, ignorant and fearful. The wind farm folk are getting rich on our money, producing power when it is not needed and despoiling our landscapes. And without lots of gas turbines running in the background, the lie of this ‘renewable’ power source would be clear. We pour ethanol into our gasoline and burn more than ever. Imagine how much happier we would all be if it had been drunk rather than burned…

Meanwhile, the climate is changing. And we play games with ourselves to come up with ways to make money on ‘slowing’ or ‘reversing’ it, and to find other people to push the costs off to, while the real issue of identifying and preparing for the changes is simply ignored in most quarters. The military and insurance companies are worrying about it. Some cities, like Chicago, have actually started thinking about it. Too many just pretend it is happening to someone else.

But as the world warms, the differentials that drive the winds will change, as will the cloud cover. It will be interesting to see how the thousands of wind turbines and solar panels do then, and how well our supplies of burnable materials are holding out. But one thing seems clear: the ‘greenies’ are not doing any of this for us.

The alternate reality of wind power

A couple of days ago an article appeared on Slashdot and several other places suggesting that ‘wind turbine syndrome’ is a matter of mass hysteria — communicated by word of mouth rather than by exposure. Following the entrails, the source for this was a wind advocacy group. The discussion on Slashdot was the usual chain of viewpoints — offering opinions but few facts. And plenty of off-topic remarks about nuclear power and cell phones.

On the face of it this would appear to be a counter-strike to the rising rural opposition to the invasion of large-scale wind power into unwilling communities, and to a good study from last year in which high levels of very-low-frequency sound were measured in the homes of people living within a wind plant in Wisconsin. But part of the problem with this is the insistence that there is no evidence that harm is even possible. It is as though the wind advocates are telling people that noise and flicker from huge wind turbines obey different physical laws than anything else in the world. In this alternate reality, low-frequency sound from a wind plant will have different physiological effects than similar low-frequency noise from a large industrial blower, military sonar or aircraft vibration. So if analysis shows noise in the frequency ranges that the US military found made some pilots ill, or that weapons developers were using for crowd control, it will somehow have a different effect if the source is a wind turbine… and we should believe them? And do any of the proponents (David Suzuki, for example) live anywhere near these things?

But there seems to be a lot of this willing suspension of disbelief going around. Our leaders and their propagandists are telling everyone we need more and more wind power — and soon rural areas (to say nothing of bird habitats and migration routes) will be covered with these things. But the evidence is that much of this power is unusable, as it is created when demand is low, so it is sold off at a loss. And hydroelectric power is being spilled to make room for wind, as is nuclear. So with gas turbines on standby to step in when wind falters, more greenhouse gases are being released than ever. Why?

We are assured by these same folks that the Smart Grid™ will losslessly move power across the province and successfully juggle the hundreds of randomly varying generation sources to support our electrical workloads. In the final report on the 2003 blackout one observation was telling: the scale of the blackout was due in part to an inability to manage the interactions within the grid. The power industry (plural, really) has been interconnecting things without any real grasp of the complex dynamics of the resultant structure. So a failure in one place propagated in seconds across the grid, causing multiple failures and overloads. Somehow I doubt that the folks who run the politicized and balkanized power system in Ontario are that much smarter. Listening to them, it sounds more like a big drug dream than the result of any sober engineering work.
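The dynamic the report described is easy to sketch. The toy model below is my own illustration, not a grid simulation: every line runs close to capacity, a failed line’s load is dumped on the survivors, and anything pushed past its limit trips in turn.

```python
# Toy cascade-by-redistribution model: lines running near capacity, and a
# failed line's load dumped evenly on the survivors. One trip takes down
# the lot. An illustration of the shape of the dynamic, not how any real
# grid dispatches power.
lines = {f"L{i}": {"load": 95.0, "cap": 100.0} for i in range(10)}

def trip(name, failed):
    failed.add(name)
    survivors = [k for k in lines if k not in failed]
    if not survivors:
        return
    share = lines[name]["load"] / len(survivors)
    for k in survivors:
        lines[k]["load"] += share                 # redistribute the lost load
    for k in survivors:
        if k not in failed and lines[k]["load"] > lines[k]["cap"]:
            trip(k, failed)                       # overload propagates

failed = set()
trip("L0", failed)                                # one line fails...
print(f"{len(failed)} of {len(lines)} lines down")  # ...and 10 of 10 follow
```

With a 20% margin instead of 5% the cascade never starts; the whole game is in how close to the edge the system is run, which is exactly what adding hundreds of varying sources makes harder to know.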

But it goes even deeper. A few days ago I had a discussion at the ‘public’ meeting with one of the wind company people who are planning on carpet-bombing Amherst Island with wind turbines. Tightly packed lines of these immense monsters will cover the landscape — guess they had to have a certain number to make the project. So they did… Reading the alternate energy press one gets the idea that the engineering objective is to provide enough spacing to minimize noise and multi-turbine wake losses. Even the Wikipedia article on wind plant engineering talks about this. The measure is in multiples of the turbine blade diameter — so if the target spacing (per Wiki) is 10-15 rotor diameters, a 150 m rotor would put these things 1.5 km apart as a minimum. Doing a little ruler work on their map shows some as close as 3 diameters. But these folks said their ‘science’ says they can do this… really makes me wonder if they use pi as 3 for convenience as well. Glad it’s not my money at risk… a pity I will be paying for it through some of the highest power costs in North America.
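The arithmetic is simple enough to check (the rotor diameter and spacing multiples are the figures quoted above; the 3-diameter number is my ruler work on their map):

```python
# Turbine spacing as multiples of rotor diameter, per the figures above.
rotor_d_m = 150
for k in (3, 10, 15):   # 3 = measured off their map; 10-15 = published guidance
    print(f"{k:2d} diameters = {k * rotor_d_m / 1000:.2f} km")
# 3 diameters = 0.45 km -- a long way short of the 1.5 km minimum.
```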

An aside on power costs: there was a recent column in the New York Times about the negative impact of soaring power costs on economic activity in Europe. What was interesting was the mention that manufacturers were starting to talk about moving their facilities to the US because of lower power costs. I must wonder if anyone thought through what the costs of the Ontario Green Energy™ program would do to Ontario businesses. After all, to pay for all this one must have positive economic activity — the same goes for tax revenues to pay for all the government. One does not build a prosperous society by driving out employers. But once again these folks seem to inhabit a bubble untouched by the realities the rest of us live in.

Back in the days of the Roman Empire, when an engineer designed a bridge or an arch, he was required to stand under it as the construction supports were removed.  That way, the engineer had a real, personal interest in getting it right.  But in the alternate reality of Ontario’s wind invasion — are any of the designers or advocates at risk? Somehow I doubt it. They all seem to be pretty enthusiastic that someone else should take one for the team — but I don’t see any of them doing it. How many of these folks even live near a wind farm? Perhaps after construction the team should be required to live in and amongst the things for a while — a year would be a good start.

But short of that unlikely measure, the only thing that might help would be to simply admit that wind farms were part of the same physical reality as everything else. And that factors found to be harmful to some people when generated by other sources could be harmful when generated by wind plants. To expect anything different is really insane.

Be Careful What You Wish For

There was a fascinating post on Slashdot today describing the latest robotics work — agricultural robots. These things are designed to handle pots in nurseries, work that had previously been done by migrant workers. The alpha testers were commenting that people could then be used in harvesting and packing — jobs that require judgement. I suspect there is also a sigh of relief at having less to do with La Migra, or not having to work so hard on their Spanish. But it is hard not to admire the folks who have the courage, determination and stamina to perform the hard, uncomfortable and sometimes dangerous tasks that keep our society going. Such tasks really are an affront to human dignity and are better left to machines (in my view). But then what happens to all the redundant people, those displaced by advanced robotics or even outsourcing?

Somehow I suspect they will not quietly or obligingly go away. And not everyone has the opportunity, ability or inclination to become an advanced knowledge worker. So as the total population grows (and there really are too many people) there will be this growing population of the displaced. So how does one keep this growing underclass from generating social unrest or worse? Government handouts and reality TV? Turn them into Soylent Green? It is clear that if governments and employers do not come up with a productive and socially beneficial way of redirecting these people they will remind us of their presence. But so far our dear leaders seem glibly impotent about this issue. But the numbers keep growing.

I am suspicious that the much maligned ‘Occupy’ movement is just the start. My dark pessimism is thinking Russian revolution but there are probably other destructive forms of social unrest. But as I wait for my self-drive car to take me home, I hope it will not be across a countryside being ravaged by gangs of the unemployed and dispossessed. But so far there is nothing on the political landscape to suggest that this will not happen. I hope I am wrong.

The Problem of Errors

Recently I was reminded of what a problem error management poses and how much more expensive it is when it is poorly done or not done at all. I have been setting up a new piece of software but had some difficulty in getting one part of it to work. (The vendor and support organization shall remain anonymous.) The operation I was attempting would fail, but there were no clearly identifiable postings to the error log. And the events that did seem coincident made no sense.

Back when I worked on Digital VAX machines there was a joke that the way DEC field service would fix a flat tire would be to check the other three tires first. This support issue went the same way, as I was passed from person to person, retreading the same ground over and over. And of course, the folks I was dealing with were a long way from both me and the guys who write the code. Eventually the problem just went away, leaving no clue as to what happened or changed to resolve the issue.

But this reminded me of how good the VMS error message convention was – DEC had designated a 32-bit number for reporting errors. This was divided into three fields – a three-bit field for severity and two larger fields for facility and problem. Essentially, the error number told you who was complaining, what was being complained about and how bad it was. This concept seems to have gotten lost – current software uses numeric error numbers, but only some of them are documented in publicly accessible form, and one needs to know who was complaining to interpret the error number correctly. And then there is my favorite, ‘fatal error – fault bucket xxxxxxxx’, which has no online documentation at all.
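The convention is simple enough to sketch. Here is a minimal decoder, assuming the field layout as I remember it (severity in the low three bits, then message number, then facility; check the VMS documentation before trusting the exact widths):

```python
# Decode a VMS-style 32-bit condition value: who complained, about what,
# and how bad. Field widths are as I recall the old convention; treat
# them as an assumption rather than gospel.
SEVERITY = {0: "WARNING", 1: "SUCCESS", 2: "ERROR", 3: "INFO", 4: "SEVERE"}

def decode(status: int) -> dict:
    return {
        "severity": SEVERITY.get(status & 0x7, "reserved"),  # how bad is it
        "message":  (status >> 3) & 0x1FFF,    # what is being complained about
        "facility": (status >> 16) & 0xFFF,    # who is complaining
    }

# SS$_ACCVIO (access violation) is, if memory serves, status 12:
print(decode(12))   # {'severity': 'SEVERE', 'message': 1, 'facility': 0}
```

One longword, and any tool on the system could tell you the severity and the complaining facility without a lookup table. That is the part that got lost.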

And the error log entry display contains a nice user-friendly link that says ‘click here to learn more about this error’ – which takes you to an error page when you do, because there is no index for that error. As a result, I have learned to do a search on Google and not bother with the vendor site at all. Why bother? It never works anyhow.

And along this line there are the health monitoring messages that complain about the health monitoring system itself, especially when the machine is starting up and the delayed-start services haven’t yet. After a while, as with the boy who cried wolf, one stops looking at the health monitoring system at all. It may be doing useful things, but since it acts more like a cranky hypochondriac aunt, no one wants to associate with it. Probably not the design intent.

Now in the computer world, all of these errors were created by developer-written code. So someone decided to report ‘C0000005’ for a particular type of error, and someone told them it was OK. There may even be a last-chance exception handler that reports something before the program drops back to OS command level (preferred) or to bare metal if the problem is really bad. But what seems to be missing is the administrative step of collecting this information, adding some support commentary and putting it someplace searchable. So costs were saved on the development side, but more than made up for on the support side. I spent a couple of weeks on this problem before it just went away, and the folks I was working with put in a good week of their own plus communication time. Surely this was more costly overall than decent documentation?

So what happens is that everyone tries out their own personal juju – are we current with patches? Is your network up? How different are the clocks on all your machines? And so forth – if we don’t know what the problem is or why it went away, then whatever we were doing, thinking or wearing may have been the reason. Let’s do it again….

Back when I was a systems developer we took turns handling support calls from our customers world-wide. This was referred to as our week in the barrel, and while in it we were not expected to get anything else done. Our projects all waited for us to climb out. So we had a good idea what the issues were, and we had access to the source code as well, so we could trace out what the programs were doing. I don’t think the folks I was working with had the same luxury. And besides, there are so many layers of code in current programs that finding the root cause might be problematic. Furthermore, modern pipelined processors don’t report fundamental errors synchronously any more – so the current instruction may not have anything to do with the real problem. One can understand the reasons for using interpreters and runtime frameworks – just to get control back for error reporting, even at the cost of a bit of performance.

But in a sense the externalizing of customer support has another effect – the results of poor coding, or more likely the collision of multiple pieces of good code that just don’t happen to work together, are handled by people well removed from the perpetrators of said code. Their experience is probably summarized in some tidy management report that may eventually make it back to the developers, but not necessarily. So not only are costs increased but the learning is diminished by the decoupling of support from development.

Then it struck me that there is a lot of this going on. Corporations and governments contract out their public-facing services and insulate the organization from the responses. You can rant or rave at your elected representative all you want, but if that communication gets handled by their press secretary and never reported upwards it has no effect. Maybe they gauge public response by the weight of the mail and not the content. Or hold public meetings where attendee questions get danced around and then ignored. Or send out a mailing with questions along the lines of ‘have you stopped beating your wife yet?’ Each choice is really the same, so only the form of listening to the public is followed. They don’t really want to know – it gets in the way of their plans and takes their minds off themselves.

A pity, this decoupling of action and support – as actions at many levels get increasingly decoupled from perceptible reality, one wonders if this is how the Romans saw it towards the end?

Program Not Responding

Has anyone noticed how increasingly prevalent the message ‘not responding’ is when they try to do anything on a computer? I certainly have. Interestingly, the user interface notices and gives us this friendly little message — but we are trapped there nonetheless. At least in the old days when the hourglass would slosh back and forth there was something to look at. But apparently no more. I guess we are all supposed to be pitching our computers every few months and replacing them with ‘more’ to keep up with the current bloatware. We don’t.
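The mechanics behind the message are worth a sketch. A GUI program only looks alive while its event loop is being serviced; do the work on that same thread and the OS soon flags the window. A minimal illustration of my own, in Python/Tk rather than whatever the bloatware in question is written in:

```python
# Why "not responding" appears: a window is only "responding" while its
# event loop runs. Blocking that loop with work freezes the UI; pushing
# the same work onto another thread keeps it alive.
import time
import threading
import tkinter as tk

def long_task():
    time.sleep(10)   # stand-in for bloatware lumbering through its task

root = tk.Tk()
tk.Button(root, text="Block the event loop (goes 'not responding')",
          command=long_task).pack(padx=20, pady=5)
tk.Button(root, text="Run in a worker thread (stays responsive)",
          command=lambda: threading.Thread(target=long_task,
                                           daemon=True).start()
          ).pack(padx=20, pady=5)
root.mainloop()
```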

Our typical computer has been in regular use for five or more years. We have one 64-bit capable machine that runs Windows 7; the rest are XP or W2003. They are paid for and adequate for the job. Oh, it means replacing cooling fans (always fun in a laptop), hard drives and batteries. But we are spared the horror of searching for a familiar and regularly used command that has been either dropped or moved someplace else because the latest crop of college interns thought it would be cool.

I know larger firms regularly replace all their computers — it has to do with the high cost of the labour to make any changes, despite all the sysadmin programs they have invested in. I have bought a couple of these castoffs over the years; they work just fine with a bit of cleanup. Usually they need added memory to be useful. Everything we have is maxed out in that department.

So when the computer tells me ‘not responding’ I am sure that somewhere deep down some piece of bloatware is lumbering through its appointed task — and if I were not so cheap I would be running it on a 16-core multi-teraflop machine instead of what I do. But then, with the industry’s relentless insistence on bloatware rather than making things work reliably — I think the message really is ‘The computer industry is not responding to your needs.’ And I hate that — another rich tool being lost to crudware, just like the hundreds of channels on TV with absolutely nothing to watch. What a waste.