System scalability — a classic fail (again)


A very long time ago, a firm I worked at decided to deploy a new distributed application to the traders. In keeping with all the latest ideas, the application code was loaded from a common server connected to the trading floor. The idea was simple — code changes would be made to the one copy on the server, so anytime anyone signed in they would get the latest and, of course, greatest…

There was, of course, a small catch. While loading an application like this from the server is fairly quick for one workstation, 100 more or less simultaneous logons are a different matter. It took hours, and there was serious talk of having some poor clerk come in at 4am to sign on all the workstations so that by the time everyone got in, their machines were ready to go. It was, I confess, fun to watch the very smart development team confront the fact (with some help) that their clever idea was actually pretty costly.
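The arithmetic behind that morning pile-up is worth sketching. The numbers below are purely illustrative assumptions (the post gives no actual payload size or network speed), but they show why a load that is trivial for one workstation turns into hours when everyone hits the shared link at once:

```python
# Back-of-envelope: why 100 "simultaneous" logons serialize badly.
# Every number here is an illustrative assumption, not a figure from the post.

payload_mb = 50    # application code pulled per logon, in megabytes (assumed)
link_mbps = 10     # shared LAN capacity in megabits per second (assumed)
clients = 100      # workstations all signing on at the start of the day

# One logon with the link to itself: convert megabytes to megabits, divide by rate.
one_logon_s = payload_mb * 8 / link_mbps

# The shared link can only move so many bits per second, so 100 concurrent
# transfers take (at best) 100 times as long as one, in aggregate.
all_logons_s = one_logon_s * clients

print(f"one logon alone: {one_logon_s:.0f} s")
print(f"{clients} logons sharing the link: {all_logons_s / 60:.0f} min")
```

Under these assumed numbers a solo logon takes well under a minute, while the full floor takes over an hour — and that is the optimistic case, before protocol overhead and server contention make it worse.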

From talking to my brother-in-law over the weekend, it sounds like the medical records system he is required to use for charting was designed by some of the same folk. Logon takes 15-20 minutes (this is a doctor sitting there waiting for the system to respond, remember). Changing a record takes minutes. His day starts early and ends late thanks to the ponderousness of the whole application and the multiple layers of signon security built in. I didn't ask whether a chart could be shared — I am almost afraid of the answer.

The problem is that computers are really poor at sharing anything but very good at faking it. That is easy to forget, especially in these days of applications with little bits spread all over the landscape. And on local networks, no matter how fast, transfers still happen one bit at a time.

The old and much-maligned mainframe systems did one thing right — their design sacrificed pretty much everything that slowed down processing and allowed applications to shovel data through at great rates. But those systems required a real understanding of what was going on to design and operate. Not quite as light and fluffy as today's GUI (gooey?) development tools, which emphasize simplicity and cute effects over down-to-the-bone functionality.

So we save money on building the code and pat ourselves on the back for what a contemporary, glossy system we have built. And we force the highly skilled and not inexpensive people who have to use it to fiddle while waiting for someone's cleverness to work. I guess this is another case of cost cutting at the wrong end — somehow it seems to me that lifecycle costs for all those high-priced users would have been a better optimization target than development. But then they did have a few massive fails getting it off the ground to begin with. Glad the sales folk and the well-connected consulting house management made their commissions. Looks like the rest of us will be paying for this for years.
