Process Optimization and the Current Financial and Economic Crisis

One of the main mantras I have heard over my working life is the phrase 'squeeze out the fat'. Process optimization has been a major contributor to profit margins in many businesses, the same businesses that are now suffering in the current collapse.

We find companies that have moved from maintaining a cash balance to cover current payables such as procurement and payroll to funding these routine activities from a credit line or other borrowing. We also find places such as the hospital where my wife works that have systematically cut staffing and de-skilled shift roles: even as education requirements for nurses are raised (a university degree is no longer enough; advanced degrees are expected), more of the work is being shifted to less-educated staff such as registered practical nurses. In the past, staffing levels were set to average patient loads, so when the load dropped there was time for training, and when it rose people were simply very busy. Now staffing is varied with demand: scheduled staff are kept to a minimum, and when workloads spike everyone scrambles to pull in extra people, at a real cost to personal lives. In the same way, many places changed their inventory practices to minimum stock levels and 'just in time' delivery, pushing the cost of supply assurance onto someone else (and often someplace else).

But when conditions change (increased border security slowing goods transit, sickness reducing the available staff, a global credit collapse cutting off funding), the situation moves from 'good management and cost control' to chaos in very short order, with a liberal helping of gloom and doom. There are no reserves to draw on.

It makes me wonder if folks have forgotten what fat is for: it is personal insurance against uncertainty, a buffer against the unknown. That is different from process inefficiency and waste, but do we discriminate? Can we tell the difference? It seems we have lost our sense of uncertainty about life and decided to work without a net. And when life demonstrates that we are not in control, we would rather be abused than dig into the storehouse.

I would hope that one of the results of the current chaos is a reinstated respect for buffers against uncertainty, and for why we are all provided with fat. We have survived because we are endowed with protection mechanisms against uncertainty; it seems it is time to put some of that protection back.

Virtualization — Problem Solver and Problem

I read with interest the article in the latest Canadian Computer Dealer News about executive dissatisfaction with virtualization. While many companies are adopting it, no doubt because the trade press screams that it is the wave of the future, business executives are left wondering where the business case for the technology is. I share their concerns, but I will also share my experience with the technology and how I use it in the Technology Strategists computing environment.

The obvious use is as a test environment. Virtual machines for test or demonstration purposes can be set up much faster than physical hardware can be procured. One global limitation of virtualization is that the physical hardware must be able to contain the virtual environment to get acceptable performance; a virtual machine configured with 4 GB of memory simply will not load on a 2 GB physical machine. Copies of virtual machines can be made to test alternate configurations or capabilities, but be careful that software licensing agreements are not violated.
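To make the cloning point concrete, here is a minimal sketch of how such a disposable test copy can be scripted. It assumes the VM lives in a single directory of files (true of the desktop products I have used); the paths and directory layout are invented for illustration, and most products ship their own clone tooling that should be preferred where it exists.

    # Minimal sketch: make a disposable copy of a powered-off VM for testing.
    # Paths and layout are hypothetical; use your product's clone tool if it
    # has one, and remember each running copy may need its own license.
    import shutil
    from pathlib import Path

    GOLDEN = Path("/vms/base-image")     # a powered-off 'known good' VM
    SCRATCH = Path("/vms/test-copies")   # scratch area for throwaway copies

    def clone_for_test(name: str) -> Path:
        """Copy the golden VM directory to the scratch area under a new name."""
        target = SCRATCH / name
        if target.exists():
            raise FileExistsError(f"test copy {target} already exists")
        # The source VM must be shut down first, or the copy is inconsistent.
        shutil.copytree(GOLDEN, target)
        return target

    if __name__ == "__main__":
        print("created", clone_for_test("patch-trial"))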

Disaster recovery is another easy use: because virtual machines are files, they can be replicated to offsite storage or alternate data centers far more easily than physical machines can. The gotcha, of course, is that the files need to be closed to get a clean copy. Data is an issue that requires special handling; using database replication between sites to keep multiple locations in sync is the preferred approach, though standby replication schemes, while workable, can have problems of their own. An easier approach, if the applications are not always in use, is to shut down the VM and copy it to the second location, so the disaster copy is always one restart behind.
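The 'one restart behind' approach amounts to a nightly stop, copy, start cycle. Here is a minimal sketch of that cycle; the vmcontrol stop/start command is a stand-in for whatever your virtualization product actually provides (it is not a real CLI), and the paths are again invented for illustration.

    # Minimal sketch of the 'one restart behind' disaster copy: close the VM
    # so its files are quiescent, copy them to the alternate site, restart.
    # 'vmcontrol' is a placeholder for your product's real control command.
    import shutil
    import subprocess
    from pathlib import Path

    VM_DIR = Path("/vms/accounting")          # local production VM files
    DR_DIR = Path("/mnt/dr-site/accounting")  # mounted alternate data center

    def refresh_disaster_copy() -> None:
        # A copy of a running VM is not a clean copy, so stop it first.
        subprocess.run(["vmcontrol", "stop", str(VM_DIR)], check=True)
        try:
            if DR_DIR.exists():
                shutil.rmtree(DR_DIR)         # replace last night's copy
            shutil.copytree(VM_DIR, DR_DIR)
        finally:
            # Bring production back up whether or not the copy succeeded.
            subprocess.run(["vmcontrol", "start", str(VM_DIR)], check=True)

    if __name__ == "__main__":
        refresh_disaster_copy()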

Using virtualization to run production loads is a tad trickier; we have done it here but find that there are issues. Some management applications are best kept offline to reduce their disruptive impact: start them periodically for a few days to check the state of the environment, then shut them down so work can be accomplished. [I will not name names…] The issues we have run into fall into three groups: physical limitations of the execution environment constraining the virtual environment, software limitations of the hardware emulation, and limited access to physical machine resources.

Physical limitations of the execution environment mean that the machine used for virtualization must be larger than the sum of the virtual machines running on it. In practice, this means that while older application environments can be resurrected and consolidated through virtualization, the host machine has to be bigger and faster than what it replaces. Unfortunately, hardware costs do not rise linearly as the capabilities of the box expand.
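The sizing arithmetic is trivial but worth writing down before buying the host. The sketch below sums configured VM memory against host memory; all of the figures, including the overhead allowances, are illustrative assumptions to be replaced with your own.

    # Minimal sketch of the host-sizing arithmetic: the host must hold the
    # sum of the VMs plus its own overhead. All figures are illustrative.
    HOST_RAM_GB = 8.0
    HYPERVISOR_OVERHEAD_GB = 1.0   # assumed host OS + hypervisor footprint
    PER_VM_OVERHEAD_GB = 0.25      # assumed per-VM emulation overhead

    vms_gb = {"old-erp": 2.0, "print-server": 1.0, "test-web": 4.0}

    needed = HYPERVISOR_OVERHEAD_GB + sum(
        ram + PER_VM_OVERHEAD_GB for ram in vms_gb.values()
    )
    print(f"required: {needed:.2f} GB of {HOST_RAM_GB:.2f} GB available")
    if needed > HOST_RAM_GB:
        print("host is too small: these VMs will not all run acceptably")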

Software limitations of the hardware emulation include problems like display constraints; I have not been able to get a stable VM for applications that use Direct3D. Disk-intensive applications also run slowly due to the double layer of I/O emulation.

And no virtualization technology I have worked with permits access to more than a narrow range of physical devices; tape drives and other exotic peripherals are simply not accessible.

Virtualization vendors in general are also a myopic lot: not only do people want to move execution environments into and out of virtualization, they may also want to move virtual machines between virtualization products. We dropped a promising virtualization technology when we found that there were no conversion tools available and the vendor got very vague when asked how to move existing virtual machines to their product. It's like the old days when relational databases were new: always an easy sell if the customer had nothing, but if they were already using product 'x', getting them to move to 'y' was pretty tough without an easy migration path.

Virtualization does solve business problems for us. It allows poorly written applications to run in isolated environments, and it makes more operating systems available for use than there are physical machines, so we can get far more kinds of work done on limited hardware than an inventory of our machines would suggest. A lot can be done with low-cost software in this environment, but one does need licenses to cover all those virtual machines.

I have seen it said that when vendors and their running dogs insist a technology is inevitable, it is really just the latest application of the big lie (just like politics): repeat something often enough and people will start to believe it is true, rather than merely what you want them to believe. The vendors may be right in some sense beyond their own need to 'make the numbers', but the business case needs to be sound for the expense to be justified.