Computer Infections Clobber Public Health

globeandmail.com: Public Health Agency computers infected by worm

It is interesting to note that the Public Health Agency takes the cybernetic equivalent of hand washing seriously — not. I have been in too many businesses where, despite written policies about personal use of computers, it is all too common to see staff surfing in their (hopefully) off minutes. Back when the Internet was first becoming an information source, the brokerage house I worked for had long discussions about how to make the service available without compromising the security of the internal network. There were no happy solutions. And over the years this situation has not improved.

Seems there are several problems that intertwine, in my estimation, making this far more serious. One, people simply do not see the difference between the computer they have on their desk at work and the machine at home — if they have the urge at the moment to surf for something, they do it, regardless of whether it is relevant to their job. Two, enforcement by network monitoring, system policies and the like is messy and difficult, and few places have the resources to do it. Three, the messiness of the web world makes it far too easy for malefactors to spread their stuff. And four, there is a general lack of appreciation of how serious the problem is — it is invisible until it messes up a large-scale system like the one in the article.

Over the weekend a favorite website of mine was hacked. It is an obscure religious site that has a practice of posting a new meditative quote daily. I find it very therapeutic to read these words of wisdom. The injected code did nothing overt; it just planted a small file on the user’s machine that would send information back to a site in China. What was it doing? Was this an attempt to plant the seeds of a cyber invasion? The problem was identified by accident and easily corrected. Registered users were all notified and appropriate scans were done by all — but since it was something new there is no guarantee that nothing remains, just that the scanners did not spot anything recognizable.

Makes me wonder how much stuff is out there that we are blissfully unaware of. Is some foreign power using our reliance on poorly protected and administered general-purpose computers to lay the groundwork for a future invasion? (Just think what could happen if someone could shut down all the computers in banks, utilities and hospitals on command. Complete control without firing a shot.) In the meantime we go on our merry way, cheerfully deploying ever more complex applications that we neither fully understand nor are capable of protecting. And then the users go surfing for porn or free music or shopping bargains in between updating drug inventories…

I don’t know about anybody else, but it makes me long for the ugly days of purpose-built 3270 screens that just did their job. At least then the users did not have the choice of opening their business systems up for infection. It was better than hand washing, though much less fashionable.

Bees, Frogs, now Bats … and Us?

I am not sure how many folks read the article in the New York Times today about bat die-offs. It is the latest in a series of species decimations that are characterized as ‘strange, puzzling, with no smoking gun’. The dead and dying bats are decorated with patches of white — possibly a fungus. I am sure the industrial wind farm folks will be happy that bats are dying off — fewer critters to be killed by the turbines. Pity about all the insects the bats eat that will now pester us and our crops…

There was another article in the same issue about frogs in South America dying off due to the spread of chytrid fungus — although the argument seemed to be more about whether global climate change was the driver than about why the fungus was spreading and why the mortality was so high.

And then there is the demise of domestic honeybees — which as far as I know is still happening and is still quite mysterious. I am not sure how many folks are aware of just how much of our food supply comes from the pollination done by these tireless little creatures. Especially since we are working hard, for other equally murky reasons, to reduce the availability of migrant labor, the prospect of folks out in the fields with little brushes doing the work seems quite unlikely.

In what seems to be a related area, some months back I read a study of the long term correlation of atmospheric lead particulates with social behavior. The hypothesis was that the decline of large scale violent crime and social unrest in places was a lagging indicator [by 20-30 years] of the reduction of lead in gasoline. (I am not sure that the increasing popularity of school shootings is not a contra-indicator…). What I found fascinating was the suggestion that the places in the world where lead concentrations in gasoline are still high are Iraq, Afghanistan and most parts of the Middle East.

What comes to mind is the old idea of canaries in coal mines — the little birds, being more sensitive, would collapse first and give the miners, hopefully, a chance to escape when gas levels became dangerous. With all the chemicals we have been exposed to, the cheerful tinkering with the genetics of our food and the blithe way concerns are brushed aside by business and government all eager to make a profit NOW — there is really no telling what the damage will be when it finally becomes impossible to ignore. (And I would add to the list the abrupt rise of serious allergies and childhood emotional problems.) Seems we are surrounded by dead and dying canaries — has anyone thought about how much longer it will be before it becomes us?

Who Is Right… Some Thoughts on Planned Obsolescence

Watched a video about deploying Vista today. Over an hour of folks talking about all the great new features and how large the computers were that they had to buy to run it properly. And one near-rant about how unacceptable it is that folks always complain about new operating system releases but eventually accept them and then don’t want to change, etc.

I have a problem with this. On the one hand, the vendors are right. They are not going to spend the time and money making software that runs well on current platforms because it would make the software more expensive (remember Truman’s triangle?) and the box that ships next week will be faster anyhow, so who cares? And all of the features that they manage to pack in may well be for our own good — like allowing PC users to run unprivileged, a desirable goal for almost as long as I have been a programmer. And yet very few programs can actually be run that way, due to a wide variety of factors.

On the other hand, it is my money and I expect to have to spend it only when it makes economic sense to me. Imagine what would happen if car makers were allowed to do what computer vendors consider their right and privilege: millions of cars piled beside the road because the manufacturer decided to use spark plugs with left-hand threads, or changed voltages so none of the electrical parts would still work, or varied the fuel type. No spares after a very short time, so once it breaks you just walk away and buy another one. The humor pages have had many better comparisons of what driving would be like if M*sft made cars, so no point in repeating them here.

I guess the point is that vendors in the IT industry have gotten rich by keeping product lifetimes short and persuading users (us) that it is in our interest to junk it all and buy new as frequently as possible. And we hear that computers are cheaper than ever (they are), so what is the problem? Well, there is the little detail of software… to say nothing of the time to put it all back together and make it work again. If I buy a car, there is a familiarization exercise to be sure, but in general the gas and brake are in the same place every year and one puts similar fluids in similar places — in other words, standardization of the user interface. Sure, each model has internal changes that may make it very different from the last one, but the user experience is pretty much the same.

But each generation of software is different, and there are very few upgrade pricing deals around — so essentially one has to relicense the entire pile of software again, and then patiently relearn it and sort out all the (different) bugs. So the total cost to a small business or individual home user can be substantial. And as in any technophile household, there is a pile of computers here doing various tasks that we (more likely I) deem essential. And having played with several generations of server operating systems, one learns that different versions do not play well together, regardless of what vendors may say: R2 server versions have problems talking to R1 active directory, file replication and so forth. So there is no percentage in gradual change — it needs to be everything at once, like large business rollouts of new desktops. But that is a lot of money and time, so we bumble along on the old stuff, knowing that eventually the patches will stop coming and when a hard drive or power supply flakes out it’s probably curtains for the whole lot. Because of course there will be no spares available. (So I guess it all becomes part of another toxic waste shipment to some other part of the world…)

I guess what I am trying to say is that the technology firms would find their case a lot easier to make if they strove for changes that were not disruptive and tried to standardize what they offered rather than differentiating it so the new stuff was instantly recognizable. This goes for repairable and upgradeable PCs as well. I know on the software side it can be done — VMS used shareable libraries with transfer vectors to hide changes in code from applications. If an application was linked against a shareable library and its calls went through the transfer vector (same name, same arguments, and so on), the libraries and the OS could be upgraded with very little impact on the applications. I also know that Windows came from this lineage, but with no hardware support for memory isolation it was a lot tougher. And then there was the legacy of games that wrote directly to hardware for better performance…
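To make that concrete, here is a minimal sketch in C of the transfer-vector idea. The names and structure are entirely my own invention, not the actual VMS mechanism: callers bind to a table whose slot layout never changes, so the routines behind the table can be rewritten or relocated without relinking the application.

    /* A rough C approximation of a transfer vector: callers bind to a table
     * whose slot order never changes, not to the routines themselves.
     * All names here are illustrative, not real VMS interfaces. */
    #include <stdio.h>

    /* Internal library routines; these can be rewritten or moved freely. */
    static int add_v2(int a, int b) { return a + b; }   /* a newer implementation */
    static int sub_v1(int a, int b) { return a - b; }

    /* The "transfer vector": a table with a frozen layout. New entries are
     * only ever appended, so old applications keep working after an upgrade. */
    struct math_vector {
        int (*add)(int, int);   /* slot 0: same name and arguments forever */
        int (*sub)(int, int);   /* slot 1 */
    };

    const struct math_vector MATH_VECTOR = { add_v2, sub_v1 };

    int main(void) {
        /* The application calls through the vector, never the routines directly. */
        printf("2 + 3 = %d\n", MATH_VECTOR.add(2, 3));
        printf("5 - 1 = %d\n", MATH_VECTOR.sub(5, 1));
        return 0;
    }

The library behind MATH_VECTOR can change completely between releases; as long as the slot order and calling conventions are preserved, the application never notices.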

Just seems like the models for what is being done have gone entirely wrong — instead of emulating the electrical industry or the auto industry and making products that are reliable and repairable, they have gone the route of cheap fashion and throwaway luxury-priced products. I don’t know about anybody else, but I am tired of it.

Mainframe Survival

Interesting little article in the New York Times today about ‘Why Old Technologies Are Still Kicking’. Having started shredding 80-column cards on an IBM 360 a very long time ago, then seen the rise of Unix-based machines and the flood of PCs (of all sizes), I have heard the cry ‘that’s just obsolete, any right-thinking business needs to do…’ (fill in the blank with whatever technology or other practice you like). The point is made in the article that this technology has survived simply because it did its job, and business decisions about throughput and reliability (and existing investments in working solutions) trumped fashion. Makes sense to me; glad somebody thinks along these lines.

Problem with all computing, even the stuff that hums away in my home office, is that it needs to fulfill some purpose. When I started writing code (Fortran and Cobol), pretty much everything was focused on the job at hand and creature comforts for the developers were pretty much a side issue. In fact, if I recall my history, that was why Unix was developed at Bell Labs — to provide a better working environment for writing code that would run someplace else. It was never designed as a production platform — and even today performance analysis and troubleshooting is pretty much a black art. In working on problems on a number of *nix platforms, my experience is that one needs to understand the whole application stack from the surface of the disk drive to the user presentation to have a chance of sorting things out. My oldest, after a short career writing internals at M*sft, is working with Linux-based commercial servers and has muttered about how little critical info is available to debug production problems. Unix and Linux have been pushed, for economic reasons, into areas they were not intended to fill. It was a fashionable choice… and some businesses will limp along and most likely pay higher costs overall as a result of their adherence to fashion. (The latest IBM mainframes with their commodity hardware core and multiple Linux machines are an interesting exploitation of available technologies — someday I need to dig into this further.)

(As an aside, there was one other casualty of the rise of RISC architectures and non-task-oriented OS designs that most folks have probably missed — synchronous error reporting. At one point it was possible to examine a crash dump or error traceback and point to a specific place with some assurance as the source of a fatal error. But as processors became faster by splitting execution into multiple stages and running more execution streams in parallel, the point of failure became lost — problems might be reported late or not signaled at all. And as the number of layers grew, the source of problems became more obscure. If an application messes up now, in all likelihood no one can tell me why or what needs to be done to fix it. The most common solution seems to be to remove it (and everything else on the machine) and put stuff back until the problem reappears or doesn’t… do I do this under the full moon or with my forehead painted blue? In an application generator I wrote a very long time ago, intelligent condition handling was one of the largest and most complicated parts of the code. No wonder this tends to get left out… so we find, with computers being more ubiquitous, that jokes about their reliability and stability are commonplace. No wonder…)
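(A small, everyday software-level analogy of my own choosing, not taken from any of the systems mentioned above: buffered I/O in C routinely defers an error until long after the call that caused it, so the failure surfaces nowhere near its source.)

    /* Deferred error reporting: the write "succeeds" because it only filled a
     * buffer; the real failure (say, a full disk) may not surface until fflush()
     * or fclose(), far from the fprintf() that actually produced the data.
     * Purely an analogy for asynchronous fault reporting, chosen by me. */
    #include <stdio.h>
    #include <errno.h>
    #include <string.h>

    int main(void) {
        FILE *f = fopen("/tmp/example.log", "w");
        if (!f) { perror("fopen"); return 1; }

        /* This call may report success even if the data can never be written. */
        if (fprintf(f, "important record\n") < 0)
            perror("fprintf");

        /* Only here does the buffered data hit the OS; this is where an error
         * finally appears, with no indication of which write was at fault. */
        if (fclose(f) != 0)
            fprintf(stderr, "fclose failed: %s\n", strerror(errno));
        return 0;
    }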

Most of the machines running downstairs are Windows — another interesting story. Back in the early days of PCs, IBM and Microsoft worked together to develop an improved operating system. So they hired David Cutler and some of the other DEC folks who developed VMS (my personal favorite as an OS) — and surprise, surprise, it came out looking a lot like VMS. When the partners fell out, one had OS/2 on the IBM side and Windows NT on the Microsoft side. A lot of the internals had similar names and purposes to their ancestor’s. My first NT machine was wonderful — applications could misbehave and not affect things running alongside. There was one small difference, though: until recently Intel processors did not have the process isolation features of the VAX processors. So isolating different parts of memory was very much sleight of hand and not as absolute as on VMS. This is most exposed in what has been referred to as DLL hell, where segments of application code are loaded into a common memory pool and can clobber each other and user applications. Like much of VMS, the original isolation benefits of loadable code segments have been lost. But I digress (again). The original design objective was a desktop operating system to service the needs of one person. But this has been hammered, stretched and prodded into being an all-encompassing OS servicing (?) the needs of multiple business users. But the single-user origins are still exposed (perhaps the subject of a future rant) in the multiplicity of boxes most solutions entail. The advertising tells us that all businesses must embrace it to succeed.

In the end it all comes down to the eternal question of what are we trying to accomplish? (Or what was that point again?) The technologies that have survived have withstood the test of time and use… they just do their job (which is, after all, what we are paying for). Mainframes and Cobol just do their jobs. It may not be fashionable but it sure is successful. The stuff that continues to change (and sometimes evolve — we hope) is still trying to fit in and become the ‘fittest’. (Can anyone tell me the real reasons why there are thousands of computer languages — and more every day?) But as has been said here before, when one hears the challenge of ‘why are you wasting your time/money on that obsolete … when you should be using…’ one has to apply the test of ‘follow the money’. Precisely who benefits in bending to the pressures of fashion when business profitability is at stake? And how much will be sacrificed to be fashionable?

Language, Thought and Ideas

For some reason I woke up this morning thinking about how language makes it difficult to discuss aspects of the world that we do not really understand (re: yesterday’s rant about climate…). This brought to mind the work of an American linguist, Benjamin Lee Whorf, in the early decades of the last century. If I remember this correctly (and right now I am too lazy to check my library or query the web on this topic), he was studying Native American languages.

In English we can talk about the past and future with equal certainty. In some of the languages he studied, only the past was certain — the future carried linguistic aspects of ‘hope’ and ‘expect’, expressions of uncertainty that can only be inserted in English discourse through circumlocution. This provides, I suspect, the illusion that we have more control over or insight into what is going to happen than reality would probably support. We need somehow to differentiate between what has happened, what we expect to happen and what we wish/fear will happen — as we walk further and further out on the projected loci of future events.

Whorf’s idea was that language is not only the tool of thought but in many ways frames and determines what we can think about. If we have many words discriminating between different types of snow, for example, we may see (and think about) the winter landscape in very different ways than through just the four-letter word commonly used. (I wonder how many international problems could spring from mismatches between the differing world views and conceptual frameworks of the protagonists?)

Being difficult, one might choose to extend this to encompass degrees of certitude about our knowledge — what we know, what we know we don’t know and what we don’t know that we don’t know. And of course what we know that is in fact wrong… but I digress.

The problem with so many of these debates is the way truth and fiction become tangled up — climate change being one of many. Linguistically we just cannot discriminate between what we know, what we don’t know and our expectations/hopes/fears about how this imperfect set of information will change as we move from now into the future. (This is ignoring those data that fall into the known-but-wrong/inconvenient category and are either suppressed or featured depending upon which axe we are grinding…)

So I guess as a species we will continue to blunder our way into the future, reaffirming with every day of survival ‘the true miracle is that anything works at all’. I wonder if this will ever change?

Arctic Ice and Change

The Globe and Mail had an interesting article about the latest NASA ice thickness measurements today. Seems they say that the older ice is getting thinner much more rapidly than expected — so even if the coverage looks OK, there is much less ice there.

So far this is not new news. We have been seeing reports of this type of analysis for some time. What was more interesting were the comments — emotional and fully buzzword-compliant on both sides. It is the usual conflict between ‘the sky is falling’ and ‘it’s a global conspiracy on the part of the… (pick your group)’. This is sad and obscures the reality of what is going on.

Change is the only constant — I would submit that the global climate in all of its subtlety is a tad more complicated than is appreciated by any of these groups, especially in public. Not only are there changes in temperature and moisture in different places, but those changes determine which species thrive, die out or move elsewhere. The differences drive the winds, which transport both heat and moisture to different places. This has been going on for billions of years and our brief existence as a scientific species has seen only an infinitesimal slice of it.

We can see that the glaciers that provide water for large parts of the world are melting back and not being replenished. And in some places (India and China and parts of the US) they have been pumping ground water out at an accelerated rate — with the result that wells are getting deeper and some are running dry. This should be a clue that the resource is being exhausted. What happens to these people when it runs out? Is this the result of human activity or natural change or both? And does the answer really matter as much as developing plans for what to do with those populations at risk?

There are some things that we can measure — the size of visible glaciers, weather and atmospheric gas content in specific places, sea temperature and so forth. These are a finite number of points in a very large and contiguous space. There are other things that we think we can measure — the amount of oil left underground, for example, based on changes in pumping rates and so forth. We assume that in between our measurements there is a continuum that varies smoothly between the points. This makes it easier to model.
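As a toy illustration of that smoothing assumption (the numbers below are invented, not real measurements), the model in its simplest form just draws a straight line between two observed points and reads off values where nothing was ever measured:

    /* Linear interpolation between sparse measurements -- the "smooth continuum"
     * assumption in its simplest form. The sample data are invented for
     * illustration only. */
    #include <stdio.h>

    /* Estimate a value at position x from two bracketing measurements. */
    static double lerp(double x0, double y0, double x1, double y1, double x) {
        return y0 + (y1 - y0) * (x - x0) / (x1 - x0);
    }

    int main(void) {
        /* Two hypothetical sea-surface temperatures (degrees C) at two latitudes. */
        double lat0 = 40.0, temp0 = 14.2;
        double lat1 = 50.0, temp1 = 9.7;

        /* The model happily reports a value at 45 degrees, where no one measured. */
        printf("Interpolated temperature at 45N: %.1f C\n",
               lerp(lat0, temp0, lat1, temp1, 45.0));
        return 0;
    }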

Within the last while there has been much in the press about human-caused global warming, or the great freeze-up — pick your poison. The global warming camp has been chanting about carbon footprints and the need to go to renewable resources for our power. So vast tracts of countryside are being despoiled with wind turbines. And quietly, in the background, the utilities are building new natural gas power plants that run so they can step in when the wind fails, to keep the grid from collapsing. So there is no real change in the amount of carbon dioxide being produced; just the locations are changed and we pay more (sigh).

Depending upon which model(s) one chooses to believe, we are past the point of no return, will get there in 10 to 20 years, or have no reason to worry because the current weather is just normal variation. And of course the policymakers take their own faith-based stances (or is it economics-based?) for or against. Our costs for power and fuels go up. Some places are getting hotter, some wetter, some drier.

So where does this leave us? I think there are two different sets of problems. One problem is related simply to acknowledging that the world is changing and that we must continue to adapt to its changes. If some places are getting too hot and dry, or too wet, do we continue to live there and rail against fate? Or do we move the population elsewhere? I suspect that the changing climate will be the driver for the largest human migrations in history. Not the best time to be increasing border security and tightening immigration policies. Wars have begun from less… Who will survive?

The other set of problems is really two — trying to understand why things are changing (beyond the classic ‘because’), and determining what, if anything, we are doing to contribute to it and whether it matters if we change what we do. This is a much harder set of problems in that it requires us to understand a very large and complicated system of systems. I am not convinced, based on the precision of regular weather predictions, that we are anyplace close to being able to do that. Sure, the models are pretty scary. But then 10,000 years or so ago the place where I live was covered by a kilometre of ice during the last glacial age. Since this happened a couple of times in the past it may well come again — and I think the theory was that the Arctic becoming ice-free was the start…

But what to do now? Anything? Well, being a fiscal conservative, I believe that one should make decisions based on costs and benefits. Most benefit, least cost and so forth. Recall that back in the 1970s, in the fallout of the Oil Embargo, lots of businesses saved huge amounts of money just by reducing waste. I am sure that the same rules should apply now. Conservation rather than grandiose schemes to be green at breathtaking cost. Oil is too precious a chemical feedstock to burn — bring on the electric cars and give us back passenger rail service! (Sorry about the loss of vehicle maintenance profits and liberal government subsidies to airlines.) Nuclear plants for electricity: stopping those was a mistake — and someone, please look at nuclear waste as an opportunity instead of a hazard (it is that too). Perhaps the energy this stuff will continue to give off for generations is an asset! Similarly, wind is a good if intermittent power source — encourage communities to build their own local power stations supplying the local grid rather than huge, costly farms with expensive distribution systems. Do it to reduce costs for everyone, not increase them.

As for the global climate — it will continue to change regardless of what we do. Maybe we are speeding it up, maybe we are slowing it down. I am not sure we can tell, beyond the obvious changes in glaciers and weather that have and will have real human consequences. But it will continue to change as the Earth changes and the Sun changes. I don’t think that we will understand any of this at the level needed to truly control it for many years. (And the widely divergent conclusions drawn from common evidence suggest just how far we are from understanding it now.) Let us put our efforts into living better and more in harmony with our planet — not just here in North America but everywhere.

Solar Dawn

Last night we watched a program on Nova about solar technology and its benefits and costs. One thing that struck us was how different the approach is between the US and Canada. In the US there are tax benefits and subsidies that encourage individual homeowners to deploy alternate energy solutions to reduce demand on the grid. Here in Canada governments lavish huge amounts of money on ‘renewable’ projects — like the wind farms going up all around us and despoiling the local rural areas and bird sanctuaries. But there is nothing to help the individual put up a couple of solar panels to provide part of their electricity needs.

Funny, I would have thought that in the end it would be cheaper to encourage local generation than these huge and dubious projects — to say nothing of the expensive transmission line upgrades and complex management programs to keep the grid from being destabilized when the wind fluctuates at an inconvenient time. If there were a couple of solar panels on my roof making 2-4 kW during the day, that would be that much less power drawn from the grid — and in general my peak demands have been when the sun shines.
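Back of the envelope, with every figure a guess of mine rather than a measurement, the daily offset from a small rooftop array looks something like this:

    /* Rough daily-offset estimate for a small rooftop array. Every number here
     * is an assumption for illustration, not a measurement. */
    #include <stdio.h>

    int main(void) {
        double array_kw      = 3.0;   /* assumed panel capacity (the 2-4 kW range) */
        double sun_hours     = 5.0;   /* assumed useful generating hours per day */
        double household_kwh = 30.0;  /* assumed daily household consumption */

        double generated_kwh = array_kw * sun_hours;
        double offset_pct    = 100.0 * generated_kwh / household_kwh;

        printf("Generated: %.1f kWh/day, about %.0f%% of daily use\n",
               generated_kwh, offset_pct);
        return 0;
    }

Under those assumptions the array covers roughly half the day’s consumption, and most of it during the hours when demand on the grid peaks anyway.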

But I guess that in the end my private power project would only benefit my family and the folks I bought it from. And none of these are likely close to the seats of political power — so it would afford little opportunity to transfer public funds into other pockets. But then, I guess that is the real purpose of a political career — as my Dad used to say, “try and vote for the guys who will steal the least”. And this rush of interest in ‘Green Power’ is a political windfall, an excuse to raid the public purse on an unprecedented scale.

Old Computers?

Reading an article in today’s Globe and Mail about where technology gadgets should go to die made me think about lifespans vs useful lifespans and how technology vendors encourage a cycle of waste. Back when I was a software developer we used to joke about the tradeoffs between good/cheap/quick — management could have any two of the three. If it was good and quick it was not going to be cheap, cheap and quick meant not good, etc.

The useful life of computers is becoming much shorter than their actual life. Like any machine, computers have a limited lifetime. Semiconductors gradually degrade and change their properties, and other components age as a result of power cycles heating and cooling them — solder joints eventually break, capacitors fail and so forth. Hard disks develop wear spots that eventually stop reporting their data content correctly — so logical data errors accumulate and eventually the disk fails even though mechanically it may still be OK. In my experience this process takes on average about eight years or so before a computer dies of old age. But it may be put out to pasture (read landfill or recycling depot) long before this as a result of software-driven planned obsolescence.

When I started writing code, a typical development computer had 1 MB of memory, maybe a 1 MHz processor, and would support perhaps 50 concurrent users, each with a 64 KB workspace. This was enough to do many jobs, although the tools that we produced strained these limits. But most of the resources went to doing useful work and relatively little to user-coddling interfaces.

Times have changed. Today a single user with a machine 1000x larger is told that his machine is too small and too slow. Why? Because the overhead that has been added to computing to make it prettier and more user-friendly and provide more background services consumes huge amounts of resources. Years ago just having color and reverse video areas on the screen was a big deal. Now users expect to be able to choose the colors of the screen and the decorative image trim (skins…) that they get displayed. All of this flexibility adds substantial overhead.

Today I read a trade magazine that suggested that all future operating systems and applications from Microsoft would be 64-bit only. Well, what does this mean to a small business? It means simply that if your servers are not recent P4, Xeon or AMD64-based machines, you will need to buy new hardware to run the new versions. Just as with Vista, the real requirement is to buy new computers, as nothing you have can be upgraded.

What this means is that if marketing is successful for the new stuff, there will be a wave of discards going onto the scrap heap. I think this is deliberate. The real incentive for everyone in the computing food chain (except the end user — whose business benefit derived from computing really pays for it all) is to sell new stuff and push the last sale off the table and into the can as quickly as possible. It is great that computers have become so much faster and memory much larger and at much less cost (we think). And software developers make use of this to write code that is even more capable and convenient than the previous releases. But the path from one version to another is anything but smooth. And the more disruptive the transition, the more money that is made by the computing food chain — in consulting services and development.

But all this is expensive for the individuals and businesses at whom it is targeted. While it may be fashionable to have the latest car parked in the driveway as a symbol of how well we are keeping ahead of the Joneses, the same cachet does not come with computing. The software and hardware that keep the orders flowing and the accounts balanced are largely invisible except to the internal staff. So the expense of new hardware, software and services every couple of years (to say nothing of retraining) just reduces the overall profitability of the company. This is rather like a slow version of getting all the golden eggs from the goose. In the end, if really successful, there is no more goose and the eggs in hand are all there is.

So despite the hype, if sales are slowing on the new stuff the vendors have only themselves to blame. And as the economy slows and the tendency of business to defer expenses grows, the computing industry will feel the pinch. Eventually it will be generally realized that the marketing pitches of ‘buy the new stuff and your business will flourish’ are just that — pitches. Paul Strassmann, in his book ‘The Squandered Computer’, showed that there was little relationship between how profitable a business was and how much money it spent on computing.

So when will computing consumers (and vendors) wake up? All this waste and churn just gets in the way of the optimized business processes that can only be built on stable, evolutionary frameworks. It limits the real value of computing and keeps it as a business expense and irritation instead of a competitive and operational benefit.

Food vs Fuel

The BBC web news today was writing about the drought in Australia and its impact on grain supplies. Seems that Australia, as the second-largest grain exporter (after the US), is in the worst drought in a century. A normal wheat harvest would be around 25 million tons, but in 2006 they harvested only 9.8 million tons (thanks, BBC). So global grain stocks are at their lowest level in years. (And whether this drought is part of global climate change or just Australia being screwed over by Mother Nature is beside the point.)

But here in North America we find ethanol projects being launched based on a grain feedstock. Other countries have used sugar cane waste and there has been work done on using switchgrass and other non-food crops. But the capital and subsidies are going into corn-based ethanol fuel processes.

So pardon me, but I am confused. Here we have a growing international food crisis with rising prices and increasing shortages that will likely affect everyone, not just the folks we prefer to ignore in other parts of the world. And yet we are funding and developing projects that will basically be burning food to the exclusion of other alternatives. Has anyone thought this through? Not just their little project but how it affects everything else? Excuse me, but what are we trying to accomplish?

Government and Energy

Recently there have been news items both in Canada and the US about government and industry groups trying to pass laws to sweep aside zoning, ecological protection and other inconveniences in the rush to implement wind turbines — the current snake oil solution that will save western civilization from itself.

Now don’t get me wrong, I am not fundamentally opposed to harvesting the wind to generate power. It seems like a good idea in an intermittently windy area like where we live. It’s just that when I do the economics for the turbine, tower, necessary engineering studies and of course the batteries to buffer the fluctuations, the payback time extends past the probable extent of my life — so what was the point again? But I digress.
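For what it is worth, the arithmetic itself is trivial; it is the numbers that hurt. A sketch of the simple payback calculation, with every cost and yield figure a placeholder assumption of mine rather than a quote for any real installation:

    /* Simple payback-period estimate for a small wind installation.
     * All costs and yields are hypothetical placeholders. */
    #include <stdio.h>

    int main(void) {
        double installed_cost = 45000.0; /* assumed: turbine, tower, studies, batteries */
        double annual_kwh     = 6500.0;  /* assumed annual yield at this site */
        double price_per_kwh  = 0.12;    /* assumed retail electricity price */

        double annual_savings = annual_kwh * price_per_kwh;
        double payback_years  = installed_cost / annual_savings;

        printf("Annual savings: $%.0f, simple payback: %.0f years\n",
               annual_savings, payback_years);
        return 0;
    }

With those assumed figures the simple payback comes out to well over fifty years, which is the sort of result that prompted the complaint above.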

Seems to me we have had these issues before, when nuclear energy was touted as the savior of society. Everybody, especially government, seemed pretty happy to let the implementation drag on and on and the costs of all the legal filings grow until, according to one study, the fees were 90% of the cost of the plant. But despite Chernobyl and Three Mile Island it is still the cheapest and safest way of producing electricity with the least environmental consequences. Ask the French, or even Ontario… it is not the cost of nuclear that threatens to push electricity rates into the sky. But during all of this I do not recall hearing of government and industry trying to legislate all those picky little rules out of the way, to push projects through over the objections of all those NIMBYs. [And I am not ignoring the issue of nuclear waste, I just think it represents an opportunity for some bright lad or lass to look at it as a resource instead of a waste stream. Could it be an energy source, through (perhaps) thermoelectricity, for remote areas?]

But here we have industrial wind development occurring all over North America — the number of projects and the speed with which they are being deployed reminds me of a teenager on his first heavy date. All that can be thought of is getting to home plate as quickly as possible. Everything else is secondary.

What is interesting about all of this is that pretty much everywhere, the locals who are not part of the project are screaming, citing health issues, destruction of wildlife habitat, violation of international treaties on bird migration and so forth. And of course the only places these things seem to go are wildlife preserves and rural vacation areas. And this is slowing the deployment and causing other inconveniences for the developers and governments all eager to be ‘green’, or so they say.

But now we have things like the Ontario ‘substitution process’ (and similar in other places like Wisconsin) where the government is talking about cutting the locals completely out of negotiations. If a developer wants to put a wind farm in a local wildlife refuge the concerned and affected parties are just excluded — and all those picky little issues like zoning, compliance with environmental laws and treaties are just bypassed. Problem is that if these things do real damage, as some anecdotal evidence suggests, there is no long term research to help understand what the true costs and impacts are. So if 20 years from now someone realizes that thanks to this deployment we have just wiped out the rural owl and bat populations and now new insect pests are affecting our health and food supply — it will be too late. Extinction is forever and it seems to be something that man as a species is pretty good at.

So my question is simple — what has changed to make this stuff so important that the process and protection of law just get in the way? It cannot be cost and reliability — wind power on an industrial scale is much more expensive than nuclear (the wind is free, but all the infrastructure to make it usable on a continental scale is not, especially considering how little power is actually generated) and much more uncertain (in Ontario, published results show there are times when the wind stops all across the province simultaneously). One reads of areas (Texas, for example) having to scramble to keep from losing the grid (i.e. the August 2004 blackout) when the wind drops suddenly during a period of rising demand. If I built this stuff there would be batteries for protection, but not in industrial generation — those turbines feed the grid directly. I understand that the utilities are quietly keeping conventional fired plants going to backfill in these situations, but if they have to do that, where are the greenhouse gas savings? But once again, I digress. I am still looking for the answer to who profits now. And why is this so important that it trumps the rule of law?