Energy Manager

FEATURE – Utilization is the new green measure for data centre IT efficiency

June 26, 2013 - The data centres that keep our digital lives humming have recently become synonymous with digital sprawl and huge energy waste. How did we get here?

By David Drury, IBM Canada



Two connected answers: cheap energy and commodity computers.

As demand for more digital connectivity, storage, and speed skyrocketed during the past two decades, companies scrambled to keep up, furiously piling ever cheaper and more powerful, but also inefficient, computers into data centres worldwide.

In our rush to respond to an always-on, always-connected world, to a tweeting, texting, video-uploading frenzy, we handed ourselves a Pandora’s box: data centres that are massively inefficient.

The rising energy bill at these data centres, some of which use more electricity than the company’s manufacturing or other key business operations, isn’t the only problem.


Just as crucially, these energy-hogging systems are tying up money that companies could use to start new businesses or make themselves more competitive. More than 70% of the average corporate IT budget is spent on basic operations and maintenance, according to an IBM-commissioned IDC survey of data centre experts and managers.

In fact, only 21% of the data centres surveyed operate at the highest level of efficiency, but those that do are able to spend 50% more of their IT resources on new projects, according to the survey.

How do we fix this problem and help companies deliver innovative new projects? Counterintuitively, energy efficiency is a smaller part of the solution than you might expect.

The real problem is utilization. It’s not about measuring the energy used; it’s about measuring the amount of work performed with that energy. The industry could keep pushing efficiency as it has been, with novel cooling systems and software that slashes wasted power, but what good are highly efficient servers and data centres if they’re only being used between 5% and 12% of the time?

That’s the reality. Because we overbuilt data centres with inefficient commodity servers, most of the servers in those buildings simply sit idle most of the time. The lost value alone is staggering. For example, if you pay $4,000 for a server and it’s only 15% utilized, you get just $600 worth of value from its usable capacity; a similarly priced server that is 75% utilized delivers $3,000 of value.
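To put rough numbers on that arithmetic, here is a minimal sketch, assuming (as in the example above) that the value delivered scales linearly with utilization; the $4,000 price is simply the figure from the example, not a costing model.

```python
def value_from_utilization(server_cost: float, utilization: float) -> float:
    """Rough value of the capacity actually used, treating delivered value as
    proportional to utilization (the simple arithmetic from the example above)."""
    return server_cost * utilization

# The two servers from the example: same $4,000 price, different utilization.
for util in (0.15, 0.75):
    print(f"{util:.0%} utilized -> ${value_from_utilization(4000, util):,.0f} of value")
# 15% utilized -> $600 of value
# 75% utilized -> $3,000 of value
```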

We can solve this, but it means overcoming one big hurdle: entrenched habits.

In the past, we overbuilt because we had to: servers couldn’t handle the loads, the multiple applications, and the spikes in traffic. We over-provisioned because we couldn’t effectively track and analyze, in real time, what was happening inside our data centres.

But that’s simply no longer the case. Technology has moved ahead of conventional thinking. Sticking to the habit of overbuilding is a waste of money and isn’t sustainable for the future.

The first step we’ve seen companies take is consolidation and virtualization, which makes more efficient use of computers by shifting work among machines.

For example, last year IBM’s Ottawa lab site conducted a virtualization project using modern machines. The team consolidated 400 aged, legacy systems into eight. What once covered the span of a football field now takes up no more space than a refrigerator, and the surplus space has been turned into collaborative meeting areas and additional workspaces. The initiative earned an Ottawa Hydro RetroFit Program award and saved more than $80,000 and 802,812 kWh over a 12-month period, enough energy to power a small neighbourhood.
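The consolidation step itself can be pictured as a packing problem: measure each workload’s typical demand and place workloads onto as few hosts as a sensible utilization ceiling allows. The sketch below is a hypothetical illustration of that idea using a simple first-fit heuristic; the workload sizes and the 75% ceiling are invented assumptions, not figures from the Ottawa project.

```python
def consolidate(workloads, host_capacity, target_utilization=0.75):
    """Pack workload demands (in arbitrary capacity units) onto as few hosts
    as possible, using a first-fit-decreasing heuristic and filling each host
    only to target_utilization of its capacity to leave headroom for spikes."""
    ceiling = host_capacity * target_utilization
    hosts = []  # running load placed on each consolidated host
    for demand in sorted(workloads, reverse=True):
        for i, load in enumerate(hosts):
            if load + demand <= ceiling:
                hosts[i] += demand
                break
        else:
            hosts.append(demand)  # no existing host has room; start a new one
    return hosts

# Hypothetical example: 400 lightly loaded legacy workloads, each averaging
# about 2 units on a host of capacity 100 (so load units equal percent).
loads = [2.0] * 400
hosts = consolidate(loads, host_capacity=100)
print(f"{len(loads)} workloads consolidated onto {len(hosts)} hosts, "
      f"the fullest at {max(hosts):.0f}% of capacity")
```

In practice the virtualization platform handles this placement (and re-placement) automatically, but the underlying trade-off is the same: fewer, fuller hosts in exchange for deliberate headroom planning.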

Virtualization is certainly a move in the right direction. Yet even the most advanced virtualization, applied to systems built on the x86 servers at the heart of today’s inefficient data centre designs, still leaves utilization under 50%.

We need to rethink how we build and manage our data centres. When it comes to tackling utilization, mainframe and Unix server systems are the hands-down winners. They were designed from the ground up as integrated systems, with infrastructure management built for them and utilization as a design goal rather than an afterthought. Leading mainframe and Unix server systems achieve utilization rates of nearly 100%.

Other big advances include new analytics tools that monitor, understand, and dynamically react to system utilization and other operational events in real time. Instead of running servers at full tilt all the time just in case, we now have the insight to know when usage rises and falls on different servers and to adjust power accordingly. We can move applications from one server to another so that a computer isn’t spending all its energy running a single application. If utilization on a server drops to 20%, for instance, we can bring its power down to about half, which is a big change from a few years ago.
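As a hypothetical illustration of that utilization-driven power adjustment, the sketch below scales a server’s power cap between an assumed idle floor and peak draw; the wattages and thresholds are made up for the example and not taken from any specific IBM tool.

```python
def target_power(utilization: float, idle_watts: float = 200.0,
                 peak_watts: float = 500.0) -> float:
    """Hypothetical power cap for a server, scaled linearly between an idle
    floor and a peak draw in proportion to measured utilization (0.0-1.0)."""
    utilization = max(0.0, min(1.0, utilization))
    return idle_watts + (peak_watts - idle_watts) * utilization

# At 20% utilization this simple rule lands near half of the peak draw,
# echoing the "bring the power down to about half" observation above.
for u in (0.20, 0.60, 1.00):
    print(f"utilization {u:.0%} -> power cap {target_power(u):.0f} W")
# utilization 20% -> power cap 260 W
# utilization 60% -> power cap 380 W
# utilization 100% -> power cap 500 W
```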

That’s just the beginning of the monumental shifts we’re seeing. Cloud computing drives efficiency and is changing the way we live and conduct business; it now underpins everything from how we enjoy music and movies to how we speed transportation and deliver large-scale enterprise applications more efficiently. For large organizations, the opportunity is enormous: cloud provides the platform with the reach, speed and scale required by today’s mobile, social and Big Data trends.

By providing access to power and capabilities that would otherwise be out of reach, the cloud lets enterprises essentially eliminate the compute boundaries of their data centres and tap the new services needed to rapidly compile and process data. Through cloud, computing resources can be allocated at the pace of business. Cloud computing places services and capabilities at the fingertips of business and IT users with unprecedented, yet secure, access. When that happens, the way people work, and the relationship between individuals and their enterprises, is transformed.

Real-time processing of data represents another shift. Rather than shuttling vast amounts of data to different servers and applications for processing, as is common now, different parts of corporate systems will be able to process their own data.

Hindsight is always 20/20. But the power of perspective gives you the knowledge you need to change. And what’s increasingly clear is that the data centre status quo isn’t sustainable. The world will continue to create mountains of data and the demand for power will not go away.

There’s no doubt that utilization metrics should now provide the modern “green” yardstick for today’s energy-efficient data centres. It’s time to end the era of “cheap” energy and commodity computers that drive skyrocketing data centre costs, and move to the new green model: more efficient IT utilization that reduces the global footprint of data centres and helps organizations innovate and grow.

***
By David Drury, IBM Canada general manager, Global Technology Services



