Green IT often involves virtualisation, which in turn demands stronger data-centre management.
As corporate social responsibility programmes gain prevalence in an increasingly environmentally conscious world, various technologies play a part in Green IT strategies. Some researchers estimate that a typical data centre consumes as much power in a single day as thousands of homes do each month, suggesting the data centre is a significant contributor to a company’s carbon footprint.
Leaders in the ‘cloud computing’ space have spent extensive time and financial resources addressing this issue. With tens of thousands of servers in operation, the effects of power consumption and heat generation on the environment have to be considered carefully. Although numerous new technologies are being tested in the hope of developing a greener data centre, many of these could take several years to become widespread. But there is at least one influential technology that can help today.
Virtualisation has long been a topic at the forefront of many IT organisations; however, the current financial downturn has emphasised the need to further reduce costs and maximise existing investments. Virtual environments offer a great opportunity to lower the total cost of IT, but vendors have also touted the environmental benefits of this relatively new technology. Operating fewer physical servers results in less energy consumption, less hardware waste and smaller data centres, all of which contribute to a positive environmental impact.
While virtualisation can vastly reduce the physical requirements of the data centre, many organisations fail to realise that without proper management and monitoring, they can easily find themselves in a position where critical applications perform poorly and affect business productivity. Lower energy consumption and costs are important to IT strategies, but they can’t come at the expense of the availability and performance of IT services.
Managing a virtual sickness
In 2008, NetIQ surveyed 1000 enterprises worldwide on the topic of virtualisation. More than three-quarters were in the process of deploying virtual infrastructures, yet, remarkably, almost 80 per cent of respondents had not considered any formal means of management. Of the organisations that had, only 21 per cent were gauging the performance of applications running on virtual servers. With so little insight into virtualised IT systems, it’s easy to imagine that issues affecting business continuity and productivity go unmonitored and unreported.
Virtualisation changes the way data centres must be managed and operated. Applications are no longer confined to individual servers, and running multiple applications on a single server is prone to performance conflicts. If critical applications experience peak usage at the same time, their performance deteriorates. The ramifications vary, but ultimately failure to meet agreed Service Level Agreements (SLAs) is almost guaranteed.
Virtualisation introduces a host of tools for distributing workloads to meet various objectives, such as handling peak usage, business continuity and disaster recovery. However, the ease with which this can be done has spawned its own vocabulary. Some analysts speak of VMware’s VMotion bringing on ‘VMotion sickness’: the ability to move virtual machines between hosts without approval leads to a fragmented server infrastructure that can be undocumented and complex to manage. That said, this technology can be used very effectively, and a little planning and preparation goes a long way.
Key virtual factors
Aside from the obvious physical differences, virtual and physical servers are very similar. They run the same software, and for the end user the transition to a virtual infrastructure is seamless, provided application performance is well managed. These similarities mean virtual servers still have security and compliance requirements: patch management can’t be overlooked, locking down the servers remains necessary, and network traffic must continue to be analysed.
There’s no doubt that virtualisation is a cost-efficient and green technology, but it’s altered the way IT operations must view their infrastructures and applications. Let’s examine four key factors to consider when migrating and managing any virtualised IT infrastructure:
1. Thorough planning
It might sound obvious, but thorough planning is a crucial part of any IT project and something that is often given less attention than it deserves. If the plan is to migrate applications to a ‘green’ infrastructure using virtualisation, detailed consideration must be given to which systems and applications can be migrated with minimal impact to the business.
Identifying candidates for virtualisation should begin with servers that have simple functionality, such as file and print servers. Candidate selection can quickly extend to various other servers, including backups, domain controllers and web servers. Determining candidacy typically starts with the evaluation of four standard performance indicators: CPU utilisation, memory utilisation, disk I/O and network I/O. Once past the basics, we can focus on the specific core metrics of a particular application, along with its performance as experienced by end users.
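The screening step above can be sketched as a simple script. The server names, utilisation figures and ceilings below are illustrative assumptions, not vendor guidance; a real assessment would draw weeks of data from monitoring tools.

```python
# Sketch: scoring virtualisation candidates from the four standard
# indicators. All data and thresholds here are hypothetical.

# Average utilisation per server: percent for cpu/memory, MB/s for I/O.
servers = {
    "print-01": {"cpu": 5,  "memory": 20, "disk_io": 1.0,  "net_io": 0.5},
    "web-03":   {"cpu": 35, "memory": 60, "disk_io": 4.0,  "net_io": 8.0},
    "db-02":    {"cpu": 80, "memory": 90, "disk_io": 40.0, "net_io": 12.0},
}

# A server is a good first-wave candidate when every indicator sits
# comfortably below these (assumed) ceilings.
CEILINGS = {"cpu": 40, "memory": 70, "disk_io": 10.0, "net_io": 10.0}

def is_candidate(metrics):
    """True when all four indicators are under their ceilings."""
    return all(metrics[k] < CEILINGS[k] for k in CEILINGS)

candidates = [name for name, m in servers.items() if is_candidate(m)]
print(candidates)  # the lightly loaded print and web servers qualify
```

The heavily loaded database server fails the screen, matching the advice to start with simple, low-utilisation workloads.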
The right tools are needed to generate detailed insight into application performance, historical trends and end-user response times. This data provides a clear understanding of how an application performs once it resides on a virtual platform. The market for hypervisors (also called virtual machine monitors) is competitive, with dominant players and niche vendors.
While VMware introduced this technology to the mainstream, healthy competition comes from Microsoft, Oracle, and Citrix. VMware remains the dominant player in an ever-changing landscape where vendors strive to differentiate themselves with new offerings in functionality and price.
The last piece of planning is to truly understand this technology and what it can deliver for an organisation. Simply put: don’t skimp on the training! To get the most out of virtualisation, send staff to certification classes and research the technology through webinars and white papers. We’ll talk more about security later, but it’s also important not to rely on ‘on-the-job’ training when adopting a technology that may become one of the major technical investments of the next several years.
2. Metrics worth monitoring
Virtualisation brings new challenges around how best to monitor hardware and the hypervisor itself. Application performance becomes incredibly important, as it relies on the underlying physical hardware and hypervisor infrastructure. While some products have in-built tools to provide visibility into overall performance, they typically lack basic features such as historical reporting and intelligent analysis of a server’s behaviour (rather than only alerting on a threshold breach).
In-built tools typically monitor only limited pieces of the infrastructure and retain historical data for short periods. This leaves a blind spot over the most important part of the organisation: the end user’s application experience.
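The difference between threshold alerting and intelligent analysis of a server’s behaviour can be illustrated in a few lines. The hourly readings below are invented; the idea is that an alert fires only when a reading drifts well outside the server’s own historical norm, not whenever a fixed ceiling is crossed.

```python
# Sketch: flagging abnormal behaviour against a learned baseline rather
# than a static threshold. The history data is hypothetical.
from statistics import mean, stdev

# A stretch of hourly CPU readings (%) for one server.
history = [22, 25, 24, 23, 26, 24, 25, 23, 24, 26, 25, 24]

def is_abnormal(reading, history, k=3.0):
    """Alert only when a reading drifts more than k standard
    deviations from the server's own historical mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(reading - mu) > k * sigma

print(is_abnormal(24, history))  # within the normal range
print(is_abnormal(70, history))  # far above the baseline
```

Historical data does double duty here: it powers the reporting the article asks for and supplies the baseline the analysis needs.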
To meet SLAs, it’s essential that you have not only a comprehensive view of both physical and virtual environments, but also correlating metrics on all layers of the stack (hypervisor, hardware, virtual environments and applications).
Finally, extensive visibility into end-user response times will give an end-to-end analysis during migration. After all, if you don’t know how well applications are performing, how will you set targets for SLAs and reduce support calls?
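A minimal end-user response-time probe might look like the following. In practice the timed operation would be a real HTTP request or transaction against the migrated application; here a stand-in function and an assumed SLA target are used for illustration.

```python
# Sketch: timing one synthetic user transaction against an assumed SLA.
import time

SLA_TARGET_SECONDS = 2.0  # illustrative response-time target

def probe(operation):
    """Time one transaction and report whether it met the SLA."""
    start = time.perf_counter()
    operation()
    elapsed = time.perf_counter() - start
    return elapsed, elapsed <= SLA_TARGET_SECONDS

def fake_transaction():
    time.sleep(0.05)  # stand-in for a real application request

elapsed, met_sla = probe(fake_transaction)
print(f"response {elapsed:.3f}s, SLA met: {met_sla}")
```

Running such probes before, during and after migration gives the end-to-end baseline needed to set SLA targets and spot regressions.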
3. Virtual security is just as real
Virtual networks should be treated the same as physical networks. Configuration and patch management remain all-important, but remember that virtual machines are often out of view of security architects, which can leave them more vulnerable than physical servers. A single physical host can run 30 or more virtual servers. Imagine all the traffic that can flow between those servers without ever leaving the physical host, bypassing firewalls and crossing from one subnet or VLAN to another, unseen by network analysis tools or traffic-monitoring systems. It is therefore important to isolate network traffic. Don’t mix traffic types, such as application and virtual-management traffic, as doing so can enable ‘man-in-the-middle’ attacks. Ideally, you should physically isolate traffic types on separate network interface cards (NICs), switches and VLANs, or use a hybrid of VLANs and NICs.
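The isolation rule lends itself to an automated check. The sketch below models a virtual switch’s port groups as simple records (the names, VLAN IDs and traffic labels are hypothetical) and flags any VLAN carrying more than one traffic type.

```python
# Sketch: sanity-checking a (hypothetical) virtual-switch layout so that
# management and application traffic never share a VLAN.

port_groups = [
    {"name": "mgmt-vmotion", "vlan": 10, "traffic": "management"},
    {"name": "app-frontend", "vlan": 20, "traffic": "application"},
    {"name": "app-backend",  "vlan": 20, "traffic": "application"},
    {"name": "bad-mixed",    "vlan": 10, "traffic": "application"},
]

def mixed_vlans(port_groups):
    """Return VLAN IDs that carry more than one traffic type."""
    seen = {}
    for pg in port_groups:
        seen.setdefault(pg["vlan"], set()).add(pg["traffic"])
    return sorted(vlan for vlan, kinds in seen.items() if len(kinds) > 1)

print(mixed_vlans(port_groups))  # VLAN 10 mixes management and application
```

A check like this could run after every configuration change, catching the quiet drift that VMotion-style mobility makes so easy.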
All policies and procedures that apply to physical infrastructures remain critical to the virtual world; however, they must be tweaked to suit this new environment. Begin by implementing existing policies and then work closely with security and audit teams to introduce policies specific to the virtual environment. It’s surprising how easy it is to overlook security in the haste of deploying an exciting new technology.
During the planning stage, conduct a security audit on the servers to be virtualised. The configuration of servers is often modified during the migration process, so it’s crucial to have a pre-migration baseline to compare against once you have gone virtual. Repeat the security audit regularly after the migration to ensure security levels remain intact.
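One lightweight way to keep that baseline honest is to fingerprint each server’s configuration before migration and compare it afterwards. A real audit would hash files and settings collected from the server itself; the configuration snapshots below are illustrative.

```python
# Sketch: a pre-migration configuration baseline compared against the
# post-migration state. The snapshot contents are hypothetical.
import hashlib
import json

def fingerprint(config):
    """Stable hash of a configuration snapshot."""
    canonical = json.dumps(config, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

before = {"ssh_root_login": False, "firewall": "enabled",
          "open_ports": [22, 443]}
after  = {"ssh_root_login": False, "firewall": "enabled",
          "open_ports": [22, 443, 8080]}  # a port opened during migration

baseline = fingerprint(before)
drifted = fingerprint(after) != baseline
print("configuration drift detected:", drifted)
```

Any mismatch signals that the migration changed something worth re-auditing, such as the extra open port here.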
4. Virtualisation doesn’t mean less management
Based on the above points, it’s clear that less physical hardware doesn’t equate to less focus on management. In fact, management and reporting become more important to track the success of virtual environments against existing SLAs. This relatively new technology produces its own set of management tasks that can consume a substantial amount of IT labour, particularly if adequate training wasn’t provided in the first place. But don’t let this detract from virtualisation’s role in any Green IT strategy or cost-saving initiative. The cost benefits are proven and in many cases quite impressive!
Management doesn’t have to be a burden on finances and resources; in fact, the technology can be managed very cost-efficiently. Just about every management tool available today does one main thing: it warns you when something is wrong or abnormal. However, once that alert, email or SMS has been sent, somebody still has to make a site visit and fix the issue. In many cases false positives arise, or we simply see too many alerts. Time and effort are wasted on issues that recur quite frequently (e.g. a disk that runs out of space), leaving less time to focus on more important projects.
Until very recently, there were few solutions to get around this issue. That has changed thanks to automation tools. Automating IT processes is one way to lessen the resources required to manage virtual networks. The ability to capture organisational knowledge and automate the response to and resolution of issues can easily justify the migration to a data centre built on green initiatives. How is that possible? Simply put, you avoid wasting time and resources on mundane and inefficient tasks.
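Capturing organisational knowledge as automated responses can be sketched as a runbook lookup. The alert payload, host name and clean-up action below are illustrative; a real runbook for the recurring disk-space alert would be agreed with operations first.

```python
# Sketch: routing a routine alert to a captured runbook so it resolves
# without a site visit. All names and actions here are hypothetical.

def handle_alert(alert, runbooks):
    """Run the automated response for a known alert type, else escalate."""
    action = runbooks.get(alert["type"])
    if action is None:
        return f"no runbook for {alert['type']}; escalating to on-call"
    return action(alert)

def clear_temp_files(alert):
    # Stand-in for the real clean-up (e.g. purging old logs or temp files).
    return f"cleared temp files on {alert['host']}; disk alert resolved"

runbooks = {"disk_full": clear_temp_files}

result = handle_alert({"type": "disk_full", "host": "vmhost-07"}, runbooks)
print(result)
```

Unknown alert types still escalate to a human, so automation handles the mundane cases without hiding genuinely novel failures.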
Maintain green performance
There is a definite case for virtualising data centres in the spirit of Green IT. Estimates vary on the reduction in energy consumption, but in many cases the impact is significant. Yet, in a corporate world where environmental concerns come head-to-head with key business issues such as cost and performance, it’s unacceptable for green initiatives to degrade IT services.
To meet critical SLAs, IT departments must arm themselves with the data they need to proactively manage hybrid virtual/physical data centres. Virtualisation is still a relatively new concept and as it expands to allow more complex applications onto the hypervisor, monitoring and management will only become more critical. Organisations have to realise that data correlation, fast diagnostics and quick response times will contribute directly to the success of virtual migrations and ultimately contribute to the success of their Green IT strategies.