Investment and Return on Investment in the Cloud

01 Dec 2011

Increased agility. Faster time-to-market. Increased business value. Phrases like these pepper cloud computing marketing literature. Cloud computing is pitched as a business enabler rather than simply a way to make IT cost less. But that doesn't mean the need for financial justification goes out of the window, even if 'soft' benefits play a bigger role than in more traditional IT projects.

Let's think about the investments and benefits associated with building a private cloud. In some respects, this looks a lot like a traditional in-house IT project. You're buying software, hardware, storage and networking infrastructure and integrating these stacks into your environment, planning and executing any operational changes, training IT staff and figuring out the provisioning and billing policies — all of which collectively results in an IT environment and architecture that is more responsive to the business.

Virtualisation on x86 was initially implemented primarily for server consolidation, that is, to reduce the number of physical servers needed to run a given set of workloads. The first-cut financial analysis for such a project is fairly straightforward: subtract the cost of the virtualisation project from the cost of the hardware and software licences that no longer need to be purchased. For a more nuanced take, costs related to power and cooling, or to staff training on new software, are worth factoring in. However fine-grained the analysis, though, it usually comes down to how much net cost you can take out of 'business as usual' IT infrastructure.
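That first-cut calculation can be sketched in a few lines. The figures below are purely illustrative assumptions, not real pricing, and the function name and parameters are hypothetical:

```python
# Hypothetical first-cut server-consolidation savings estimate.
# All figures are illustrative assumptions, not real pricing.

def consolidation_savings(servers_retired, server_capex, os_licence,
                          virtualisation_cost, power_cooling_per_server=0):
    """Net saving = avoided hardware/licence/power spend
    minus the cost of the virtualisation project itself."""
    avoided = servers_retired * (server_capex + os_licence
                                 + power_cooling_per_server)
    return avoided - virtualisation_cost

# e.g. retire 40 physical servers at 3,000 each plus 700 in licences,
# against a 60,000 virtualisation project:
print(consolidation_savings(40, 3000, 700, 60000))  # 88000
```

Adding the optional power-and-cooling term is one way to move from the first-cut figure towards the more nuanced analysis described above.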

True hybrid cloud delivery models, while commonly leveraging virtualisation, offer a raft of benefits beyond cost savings. They are primarily focused on helping IT operations and the consumers of its services to do things faster, better and cheaper. This doesn't mean that traditional measures of ROI don't apply to cloud computing; of course they do, and the 'cloud' label should never give a free pass to IT bloat or poor utilisation. It does mean that, when considering cloud projects, traditional IT financial analyses should be complemented by other measures better aligned with the incremental value that cloud brings to an organisation.

The numbers associated with these sorts of benefits will often be estimates and will vary considerably from organisation to organisation. Nonetheless, they have a real financial impact.

Metrics that you might use for your analysis include:

Time to deploy a new service (application).

One of the main features of cloud delivery models is that users are given self-service access to computing resources. No longer does the business need to go to market for pricing, select a vendor, consult with IT on hosting the new equipment, place a purchase order, wait for vendor delivery, configure the technology stack and load the software, all of which can take an extended period and delay projects. On-demand provisioning can dramatically decrease the time needed to kick off a new project or to ramp up work on an existing one. At the same time, self-service takes place under a managed, policy-based framework, so the IT department maintains appropriate control over usage patterns. While a soft benefit, this speed and agility can be quantified through a combination of productivity measures, many of which can be derived from the organisation's balanced scorecard.

Standard Operating Environments.

Research from Gartner shows that an average of 80 percent of mission-critical application service downtime is directly caused by people or process failures. The causes of this downtime span a wide range of management domains, but a significant portion can be attributed to change management and configuration management, both of which the centralisation of policy and workflow controls in a cloud computing infrastructure can help address. Gartner also notes that downtime can tarnish a company's image and reputation. While that is hard to quantify, downtime can also cause a company to miss out on orders or require overtime to make up for lost productivity, factors that can be more readily modelled in a financial analysis.
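The 'hard' downtime costs mentioned above, missed orders plus overtime to recover lost productivity, can be modelled very simply. This is a minimal sketch; every input and the function name are illustrative assumptions:

```python
# Hypothetical model of the quantifiable costs of an outage:
# missed orders plus overtime paid to catch up afterwards.
# All inputs are illustrative assumptions.

def downtime_cost(hours_down, orders_per_hour, avg_order_value,
                  staff_affected, overtime_rate, recovery_factor=1.0):
    lost_revenue = hours_down * orders_per_hour * avg_order_value
    overtime = hours_down * recovery_factor * staff_affected * overtime_rate
    return lost_revenue + overtime

# e.g. a 4-hour outage, 25 orders/hour at 80 each, 30 affected staff
# paid 40/hour overtime to catch up:
print(downtime_cost(4, 25, 80, 30, 40))  # 12800
```

A model like this lets you attach a currency figure to the downtime reduction that standardised change and configuration management is expected to deliver.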

Admin to server ratio.

One of the big efficiency differences between a public cloud provider and traditional enterprise IT lies in how many servers (or virtual machines) an administrator can manage. For traditional enterprise IT, a few dozen servers per admin is fairly typical. For a large cloud provider, ratios in the thousands of servers per admin are not unheard of. Much of the difference can be attributed to the high level of standardisation that large cloud providers drive into their operations. While it won't typically be possible for an enterprise to adopt such cookie-cutter practices, a private cloud can nonetheless provide a means to develop and deploy a more standardised catalogue of services to users, reducing the amount of one-off work that admins need to perform to keep images updated and patched. Where an enterprise's hybrid cloud spans both on-premise and public cloud resources, an important goal should be to maintain consistent environments no matter where a workload is physically running. The ultimate objective isn't so much to reduce the number of admins as to enable rapid workload growth without a linear increase in cost.
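The staffing impact of that ratio is easy to illustrate. The ratios below (a few dozen servers per admin for traditional IT, hundreds for a standardised private cloud) are indicative assumptions drawn from the discussion above, not benchmarks:

```python
# Illustrative comparison: admins needed to run a growing server
# estate at different admin-to-server ratios. Figures are assumptions.
import math

def admins_needed(servers, servers_per_admin):
    return math.ceil(servers / servers_per_admin)

for servers in (200, 1000, 5000):
    traditional = admins_needed(servers, 30)   # typical enterprise ratio
    cloud_like = admins_needed(servers, 500)   # standardised private cloud
    print(f"{servers} servers: {traditional} admins vs {cloud_like}")
```

At 5,000 servers the gap is 167 admins versus 10, which is the point of the section: workload growth without a linear growth in operational cost.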

The focus here has been on some of the elements that can be quantified to provide business justification for a hybrid or on-premise cloud computing deployment. Organisations would be well advised not only to build business cases around opex and capex reduction, but also to quantify how agile, flexible delivery of IT services via the cloud increases their ability to deliver on the balanced scorecard.