Utility computing is bringing on-demand IT closer. Brian Tinham looks at how it could change the way we run, and pay for, our infrastructure
Grid computing; utility computing; computing on demand: each is similar although not identical, and each is probably not relevant to you, right? Wrong. Although all, strictly, are founded on grid – the notion of dynamically sharing compute resources, and thus providing highly flexible, potentially lower cost, optimised IT infrastructures – there is important evolution here that could make a profound difference to the way we think about, design and pay for our IT. And for some, that's already happening.
Grid was originally conceived to make co-operative, distributed, compute-intensive operations feasible, latterly with a single virtual system image to manage them. Utility computing takes that further and 'virtualises' (the clever bit: makes accessible/manageable online) as much of the IT landscape as makes sense. While pundits sex it up with the plug-in electricity supply metaphor, the real value is in shifting the emphasis onto flexible computing, potentially configurable on the fly. Hence, for example, Hewlett-Packard's Adaptive Enterprise and Sun's N1 Grid.
Meanwhile, on-demand computing can be the same, although the term also covers several 'pay as you go' models now being offered for anything from storage to servers, even entire IT infrastructures – and we're getting into managed services and outsourcing. Indeed, Simon Gay, consultancy leader at Computacenter, says that utility computing "is in danger of meaning whatever you want it to mean." He accuses some hardware vendors of doing little more than "finding ways to ship more disc, processors or whatever they make, with finance deals," but he also agrees that grid/utility now adds another useful dimension to business computing.
Why now?
Why should we be watching these now? Because the concept of computing as a flexible 'service' is far more practical and affordable today than most realise, and it's getting better – with the central virtualisation and management software now effectively available 'out of the box'. It can generate compelling short and longer term ROI (return on investment), and not just for the big boys. And it can bring with it additional business benefits beyond the obvious. Today, grid, utility and on-demand are not just about gathering vast compute power for engineering analysis problems and the like; they're also about cutting costs by further consolidating IT infrastructures as well as greatly improving utilisation, availability and flexibility.
Most recent offerings, for example, bring the key dimension of enabling companies to turn up or down the IT power as business needs dictate. On the one hand, that renders IT much more a variable cost – something less for the capex balance sheet and more for the operational expense ledger. On the other hand, that flexibility can also radically change business thinking in terms, for example, of configuring (and paying for) infrastructure for business continuity and disaster recovery.
Then again, it can take the heat out of scaling up for existing initiatives or launching new services, which hitherto would have had costly and time-consuming IT implications. Conventionally, if anything in the IT infrastructure needs to change, you're into physical re-wiring, servers moving around, reprogramming of the network, the SAN (storage area network) and so on – all of which militates against doing it, so you limit consolidation to cope with peak capacity, and resist change.
With grid, as HP's utility computing programme manager Peter Hindle says: "The UDC [Utility Data Center] fabric is there all the time – so there's no re-wiring, and it's completely flexible… It's right click and configure stuff on the portal, and you can reconfigure large amounts of the system from anywhere virtually in real time. [Also], it's got high availability, no single point of failure, full security policy, fibre channel storage networking, dual power grids..."
The bottom line: it can pay for itself, and, depending on scale, current infrastructure, requirements and the rest, it can do so quickly. Interested? The IT savvy in big industry know this. Platform Computing, among the founders of grid and with history in engineering IT (Pratt & Whitney, AMD, General Motors, Texas Instruments, ARM, Toshiba), released the results of its survey of 100 IT decision makers in large global organisations last month. It suggests that, for them, grid is now the sixth highest IT investment priority for 2004 – ahead of Web services, ERP, telecommunications, CRM (customer relationship management) and outsourcing.
So what can you expect? Ian Baird, chief business architect at Platform, cites "one large car manufacturer" as "spending a few hundred thousand dollars and saving $21 million." He suggests that users aren't willing to be named because of the sheer scale of competitive advantage. Be that as it may – and I haven't found many doing it yet, certainly not SMEs – his insight on where the ROI comes from is worth airing.
"There are two major contributors – top line and bottom line. The bottom line is capex and opex reductions. Most companies' compute resource utilisation is 30% at best, so 70% is sitting idle most of the time. Our technology aggregates all those 70%s and provides a virtual supercomputer… That means you can safely consolidate and save on hardware and software licences. You can also share storage and databases. The cost of your IT drops by 20–40%."
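Baird's arithmetic is easy to check. The sketch below is purely illustrative – the server count and target utilisation are invented, not any vendor's figures – but it shows how pooling the idle 70% of each box lets far fewer servers carry the same workload:

```python
# Back-of-envelope sketch of the utilisation arithmetic Baird describes.
# All figures here are illustrative assumptions, not vendor data.

servers = 100                # physical servers before consolidation
utilisation = 0.30           # typical standalone utilisation (30%)

# Aggregate the idle 70% of every box into one shared virtual pool:
useful_capacity = servers * utilisation            # 30 server-equivalents of real work
target_utilisation = 0.75                          # what a grid scheduler might sustain
servers_needed = useful_capacity / target_utilisation

saving = 1 - servers_needed / servers
print(f"{servers_needed:.0f} servers instead of {servers} -> {saving:.0%} fewer boxes")
```

The saving in box count comes out larger than the 20–40% cost reduction Baird quotes, which is consistent: hardware is only one component of total IT cost, and no scheduler keeps every node busy all the time.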
Similarly, with opex, since you've got a virtual single system image and all the automation and management assistance that goes with that, your infrastructure is cheaper to run. "Agent technology, self-healing systems and so on that automatically find alternative resources if there are problems part way through a job, mean less labour-intensive management and less waiting for jobs," says Baird. "Our experience shows between 10 and 20% savings there."
But the biggest contribution comes on the top line. "Since you can release your own idle compute power when you need it, you can get much faster 'time to result'," he says. He cites pharmaceutical companies getting drugs to market three months ahead of their competition, semiconductor firms taking 45 days out of design cycles and saving "$300–400 million". A rule of thumb: "Top line savings can be three times greater than bottom line ROI."
But there's a different top line contribution, and that comes from what Pierre Sabloniere, lead IT architect for grid computing in Europe at IBM, calls the infrastructure type grid, as opposed to the compute or data/information grids. "Compute grids are the classic vast number crunching systems used in the academic arena and in engineering for crash testing. Information grids are for where the goal is sharing very big data, as in global simulation."
Then there are business system grids. "That's where infrastructure grids come in, with smart servers able to come up on demand," says Sabloniere. Technologies like Blade servers and HP's UDC fabric are what make the difference here: virtualising the infrastructure is a bit like logical partitioning in a big server.
And the contribution now: "Businesses running ERP today will have dedicated servers and standby machines. Suppose you are running 20 applications: you could share the power of the standby servers… You also need less hardware to run your applications and to cater for business continuity and disaster recovery, test and development and so on." As Platform's Baird says: "The dynamic aspects of grid technology allow you to utilise your whole infrastructure more intelligently. So if you lose a node or any part of the grid, the system simply switches the load elsewhere."
Virtualisation
But to achieve that level of sophistication, you have to virtualise your system – which requires grid management software and more modern platforms that can take it. And you also have to grid-enable the applications – easier for some than for others. "Sometimes you can just do it by some scripting," comments Sabloniere. "At Cebit, for example, IBM showed SAP running in a grid configuration." But sometimes there's more to it.
Baird reckons you can grid-enable anything from Microsoft Excel up – getting very complex spreadsheets that would normally take 20 minutes to run, for example, to complete in seconds. But it's worth remembering that, while the serious commercially available grid options are supposedly platform-agnostic, you don't have to make everything dynamic: you wouldn't want to start a business grid project aimed at cost reduction, consolidating and flexibility by wading straight into some key legacy applications, for example.
Philips' semiconductor site in Nijmegen, the Netherlands, was the first to adopt HP's UDC, providing a virtualised data centre with the twin goals of reducing costs and making IT responsive to business requirements in real time. Philips says the system covers "several applications", and the plan is to extend it to "other production services", following the industry's recommendation to start small and build the grid with experience.
Philips' ICT director Theo Smit says: "The HP UDC allows us to quickly and easily adapt to the business fluctuations of the semiconductor industry… We reduce our total cost of ownership by streamlining data centre management and reducing excess IT capacity, while also incorporating the industry's best platform for data centre consolidation."
The system runs from a single console that manages server and storage allocation, and Philips is also using HP's ITSM (IT Service Management) – its OpenView-based, ITIL (IT Infrastructure Library) compliant framework and methodology – to manage its IT as a service. It's also implementing HP's pay-per-use storage environment. And therein lies perhaps a clue to the thinking required to get the most out of utility computing.
Who else can or should go for this? HP is clearly tackling the big boys; although IBM claims to be taking its on-demand computing to everyone, most of the action is in the big league; Sun sees utility as one among many on-demand options and focuses it at the larger corporates; and Platform is strongest in the Fortune 2,000 companies. But Baird makes the point that many of its successes start with departments of those big organisations, and says there are plenty of examples of much smaller engineering and manufacturing businesses successfully running grid technology for a bunch of different reasons.
Do you qualify? Are you up for it? HP's Hindle reckons we should go for it because the rewards are so great. "We're aiming at a 40% cost saving on companies' IT budgets over a period of two to three years," he says. But then he concedes that, "a lot of that comes from the consolidation that companies have to do to use UDC, and if they've done that already there won't be so much directly measurable benefit." And he means conventionally cutting down on server numbers, disparate platforms, licence and maintenance costs, as well as storage consolidation and so on.
The flip side: if you haven't done all that, you should, although it's no mean feat. Analysis of the IT infrastructure across departments and sites; assessment of the business case; establishing which platform and application mix to standardise on: it all consumes resource. And there are issues around people and processes, the whole business of moving from departmental to more centrally-managed systems as services.
Nevertheless, there are grid's extra flexibility and availability arguments. Without utility computing, you need the 'just in case' model – so you'll have systems to support all the applications you need, fail-over systems, a test and development system, disaster recovery and so on. But with grid, since you can reconfigure quickly, the risk factors around consolidation go right down. You could, for example, choose to reconfigure the test and development system for disaster recovery if the need arose, rather than having systems for that alone.
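Baird's "lose a node, switch the load" behaviour can be sketched as a toy scheduler. The node names and the least-loaded placement policy below are illustrative assumptions, not any vendor's actual algorithm:

```python
# Toy sketch of grid-style failover: when a node dies, its jobs are
# simply rescheduled onto whatever capacity remains in the pool.

def reschedule(assignments, failed_node):
    """assignments: {node: [jobs]}. Drop the failed node from the pool
    and move its jobs to the least-loaded surviving nodes."""
    orphans = assignments.pop(failed_node, [])
    for job in orphans:
        target = min(assignments, key=lambda n: len(assignments[n]))
        assignments[target].append(job)
    return assignments

grid = {"erp1": ["payroll"], "erp2": ["mrp", "crm"], "standby": []}
reschedule(grid, "erp1")
print(grid)   # payroll lands on the idle standby node
```

The same mechanism is what makes the 'just in case' spares less necessary: the standby box is an ordinary member of the pool until the moment it is needed.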
The limits? Sabloniere is good value. "At this time it's a very valid option, but it depends on the infrastructure you have now, the requirements and the tactical [business/financial] offering. Companies have to be very pragmatic: if the infrastructure they have is already consolidated and finely tuned, then leave it alone. One or two years from now, with the adoption of Web services standards, virtualising the infrastructure will be easier than it is now."
Baird suggests: "If a user has got IT spend maybe in the millions, then an enterprise grid inside the firewall is a viable option. But at the next level down, if you're considering buying an additional box, server whatever, don't do it; think about grid software instead, and cluster your existing resources. You're not going to get the magnitude of savings [of the bigger companies] but they're still significant, and you get the flexibility."
Opportunities for SMEs
Computacenter's Gay: "The opportunity exists for SMEs. We're starting to work with some now… Virtualising technology changes the IT infrastructure model: that's what enables much of the move to making computing provision and costs adaptable. Then services add further economies of scale."
Best advice: consolidate hardware and applications first, then revisit the infrastructure in light of what can be achieved with virtualisation to strip out some more. And absolutely include a review of your business processes and IT in light of that. Then look to outsource whatever makes sense, according to your comfort zone, future-proofing and core competence. The age of computing as an all-round service is closer than ever.
Hindle: "The lower limit depends on the business case, but probably 150–200 servers – although that will come down." Baird: "It could cost as little as $25–50,000 to implement a grid; it could even be lower… You can build a grid with two servers and some workload management software, for a few hundred dollars, running across Linux boxes." Sabloniere: "There is an entry point, but 40 servers … Why not?"