Grid computing, on-demand computing, adaptive computing: call it what you like, but it's moving much faster out of the scientific, financial and high end engineering design and development arenas than most realise – and straight into commercial computing.
So what? So it's getting cheaper, more scaleable and more applicable to a much wider range of IT infrastructures and applications than used to be the case. In short, it's looking increasingly like a practical way of cutting the cost of your own, mostly under-utilised IT, while simultaneously providing additional resources for those cyclic or sporadic peaks in demand. Yes, we're looking at computing really becoming a service. At least that's what the protagonists will tell you.
Some background and update is in order. This isn't just about providing lower cost compute power through consolidation onto, for example, clusters; neither is it merely about being able to add more Blade servers into your data centre, centralised or not.
From a business perspective, getting grid-enabled is about cutting your IT down to a more affordable size by reconsidering it as a network of loosely coupled machines (that's storage and networking as well as processors), potentially from different vendors running different operating systems in different departments, and maybe on different sites. Some – for now a minority – even go beyond their own enterprises, and pool resources from different organisations, any of which can then be harnessed when the need arises.
Either way, ideally you get away from departmental or corporate computing ownership, and change to IT as an automated, dynamic but managed service that responds adaptively and continuously to your requirements, re-purposing itself as required. You get a single virtual resource. In fact, just thinking about it might push you to consider fully managed services or an ASP (application service provider) environment, with little or no IT located, maintained or paid for in-house.
The keys to grid lie, first, in virtualising what you have (logically separating workload from hitherto dedicated compute resource, and making spare capacity visible) and connecting the systems through middleware and today's much faster and more resilient networks; and second, in buying into software for provisioning (getting the applications and operating system resource to the chosen computing facilities) and orchestration (optimising resource allocation and load balancing) – which together automate the business of managing your and others' infrastructure as a whole.
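To make that concrete, here is a minimal sketch, in Python, of what an orchestrator does at its simplest: pick the least-loaded machine in a mixed pool with enough spare capacity, then provision the job onto it. Everything in it (the Node and Job structures, the schedule function) is illustrative, not any vendor's actual API.

    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        os: str            # pooled nodes may run different operating systems
        cpus: int
        load: float = 0.0  # fraction of capacity currently in use

    @dataclass
    class Job:
        name: str
        cpus_needed: int

    def schedule(job: Job, pool: list) -> Node:
        # Orchestration in miniature: the least-loaded node with enough
        # spare capacity wins, regardless of department, site or OS.
        candidates = [n for n in pool if n.cpus * (1 - n.load) >= job.cpus_needed]
        if not candidates:
            raise RuntimeError("no spare capacity anywhere in the grid")
        best = min(candidates, key=lambda n: n.load)
        best.load += job.cpus_needed / best.cpus  # 'provision' the job there
        return best

    pool = [Node("finance-01", "Solaris", 8, 0.7), Node("eng-04", "Linux", 16, 0.2)]
    print(schedule(Job("month-end-batch", 4), pool).name)  # -> eng-04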
There are technical and appropriateness hurdles to overcome in all of that: applications are traditionally designed to run in stable, often dedicated environments, not dynamic highly networked and parallel environments. But it's fair to say that the IT is there and in use – and that the difficulty is increasingly less in the cost and more in the management re-thinking.
Not least because the cost of the software to make most of this happen is already relatively low, and falling: Sun's N1 Grid provisioning system, for example, which is all single-touch and browser based, costs around $2,500 per fully managed server, with volume discounts.
And the business benefits include not only reduced cost of IT ownership, but unlimited scaleability and the right power for the right job whenever and wherever it's needed, with computing itself all paid for on-demand. Music to the ears of anyone fretting over how they're going to manage that clash of month end, business analytics and growing activity on the web.
But there are several different models of grid. At one extreme are the screen saver projects like SETI@home [the search for extraterrestrial intelligence], where huge amounts of data from the radio telescope in Puerto Rico are distributed as packets to PCs in 500,000 individuals' homes for overnight analysis. It's a form of Internet computing or Public Resource Computing.
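The mechanics of that model are simple enough to sketch: a central server chops the data stream into work units and hands the next one to each idle volunteer machine. The sketch below is purely illustrative Python, not the SETI@home protocol itself.

    from collections import deque

    def make_work_units(data: bytes, size: int) -> deque:
        # Split the telescope's data stream into fixed-size packets.
        return deque(data[i:i + size] for i in range(0, len(data), size))

    class VolunteerPC:
        def __init__(self, name: str):
            self.name = name
        def analyse(self, unit: bytes) -> int:
            return sum(unit) % 251  # stand-in for the real signal analysis

    units = make_work_units(b"radio-telescope-data" * 500, 256)
    pcs = [VolunteerPC(f"pc-{i}") for i in range(3)]
    results = [(pc.name, pc.analyse(units.popleft())) for pc in pcs if units]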
At the other extreme is the system that will provide data analysis from the largest particle accelerator on the planet, the Large Hadron Collider (LHC) being built at the CERN particle physics centre, due to go live in 2007. In its case, the grid will work on a 10Gb Ethernet network backbone provided by Enterasys Networks and providing access to data centres at 120 academic institutions and research laboratories around the world, all of which are installing the software.
They're both serious challenges, but the latter looks the more interesting in terms of proving out business requirements. As Francois Grey, head of IT communications at CERN, says, it has forced resolution of issues around security, policy and sheer scale. "You have to have complex middleware and load balancing software between the sites. We're ramping up, simulating data output from the collider with what we call 'data challenges' – testing how this infrastructure can perform. Data rates are now at about 20% of the level we'll need."
Glimpse of the future
Incidentally, Enterasys is also providing the network at CERN's openlab for DataGrid applications, a laboratory testing top of the range and experimental systems from the likes of Intel, HP, IBM and Oracle – pushing the envelope of what's feasible. And it all works. Says Grey: "Virtualisation and re-purposing are implicit in the middleware. You submit jobs to the grid and it decides where they are run, which is partly about where is the nearest duplicate of the data you need."
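Grey's placement rule is easy to caricature in code: among the sites that already hold a replica of the data a job needs, pick the least busy one. A hedged sketch, with entirely hypothetical site names and data structures:

    def place(dataset: str, sites: dict) -> str:
        # sites maps site name -> {'replicas': set of datasets, 'load': float}
        holders = {s: info for s, info in sites.items()
                   if dataset in info["replicas"]}
        if not holders:
            raise RuntimeError("no replica anywhere; stage the data first")
        return min(holders, key=lambda s: holders[s]["load"])

    sites = {"cern":   {"replicas": {"run-7"}, "load": 0.9},
             "ral":    {"replicas": {"run-7"}, "load": 0.4},
             "nikhef": {"replicas": {"run-3"}, "load": 0.1}}
    print(place("run-7", sites))  # -> ral: holds the data and is less busy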
Between these extremes lies the real commercial world, and Mark Kerr, chief technology officer at IBM for the manufacturing sector, reckons grid computing has become a valid cost-cutting approach. "In the past it's been about consolidation and rationalisation. Now clued-up companies are looking at cleverer things, like physical partitioning – running many different applications on a single server – and logical partitioning." In other words, some form of grid.
What about cost? "For each node we're talking about costs of a few thousand pounds or a few tens of thousands – that's a few hundred pounds per Blade." Provisioning sits on top at around £10–20k for the software plus set-up and services, but he believes there are plenty of good ways forward for organisations with as few as perhaps tens or maybe hundreds of servers.
As for usage and limitations, Kerr says that "with legacy systems you're better off sticking with physical partitioning for now," but adds that modern grid systems and provisioning software will work well with existing hardware for many grid-enabled applications in Windows type environments. "For example, SAP has its adaptive computing concept, which is their own version. We've implemented that as IBM Dynamic Infrastructure for SAP."
So that's well beyond the 'traditional' compute-intensive role as in CAE (computer aided engineering) for running stress analysis or crash simulations. "It's about using provisioning and grid technologies to flex IT infrastructures and run different workloads according to business priorities rather than who owns what resource," says Kerr.
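In code terms, that flexing amounts to reallocating servers by business priority rather than ownership. A toy sketch, assuming a hypothetical reallocate routine and made-up workload names:

    def reallocate(servers: list, demands: dict) -> dict:
        # demands maps workload -> (business priority, servers wanted).
        # Highest priority is served first; no department 'owns' a machine.
        free = list(servers)
        plan = {}
        for workload, (_, wanted) in sorted(demands.items(),
                                            key=lambda kv: -kv[1][0]):
            plan[workload], free = free[:wanted], free[wanted:]
        return plan

    demands = {"web": (1, 1), "month-end": (3, 2), "analytics": (2, 1)}
    print(reallocate(["s1", "s2", "s3", "s4"], demands))
    # month-end, the top priority, gets two servers; the rest share the leftovers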
OK, so why isn't everyone rushing to 'do grid'? Partly because the technologies and standards are still emerging – and not everyone shares the enthusiasm of the protagonists. Michael Hjalsted, director of systems and servers at IT services company Unisys, says: "[There are] few suitable applications and no agreed standards or protocols."
As a result, he reckons that for the foreseeable future the software required to enable grid technology will remain proprietary and thus more expensive than makes business sense for the mass market. "Once the major ISVs start to build, endorse and release some grid-enabled applications, then the underlying hardware might become a commodity," he says.
For him, grid "is an attractive proposition but a distant and niche reality." And he continues: "Its success relies on standards, ubiquitous broadband and other kinds of high speed connectivity, such as Infiniband and Gigabit Ethernet, and the end of platform dependency which IT suppliers have a great interest in preserving."
Fair points. But given the investment the IT community is putting into this, the proof points already emerging and the rate of change in IT, it's worth checking this out.