Holy grail of utility computing is still a long way off


Utility computing, also known as grid computing, fabric computing and a host of other names – all describing different approaches to the ideal of always on, totally available anywhere utility compute resource – is still some way off. Brian Tinham reports

That doesn’t mean IT managers and directors can forget about it. Manufacturers, like businesses in any other sector, need to respond to the fast-growing demands on their networks, and to prepare for up-and-coming alternative approaches and technologies that are likely to be highly influential in solving real problems and reducing cost and complexity.

Utility computing will be one of them: in due course it will deliver technology capable of behaving like a reservoir of virtually unlimited compute resource, dynamically and automatically configuring and re-configuring itself on the fly as application and user demand dictates. Since processors stand effectively idle for most of their lives – yet capacity has to be sized close to maximum to meet application and user response-time requirements – when it comes this will make a massive difference to very costly IT infrastructures.

However, despite much talk in the IT community about utility computing, for now it remains mostly just that: talk. The IT big boys remain largely in R&D mode, considering how best to do it, and with what systems, links and controls to provide appropriate resilience and compute service. When it comes it will be cathartic: it will mean far more efficient use of existing server resource, solving processor and storage difficulties currently being addressed by, for example, expensive and complex clustering and SAN/NAS (storage area networks and network attached storage) respectively.
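The over-provisioning argument can be illustrated with a toy calculation. The figures below are hypothetical, not from the article: they simply contrast a static infrastructure sized for peak demand with a utility model that charges only for the capacity actually consumed.

```python
# Toy illustration of the over-provisioning problem: static capacity must be
# sized close to peak demand, while a utility model would track actual load.
# All demand figures are hypothetical.

hourly_demand = [5, 4, 3, 3, 4, 8, 20, 45, 60, 55, 50, 48,
                 52, 58, 62, 57, 44, 30, 18, 12, 9, 7, 6, 5]  # CPUs needed per hour

# Static provisioning: capacity sized close to the maximum for the whole day.
static_capacity = max(hourly_demand)
static_cpu_hours = static_capacity * len(hourly_demand)

# Utility model: pay only for the CPU-hours actually used.
utility_cpu_hours = sum(hourly_demand)

utilisation = utility_cpu_hours / static_cpu_hours
print(f"Static provisioning: {static_cpu_hours} CPU-hours")
print(f"Utility model:       {utility_cpu_hours} CPU-hours")
print(f"Static utilisation:  {utilisation:.0%}")
```

With this demand curve the statically sized infrastructure runs at well under half utilisation, which is the kind of waste the article argues utility computing would eliminate.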
The approach nearest to reality today is so-called e-sourcing, where your compute power is provided in web mode over a VPN by a hosting company with spare capacity, which looks after the applications, storage and the rest – although there are issues around trust and software licensing arrangements. Other approaches remain in, albeit upbeat, development. And the word on the street is that it probably makes most sense to press on with your own infrastructure solutions for the foreseeable future.

Pat Leach, CIO of drug discovery firm Inpharmatica, a high-intensity computing user that could certainly benefit, says: “I’m very sceptical: it’s a refresh of e-commerce, which was a refresh of EDI: the same old, tired products being refreshed.

“There is novel technology out there enabling computing over large networks, but I don’t want to have to manage that scale of complexity. I want to buy compute power like electricity, as a commodity resource.

“It’s not there yet. There are organisations out there starting to offer proposals, but we’re very specialised, right at the high end. For the majority it’s years away.”