Making your plumbing dance and sing


The message in IT infrastructure circles is that it's no longer just about cost-cutting. You need to build a flexible foundation for change. Brian Tinham explains

Your IT infrastructure may finally be shedding its lowly 'plumbing' image and coming onto the business radar. More companies are recognising that, on the one hand, their IT is a target for significant, useful and relatively straightforward cost-cutting and, on the other, it needs attention anyway because it has become a brake on essential growth and development. Technology, software tools and the concepts that drive system architectures have also moved on in line with business requirements, so they are increasingly able to respond to the drive for re-use of existing investments and the need to support ever more frequent and rapid change.

All of which means some big and rather imminent issues for IT strategy on several levels. Many of us will need to look again at consolidation, rationalisation and standardisation of our IT and network infrastructures. In doing so we'll be forced to reconsider the landscape of legacy systems, including the applications and the hardware, platforms and networks on which they reside. We'll also need an eye to future-proofing and re-use, which means thinking of software, or more accurately business processes, as services. And we ought to include the potential of utility computing and virtualisation in our considerations too.

So there's quite a bit to go at, but first, let's step back. How many of us have a good idea of what our IT is currently costing us? Rorie McGarvie, services development manager at infrastructure services company Computacenter, reckons 90% of the CIOs he speaks to haven't a clue. If that's the case, how are you going to make a business case founded on cost reduction, service improvement, business development, opportunity support and the rest?
Top solutions

That said, the standard approach to all this is to get some consultancy: business assessment around IT, people and processes, alignment to the business strategy, gap analysis and all of that. It's big money, potentially consuming big resource and time, even with modern 'ready reckoner' tools that work at the business and IT levels. Yes, there are almost bound to be surprises, and yes, you should get a clear picture of priorities and savings potential. But it's a bit like getting your prospective house surveyed: why did I bother? It doesn't take a brain the size of a planet to work some of this out.

So if you want to skip a bit and get on with improving IT's value-add, what do most organisations find works? Top of McGarvie's list is out-tasking. He suggests several areas for handing over, such as the help desk, and network, application and website performance and availability monitoring. "It's all the grunt work: companies have a high turnover of IT people at this level; it's expensive; it's a pain in the arse; and it's what Computacenter does best."

Interestingly, he doesn't push the needlessly emotive subject of managed services, although there's no reason to eliminate that from your longer-term thinking, particularly if the judgement is that enough of your system is reaching the end of its life, or you're doing a grand job but someone else could do it more cheaply. McGarvie claims savings of around 25-30% over the life of a project, depending on how far you go, and adds that businesses will also see improvements in the SLAs IT delivers. Some of that is achieved through what he terms "infrastructure transformation", meaning some investment, updating and automation of your hardware and networks, the addition of remote management and so on. But some of it is about consolidation.
Consolidate and centralise

As Simon Gay, Computacenter consultancy practice leader, says: "Consolidating and centralising the IT, and getting away from departmental or site silos, brings hugely increased efficiency." And that's not just brought about by simplifying systems or standardisation, important though both are: it's also the discipline, people, skill sets and processes, and the potential for more efficient and better management, auditability and flexibility, all important in these days of mergers, acquisitions and business compliance.

It's also one of the first steps towards the next level of IT productivity, achieved through utility computing (real computing on demand, not merely financed as such), which in turn requires virtualisation: separating the physical compute platforms from the logical infrastructure, meaning the applications and business processes they serve. Utility computing, which lets you quickly re-purpose and adapt your consolidated data centre to match business requirements, is more achievable now than most realise. There is again an initial cost, not so much in the virtualisation software as in the consultancy services and effort. However, if you have more than, say, a few dozen Wintel servers, or 25 high-end Unix servers, and you see marked periodicity of resource usage resulting in high levels of idle time, this could be for you. Consolidating onto blade or SMP (symmetric multi-processor) server boxes, followed by virtualisation of the new asset, may well be worthwhile.

Incidentally, candidates for virtualisation needn't be only compute-intensive engineering development and simulation systems, or web servers. They can also include your core ERP and billing systems, although there are bound to be diminishing returns on this kind of exercise, and your legacy systems will be another can of worms.
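That "marked periodicity of resource usage" test can be made concrete: sample utilisation over a day and flag servers whose average load is low, even if they show short-lived peaks. The sketch below is illustrative only; the thresholds, server names and hourly figures are hypothetical assumptions, and real numbers would come from your own monitoring tools.

```python
# Sketch: flag servers as consolidation candidates when average CPU
# utilisation is low despite brief daily peaks. Thresholds and sample
# data are illustrative assumptions, not vendor recommendations.

def consolidation_candidates(samples, avg_threshold=20.0, peak_threshold=80.0):
    """samples: {server_name: [hourly CPU utilisation %, ...]}.
    Returns servers that idle most of the day (low average load) and
    could therefore share a consolidated, virtualised host."""
    candidates = []
    for server, readings in samples.items():
        avg = sum(readings) / len(readings)
        peak = max(readings)
        if avg < avg_threshold and peak < peak_threshold:
            candidates.append((server, round(avg, 1), peak))
    return candidates

if __name__ == "__main__":
    # Hypothetical hourly readings for three Wintel servers
    usage = {
        "web01":  [5, 4, 3, 6, 40, 55, 12, 8],      # busy at lunchtime only
        "erp01":  [60, 65, 70, 75, 80, 78, 72, 68], # steadily loaded
        "file01": [2, 2, 3, 2, 10, 12, 4, 3],       # mostly idle
    }
    for server, avg, peak in consolidation_candidates(usage):
        print(f"{server}: avg {avg}% (peak {peak}%) - candidate")
```

Here "web01" and "file01" would be flagged, while the steadily loaded "erp01" would not: exactly the periodicity-and-idle-time pattern described above.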
There is one additional point here, though: bear in mind that everything is coming full circle. In many ways you're recreating the modern equivalent of a mainframe, so you'll need to ensure data centre-class operating procedures, change management and the rest, because everything now depends on it. And finally, the process of consolidating and virtualising should ideally take you some way down the track of developing a services orientated architecture (SOA, see panel), simply by getting away from departmental servers and making ideas like charge-back and IT capacity planning more meaningful. Then you can also start making your IT assets both more flexible and more future-proof.

Rip and replace your legacy systems?

What are you to do with your legacy systems? Rip and replace? It's horses for courses, but there are plenty of situations where the answer is certainly not. Replacement seems at first sight the logical way to reap the benefits of modern systems and cut the cost of supporting the old, but as Mike Gilbert of Micro Focus says: "Your legacy systems are an asset, and companies should seek to unlock the value in the systems, business processes and people around them. Our philosophy is re-use as much as is possible and valuable to the company." And it's not particularly difficult to do: indeed, Micro Focus has built much of its business on bringing legacy Cobol systems into the 21st century.

Another in a similar vein, although working specifically with IBM's z/OS mainframe technology (the successor to MVS and OS/390), is integration systems provider Neon Systems. Its vice president Andy Gutteridge makes the point that applications running on that platform are almost entirely that most difficult of assets: legacy systems containing key business logic, implemented many years ago by consultants long gone and managed by people long gone. The classic poorly documented but revered monoliths we all worry about and no one dares touch.
"What we do is web services wrapper these legacy systems for their key business logic components." And what's encouraging is that even if little is known about the detailed programming, it can still be done. "Sometimes you can get to the business logic, but sometimes we have to go in through the screen interface to expose hardwired logic and data to an API. Our software drives the screen interface so you can drive to a SQL API or a web service," explains Gutteridge. He also insists that re-use itself is neither dangerous nor short-sighted. "These systems are very robust: they've been well tested and proven over many years. And you're not changing anything. You're re-using what works and you know you need because it's core to your business. It means, in effect, you can carry on with the old system for ever. It's automated as well, so you're saving headcount, but it's also now agile. And once you've done the work and achieved the current business objective, you're then safe to look around at new platforms." Here's another thought. Consider the drive for analytics-based operational and board level management dashboards, or for getting product catalogues and sales working on the web. Your ERP system may well be the main engine, but you need data and business logic to trigger from your old configurator, pricing and availability systems on that AS/400. And you can't wait for order completion: interaction is needed at various points. Yes you can wrapper the components you need, but you could also enable triggering of the new system from the old every time it performs a database update – effectively pushing the data out as required. "That's what we call event-driven value," says Gutteridge. "You take time out of the business processes and you get real time accuracy." What the analysts call a zero latency enterprise. 
Mainframe migration

The Mainframe Migration Alliance (MMA) has been set up to support companies that want to move off their mainframe environments because they recognise an opportunity to cut costs to a tenth or a twentieth of those on the older systems. The figures are compelling: the newer Intel architectures and IBM's POWER chips offer much better price/performance, and Linux is contributing to the falling costs. Anecdotal evidence suggests that most SMEs' mainframe systems could migrate well to a one-, two- or four-server Intel system.

As for the breakpoint of usefulness, the old measure is MIPS, and organisations running mainframes of 500 MIPS or less are likely to migrate successfully. They can take CICS Cobol and DB2 applications, for example, and successfully recompile and run them. It will be more problematic if the code is in mainframe assembler or PL/I, or in the CA Ideal, Datacom, Adabas or Natural 4GLs and their associated databases, but there are specialists and specialist tools. So far there are 27 members of the MMA, and the website (www.mainframemigration.org) offers white papers, consultation, news and so on.

Services orientated architecture

Spare a thought for the value of a services orientated architecture: not necessarily SOA in the strictest and still-future sense, but initially your IT moving towards an identifiable set of services to the business, as opposed to a set of applications or projects. We can think of an SOA as a concept: the next logical step after the early object-orientated revolution (which was tied to programming language) and component-based software (which was still a binary model tied to a platform, like Microsoft COM or Java EJB). Under an SOA, in theory you don't have to worry about the platform or the language, because the underlying web services technologies and standards automatically look after the detail of discovery, integration and communication.
As standards firm up, the technologies become more robust, experience grows and business opportunities materialise, more services will come online, and we'll be able, again theoretically, to use best-of-breed services to run aspects, or indeed potentially all, of our businesses. However, that's still the future; for now the value is primarily threefold. First, it's in taking the concept and the technologies already available, and using them to bring the aspects of our existing systems that are too critical, complex or unknown to interfere with (specifically including legacy systems) into the modern, supported, connected world: re-use. Second, it's in building an infrastructure that hangs together and is dynamic, able to adapt to changing business needs much faster than conventional departmental IT system redevelopment allows. Third, it's in future-proofing pretty much all of your IT.

As Mike Gilbert of Micro Focus says: "The SOA is very important because it provides an ideal framework for re-use, since it sits above the technical constraints of platforms, components, objects and languages… SOA applied properly means a company gets re-usable business services." And he's talking explicitly about legacy systems and exposing them as services. "It's about getting the granularity right, and that's about asking, 'What are the services I could use? What from my legacy system could I exploit to support my new business strategy?'"

Roman Stanek, founder of SOA registry software company Systinet and former director of engineering at Sun Microsystems, sees the agility goal as key. He says virtualisation and SOA are "the yin and yang" of achieving it, and he's unequivocal in urging companies to start thinking that way now. "If you don't do it you'll never have an SOA and you will never have a flexible IT infrastructure," he warns.
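Gilbert's point about "getting the granularity right" can be illustrated with a sketch. Rather than publishing every individual legacy transaction as a service, an SOA would typically expose one coarse-grained business service that hides several chatty legacy calls behind it. All the function names, part numbers and figures below are hypothetical, invented for illustration:

```python
# Sketch of service granularity: three fine-grained legacy operations
# (hypothetical stubs) grouped behind one coarse-grained business
# service - the unit an SOA would actually publish.

# Fine-grained legacy operations: too chatty to expose one by one.
def legacy_check_stock(part_no):
    return {"A100": 14}.get(part_no, 0)

def legacy_get_price(part_no):
    return {"A100": 12.50}.get(part_no, 0.0)

def legacy_get_lead_time(part_no):
    return {"A100": 2}.get(part_no, 30)

def quote_service(part_no, qty):
    """Coarse-grained 'give me a quote' service: a single call that a
    web client, dashboard or partner system can consume, hiding the
    three legacy round trips above."""
    in_stock = legacy_check_stock(part_no)
    unit_price = legacy_get_price(part_no)
    return {
        "part": part_no,
        "unit_price": unit_price,
        "total": round(unit_price * qty, 2),
        "available": in_stock >= qty,
        "lead_time_days": 0 if in_stock >= qty else legacy_get_lead_time(part_no),
    }

if __name__ == "__main__":
    print(quote_service("A100", 10))
```

The coarse-grained interface is also what makes re-use and future-proofing credible: the web client never knows, or cares, whether the three legacy calls behind it run on an AS/400, a mainframe or their eventual replacement.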
"You need to understand what you have and how your IT assets are related – that's what gives you the ability to re-use them quickly and solve your new business problems." A final thought: if you're embarking on a project to add, for example, online catalogue facilities or business-to-business services, consider the value of doing so with an SOA approach. Fujitsu Services is one company that's been working in this direction as a precursor to full SOA. The company has developed a templated infrastructure layer, dubbed Triole, that provides a series of robust services or a backbone into which such projects, or even whole ERP or CRM systems can be plugged. Says CTO Marc Silvester: "We've normalised repeated requests and designed an architecture, so there are templates for transaction clearing, Internet front ends, database web serving and so on. It's a series of Lego building blocks that we have invested 45 man months of testing and validation in. We've now got 72 solution templates that answer about 70% of all CIO requests." And the result? "We can now design, build and deploy a project that would have taken an average of 190 days – for example a web front end that needs an application behind it and data serving to react to Internet traffic – and do that in just nine to 15 days." For him, this is a powerful route not only to add-ons fast, but facilitating change. "With Triole, we have a structure that enables you to move to a virtualisation of the infrastructure so that applications can be exposed, added to, changed as the business need changes." Take aways
  • Legacy systems may be key IT assets
  • Mainframes may be best migrated
  • Services thinking is right here, right now
  • Consolidate first for good virtualisation