The resilience and adaptability of your IT infrastructure, so often dismissed as simply ‘the plumbing’ or ‘the network’, are central to your ability to function. Brian Tinham examines key issues for tomorrow
There are challenging times and even more challenging times in manufacturing IT! Certainly from an IT manager/director’s perspective that’s one way of expressing how it feels when you’re fighting fires, typically with inadequate resources and funding and in a blame culture. But similar could be said for the overall business view, with managers all too often wrestling with a perception of substantial money being spent on ‘IT infrastructure’ for questionable return.
This is a tough one. What’s beyond doubt is manufacturers’ total reliance now, for good or ill, on that infrastructure not only for ongoing, secure routine business, nor just for efficiency, but also for agility – being responsive to opportunities and to inevitable change. And it’s straining at the seams as businesses demand more of their corporate networks in terms of internal and external data crunching, querying and visibility and, increasingly, dynamic collaborative web working with suppliers, partners and customers.
All the talk is of building ‘real time enterprises’ – founded on communication, easy information access from anywhere, and largely automated systems with event management and workflow embedded – ultimately relying on integrated, always-on, high performance IT networks. So while it isn’t difficult to comprehend the widespread business edict that restricts much of IT spending to making more of the existing IT – founded on the fact of tight times and the cynicism that stems from lacklustre and painful ERP implementations – there’s a disconnect here.
Proliferation of data, currently growing at 200-300% annually, puts immense pressure on networks. So the methodologies, tools, systems, technologies and resources needed to continuously build, develop and maintain a robust, appropriate IT infrastructure – one that is flexible, secure, efficient and able to respond quickly to changes in capacity and demand – are irrevocably business critical.
And they’re not getting cheaper. Although, for example, storage technologies like NAS (network attached storage) and SAN (storage area networks) are improving the price/performance of storage, cost reductions are far outweighed by data volume growth, to the tune of several tens of per cent. Technology advances alone aren’t capable of reining in budgets.
Keys to change
Unfortunately, most of what IT action there is centres on implementing applications, upgrades and web-based systems, or integrating disparate and legacy systems. When it comes to the ‘plumbing’ itself, eyes glaze over. And in this respect in particular, IT departments are unwittingly being held back by management: recent research commissioned by Sun Microsystems confirmed that only 55% of IT directors believe their boards even recognise IT’s ability to confer competitive advantage. It also revealed that networks and their management are not only far from ideal, being plagued by integration, interoperability, scalability and complexity problems, but that manpower and resource issues are a serious problem, limiting IT and leading to far too much ‘fire fighting’.
The fact is, current IT infrastructures are impeding IT’s potential: indeed, 51% of IT directors are buying IT just to keep existing technology going, and two thirds say that more than a quarter of their IT manpower is permanently deployed in non-productive ‘fix and maintain’ roles. How can they add value if they are struggling just to stay on top? As Mark Lewis, infrastructure manager at Sun says: “The board seems to view IT [infrastructure] as little more than a cost centre. Until we alter this misconception, what chance is there for IT departments to demonstrate real business value?” Certainly, if nothing changes, scepticism of IT’s worth and abilities will turn into a self-fulfilling prophecy.
Want to change all that? Good. David Roberts, CEO of The Infrastructure Forum (TIF), reels off a list of key issues: “Mobile computing, benchmarking, retention of IT staff, the open source revolution, security, desktop strategies, business continuity, Microsoft Active Directory, data protection, licensing, VOIP (voice over IP).” To these we can add network monitoring/management and ownership, connectivity/interoperability, availability, fault tolerance, disaster recovery, IT policies/metrics, bandwidth/capacity management, operating system and hardware proliferation, storage, clustering, grid computing, network technology alternatives, wireless, job scheduling, Web services, peer-to-peer comms, standards, skill sets and service providers.
Aligning your spending
It’s a big subject. Sun finds the most commonly cited problems are inability to integrate properly (46%, indicating mostly resource and complexity issues), followed by inability to scale systems quickly and cheaply (43%, suggesting older technologies and again complexity). Multiple operating systems are third at 36% – again rooted in complexity.
The key to all this is aligning the spend on network management tools, security and the rest – the architecture support – with the application investment, whatever that is, needed for the business, the engineering development and the manufacturing. As Martin Atherton, lead analyst with Enterprise Applications Group, says, the starting point is being aware of the implications for your business critical infrastructure and ensuring performance in line with its importance.
Simple enough, but the underlying issue remains one of budgets and perceived value. And to overcome that, IT departments need to step up a gear and assign values/costs to projects and services. As Mike Lucas, technology manager for software services firm Compuware, puts it: “IT managers need to articulate, through metrics, what business critical processes, what business efficiencies their infrastructure is enabling. Availability of that service has an estimable value.”
So what measurements? Ruth Kirby, director with IT infrastructure services organisation Systar, believes detailed analysis is probably unnecessary. “You can’t afford to be very technical with the board. To get their attention you need availability figures and events like an email system you can’t open that means business grinds to a halt.” While the detail can be enlightening, she suggests simple dependence is the key to investment justification. You’re going to have to make a judgement.
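By way of illustration of the kind of sum Kirby and Lucas have in mind, the sketch below turns an availability figure into annual downtime hours and a rough cost to the business. The availability percentages and the £8,000-an-hour figure are assumptions for illustration, not numbers from any of the companies quoted.

```python
# A minimal sketch of the availability arithmetic described above.
# The figures used (availability levels, £8,000 per hour of lost
# business) are purely illustrative assumptions.

HOURS_PER_YEAR = 24 * 365

def downtime_cost(availability_pct: float, cost_per_hour: float) -> tuple[float, float]:
    """Return (annual downtime hours, estimated annual cost of that downtime)."""
    downtime_hours = HOURS_PER_YEAR * (1 - availability_pct / 100.0)
    return downtime_hours, downtime_hours * cost_per_hour

if __name__ == "__main__":
    for availability in (99.0, 99.5, 99.9):
        hours, cost = downtime_cost(availability, cost_per_hour=8_000)
        print(f"{availability}% available -> {hours:.1f} h down/year, ~£{cost:,.0f} at risk")
```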
Pat Leach, CIO for drug discovery firm Inpharmatica, a very high intensity computing user, observes that, either way, linking IT investment directly to ROI is easier said than done. “You don’t see the value of the investments: users just flick the switch and expect the lights to go on… How do you show you add value and reduce costs?” And he alludes to several strategies here, most common of which is IT managers beefing up what they’ve got by hiding essential developments “under the covers”.
Lucas suggests an emerging alternative from the financial and services sectors that involves asking IT directors/managers to define what best benefit they could deliver from a given budget in a specified time frame. “It’s very tactical, very business-led, but it can bring a very different, managed and considered focus to the network that can also reverse some of the complexity trends.”
There is one other point: in the current economic climate, any case for investment needs a time frame of less than 18 months – some say three! Kirby gives the example of network convergence (VOIP), which can show payback within 18 months on a greenfield site, or one where the PABX has been fully written off, through a smaller IT department, fewer skill sets and less physical plumbing – but which still frequently fails financial directors’ tests. Businesses need to consider IT infrastructure investments on a different timeframe from other investments – and in terms of write-down as well.
Leach: “You can’t think in terms of five to 10 years; you need to think much shorter term and tactical – the business will change and so will the technology long before then.”
Monitoring at the centre
Moving on to the technology, network monitoring and management is the big one. Infrastructures have the complexity that history confers on them, and there’s no getting away from that, or from the fact that they’re getting more complex, not less. So the fundamental issue is always the same – keeping the servers, storage, applications, databases, links, screens, devices, hubs, switches, routers and the rest up, running and responsive, with maximum availability 24x7. And this is precisely where many fail to invest. As Leach says: “If you don’t know what’s going on, how can you manage it, and for that matter how can you establish ROI?”
Enterprise-wide management and monitoring tools and systems, resident and web-based, are critically important to the 24x7 endeavour. As Ady Dawkins, IT leader at voice recorders manufacturer Thales Contact Solutions, says: “Without them, the first you hear that anything is down is when the users start to call.” Not good! Tony Martin, managing director of Computer Associates in the UK, says: “The challenge is alerting people and systems automatically to degradation on anything likely to lead to problems long before they become real problems … with alerts, impact analysis and so on. And to date very few [users] have.”
CA’s solution is monitoring software (intelligent agents) on all devices and systems, alerting IT to things like disks getting full, the network becoming overloaded, server performance degradation and so on. CA, HP, Tivoli, Micromuse, Mutiny and others (go to www.mcsolutions.co.uk) are good places to start. Remember also software and hardware asset management – manual or automated, but assisted by advanced diagnostic tools – including software delivery, upgrades and refreshes. Having this at enterprise class is key to maintaining a slick network.
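To make the idea concrete, here is a minimal, generic sketch of the kind of threshold check such an agent performs – watching disk usage and server load and raising an alert before a problem becomes a real problem. It is not CA’s (or anyone else’s) actual agent API, and the thresholds are illustrative assumptions.

```python
# Generic monitoring-agent sketch (illustrative only, not a vendor API):
# check disk usage and load average against assumed thresholds and log
# a warning when either is breached.

import logging
import os
import shutil

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

DISK_WARN_PCT = 85   # assumed threshold, not a vendor default
LOAD_WARN = 4.0      # assumed 1-minute load-average threshold

def check_disk(path: str = "/") -> None:
    usage = shutil.disk_usage(path)
    used_pct = 100 * usage.used / usage.total
    if used_pct >= DISK_WARN_PCT:
        logging.warning("Disk %s is %.0f%% full", path, used_pct)
    else:
        logging.info("Disk %s OK at %.0f%% used", path, used_pct)

def check_load() -> None:
    one_min, _, _ = os.getloadavg()  # Unix only
    if one_min >= LOAD_WARN:
        logging.warning("1-minute load average %.2f exceeds %.2f", one_min, LOAD_WARN)
    else:
        logging.info("Load average %.2f OK", one_min)

if __name__ == "__main__":
    check_disk("/")
    check_load()
```

In practice an agent would feed these events into a central console for impact analysis rather than a local log, but the principle – alert on degradation long before failure – is the same.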
These are the foundations upon which a good, resilient infrastructure should be built. Martin: “You need to design quality in: manufacturers understand that.” Getting this right releases IT’s potential to deliver. And in terms of cost justification, Martin reckons a good rule of thumb is just “four months” – and you can see how, if, for example, an application upgrade can be entirely automated with automatic network discovery agents instead of costing, say, £50 a pop per desktop.
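As a rough worked example of that payback sum – with the desktop count, upgrade frequency and tooling cost all assumed for illustration, and only the £50-a-desktop figure taken from above:

```python
# Illustrative payback arithmetic. The estate size, upgrade frequency
# and tooling cost are assumptions; £50 per desktop per manual upgrade
# is the figure quoted in the article.

desktops = 500                 # assumed estate size
manual_cost_per_desktop = 50   # £ per manual upgrade visit (from the article)
upgrades_per_year = 3          # assumed
tooling_cost = 40_000          # assumed annual cost of delivery/discovery tools

manual_cost = desktops * manual_cost_per_desktop * upgrades_per_year
saving = manual_cost - tooling_cost
payback_months = 12 * tooling_cost / manual_cost

print(f"Manual upgrades: £{manual_cost:,}/year; automated tooling: £{tooling_cost:,}")
print(f"Net saving £{saving:,}/year, payback in about {payback_months:.1f} months")
```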
It’s an aside, but manufacturers would also understand “JIT for networks,” as Compuware’s Lucas puts it. He suggests adding ‘usage tracking’ via the network sniffers primarily used for troubleshooting – monitoring which users, which applications, when, on what hardware, on which devices and so on. Finding out, for example, that manufacturing planning is using certain applications, mapping to this server, these devices and the rest, that were never specified for them is useful. Follow-up could show where additional investment is required – or retraining. “Using the approach you can see when you need to acquire hardware and bandwidth, rather than the old way of working it out with a whole load of assumptions and doubling it.” It means you can defer spending, and so reduce risk, until you really need it.
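A minimal sketch of that kind of usage tracking might simply aggregate sniffer or flow records by user, application and server. The CSV layout and field names below are assumptions for illustration, not any particular sniffer’s output format.

```python
# Usage-tracking sketch: total traffic by (user, application, server)
# from an assumed CSV export of flow/sniffer records with columns
# user, application, server, bytes.

import csv
from collections import defaultdict

def summarise(flow_log: str) -> dict:
    """Return total bytes keyed by (user, application, server)."""
    totals = defaultdict(int)
    with open(flow_log, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["user"], row["application"], row["server"])
            totals[key] += int(row["bytes"])
    return totals

if __name__ == "__main__":
    report = sorted(summarise("flows.csv").items(), key=lambda kv: kv[1], reverse=True)
    for (user, app, server), total in report:
        print(f"{user:15} {app:20} -> {server:15} {total / 1e6:8.1f} MB")
```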
Consolidation to a point
There are, of course, limits – both to this and to the clear desire to consolidate and simplify networks. Sun finds complexity rampant. Half of IT directors say integration projects are delayed or fail due to lack of interoperability, which isn’t surprising when 28% say they have more than 20 key IT suppliers, and 16% have 60! Not only does that hamper integration, it consumes training budget and staff/skills costs. But while Sun is keen to offer ‘solutions’ that reduce complexity essentially by integrating systems before shipping rather than on site, for most of us consolidation – however much sense it makes from the IT perspective – is often unworkable when it comes to different departments’ project and functionality requirements.
Phil Dawson, infrastructure services consultant with analyst Meta Group, notes, “A lot depends on ownership.” Production control systems, capacity planning and the rest live on the edge of the IT domain. Legacy systems, AS/400 (iSeries) and so on are treated as ‘black boxes’, almost multi-function appliances with embedded functionality and proprietary code. IT might not even know they’re there, and if consolidation through J2EE Java components, for example, is a target, it’s going to be missed.
Additionally, much depends on physical site locations, numbers and types of users, applications, databases and tolerable latency and controls issues – all of which need to be assessed. As Dawson says, on storage, for example, “you might want to do a SAN (storage area network) to consolidate what you’ve got, but if you haven’t got the networking capacity or the platform for that, it won’t work – or at least not throughout.”
That may not matter, but you need to know the implications. “It may mean fencing off that part and understanding the requirements and cost implications,” he notes. Similarly with clustering, there will be network and other consequences: you’ve got to keep enough bandwidth to ensure that latency doesn’t kill the application infrastructure and user response.
Also, given that infrastructure is driven by the application – the standards it supports – there is an inevitability about proliferation of suppliers and complexity. The alternative is the prescriptive one of one size fits all: no good. However, everyone accepts the danger in simply continuing to build infrastructure piecemeal. As Kirby says: “You need to step back and recognise the processes you’re trying to support and invest accordingly with the short and long term vision in mind.” Pragmatism, not dogma, should be your guide.
And if you think the open source revolution (Linux, etc) or Web services and the connectivity and interoperability standards they’re built on are guaranteed to ease the path to consolidation, think again. Standards could make everything even messier as the ‘best of breed’ argument rears its head again. As Atherton says, “[the alternative] ‘lock-in’ may not be desirable from a business point of view, but it is easiest to manage.” Although in fact, paradoxically, as the big guns jump more wholeheartedly on the open source bandwagon (Sun and Oracle are the latest), they’ll get more of the business anyway. Plus ça change.
Security and resilience
Space dictates that we cannot close without a few words on security and on in-house IT versus outsourcing. Security remains the big issue most frequently mentioned by the IT people I speak to. As CA’s Tony Martin says: “You’re only ever as strong as your weakest link… Once a virus gets in, wherever it gets in, it will propagate. There’s no excuse for not getting that right these days.”
But people do. There are lots of numbers floating around on network vulnerability, and the most worrying for many will be the well-known classic of 80% of organisations in the US having been attacked, most without damage – merely proof of the event left behind. The fact is, as connectivity to suppliers and channels via extranets and the rest makes your network porous, so virus protection, intrusion detection and so forth become more important than many seem to realise. This is not hype. No one will speak openly, but privately many admit to being on the receiving end of serious disruption at the hands of hackers. One I spoke to said the company’s systems were down for five days – two days simply to rebuild the servers.
But if you do get this right, with proper network monitoring, you have a resilient and manageable foundation and can start exploiting it. This is what Martin calls “the real payback time.” Consider, for example, disaster recovery and business continuity – typically involving massive investment in mirrored back-up storage. Do you need to spend that scale of money? If you’ve got the network secure, under control and largely automated, you don’t – because it’s no longer about straightforward wholesale mirroring at the gross level.
Smarter storage management
Intelligent monitoring systems, like CA’s BrightStor, can determine what should be in cache, what on disk, what on tape back-up, what paused and so on. Martin’s estimate: “I reckon many organisations could reduce the actual requirement to 30% of the first estimate.” Now that is payback time. And when you broaden the point to the coming mobile revolution, with its dependence on PDAs (personal digital assistants), you’ll need similar controls so that when the inevitable theft or loss happens, the system will automatically wipe sensitive data – and reload it to a new device.
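As a purely illustrative sketch – not BrightStor’s actual behaviour or API – the tiering decision such tools automate can be thought of as classifying data by how recently it was touched; the age thresholds below are assumptions.

```python
# Illustrative storage-tiering sketch: suggest fast storage, nearline
# disk or tape according to how recently a file was last accessed.
# The age thresholds are assumptions, not any product's defaults.

import time
from pathlib import Path

CACHE_MAX_AGE_DAYS = 7    # assumed: recently used data stays on fast storage
DISK_MAX_AGE_DAYS = 90    # assumed: older data moves to nearline disk

def tier_for(path: Path) -> str:
    """Return the suggested tier ('cache', 'disk' or 'tape') for a file."""
    age_days = (time.time() - path.stat().st_atime) / 86_400
    if age_days <= CACHE_MAX_AGE_DAYS:
        return "cache"
    if age_days <= DISK_MAX_AGE_DAYS:
        return "disk"
    return "tape"

if __name__ == "__main__":
    for p in Path(".").rglob("*"):
        if p.is_file():
            print(f"{tier_for(p):5}  {p}")
```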
Intelligent systems, smart management – which leads us to outsourcing. Very few organisations can fund and manage their own maintenance in terms of skills, spares and logistics – the economies of scale don’t work, particularly with the technology range and complexity. As Kirby says: “Top end Cisco engineers don’t come cheap and are probably only needed one day a year by most companies, so pooling that resource is the only intelligent way forward.”
And there are other benefits – no longer having to attract and keep skilled staff, smoothing investment profiles and IT budgets, and gaining the big stick of SLAs, risk/reward and the rest. Firms are outsourcing at several levels – everything from break/fix contracts to the whole nine yards. Systar recently won the outsourcing deal for GKN Westland’s entire IT department, leaving only the IT director. It can work very well.
But there can be drawbacks. If your IT department is packed with good guys, you risk losing them to your outsourcer! And there’s always the trust, culture and history question. It has to do with perceived criticality, perhaps perversely, but also with difference and security – what’s special about my company. Compuware’s Lucas believes it’s worth considering the halfway house of ‘partnership outsourcing’, where the service provider is specifically incentivised not only to meet SLAs, but to bring costs down and availability, robustness and service levels up.
Now there’s a good and useful thought.