Gambling on your IT future?

We bet our businesses on the technology we use to serve and save our lifeblood data. But not all storage technology is the same, so as the drive to consolidate gathers momentum, Brian Tinham looks at the options

Consolidation of file storage in this data-intensive age is a must. But depending on how you do it, you can make substantial improvements not just to the performance and robustness of your IT infrastructure, important though both are, but also to the cost of your administration and to your ability to support ‘collaborative commerce’ across business, engineering and manufacturing.

This matters. Management in manufacturing SMEs in particular, many of which don’t have the luxury of a strong IT department, tends to think that storage is storage is storage. It absolutely is not.

Birmingham-based, £200 million turnover Fujitsu Telecommunications Europe, for example, which makes broadband network systems and cabling for the major European telcos, bet its engineering design, its manufacturing, indeed its entire business, on network attached storage (NAS) from Auspex rather than conventional servers with direct attached storage (DAS). Ten years of experience have proved its performance and reliability outstanding, but there have been other benefits that made good business, as well as IT, sense.

On the other hand, Westland Helicopters in Yeovil, Somerset, which upgraded its SAP R/3 ERP system to v4.6c in November last year, went for a storage area network (SAN), in the form of Hewlett-Packard’s XP512 array, for its data consolidation. Since system functionality was going to be expanded beyond the initial helicopter-build project management and financials, with SAP business intelligence, document management and ultimately links into PLM and CAD, that was deemed the best way to go.

The trap is that, with the relatively low cost of seriously sizeable high-performance storage, simply throwing general purpose file servers at the problem seems the obvious route. But this approach ignores the underlying complexity of managing and securing the resulting multitude of distributed DAS units. And the notion of using DAS to ‘spread risk around a business’ is, in data terms, a myth: all you achieve is more complexity, additional management and higher costs.

As Fujitsu’s IT director Ian Batten says: “I would strongly advocate that companies should move to the smallest number [of storage units] possible, albeit with some redundancy... In our organisation, if we had DAS around the business they would all have to be up and running all the time. If one went down we’d lose engineering instantly because of the scale of data dependence – and we’d lose manufacturing after that.”

Fair points. And to these he adds: “Data outside the machine room is less safe in a fire, and what happens when someone walks out of the door with one of your projects under his arm?”

Then there’s data back-up. Batten backs up every single byte of data. “It may sound profligate,” he says, “but we know that, whatever happens, we will have that little bit of code that someone wrote that holds that business process together.” He can do so easily, of course, precisely because he has consolidated his storage: everything is centrally stored and managed, and the back-up operations are trivial, fast and absolutely reliable.
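Consolidation is what makes that kind of blanket back-up practical: with every project under one tree, a full pass is one job rather than a script per server. The short Python sketch below illustrates the idea only; the mount points and naming are hypothetical, and a plain tarball stands in for whatever back-up tooling a site like Fujitsu’s actually runs.

import tarfile
from datetime import date
from pathlib import Path

# Hypothetical mount points: one consolidated NAS tree, plus a
# SAN-backed archive area. Neither path is from the article.
CENTRAL_STORE = Path("/mnt/central-store")
ARCHIVE_AREA = Path("/mnt/archive")

def full_backup() -> Path:
    """Archive every byte under the central store in one dated tarball."""
    target = ARCHIVE_AREA / f"store-{date.today():%Y%m%d}.tar.gz"
    with tarfile.open(target, "w:gz") as tar:
        # One root to walk: no per-server scripts, no machine left out.
        tar.add(CENTRAL_STORE, arcname=CENTRAL_STORE.name)
    return target

if __name__ == "__main__":
    print(f"Backup written to {full_backup()}")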
Secure, scaleable, strategic

Meanwhile, Rob Giddings, Westland’s infrastructure and development manager, echoes the centralisation and consolidation sentiments, adding that his firm wanted modern, but also stable, storage technology that would be highly available and scalable. “With 400GB of data from R/3 and an additional 150GB for business warehousing… [SAN] provides us with a strategic way forward in terms of managing our storage in one area rather than having disparate storage facilities for each different project,” he says.

So what are the differences? DAS is, as the name suggests, storage dedicated physically and logically to your machine: throughput rates can be 300–400Mbps, but there are obvious limits on flexibility, and there are redundancy, administration and back-up issues.

SAN, a relatively recent development of DAS, involves multiple servers connected via a network switching device to multiple storage devices, so that each server can, in theory, talk to as much storage as it wants – although you fix the LUNs (logical unit numbers) against data needs. It has good throughput, extendable to 600Mbps through gigabit switches and fibre channel, and it offers good flexibility, large potential storage per user and better functionality. However, although multiple clients, application servers and operating systems can use the same physical storage, they are all logically isolated, so you don’t get direct data sharing. That means multiple copies for multiple users, and hence synchronisation issues. Also, the higher-end performance isn’t available if you want to go for IP (Internet protocol) networking.

NAS, which has been around since the early ’90s through several generations, removes the file system from the application server: you run over the network to central storage as if it were directly attached. It’s optimised to look like a file server, and with one file system serving multiple application servers you can run what you like and share one copy of large-scale data. That makes it ideal for today’s data-intensive CAD/CAM, where the drive is towards collaborative engineering. But it’s not so good for compute-intensive applications involving a lot of transactions, database querying and analysis – as in large-scale ERP – mostly because the classic software-driven Intel motherboard file system architecture acts as a bottleneck, restricting throughput to around 200–300Mbps. As John Taylor, chairman of the Storage Networking Industry Association (SNIA) and storage product manager for Dell, says: “You really need fibre channel fabric direct access between the storage, the application server and the client for that. You don’t want an Ethernet LAN getting in the way.” So then SAN is your man.
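To make the data-sharing distinction concrete, the short Python sketch below contrasts the three access patterns from an application server’s point of view. It is a toy illustration under assumed mount points, not any vendor’s code.

from pathlib import Path

# Hypothetical mount points for the three architectures described above.

# DAS: the disk is local to one machine; no other server can resolve
# this path at all.
das_path = Path("/local-disk/projects/design-a/model.dat")

# SAN: servers share the physical array, but each sees its own
# LUN-backed volume. Server B works on a copy, which must then be
# kept in sync with server A's original.
san_server_a = Path("/mnt/lun0/projects/design-a/model.dat")
san_server_b = Path("/mnt/lun1/projects/design-a-copy/model.dat")

# NAS: one network file system mounted by every application server,
# so every client opens the same single copy of the data.
nas_shared = Path("/mnt/nas/projects/design-a/model.dat")

def read_design(path: Path) -> bytes:
    # Application code is identical in all three cases; what differs is
    # whether two servers can resolve the same path to the same bytes.
    return path.read_bytes()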
There have now been several generations of NAS box designs. At the entry level, most recent units are based on Microsoft’s SAK (Server Appliance Kit), and are proving themselves ideal for workgroup operations requiring easy set-up, low-administration file serving, back-up and the rest. In the mid range there are higher-performance systems with more functionality – expandability, more storage, clustering, back-up to tape and so on. So NAS is becoming quite common: suppliers include Compaq, IBM, EMC, Sun, Hitachi Data Systems, Dell, Fujitsu Siemens, Network Appliance and Storage Technology.

Last year, Bracknell-based start-up BlueArc raised the high-end stakes by introducing the much-heralded convergence of NAS and SAN, neatly removing the limits of both, essentially by delivering the architecture in hardware on FPGAs. The claim is that it’s the best of both worlds, and the firm is seeing big-name success. US chip manufacturer Altera is using it for ECAD and chip design; likewise MRI scanner magnet manufacturer Oxford Magnet Technology; and Lawrence Livermore National Laboratory in the US has around 140TB of storage for the massive 260-node Linux cluster supercomputer on which it runs ECAD and CAD/CAM applications.

So it’s all happening. And if you’re thinking NAS must be expensive, it isn’t. Blair Innes, BlueArc’s UK sales director, says: “Our SiliconServer is 30% cheaper than DAS in total cost of ownership terms, and 60% cheaper than SAN. It’s also 30% cheaper than conventional NAS… 10TB of storage will cost around £350,000–£400,000.” He says it starts to pay for itself as soon as you’re up to three NT servers – and that’s before counting the value of operational improvements.

Returning to Fujitsu Telecommunications for an idea of good practice: Batten hosts both his ERP (Glovia) and his engineering development tools (Cadence Verilog ECAD, PTC Pro/Engineer MCAD, Rational ClearCase CASE and Flomerics thermal modelling) on two Oracle databases, both on NAS boxes, with 2TB of data. He has a further 2TB on SAN for back-up and archive, and another 0.5TB on DAS looking after legacy applications and point requirements, including the email repository.

Of NAS he says: “It’s fast, reliable and easy to back up and manage, and all engineers run with the latest, single ‘version of the truth’.” His only caution: although NAS is now more accepted, it still takes you outside the envelope of ‘standard practice’. So “you have to ensure that NAS is clearly specified and that all relevant IT suppliers can genuinely support you,” he advises. But, that done, it’s absolutely worth it: Fujitsu has had less than an hour’s downtime a year for three years.