What would happen if your systems went down? In today’s society, where almost every company relies on technology for the day-to-day running of its business, the risk of downtime is not one to take lightly.
Back in July, the New York Stock Exchange (NYSE) experienced a disruptive service outage, with key systems down for nearly four hours, causing chaos on the trading floor. This is something that happens to a lot of organisations, but few are left in such a critical position, where time and money are so intrinsically linked. Originally citing a technical configuration problem, the NYSE later stated that it was actually the planned roll-out of new software to one of its trading units that had caused the problem.
The NYSE crash was widely reported and made all the more newsworthy when, by sheer coincidence, The Wall Street Journal’s homepage went down for part of the same day. Additionally, and again unconnected, United Airlines also suffered a technical problem on the very same day, an issue that caused significant disruption to passengers’ reservations, grounding some flights for up to two hours.
Headline hiccups such as those suffered by the NYSE, The Wall Street Journal and United Airlines may be newsworthy, but they aren’t as rare as the stories may suggest; technology, it would appear, sometimes just doesn’t cooperate when you need it to! The time it took these organisations to correct their particular tech issues, however, is rather harder to understand, and this is where the real story lies. There is a particular fragility in relying on technology, and with such dependence on ICT, the significance of data back-up and Business Continuity planning has never been greater.
Whenever there is a technical glitch, there is a risk that a company will lose money. Whether it is the inability to receive emails for a morning or a virus that attacks customer data, the effect on the efficiency of the business will always be felt.
When it comes to Business Continuity planning, prevention is not always better than cure – it’s actually a carefully aligned blend of the two that needs to be considered. Signing up to solutions boasting a 99.999% Service Level Agreement (SLA) may seem like a top insurance policy, one that will protect your company from a ‘crash,’ though in reality it’s the remaining 0.001% that could be the problem; a downtime “time bomb” just waiting to cause trouble. The combined outages suffered by the NYSE, The Wall Street Journal and United Airlines highlight the fundamental importance of having robust recovery systems in place, in addition to preventative SLAs – the cure being the ability to invoke them in the event of any business-impacting ‘disaster’ situation.
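To put that remaining fraction into perspective, a quick back-of-the-envelope calculation (a minimal sketch, not taken from any particular vendor’s SLA terms) shows how much annual downtime each level of availability still permits:

```python
def annual_downtime_minutes(sla_percent: float) -> float:
    """Minutes of downtime per year permitted by a given SLA percentage."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a non-leap year
    return minutes_per_year * (1 - sla_percent / 100)

# Even "five nines" leaves a window of roughly five minutes a year --
# and an SLA says nothing about how long recovery takes once you exceed it.
for sla in (99.0, 99.9, 99.99, 99.999):
    print(f"{sla}% SLA allows about {annual_downtime_minutes(sla):.1f} minutes of downtime per year")
```

A 99.999% SLA still permits around five minutes of outage a year, while a more common 99.9% agreement allows nearly nine hours – which is why the recovery plan matters as much as the prevention figure on the contract.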
With three high-profile incidents occurring in such a short time frame, serious questions are also raised about the approach that large corporations are taking to data protection, back-up and disaster recovery. If these globally renowned brands can suffer such public outages, how are smaller, less well-equipped organisations coping with the day-to-day problems that technology can cause?
The NYSE crash further highlights the inherent value of using test and development environments to ensure any glitches are spotted before changes go live. This can save the headache and associated risks of in-life testing and change management on a live system – processes which only become more complicated the larger your organisation, or the smaller your IT department.
All companies need technology strategies that incorporate the development, maintenance and continued use of their underpinning infrastructure. This means that every business, large or small, must have a back-up and disaster recovery plan if it wants to ensure operations can continue as usual should a crash occur!