Is procedure the enemy?
As an ISO 9001:2015 accredited organisation, we pay close attention to our procedures. They ensure things run smoothly and consistently, and that quality is maintained.
However, recent events in the IT world, and attitudes we encounter within IT departments, are making us question whether procedure is, in some cases, being turned into the enemy of good operations.
The recent system failure at British Airways (BA), affecting two separate systems for online check-in and flight departures, resulted in over 100 flight cancellations and more than 200 delayed departures. Unfortunately, BA is gaining a reputation for IT problems after experiencing a major outage over the spring bank holiday weekend in May 2017. On that occasion, a power loss led to hundreds of flights being delayed or cancelled.
Too much procedure?
This latest outage may have been caused by human error, but could too much procedure have caused the issue to persist for so long?
We see this often, albeit with less media interest, where bugs or faults slip through testing. This can happen for a number of reasons: test data that does not accurately match real data, unforeseen patterns of system usage and, yes, sometimes just plain old software gremlins. While we minimise the chance of this happening, it is often impractical to create and follow procedures that rule out all possibility of a problem occurring when a system is released to hundreds, thousands or even millions of users.
If a problem does occur in the wild, the very procedures that failed to prevent the problem occurring in the first place often lead to the problem persisting for much longer than necessary, causing far greater damage.
Of course, we do not know the details of the IT woes at BA, but we have seen situations where a known bug cannot be fixed because of an insistence on putting the fix through the very testing procedure that failed to spot the bug in the first place! You can end up in the frustrating position of not deploying a fix because no one in the deployment team has the knowledge needed to assess whether the fix might actually make things worse.
According to the Boston Consulting Group, a management consulting firm, the number of procedures, vertical layers, interface structures, coordination bodies and decision approvals needed in organisations has increased by anywhere from 50% to 350% over the last 15 years. According to their analysis, complicatedness has increased by 6.7% per year over the past five decades. The findings are based on surveys of more than 100 U.S. and European listed companies.
The effect of IT commoditisation
The issue is often made worse by the ongoing march towards commoditising IT services.
Treating each small facet of IT as a distinct commodity that can be run by a low-skilled team, responsible for just its own part, often means sacrificing skilful speed and flexibility for narrow-viewed procedure.
Given the prevalence of data and system backups within any modern IT estate, with multiple fault-tolerant disks running in clustered servers and off-line backups, it is hard to imagine an IT problem that could not be fixed, or at the very least rolled back, within a much shorter timescale than BA seemingly managed.
Is procedure killing productivity?
Although it is unlikely to be remembered this way, perhaps it wasn't so much the bug itself that damaged people's holidays and the BA brand, but the extended period it took to fix it.
So, whilst good testing procedure will continue to be a crucial part of any IT project, perhaps it's time to question whether you have people with the skills and authority to deal with issues when they arise, rather than hoping your procedures will mean they never occur in the first place.
Talk to us today if procedure is killing your organisation's productivity.