Published in Computerworld UK
How to create credible cost/performance metrics for successful benchmarking
Having spent almost £500m in the past three years on internet services such as Directgov and Business.gov, the government finds itself unable to determine whether they have delivered sufficient benefits to justify the investment.
What public sector CIOs need is a credible way of generating and analysing the cost/performance metrics that will prove to taxpayers this is money well spent.
Being able to justify a £500m spend would be critical in any climate, but with government facing unprecedented demands for slashing costs, it has become imperative to lay down processes for measuring the return on investment of all IT services – from back office systems to critical applications – including public service websites.
CIOs should know at any given point how much a particular service or infrastructure component is costing, the benefits it currently delivers and the further efficiencies that may be gained through innovation or process re-engineering.
When it comes to new IT investment, procurement managers should have strategic hardware and software replacement/upgrade regimes in place that ensure they can leverage economies of scale, and should have comparative market pricing data at their fingertips that enables them to negotiate the best price with suppliers.
As far as outsourcing is concerned, service level agreements (SLAs) with IT service providers need to be regularly reviewed and renegotiated based on specific measurements of the benefits that are being delivered. The measuring of value and benefits becomes an imperative when the services are delivered by a third party.
Measuring value for investment
Most of those responsible for government IT, if asked, will agree that to be accountable they must be able to measure accurately both the short- and long-term ROI of any IT investment, whether traditional, virtual or cloud-based. Where to begin? Beyond logging hit rates and comparing pre- and post-online-service productivity results, pinning down the cost/performance benefits of a public service website like Directgov may not be as straightforward as measuring desktop services or data centre hosting. However, there are some universal tools that can be applied to a value-for-investment review. The following are some of the successful approaches we have taken with regional authorities, the NHS and other public sector clients.
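The arithmetic behind an ROI review can be kept very simple. The following is a minimal sketch, not a prescribed methodology; the service figures are invented for illustration:

```python
# Minimal ROI sketch for an IT service review.
# All figures below are hypothetical examples, not real benchmarks.

def roi(total_benefit: float, total_cost: float) -> float:
    """Return on investment as a fraction: (benefit - cost) / cost."""
    return (total_benefit - total_cost) / total_cost

# Hypothetical: an online service costing £2.0m a year that saves
# £1.4m in call-centre handling and £0.9m in postal processing.
annual_cost = 2_000_000
annual_benefit = 1_400_000 + 900_000

print(f"ROI: {roi(annual_benefit, annual_cost):.1%}")  # prints "ROI: 15.0%"
```

The point of writing it down this explicitly is that every benefit line item must be named and defended, which is exactly the discipline the review process below demands.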
The review process
First, create a comprehensive overview of the cost and effort involved in current services and future projects – the more specific the better. This needs co-operation from all stakeholders and should also include the complexity and qualitative factors that influence cost. For example, does a service require full mirroring of data and 24/7 support? While this may enhance quality, it significantly adds to the cost – a particular stakeholder may find it desirable, but is it necessary?
This detailed documentation should include such elements as cost modelling, work breakdown structures, budget data, selected role profiles and task summaries. Spelling out all of these factors in a specific way, and then adhering to them, is key to achieving long-term cost efficiencies. Also, a service may be found to be efficient in terms of KPI statistics, but is it delivering the right things based on stakeholder requirements and/or taxpayer needs? Only a service that does both truly delivers value.
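One way to make those qualitative cost drivers concrete in the overview is to record them as explicit uplifts on a base cost, so stakeholders can see what each quality option costs. A hedged sketch follows; the service names and uplift percentages are invented for illustration, not market figures:

```python
# Hypothetical cost overview: base annual cost plus uplifts for
# quality options such as 24/7 support and full data mirroring.
# All names and multipliers are illustrative assumptions.

SUPPORT_24_7_UPLIFT = 0.40   # assumed +40% for round-the-clock support
MIRRORING_UPLIFT = 0.25      # assumed +25% for full data mirroring

def total_cost(base: float, around_the_clock: bool, mirrored: bool) -> float:
    """Annual cost of a service once its quality options are priced in."""
    cost = base
    if around_the_clock:
        cost += base * SUPPORT_24_7_UPLIFT
    if mirrored:
        cost += base * MIRRORING_UPLIFT
    return cost

services = [
    # (name, base annual cost in £, 24/7 support?, full mirroring?)
    ("citizen portal", 800_000, True, True),
    ("internal intranet", 150_000, False, False),
]

for name, base, s247, mirrored in services:
    print(f"{name}: £{total_cost(base, s247, mirrored):,.0f}")
```

Laid out this way, the question "is 24/7 support necessary?" becomes a visible, priced decision rather than an automatic assumption.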
Several authorities we have worked with – including some large organisations in the South – discovered in this review process that large amounts of time and money were regularly wasted: end users tinkering with desktop fixes themselves instead of alerting the IT department; additional and often redundant services being requested directly from outsourcers rather than co-ordinated centrally; or new laptops and smartphones being bought by individuals directly from the supplier rather than through the official IT procurement channel, thus missing out on volume discounts.
Comparing results with peers
There is really no way of knowing if a project is turning in best practice results, whether a service is over- or underperforming, or whether it is delivering good value for investment unless it undergoes a process of comparison.
This comparison needs to be made across two vectors. First, a vertical comparison of similar-sized organisations, looking at a) the main tasks performed and b) the cost rates charged for a particular service by either the in-house IT department or the appointed service provider. Second, a horizontal, cross-industry comparison of costs and performance based on best practice market norms. This is particularly useful for public sector CIOs, who are sometimes surprised to find they may be outperforming private sector peers – a discovery much valued for management leverage.
Determining best practice
As mentioned earlier, automatic assumptions are often made about what service levels or IT processes are appropriate to support a service – and by appropriate we mean those that offer optimum efficiency and investment payback, as opposed to those that are 'nice-to-have' but may ultimately be inefficient. To determine whether a service is achieving best practice status, it is advisable to compare its processes against – indeed model them upon – international best practice frameworks and standards such as ITIL, CMMI, PRINCE2, Six Sigma and COBIT. Organisations with more mature processes based on recognised standards have been shown to run more cost-efficient and effective operations.
Good vs. bad complexity
One of the major factors impacting best practice is the complexity of any given operation. Complexity that reflects the multi-faceted processes of a modern and sophisticated enterprise using applications like CRM (customer relationship management) is one thing, but most infrastructure complexity is due to multiple generations of legacy systems and interfaces, with a disparate array of workarounds and historical short-term decisions, all of which slows processes down and drives up maintenance costs.
Government internet services like Directgov, for example, are fed at the front end by a series of complex back office data servers, networking and other components that typically require a lot of support. This means that, while the internet itself may be highly cost-efficient, providing an interactive portal populated with integrated, up-to-date information from back-end services is often very cost-intensive. So the question is: what can be simplified, standardised and/or modernised, how much will it cost, and can it be done without adversely affecting current operations?
Beyond that, the big question is whether a public website, or any other IT service for that matter, is best supported in-house or via an outsourcer. This is a complex issue that involves not only making peer comparisons, but also having access to detailed, granular, up-to-date SLA and pricing data on specific service components offered by a range of service providers – the kind of information that is not always transparent.
Best practice procurement depends upon access to this kind of information, alongside knowledge about different cost models, having volume advantage negotiation skills and being able to make informed decisions on questions like: Are shared services the way forward? If so, which ones and who pays for what?
Is all this a complex task? Sure. Is it necessary? Absolutely! And while it may require some professional help to lay the benchmarking foundations necessary to generate the cost benefit metrics that can justify a £500m – or even a £50K – spend, it should be viewed from a long-term perspective. As with anything, it begins with the first step.