Published in Computerworld UK
Key performance and customer satisfaction issues to keep the service desk relevant
Finding themselves competing with ubiquitous online help sources and ever more savvy customers, both corporate and public sector IT service desks are being challenged as never before to improve services and remain relevant.
IT service desks are at risk of being left out of the loop unless they give their internal customers a reason to find them indispensable. From a front-line user’s perspective, they set up their own home PCs and have an abundance of online help a mouse-click away, so why put up with long phone waits to the corporate help desk, followed by run-arounds and chase-ups? It often feels faster and easier to ask a colleague, call the software vendor’s support line or do an online search.
The problem is that this uses up valuable staff time and fragments the enterprise IT support environment, which means a loss of centralised control and multiple inefficiencies. It also leaves the support desk vulnerable to corporate cost cuts.
That said, many corporate IT support desks are excellent. Many others, however, have been measured and found wanting – whether they are in-house or supported by an external service provider (ESP). The service desk will argue, with some justification, that users have unrealistic expectations because they only see the tip of the iceberg: their own desktop.
Behind the simplest activity, like an email or database search, sits a highly complex infrastructure comprising application and database servers, integrated networks and communications interfaces, legacy systems and a great deal more. Because these complexities are opaque, the customer doesn’t understand service delays or seemingly unnecessary bureaucracy. Be that as it may (and there’s a case for better stakeholder education here), the question remains: how can the support desk remain relevant, if not indispensable, to its customers?
Virtuous circle of improvement
The short answer is to embark on a programme of continuous customer satisfaction built on components such as improving response times, a higher ‘first-time-fix’ rate, fewer abandoned calls, and a framework of proactive communications and customer feedback. The payback is happier customers, better productivity and a confidence boost for the IT support team. This is often needed since, as technology expands and customer expectations escalate, the service desk can end up feeling it is running just to stand still. A virtuous circle of improvement, positive feedback and better morale can have an exponentially positive impact on the corporate culture and the bottom line.
However, embarking on such a journey of improvement requires knowing the starting point; without that baseline data it is impossible to measure success milestones. The problem here is that a surprisingly high number – some 55% – of companies do not actually know their current, or historical, per-call or per-incident support desk costs. Nor do they know incident or problem resolution times, or how their desk is performing compared with last month or last year – let alone how they rank against peers or the industry average. Quite simply: without historical metrics it’s impossible to measure and achieve best practice.
First, measure it!
To get a point of reference, corporates typically look to industry benchmarks. Does the service desk meet or exceed the industry average of 90% of calls (or contacts, including phone, email and web portals) answered within 30 seconds, with an abandoned call rate of less than 5%? Maybe the desk will turn out to be better than the average, which appears to be a success – but at what cost? One of our local government clients had set a desk goal of answering 85% of calls within 20 seconds. But such an aggressive time-to-answer target risks a higher rate of abandoned calls, and lower customer satisfaction, whenever it cannot be met. Achieving it also means higher staffing levels to cover peak times, typically mornings and early afternoons. This level of resourcing put their quality improvement programme in direct conflict with their mandate to reduce costs.
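Before a desk can be compared with these benchmarks, the metrics themselves need an agreed definition. The sketch below is one illustrative way to compute them from raw contact data (the figures and the `Call` record are hypothetical; note that some desks exclude abandoned calls from the service-level denominator, while this sketch counts them, which is the stricter convention):

```python
from dataclasses import dataclass

@dataclass
class Call:
    wait_seconds: float  # time waited before answer (or before hang-up)
    abandoned: bool      # True if the caller gave up before being answered

def service_level(calls, threshold=30.0):
    """Share of all offered calls answered within the threshold."""
    if not calls:
        return 0.0
    answered_in_time = sum(
        1 for c in calls if not c.abandoned and c.wait_seconds <= threshold
    )
    return answered_in_time / len(calls)

def abandoned_rate(calls):
    """Share of offered calls abandoned before being answered."""
    return sum(c.abandoned for c in calls) / len(calls) if calls else 0.0
```

For example, ten offered calls of which nine are answered within 30 seconds and one is abandoned gives a 90% service level and a 10% abandoned rate – exactly on the benchmark for the first figure, but over it for the second.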
The fact is that every environment has its own unique circumstances and challenges. Sometimes a support desk has so many variables that finding a handy all-purpose set of benchmarks is next to impossible. For example, a recent project for a multinational client required us to conduct a feasibility study on how best to consolidate their many far-flung support desks into a single central geographical location, with a built-in framework to measure the ongoing cost, performance and customer satisfaction rates of the new platform.
To begin with, there was the question of where to locate the central hub, based on a combination of cost, availability and infrastructure stability. Then came questions such as: how should the technology be configured, and how best should the facility be resourced, given that it had to span 10 time zones and dozens of spoken languages?
All of this and more had to be factored in from the start if the client was going to re-engineer the global support desk in a way that ensured a best practice facility with a high level of customer service.
Response or resolution?
One key ingredient in driving performance improvement is how the service desk is incentivised. Many external desks operate on a cost-per-call pricing model designed to ensure calls are answered and dispatched efficiently. On the face of it this seems to be the obvious way of pricing the volume of tickets handled and thus the amount of work being done.
However, there are some serious downsides to this pricing model that can impede productivity and frustrate customers. The main problem is that it encourages initial response rather than incident resolution, which can mean customers being referred on to experts who then need chasing up, with the result that cases are left unresolved.
To overcome this, others choose per-incident pricing, based on completed cases, which rewards successful resolution. But it’s a trade-off: talking someone through a problem, or taking over their desktop remotely to fix it on the call, clearly takes much longer than acting as a call-answering conduit, and so runs the risk of a higher abandoned call rate. Taken overall, however, fix-on-call takes less time, and is therefore more efficient, since it reduces the volume of referrals, call-backs and user time spent re-explaining the problem (not to mention lost productivity while the desktop is unusable).
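The trade-off can be made concrete with a back-of-the-envelope model (all figures below are hypothetical, purely to illustrate the shape of the argument): a per-call desk may look cheaper per contact, but if unresolved calls generate referrals and call-backs, its effective cost per resolved incident can exceed that of a dearer fix-on-call desk.

```python
def cost_per_resolution(price_per_contact, contacts_per_resolution):
    """Effective cost of getting one incident actually resolved."""
    return price_per_contact * contacts_per_resolution

# Hypothetical figures: the per-call desk charges less per contact, but a
# weak first-time-fix rate means about 2.5 contacts per resolved incident;
# the fix-on-call desk charges more but resolves most issues in one go.
per_call = cost_per_resolution(10.0, 2.5)     # 25.0 per resolution
fix_on_call = cost_per_resolution(18.0, 1.2)  # ~21.6 per resolution
```

On these assumed numbers the nominally pricier desk is the cheaper one once resolution, rather than response, is what gets counted – which is precisely the incentive the per-incident model is designed to create.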
There is a third dimension, Problem Management, which addresses repetitive user problems (“Root Cause Analysis”) or deep-seated issues. These take even longer to resolve on a per-incident basis but arguably offer the greatest long-term efficiencies.
The right kind of self-help
It may seem counter-intuitive for a support desk worried about being cut out of the loop to actively promote a customer self-service programme that reduces call volumes. But creating a customer self-service portal, or a FAQ knowledge base, can effectively cut down on routine queries such as “password resets” and other simple “how-to’s”, which leaves the support desk free to concentrate on more challenging issues.
Ideally this is a win-win whereby customers feel supported with easy-to-use self-help tools and the desk achieves greater job satisfaction. In addition a self-servicing portal can be a great way to alert users to new application upgrades and to post workarounds for temporary system problems. It can also be a tool for soliciting customer feedback.
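A rough sense of what such a portal is worth can come from a simple deflection estimate (again, every number here is hypothetical): how many contacts stop arriving at the desk when routine queries such as password resets move to self-service.

```python
def deflected_contacts(monthly_contacts, routine_share, portal_uptake):
    """Contacts per month avoided when a portion of routine queries
    (password resets, simple how-tos) move to self-service."""
    return monthly_contacts * routine_share * portal_uptake

# Hypothetical desk: 4,000 contacts a month, 30% of them routine,
# half of those successfully handled by the portal.
saved = deflected_contacts(4000, 0.30, 0.5)  # 600 contacts a month
```

Even a modest uptake rate frees a meaningful slice of desk capacity, which is time the team can redirect to the harder, higher-value incidents described above.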
It’s one thing to embark on an improvement campaign and succeed. But here’s where many IT departments fall down: they forget to tell the users. The customer portal is a great place to announce that the desk has beaten its personal best, its peers and industry averages for support desk productivity… there is nothing like public acclaim to reinforce customer satisfaction.