Published in ehiEngage (now digitalhealth.net)
When it comes to re-engineering healthcare IT environments to achieve savings or best
practice, too often a trial-and-error approach is taken which can be both complicated and
costly. For some time now, business technology consultancy ImprovIT has been using
virtual modelling programmes to create ‘what if?’ scenarios that simulate real world
outcomes and take the guesswork out of finding the right solution.
Most Trusts today are caught between a rock and a hard place: cutting IT costs on the one hand
while ensuring productivity and service quality don't suffer on the other. In the best case, this is a
self-imposed challenge to try to increase profit margin. Just as often, however, it's at the behest of
senior management or government mandate. Of course, cost-cutting pressures have been building
since at least 2008, and for many there is little blood left in the stone. The question now is:
"How and where to make further reductions without knee-capping the entire operation?" There are
plenty of apocryphal tales about organisations that axed staff and jettisoned efficiency-enabling
technology projects mid-development, only to discover their actions had deeply wounded deliverables
and reputation, resulting in a panicked rehiring and/or re-purchasing exercise to redress the balance.
Finding the cost/quality balance
Wouldn’t it be great if you could work out the exact balance between minimising cost and optimising
output, without inflicting the damage resulting from real world trial-and-error? Also, wouldn’t it be
amazing if you could create a best practice IT framework drawn from the unique strengths and
challenges of your own organisation, not some off-the-shelf formula? These two goals may seem
unrelated but they’re not. It is possible to work out a ‘just right’ cost/productivity balance and best
practice platform without having to chop, change and rearrange people and processes on a
sound-it-out-as-you-go basis. How? By modelling 'what if?' scenarios in a virtual environment
populated with real, current and accurate data mined from your own operation. But before the modelling process can
begin, in-depth quantitative and qualitative KPIs and other business data must be generated across a
range of parameters to arrive at a usable baseline. This is what we’ll look at first.
Measure it first
As Lord Kelvin, the eminent physicist, once said: 'If you cannot measure it, you cannot improve it.'
Undertaking any meaningful process re-engineering must begin with knowing your current status,
whether in IT processes, business goals or anything else. Once your baseline has been established,
future progress can be measured and trended over time. This also provides the tools with which to
compare your team or organisation’s performance against others of a similar size and complexity
(either within healthcare or across industries) to see how you rate in terms of 'value for money';
‘quality of service’; ‘best practice’; and ‘competitive pricing’. Digging a bit deeper, you can also find
out where you stand in relation to best practice industry standards for staffing (quality and quantity),
process complexity, service providers (scope & service levels) and IT governance.
What is virtual modelling?
Following the analysis of internal and service-provider KPIs, these are measured against peer
groups and industry standards to provide the baseline data set. This baseline data then feeds
into our virtual modelling construct. But first, let's explain what virtual modelling actually is and
how it works. Virtual modelling uses a series of 'what if?' scenarios to simulate real-world
outcomes. Provided the scenarios are mapped with current, correct and thoroughly researched data, the
results can provide a highly accurate view upon which to base business decisions. The building blocks
of a model typically include Cost/Price, Volumes, Staffing, Quality & Service Levels, Service Scope,
Complexity, Project Efficiency and Process Maturity.
The opportunities for using virtual modelling are virtually unlimited; a few of the most common are described below.
The ImprovIT approach
Below is a diagram of the modelling system developed by ImprovIT. Besides its comprehensive
nature, it has been designed to accurately pinpoint the impact of one parameter, or a group of parameters, upon
any or all of the others. For example: if I change 'Service Quality' (SLAs) and/or 'Service Scope', what
effect will this have on 'Cost'? Or: if I reduce 'Complexity', what effect will this have on 'Processes'?
It also provides a view of the changing balances that occur within the whole picture when one or
more parameters are altered. For example: if I want to increase 'Volumes' or 'Service Quality', what
changes do I need to make to all the other segments, and how will this impact the enterprise as a whole?
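To make the idea concrete, a 'what if?' scenario of this kind can be sketched in a few lines of code. The cost function, parameter names and figures below are illustrative assumptions for the sake of example, not ImprovIT's actual model:

```python
# Toy 'what if?' cost model -- all figures and the cost formula are
# illustrative assumptions, not real benchmark data.

def annual_cost(params):
    """Toy cost function: spend rises with service quality, scope and complexity."""
    base = 1_000_000  # assumed baseline annual IT spend in GBP
    return base * params["service_quality"] * params["service_scope"] * params["complexity"]

# Baseline: all parameters normalised to 1.0
baseline = {"service_quality": 1.0, "service_scope": 1.0, "complexity": 1.0}

# Scenario: raise SLA targets by 20% while trimming service scope by 10%
scenario = dict(baseline, service_quality=1.2, service_scope=0.9)

delta = annual_cost(scenario) - annual_cost(baseline)
print(f"Cost impact of scenario: {delta:+,.0f} GBP")
```

A real model would of course replace the toy formula with relationships calibrated from the baseline KPI data, but the mechanism — vary one segment, observe the effect on the others — is the same.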
The “Goldilocks equation”
Now is the time to apply modelling to our original question: finding the exact 'not too much, not
too little, but just right' balance between IT cost and service quality. We'll start by feeding
our staffing metrics into the simulation model. Unfortunately, this isn’t as straightforward as it may
at first appear because it’s not just about numbers, it must also allow for a range of ‘soft’ factors such
as varying levels of knowledge, skill sets and the specialist expertise that can make an individual or
team difficult to replace.
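One simple way to fold such 'soft' factors into the numbers is to weight raw headcount by skill and scarcity. The roles, weights and ratings below are hypothetical, chosen only to illustrate the approach:

```python
# Hypothetical staffing input for the simulation -- roles, weights and
# ratings are illustrative assumptions, not real ImprovIT metrics.

def effective_capacity(staff):
    """Weight raw full-time-equivalent headcount by 'soft' factors:
    skill level and scarcity of expertise (harder to replace => higher weight)."""
    return sum(s["fte"] * s["skill"] * s["scarcity"] for s in staff)

team = [
    {"role": "service desk",       "fte": 6.0, "skill": 0.8, "scarcity": 1.0},
    {"role": "network specialist", "fte": 2.0, "skill": 1.2, "scarcity": 1.5},
]

print(f"Effective capacity: {effective_capacity(team):.1f} weighted FTE")
```

The point is that two teams with identical headcounts can have very different effective capacity, which is why the simulation cannot work from numbers alone.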
Building up a model
Next we factor in complexity – often the highest contributor to an IT department's spend (costing
even more than staffing!). A variety of factors contributes to the complexity of an IT environment:
anything and everything from security, data confidentiality and high availability/redundancy
requirements to legacy system integration and the number of locations that must be supported by the
enterprise network. Typically, the greater the complexity, the higher the cost. A 'what if?' analysis
will determine which factors can be influenced and where simplifications can be made without
jeopardising mission-critical applications. Once we have established that changes are advisable, an
estimate must be made of how long these will take, what they will cost and their impact on staffing.
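As a rough sketch of such an analysis, each complexity driver can be treated as a cost multiplier and removed in turn to see which simplification pays off most. The drivers and weights below are invented for illustration, not drawn from any real benchmark:

```python
# Toy complexity model -- each driver contributes a cost multiplier
# (illustrative weights, not real benchmark data).

complexity_drivers = {
    "legacy_integration":   1.15,
    "high_availability":    1.20,
    "sites_supported":      1.10,
    "data_confidentiality": 1.05,
}

def complexity_multiplier(drivers):
    """Combine individual drivers into an overall cost multiplier."""
    m = 1.0
    for factor in drivers.values():
        m *= factor
    return m

# 'What if?' -- which single simplification cuts the multiplier most?
base = complexity_multiplier(complexity_drivers)
for name in complexity_drivers:
    reduced = {k: v for k, v in complexity_drivers.items() if k != name}
    saving = base - complexity_multiplier(reduced)
    print(f"Removing {name}: multiplier drops by {saving:.3f}")
```

In this toy version the biggest single factor yields the biggest saving, but in practice some drivers (data confidentiality, say) cannot be touched without jeopardising mission-critical requirements, which is exactly what the 'what if?' analysis is there to expose.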
Then there is the question of outsourcing. This is a ripe area for modelling since the debate
continues about whether or not it’s a viable choice. Will it save money? What services should be
outsourced? And if we are to outsource, what kind of provider should we use? A traditional service
provider? A migration to G-Cloud? IaaS, SaaS or PaaS? Internal, external or hybrid virtualisation?
Whatever the strategy, given the complexities of SLAs and evolving disruptive technologies it’s hard
to forecast the downstream costs and savings. To assess the likely ROI, data sets on each option
are fed into a virtual modelling schema. This may not be an exact science, but given it is
based on deep-drill, granulated metrics and the latest market data, it provides a realistic outcome
that beats most alternatives.
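A bare-bones version of such a sourcing comparison might look like the following. Every figure here is an assumption invented for illustration, not market data:

```python
# Sketch comparison of sourcing options -- all figures are invented
# assumptions for illustration, not market data or a real ROI model.

options = {
    "in-house":                {"setup": 0,       "annual": 950_000},
    "traditional outsourcing": {"setup": 200_000, "annual": 820_000},
    "cloud (IaaS)":            {"setup": 120_000, "annual": 780_000},
}

def five_year_cost(opt):
    """Total cost of ownership over a five-year horizon (GBP)."""
    return opt["setup"] + 5 * opt["annual"]

best = min(options, key=lambda name: five_year_cost(options[name]))
for name, opt in options.items():
    print(f"{name}: {five_year_cost(opt):,} GBP over 5 years")
print(f"Cheapest option under these assumptions: {best}")
```

A real schema would also model SLA penalties, migration risk and the volatility of cloud pricing, which is precisely why the deep-drill metrics matter: change the assumed annual figures and the ranking can flip.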
To assess whether process maturity (a major influence on the cost vs. quality balance) can be
improved let’s compare our baseline against A) peer groups and B) best practice standards and
guidelines such as ITIL (IT Infrastructure Library), 'Agile' and 'Lean' – a production practice that looks to
reduce resource expenditure to the minimum required to deliver value to the customer. This
exercise will tell us if and by how much process re-engineering will save and whether it’s worth the
disruptive impact on operations. It’s worth noting here that achieving process maturity isn’t a quick
win: it takes time and requires clear, unequivocal goals and plans led from the top.
In summary, virtual modelling promises to be an increasingly important decision-making resource,
enabling Trust executives to determine their IT and enterprise-wide strategies in the most time-, cost-
and operationally efficient manner.