Measuring Success


This white paper, ‘Measuring Success’, provides insight into how to demonstrate that real benefits are being delivered from investments in IT service management.

 

How do you measure the success of an implemented process?

I’ve recently been working on an ISO 20000 assignment in Saudi Arabia, requiring an initial assessment to be performed, followed by process workshops, and finally process implementation activities.

The customer had previously confirmed that in 2007 they had implemented four processes (with the assistance of another consultancy organisation): Incident Management, Problem Management, Change Management and Release Management; and that these processes were to varying levels of maturity.



The assessment that I performed confirmed the ‘varying levels of maturity’ they had described, but it also showed that certain areas (e.g. Change Management) were not as good as they thought they were, and it highlighted a number of gaps: some had been left by the previous consultants, and others had appeared in the intervening period before my arrival.
The IT Director was open in his criticism that he had yet to see any real benefits from the money invested the previous year in designing and implementing four processes; and here I was, planning to implement five more! I emphasised that I wasn’t there to fail and that I would deliver the business benefits he was after, and this got me thinking about the project deliverables and how I could show him that he would receive a positive return on his investment.

All of the items listed below are common ways of measuring performance and the success (or otherwise!) of improvement activities:

  1. Perform an Assessment
  2. Perform a Customer Satisfaction Survey
  3. Measure the Volume of Incidents
  4. Measure the Volume of Failed Changes
  5. Measure the Availability of Services
  6. Calculate the Return on Investment

1. Perform an Assessment

An assessment provides a baseline measurement of how effective your current Service Management operation is. Looking at each individual process together with the surrounding management framework (such as roles and responsibilities), it helps to answer the question “Where are we now?” and provides a percentage score or maturity level rating. The output from the assessment can then be used as an input into a gap analysis exercise, and then into developing a road map towards “Where do we want to be?”, which can form part of a larger project plan.

This actually fits neatly into the Continual Service Improvement model of:

  • What is the vision?
  • Where are we now?
  • Where do we want to be?
  • How do we get to where we want to be?
  • How do we know when we get there?
  • How do we keep the momentum going?

Having been involved in numerous assessments covering ITIL v2, ITIL v3 and ISO 20000, I can safely say that they are a great method of putting a ‘stake in the ground’ that, when the same assessment is performed again at some stage in the future (e.g. in twelve months’ time), enables progress to be measured and reported upon. This also helps to ensure that the organisation doesn’t rest on its laurels and continues to progress and strive for further improvements – we could even say Continual Service Improvement!

I’ve also had a number of customers interested in how they compare to other organisations that have been assessed, even if they are not necessarily from the same market sector. It is a natural human instinct to want to see how you compare with others, and even though customer names remain confidential, it is still a curiosity that grabs the attention of readers of the assessment report.

This also helps to put things into perspective. Every organisation usually has one or two processes that score really low the first time an assessment is carried out, and it is sometimes helpful for a customer to see that they are in the same situation as others. Where appropriate, customers can contact one another to swap experiences, tips, etc.

Whilst getting an external agency to do an assessment provides an independent view, a simple search on the internet for ‘ITIL assessment’ will provide you with a selection of simple online self-assessment tools that you can use free of charge. Whilst not as comprehensive as consultancy-led assessments, they can still provide you with a basic indication of how mature your processes are today.

2. Perform a Customer Satisfaction Survey

Performing an assessment is all well and good, getting the perspective of process owners, process managers and process practitioners (i.e. those carrying out the process activities on a day-to-day basis); but what about the views of the customer?

Meeting or exceeding all of the targets in a Service Level Agreement can become pretty meaningless if the customer still has a poor perception of the service being delivered. So it is just as important to gauge the customer satisfaction as it is to use tools to measure the performance of a service, such as its availability.

Make sure that feedback is taken regularly. This could be an automated survey that is sent each time an incident is resolved, although be careful that users do not get bombarded with survey emails such that they just delete them on receipt! It’s usually much better to survey a random selection, such as 1 in every 20 incidents, which helps garner a much better response rate.
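
The 1-in-20 sampling idea above can be sketched very simply. This is a hypothetical illustration, not any particular tool’s behaviour; the function name and sampling rate are assumptions:

```python
import random

random.seed(7)       # fixed seed so the illustration is repeatable
SAMPLE_RATE = 20     # survey roughly 1 in every 20 resolved incidents

def should_survey() -> bool:
    """Decide whether a resolved incident triggers a satisfaction survey."""
    return random.randrange(SAMPLE_RATE) == 0

# Out of 1,000 resolved incidents, roughly 50 surveys are sent,
# rather than bombarding every user with an email.
surveys_sent = sum(should_survey() for _ in range(1000))
```

Random sampling avoids systematically surveying (or skipping) the same users, while keeping the volume of survey emails low.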

Targeted mail-shots can reach a specific group of users of a particular service, and filling in an online survey form is a lot easier than completing one by hand; it also enables the results to be collated and fed back much more efficiently.

Another excellent method of measuring customer satisfaction is when meeting face-to-face, such as during regular service review meetings. During my time as a Service Manager, I used to ask one of my customers on a monthly basis what their perception of the service being delivered was, and to score it between 0 and 5. Okay, it wasn’t very scientific, but it helped to support the metrics coming out of the various tools that were in use.

Obviously, once you’ve measured and reported on customer satisfaction, the findings need to be discussed as an agenda item at the relevant service review meetings. When necessary, actions need to be taken to rectify any deficiencies in the service being delivered, and these actions should feed into the appropriate service improvement plan.

3. Measure the Volume of Incidents

The volume of incidents being reported to the Service Desk on a regular basis usually provides a relatively good indicator as to the stability of the infrastructure, the quality of training being delivered to users, the effectiveness of the change process, etc.

Let’s not forget, though, that a percentage of incidents will never get reported to the Service Desk. For example: a user’s PC hangs, they reboot it, it loads successfully, and they carry on working, but they never report it to the Service Desk. So it is important to remember that the data provided by the various Service Desk reporting mechanisms is likely to be incomplete, but it still provides a useful benchmark from which to measure progress and success.

The goal here is obviously to have as few incidents as possible, but the volume will constantly fluctuate as business conditions change and as new services are introduced or retired, with peaks and troughs occurring depending on the rate (and types) of changes being implemented.

A useful Key Performance Indicator (KPI) for this is to see a reducing trend in the volume of incidents being generated over a period of time. It may be necessary to show this as a percentage of the number of users being supported, the size of the estate, etc. in order to provide a suitable month-by-month comparison.
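
The normalisation described above can be sketched as follows. All of the monthly figures here are hypothetical, purely to illustrate the month-by-month comparison:

```python
# Hypothetical monthly figures: (incidents logged, users supported)
monthly = {
    "Jan": (840, 4000),
    "Feb": (790, 4100),
    "Mar": (760, 4300),
}

# Normalise to incidents per 100 users so that months with different
# headcounts can be compared fairly.
kpi = {month: round(incidents / users * 100, 1)
       for month, (incidents, users) in monthly.items()}
# Jan: 21.0, Feb: 19.3, Mar: 17.7 -- a reducing trend
```

Without the normalisation, a month in which headcount grew could mask a genuine improvement (or hide a deterioration) in the raw incident count.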

One of the key factors for reducing the volume of incidents is to implement and operate an effective Problem Management process. It’s therefore important that you can extract the data from the relevant Service Management tools, and that you have the ability to sort and analyse the data to highlight the main areas that should be targeted for root cause analysis and then rectification.

If you’re just starting off on Problem Management, don’t try to rectify too many issues at once, or resources will get diluted and analysis will take too long. Start by concentrating on the top 10 issues that are causing the user community the most pain. Once they have been analysed and plans to rectify them are in place, move on to the next 10, and so on.

It’s also worth remembering not just to concentrate on the high profile incidents that occur. For example: a particular priority 3 incident that occurs dozens or hundreds of times each month may be causing more pain for the user community than a priority 2 incident that has occurred maybe half a dozen times. Concentrate your resources where the biggest benefits can be delivered.

If you are struggling to analyse the incident data, then don’t be afraid to ask Service Desk staff what they believe are the key issues that need addressing. They are at the front line dealing with them on a daily basis, and are therefore in the best position to provide the most relevant feedback. Similarly, the users (or at least key representatives) can be consulted. This can be relatively easy if user communities or forums are in place, and can be very positive in letting the users know that you are doing something to try to improve their use of IT.

4. Measure the Volume of Failed Changes

If no change process exists, or a process does exist but isn’t used or adhered to, then the likelihood is that there will be a high percentage of failed changes. There is also the risk that many failed changes are not being reported and logged. This in itself will be responsible for a proportion of incidents being reported to the Service Desk, so a reduction here can see significant benefits further down the line.

Whilst on assignments in the past, I’ve had interesting conversations about measuring the success or failure of a change. What’s the definition of a successful change? What’s the definition of a failed change? Well, all I can say is that you have to decide on those definitions, and once you have, they need to be included in your change policy document.

Let’s look at an example. One evening, after service hours, a technical team implements a change to MS Outlook in the production environment and they experience no issues. The following morning, though, the Service Desk receives hundreds of calls about MS Outlook, reporting that all of their emails have disappeared. So an apparently successful implementation has actually had serious consequences for the users and the business as a whole – therefore this cannot be categorised as a successful change. The next step is obviously to urgently review the reasons for the incidents being reported, and then to take the appropriate measures to rectify the situation such as backing-out the change, making an emergency change to the production environment, etc.

When measuring the success rate of changes being implemented, start by defining what is and what is not a successful change, and the criteria being used for each definition. Then put in place a process where all changes are reviewed for success. If you have a high rate of change and will initially struggle to monitor all changes, then consider delegating someone to review certain categories of change, or perhaps begin by reviewing only changes of a certain priority. Remember, though: all changes should be reviewed for success, and this includes looking at the volume of incidents generated as a result of a change occurring.

A Key Performance Indicator (KPI) that can be used here is to see a reduction in the volume of failed changes, and show this as a percentage relative to the volume of changes being implemented over a weekly or monthly period, and trend this over time.
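
The KPI above is simple percentage arithmetic, sketched below with hypothetical monthly figures:

```python
# Hypothetical figures per month: (changes implemented, changes failed)
changes = {
    "Jan": (120, 18),
    "Feb": (135, 15),
    "Mar": (140, 11),
}

# Failed changes as a percentage of all changes implemented that month.
failed_pct = {month: round(failed / total * 100, 1)
              for month, (total, failed) in changes.items()}
# Jan: 15.0%, Feb: 11.1%, Mar: 7.9% -- trending downwards
```

Expressing the figure as a percentage matters: an absolute count of failed changes can rise simply because more changes are being made, even while the process is actually improving.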

5. Measure the Availability of Services

Measuring the availability of a service or system may be relatively straightforward if you have the right tools in place that can automate the reporting that is required. Of course, in order to report accurately on the availability of a service, you should have in place Service Level Agreements (SLAs) that have been agreed and signed-off with the business or customer.
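
The conventional availability calculation is agreed service time minus downtime, divided by agreed service time. A minimal sketch, with hypothetical service hours and downtime figures:

```python
def availability_pct(agreed_service_minutes: float,
                     downtime_minutes: float) -> float:
    """Availability % = (agreed service time - downtime) / agreed service time * 100."""
    return (agreed_service_minutes - downtime_minutes) / agreed_service_minutes * 100

# Hypothetical example: a service agreed for 08:00-18:00, Mon-Fri
# (600 minutes/day x 20 working days = 12,000 minutes/month),
# with 90 minutes of unplanned downtime during service hours:
monthly_availability = round(availability_pct(12_000, 90), 2)  # 99.25%
```

Note that only downtime within the agreed service hours counts against the SLA, which is one reason the agreement itself has to be in place before the reporting can be meaningful.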

If you don’t have SLAs in place, then this is a key area that will need to be addressed. It’s no use reporting on availability based on what you think the customer’s requirements are. Their requirements have to be gathered, SLAs have to be drafted and agreed, whilst making sure that the supporting Operational Level Agreements and Underpinning Contracts are also put in place.

By definition, increasing the stability of the infrastructure should bring improvements in the availability of services. We’ve already seen two key areas that can deliver significant increases in availability: Problem Management and Change Management.

Implementing these two processes, or enhancing their effectiveness if the processes are already in place, should see benefits derived of increased availability for the relevant services that have been targeted. Again, if you have many services being delivered but resources are tight, start by targeting improvements for the top 5 or top 10 services.

If you’re not doing so already, start measuring the availability of your key services, and then put in place a KPI that will help drive improvements in the availability of a service or services. Items 3 and 4 discussed in this paper will have a direct influence on the achievement of this KPI, and all three together will also have a significant impact on item 2, customer satisfaction, so improvements should be seen in that area as a consequence.

6. Calculate the Return on Investment

Seeing a reduction in the number of incidents and failed changes occurring, as well as increases in availability, is all well and good, but many an IT Director or a CIO will also be interested in how it affects the ‘bottom line’. Being able to show a positive financial impact is a great way to generate buy-in and obtain the necessary resources with which to make any improvements.

Therefore, when it is appropriate, try to show that there will be some financial return for investing in the improvements that are required. If you can show that not only will availability and customer satisfaction increase, but the organisation will also save money at the same time, then it’s often a lot easier to make a business case for the initial project resources. A note of caution, though: it won’t always be about reducing costs. Sometimes an increase in costs may be necessary, which may in turn generate increased revenue and hence increased profit in the longer term.

So being able to show a return on investment (ROI) can provide a very powerful argument for introducing change. There are a number of ROI calculators available in the marketplace, all concentrating on different aspects, but all are capable of showing you the potential financial benefits that can be achieved by making even small improvements in service delivery. A simple search on the internet will provide you with more details.

Fox IT has one of these, called FoxROI™, and I’ll use this as an example. An initial information gathering phase takes place, where key details are captured about the core business, such as revenue; basic working hours; number of staff and average salaries; IT budgets and overtime costs; etc. More specific information is then gathered about the IT department, including details relating to service levels and current availability; the volume of changes taking place, along with the volume of failed and unauthorised changes; and the volumes of incidents resolved at 1st, 2nd and 3rd line support.

Once all of this data has been entered, three targets are required from the organisation:

  • % reduction in time to resolve incidents
  • % reduction in number of failed changes
  • % increase in availability levels

The tool then processes all of the data that has been input and provides a detailed analysis as an output that also forms part of a formal report for the customer.
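
To give a feel for how targets like these can translate into savings, here is a deliberately simplified, hypothetical sketch. It is not the FoxROI calculation; every input figure and formula below is an illustrative assumption:

```python
# Hypothetical business inputs (all assumed for illustration)
hourly_cost = 30.0                 # average staff cost per hour
incidents_per_month = 800
avg_resolve_hours = 1.5            # productivity lost per incident
failed_changes_per_month = 15
cost_per_failed_change = 2_000.0   # rework and outage cost

# Improvement targets, as per the inputs listed above
resolve_time_reduction = 0.10      # 10% reduction in time to resolve incidents
failed_change_reduction = 0.25     # 25% fewer failed changes

# Savings from faster incident resolution and fewer failed changes
incident_saving = (incidents_per_month * avg_resolve_hours
                   * hourly_cost * resolve_time_reduction)
change_saving = (failed_changes_per_month * cost_per_failed_change
                 * failed_change_reduction)

monthly_saving = incident_saving + change_saving
# incident_saving = 800 * 1.5 * 30 * 0.10 = 3,600
# change_saving   = 15 * 2,000 * 0.25    = 7,500
# monthly_saving                         = 11,100
```

Even with modest targets, the figures add up quickly across a year, which is why this style of analysis is persuasive when building a business case.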

As can be seen below, percentage improvement targets in the three key areas of incidents, changes and availability show significant potential savings in the overall cost of the IT department. Obviously, we should also expect to see a positive impact on customer satisfaction if these types of improvements are achieved.

[Image: FoxROI sample results showing potential savings]

One of the nice features of these tools is that the figures can easily be tweaked and the results recalculated. So, for example, if the customer thinks that a 10% reduction in the time to resolve incidents is ambitious, and would be satisfied with just a 2% increase in availability (let alone 5%!), then it’s very simple to produce a revised set of results.

[Image: revised FoxROI results with adjusted targets]

Alongside these key figures, various charts are also produced that highlight specific areas such as the impact on productivity as a result of incidents occurring, the costs of failed changes, the cost of unavailability, etc.

[Image: FoxROI charts highlighting the impact of incidents, failed changes and unavailability]

All of this information can be very powerful in persuading an IT Director to invest in a service management improvement programme. I would then advise re-calculating the figures again in the future, maybe in a year’s time and then again after two years. This will help you show, in financial terms, the value of the benefits that are now being delivered.

Summary

As we have seen, it is possible to show the potential value of implementing best practice or of improving the current Service Management operation. I’ve highlighted six key areas for consideration, all of which should be relatively straightforward to act upon, but you don’t have to do them all. If some are easier to implement than others, then start with those.

Do some upfront planning so that you are able to baseline ‘where you are today’, and then look at where you want to be in a year’s time, in two years’ time, etc.

If you’re not already doing so, start collating key information about incidents, changes and availability. And don’t just rely on tools to give you the baseline: don’t be afraid to go and ask your customers for their perception, and consider utilising third parties to perform an independent assessment of your existing operation. Then look at calculating the potential return on investment from implementing or enhancing best practice processes.

All of this will help your planning and ensure that you target the areas of most need first. Then go ahead and submit your business case. And if you’re in my position, mid-project and looking to show some benefits being delivered, start measuring as soon as you can so that you’ll be able to show the positive impact that service improvement activities can generate.





Fox IT trademark

Fox IT® is a registered trademark of Fox IT SM Limited

ITIL trademark

ITIL® is a trademark of AXELOS Limited

Fox IT is an ITIL ATO with PeopleCert

The ITIL® ATO logo with PeopleCert is a trademark of AXELOS Limited.