Q & A: The Meaning of Performance Metrics

Q: My court recently completed Measure 1 of the CourTools and found out that 73% of the court users who completed the survey “agreed” or “strongly agreed” that the court was accessible, prompt, respectful, courteous, and fair. What does the score mean? Is that good or bad?

A: It is true that a score of 73% has very limited meaning without links to and comparisons with other referents. Fortunately, it is relatively easy to link this specific score to a number of referents -- the breakouts of the aggregate metric, the same metric over time, and the same metric in other courts -- and thereby imbue the metric with meaning. (A metric refers to the numbers a measure uses to describe the attribute being measured. In this case, the measure is the satisfaction of court users with the court, and the metric is the percent of users who “agreed” or “strongly agreed” with the items in the survey.)

Breakouts as Referents

Let’s assume that 73% agreement is the average (aggregate) score of all court users across all 15 items on the survey. A simple but meaningful referent is the breakout of this score for each of the 15 items. For example, assume that the variation around the 73% average ranges from a low of 43% for Item 5 (“I was able to get my court business done in a reasonable amount of time.”) to a high of 87% for Item 3 (“I felt safe in the courthouse.”). True, even with these referents we still don’t know what’s good or bad, but we now know the baseline from which measurement started (73%) and the range between the lowest and highest item scores. We know something very important that we did not know before: it is possible to reach 87% agreement and to fall as low as 43%, and 87% is “better” than both the low of 43% and the average of 73%.
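To make the arithmetic concrete, here is a minimal sketch of how an analyst might compute the item-level breakouts and the aggregate score from raw survey responses. The responses, the 5-point rating scale, and the rule that ratings of 4 (“agree”) and 5 (“strongly agree”) count as agreement are illustrative assumptions, not CourTools specifications.

```python
from collections import defaultdict

# Minimal sketch: compute item-level breakouts from raw responses.
# Each response is (item_number, rating) on a 5-point scale; we assume
# ratings of 4 ("agree") or 5 ("strongly agree") count as agreement.
# The responses below are invented for illustration.
responses = [
    (3, 5), (3, 4), (3, 4), (3, 2),  # Item 3: "I felt safe in the courthouse."
    (5, 2), (5, 4), (5, 1), (5, 3),  # Item 5: business done in reasonable time
]

counts = defaultdict(lambda: [0, 0])  # item -> [agreed, total]
for item, rating in responses:
    counts[item][0] += rating >= 4
    counts[item][1] += 1

for item in sorted(counts):
    agreed, total = counts[item]
    print(f"Item {item}: {100 * agreed / total:.0f}% agreement ({agreed}/{total})")

# The aggregate score is the same calculation pooled across all items.
agreed = sum(a for a, _ in counts.values())
total = sum(t for _, t in counts.values())
print(f"Overall: {100 * agreed / total:.0f}% agreement")
```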

Similarly meaningful referents are the breakouts of the average score for each of the background categories identified in the survey (e.g., the type of case that brought the person to court, or how often the person typically is in the courthouse) and for the different courthouse locations in which the survey was conducted. For courts or court systems with multiple locations, comparisons of survey results across locations can be a useful basis for identifying successful improvement strategies. Different locations might be compared, for example, on the percent of users who felt that they were treated with courtesy and respect. Follow-up queries can then be made that probe the comparisons. Why do one or more locations seem to be more successful than others? What are they doing that the other locations are not? Asking staff in both the most successful and least successful locations these simple questions can help to identify “evidence-based” best practices.
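The same grouping logic extends to background categories and courthouse locations. The sketch below, with invented locations, field names, and responses, breaks out a single item by location and ranks the locations so those follow-up questions can target the most and least successful sites.

```python
# Hypothetical sketch: break out one survey item ("I was treated with
# courtesy and respect.") by courthouse location so locations can be
# compared. Locations, field names, and responses are invented.
surveys = [
    {"location": "Downtown", "courtesy_agree": True},
    {"location": "Downtown", "courtesy_agree": False},
    {"location": "Downtown", "courtesy_agree": True},
    {"location": "North Branch", "courtesy_agree": True},
    {"location": "North Branch", "courtesy_agree": True},
]

by_location: dict[str, tuple[int, int]] = {}
for s in surveys:
    agreed, total = by_location.get(s["location"], (0, 0))
    by_location[s["location"]] = (agreed + s["courtesy_agree"], total + 1)

# Rank locations so follow-up questions ("what are the leaders doing?")
# can target the most and least successful sites.
for loc, (agreed, total) in sorted(
    by_location.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True
):
    print(f"{loc}: {100 * agreed / total:.0f}% agreed ({agreed}/{total})")
```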

Trends Over Time as Referents

Of course, this measure should be assessed on a regular and continuous basis -- preferably quarterly. By tracking the average and the breakouts of the survey over time, court managers can detect trends and changes associated with improvement initiatives. Comparing the baseline performance of 73% to a target set by the court -- say 80%, or four out of five respondents agreeing -- enables the control function of performance measurement: answering whether performance is at acceptable levels or within tolerable boundaries established by the court.
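A short sketch can make the control function concrete: track the quarterly aggregate score against the court’s target and flag shortfalls. The quarterly figures here are hypothetical, starting from the 73% baseline in the post.

```python
# Hypothetical sketch of the control function: track the quarterly
# aggregate score against a court-set target of 80% agreement.
TARGET = 80.0
quarterly = {"Q1": 73.0, "Q2": 75.5, "Q3": 79.0, "Q4": 81.2}  # invented scores

for quarter, score in quarterly.items():
    if score >= TARGET:
        status = "meets target"
    else:
        status = f"{TARGET - score:.1f} points below target"
    print(f"{quarter}: {score:.1f}% -- {status}")
```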

Other Courts as Referents: Comparative Performance Measurement

The referents discussed above support internal (within-court) performance measurement: comparisons of a court’s performance over time or among its different geographic or functional divisions, all restricted to the court itself. “Comparative performance measurement,” on the other hand, focuses more broadly on a court’s performance in relation to other courts. By comparing a court’s performance with that of other courts, court managers can determine how much improvement realistically can be made and which strategies or practices may hold particular promise.

As more and more courts collect organizational performance data and build court performance measurement systems (CPMSs), court executives increasingly will ask -- and will be asked by their stakeholders -- how their courts are doing compared to other courts, how they “stack up.” Comparative performance information is likely to become a major part of performance measurement.

Comparative performance measurement involves six steps (a brief sketch of the first three follows the list):

(1) obtaining performance measurements from comparable courts;
(2) comparing one’s own court with other comparable courts on one or more performance measures;
(3) identifying differences in performances;
(4) determining the possible reasons for those differences;
(5) learning from the differences about how performance can be improved; and, finally,
(6) applying this learning to improve policies, strategies, and practices.
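The first three steps lend themselves to simple computation. The sketch below uses invented peer-court names and scores to show how a court might assemble the comparison and flag the differences; the remaining steps are management judgment rather than arithmetic.

```python
# Hypothetical sketch of steps (1)-(3): obtain one measure from
# comparable courts, compare, and flag the differences. Court names
# and scores are invented for illustration.
our_score = 73.0
peer_scores = {"Court A": 81.0, "Court B": 76.5, "Court C": 69.0}

benchmark = max(peer_scores.values())  # best peer result as a benchmark
print(f"Our court: {our_score:.1f}% (peer benchmark: {benchmark:.1f}%)")

for court, score in sorted(peer_scores.items(), key=lambda kv: kv[1], reverse=True):
    gap = score - our_score
    print(f"{court}: {score:.1f}% ({gap:+.1f} points vs. our court)")

# Steps (4)-(6), diagnosing the reasons for the gaps and applying the
# lessons, are management work rather than computation.
```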

Comparative performance measurement enhances internal performance measurement in several significant ways: (1) by providing “benchmarks” of achievable results; (2) by identifying best policies, strategies and practices; (3) by motivating improved performance; (4) by improving courts’ accountability for results; and, generally, (5) by supporting management practices and improvement initiatives.

Copyright CourtMetrics 2006. All rights reserved.
