Step 2 – Identifying Desired Performance Measures (Sub-Steps 2.3 and 2.4)

This is the sixth posting in a multi-part series exploring the Six-Step Process for Building an Effective Court Performance Measurement System (CPMS) first summarized in Made2Measure (M2M) in October 2005. See below for previous postings in this series.

As noted in the last posting, the amount of time and effort the design team devotes to the four sub-steps of Step 2, and how deliberately it takes each one, will vary depending on how much prior work a court has done to define its mission, direction, and strategic goals. Identifying the right measures is hard work if done correctly. One challenge is resisting the temptation to rush, either by selecting a few reasonable measures the court may already be taking or by simply “adopting” measures prescribed by state or national authorities.

In most strategic plans and scorecard systems I have seen and reviewed, the development of performance measures is not taken very seriously.
- Howard Behn, Balanced Scorecard Institute

Step 2.3 – Identifying Core Measures

After determining a court’s key success factors (Step 2.1) and establishing preferences for types of measures desired (Step 2.2), Step 2.3 entails identifying a set of core performance measures aligned with the key success factors. It begins with a consideration of three sources of possible measures: (1) the inventory of performance measures currently used by the court, produced in Step 1, Assessing Currently Used Performance Measures; (2) models developed by state and national authorities; and (3) core measures developed by comparable courts. Attractive measures drawn from these sources should then be compared against criteria for effective core performance measures, including:

(1) alignment with key success factors identified in Step 2.1;
(2) focus on outcomes, i.e., an emphasis on the condition or status of the recipients of court services or the participants in court programs (outcomes) over that of internal aspects of court processes, programs and activities (inputs and outputs);
(3) contribution to a balance across success factors;
(4) consistency and relevance across the entire court;
(5) driver of success, i.e., it is both an incentive and a tool for improvement;
(6) emblem or symbol, i.e., the court and its stakeholders easily understand its meaning and significance.
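
How might a design team apply these criteria in practice? The following is a minimal sketch, in Python, of a simple scoring exercise: each candidate measure is rated on each of the six criteria, and the candidates are ranked by total score. The measure names, the 1-to-5 rating scale, and the ratings themselves are hypothetical illustrations, not prescribed values.

    # A minimal sketch of scoring candidate measures against the six
    # criteria above. All measure names and ratings are hypothetical
    # illustrations a design team might assign, not prescribed values.

    CRITERIA = [
        "alignment",      # (1) alignment with key success factors
        "outcome_focus",  # (2) focus on outcomes over inputs and outputs
        "balance",        # (3) contribution to a balance across success factors
        "consistency",    # (4) consistency and relevance across the entire court
        "driver",         # (5) driver of success: an incentive and improvement tool
        "emblem",         # (6) easily understood by the court and its stakeholders
    ]

    # Hypothetical 1-to-5 ratings for two candidate measures.
    candidates = {
        "Case Clearance": {
            "alignment": 5, "outcome_focus": 3, "balance": 4,
            "consistency": 5, "driver": 4, "emblem": 5,
        },
        "Court User Satisfaction": {
            "alignment": 4, "outcome_focus": 5, "balance": 5,
            "consistency": 4, "driver": 4, "emblem": 4,
        },
    }

    def total_score(ratings):
        """Sum a candidate's ratings across all six criteria."""
        return sum(ratings[c] for c in CRITERIA)

    # Rank the candidates by total score, highest first.
    for name, ratings in sorted(candidates.items(),
                                key=lambda kv: total_score(kv[1]),
                                reverse=True):
        print(f"{name}: {total_score(ratings)} of {5 * len(CRITERIA)}")

A team could refine the sketch with per-criterion weights, but even a simple unweighted tally helps surface disagreements about candidate measures early.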

The three sources of possible performance measures and indicators mentioned above are a good starting point. They can be helpful not only in identifying specific measures but also in determining how many measures should be identified, what types of measures should be dominant, which measures are critical or core and which are subordinate, and how measures should be tiered in hierarchies of core and subordinate measures.

Why reinvent the wheel? Why not simply adopt a prominent model, such as the set of ten performance measures prescribed by CourTools or by other authoritative sources, and dispense with Step 2 altogether? Because effective performance measures are those aligned with a court’s unique mission and compatible with its jurisdiction, structures, operations, and key management practices. No turnkey operations yet exist for court performance measurement. While models and examples are helpful starting points, none can tell you what to measure, and none will align precisely with your court’s strategic priorities and operational environment. Only the members of a court’s design team may be aware of “snoozing alligators”: important measurement topics (e.g., jail overcrowding attributable to pre-trial detention) that may be unique to a jurisdiction, perhaps even outside the court’s current strategic focus, but that it can ill afford to ignore. With that caveat, let’s examine two overlapping models.

Beginning in the mid-1990s with the promulgation of the 68 performance measures of the Court Performance Standards -- the first groundbreaking court measurement model -- a movement has grown to establish a limited set of performance measures generally applicable to all courts. Two simultaneous efforts spearheaded this movement: one launched in 2003 by the computer company ACS, Inc. under the name CourtMetrix, the other by the National Center for State Courts (NCSC) under the name CourTools. CourtMetrix is a patent-pending software application that presents court performance measures and data in an easily understandable format, using graphs to illustrate trends. It was pilot tested in Washington State in 2004. The NCSC’s CourTools, introduced in tentative form at approximately the same time, describes ten core court performance measures and instructs court managers in how to use them.

Together, CourtMetrix and CourTools prescribe 12 discrete performance measures, referred to as “key” in CourtMetrix and “core” in CourTools:

1. Court User Satisfaction – The percentage of citizens and court users (including attorneys, jurors, judicial officers and all other court employees) giving favorable ratings to the court’s access, procedural fairness, equality, courtesy and respect, and timeliness.

2. Case Clearance – The number of outgoing cases (i.e., resolved, disposed, or concluded) as a percentage of incoming cases (i.e., filed or re-opened).

3. On-Time Case Processing – The percentage of outgoing cases (i.e., resolved, disposed, or concluded) processed within established local, state, or national time standards.

4. Backlog (Age of Pending Cases) – The percentage of pending cases that are “older” than established local, state, or national time standards.

5. Trial Date Certainty – The average number of times cases scheduled for trial are rescheduled before they are heard.

6. Employee Opinions – Court employees’ ratings of their knowledge and understanding, commitment, motivation, and preparedness.

7. Cost Per Case – The average cost of processing a single case, by case type.

8. Collection of Monetary Penalties – Fines, fees, restitution and costs collected as a percentage of those ordered by the court.

9. Reliability and Integrity of Case Files – The percentage of files that are retrieved within established time standards and that meet established criteria of accuracy and completeness of contents.

10. Juror Yield – The number of jurors who are qualified and available to serve, expressed as a percentage of those selected.

11. Juror Utilization – The number of jurors selected to participate in voir dire or to serve on trials, expressed as a percentage of those who are qualified and available to serve.

12. Jury Representativeness – Parity in race and national origin between jurors who are qualified and available to serve and the eligible population of jurors in the jurisdiction.

With some differences in definitions, CourtMetrix and CourTools share the first 11 measures; jury representativeness is not included in CourTools. Both combine some measures into composites or indexes, yielding seven “key” and ten “core” measures, respectively. For example, CourTools joins the four case processing measures – Case Clearance, On-Time Case Processing, Backlog, and Trial Date Certainty – into the Caseflow Timeliness and Efficiency (CTE) Index and allows courts to assign different weights to the four measures (see “Technical Note: The Caseflow Timeliness and Efficiency (CTE) Index,” Court Manager, Volume 18, Issue 4, 2004).
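
To make the weighting arithmetic concrete, here is a minimal sketch of a weighted caseflow index in the spirit of the CTE Index. The caseflow counts, the conversion of trial date certainty to a 0-100 score, and the weights are all hypothetical assumptions made for illustration; as noted above, the choice of weights is left to each court.

    # A minimal sketch of a weighted caseflow index in the spirit of the
    # CTE Index. All counts and weights below are hypothetical, and the
    # conversion of trial date certainty to a 0-100 score is a simplifying
    # assumption made only for this illustration.

    # Hypothetical one-year caseflow counts for a single case type.
    incoming = 1200      # cases filed or re-opened
    outgoing = 1150      # cases resolved, disposed, or concluded
    on_time = 980        # outgoing cases closed within the time standard
    pending = 400        # cases still open at year end
    overage = 90         # pending cases older than the time standard
    trials_held = 150    # trials that went forward
    reschedulings = 210  # total times trial dates were rescheduled

    # Each of the four case processing measures on a 0-100 scale.
    clearance = 100 * outgoing / incoming
    timeliness = 100 * on_time / outgoing
    backlog = 100 * (1 - overage / pending)    # higher is better
    avg_reschedulings = reschedulings / trials_held
    certainty = 100 / (1 + avg_reschedulings)  # zero reschedulings scores 100

    # Hypothetical court-assigned weights; they must sum to 1.0.
    weights = {"clearance": 0.3, "timeliness": 0.3,
               "backlog": 0.2, "certainty": 0.2}
    scores = {"clearance": clearance, "timeliness": timeliness,
              "backlog": backlog, "certainty": certainty}

    cte_index = sum(weights[k] * scores[k] for k in weights)
    print(f"Weighted caseflow index: {cte_index:.1f} of 100")

With these counts and weights, the index works out to about 78 of a possible 100; changing the weights shifts how much a strong clearance rate can offset weak trial date certainty.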

By 2006, at least 25 courts and court systems had used these models to identify specific measures as part of initiatives to develop CPMSs. With the exception of the Family Court of Delaware, which may have been the first court to attempt to build a CPMS, the courts relied on the models, tailoring the measures and adding others not included in the models to fit their unique needs.

The April 2004 proposed plan for a CPMS for the Seattle Municipal Court may be typical. The court identified seven core performance measures considered to represent a balanced scorecard: (1) citizen and justice system partner satisfaction; (2) case management efficiency and effectiveness; (3) workforce strength; (4) case record integrity; (5) equity/fairness; (6) enforcement of court orders; and (7) custody confinement. Only the seventh measure, custody confinement, defined as the number of defendants/convicts placed in custody/confinement by the court, is not represented in the measurement models. The first, fifth, and sixth measures are modifications and expansions of measures found in CourTools and CourtMetrix; citizen and justice system partner satisfaction, for example, adds to court user satisfaction a measure of the quality of the court’s justice system partners. The remaining measures are virtual adoptions of the measures in the two models.

Step 2.4 – Defining Performance Measures in Operational Terms

The last sub-step is the operational definition of each identified measure: a statement of its rationale and of the data elements, data sources, and data collection methods -- the concrete and specific processes or operations -- required to obtain the measurements. Data elements are the various pieces of information that must be collected in order to calculate the value of a measure; few measures are derived from a single piece of data. For example, for a measure of effective restitution collection and distribution, the following data elements must be operationally defined: (1) the types of crimes and victims to be included in the measure; (2) the dollar amounts of restitution ordered, collected, and delivered to victims; and (3) the interval between the order and the delivery to recipients considered the “effective” payment period. Data sources and data collection methods also need to be described, including the types and locations of data sources (dockets, computer databases, case files, financial records, and the perceptions and opinions of court users and employees) and data collection methods and techniques (court record reviews, opinion surveys, and observations). The descriptions of the ten CourTools measures serve as very good examples.
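
One way to keep operational definitions consistent across measures is to capture each one in a common structure. The sketch below does this for the restitution example above; the field names and example values are illustrative assumptions, not a prescribed schema.

    # A minimal sketch of an operational definition captured in a common
    # structure, using the restitution example above. The field names and
    # example values are illustrative assumptions, not a prescribed schema.

    from dataclasses import dataclass, field

    @dataclass
    class OperationalDefinition:
        name: str       # simple definition or short name of the measure
        rationale: str  # the purpose of the measure
        data_elements: list = field(default_factory=list)       # data needed to calculate the value
        data_sources: list = field(default_factory=list)        # where the data are found
        collection_methods: list = field(default_factory=list)  # how the data are gathered

    restitution = OperationalDefinition(
        name="Effective Restitution Collection and Distribution",
        rationale="Victims should receive restitution ordered by the court "
                  "within the effective payment period.",
        data_elements=[
            "types of crimes and victims included in the measure",
            "dollar amounts of restitution ordered, collected, and delivered",
            "interval between order and delivery considered the effective payment period",
        ],
        data_sources=["dockets", "financial records", "case files"],
        collection_methods=["court record reviews"],
    )

    print(restitution.name)
    for element in restitution.data_elements:
        print(" -", element)

Capturing every measure in the same structure makes gaps, such as a measure with no identified data source, easy to spot before Step 4.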

Although detailed descriptions of all data elements, data sources, and methodology can be deferred to Step 4, Testing and Demonstrating Measures, at the very least this sub-step should produce a synopsis: a simple definition of the measure, its purpose or rationale, and the methods by which it will be collected. The operational definitions will help clarify the requirements for data collection and retrieval, database design and development, and other measurement needs. Poorly defined measures -- for example, measures identified only in terms of broad concepts like “increased public trust and confidence” or “increased understanding of orders given by the court” -- may create unrealistic expectations followed by missteps and wasted effort.

Next: Step 3, Measurement Hierarchies.

Previous postings in this series:

(1) Introduction to the Six-Step Process for the Design of an Effective Performance Measurement System (Part 1), Made2Measure, June 6, 2006
(2) Introduction to the Six-Step Process (Part 2), Made2Measure, June 12, 2006
(3) Step 1 -- Assessing Currently Used Performance Measures, Made2Measure, June 17, 2006
(4) Q & A: Can Step 1 Be Taken At the State-Level, Made2Measure, June 23, 2006
(5) Step 2 -- Identifying Desired Performance Measures (Part 1), Made2Measure, July 2, 2006
(6) Step 2 – Identifying Desired Performance Measures (Sub-Steps 2.1 and 2.2), Made2Measure, July 10, 2006

For the latest posts and archives of Made2Measure click here.

Copyright CourtMetrics 2006. All rights reserved.
