Showing posts from January, 2006

Curling Performance Measures – An Olympic Sweep

From the What-Will-They-Measure-Next Department comes news from the world of curling – the Olympic sport in which players brush the ice in front of a sliding, 42-pound granite stone. According to the latest issue of Wired, UK researchers spent $89,000 developing a sensor-laden broom that measures the velocity of the curlers’ strokes, the strain they put on the brush, and the precise temperature of the ice at the time. “Do you want to be sweeping your heart out and not know what you’re doing?” asks Mike Hay, the UK’s Olympic coach. It’s certainly something to think about! Copyright CourtMetrics 2006. All rights reserved.

How Do You Measure Up?

A court unwilling or unable to measure and account for its performance rigorously and responsibly is unlikely to achieve independence and accountability. Since 2003, my colleagues and I have been experimenting with a tool to assess whether a court’s capabilities for performance measurement are “measuring up.” Using a simple self-administered test, one version of this tool allows a court to assess its readiness -- in terms of demonstrated proficiency, present and future capacity, and political will -- to take the core court performance measures prescribed by the CourTools. Including this readiness measure among a court’s core measures sends a powerful message: one way for a court to improve its independence and accountability is to develop its capacity to measure and account for its performance. The Performance Institute, a private, non-partisan think tank seeking to improve government performance through the principles of performance, competition, accountability, and transparency, has…
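The post leaves the scoring mechanics to the tool itself, but a minimal sketch may help make the idea concrete. The three dimensions below come straight from the post; the 1-to-5 rating scale, the item scores, and the readiness threshold are illustrative assumptions, not the actual instrument.

```python
# Hypothetical sketch of tallying a self-administered readiness test.
# The three dimensions come from the post; the 1-5 scale, the item
# ratings, and the 3.5 threshold are assumptions for illustration.
from statistics import mean

def readiness_scores(ratings: dict[str, list[int]]) -> dict[str, float]:
    """Average the 1-5 item ratings within each dimension."""
    return {dim: mean(items) for dim, items in ratings.items()}

def is_ready(scores: dict[str, float], threshold: float = 3.5) -> bool:
    """Treat a court as 'ready' only if every dimension clears the bar."""
    return all(score >= threshold for score in scores.values())

ratings = {
    "demonstrated proficiency": [4, 3, 5, 4],
    "present and future capacity": [3, 4, 4, 3],
    "political will": [5, 4, 4, 5],
}
scores = readiness_scores(ratings)
print(scores, "ready:", is_ready(scores))
```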

Q & A: The Meaning of Performance Metrics

Q: My court recently completed Measure 1 of the CourTools and found that 73% of the court users who completed the survey “agreed” or “strongly agreed” that the court was accessible, prompt, respectful, courteous, and fair. What does the score mean? Is that good or bad?

A: It is true that, without links to and comparisons with other referents, a score of 73% has very limited meaning. Fortunately, it is relatively easy to link this specific score to a number of referents -- the breakouts of the aggregate metric, the same metric over time, and the same metric in other courts -- so as to imbue the metric with meaning. (A metric refers to the numbers a measure uses to describe the attribute being measured. In this case, the measure is court users’ satisfaction with the court, and the metric is the percent of users who “agreed” or “strongly agreed” with the items in the survey.)

Breakouts as Referents

Let’s assume that 73% agreement is the average (aggregate)…
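To illustrate the arithmetic, here is a minimal sketch with fabricated survey data. The breakout variable (case type) and every number in it are assumptions; the point is only that a single aggregate percentage decomposes into subgroup percentages that can each tell a different story.

```python
# Fabricated example: decompose an aggregate satisfaction metric into
# breakout referents. The case types and responses are invented.
responses = [
    # (case_type, agreed or strongly agreed with the survey items)
    ("civil", True), ("civil", True), ("civil", False),
    ("criminal", True), ("criminal", False), ("criminal", False),
    ("family", True), ("family", True), ("family", True),
]

def pct_agree(rows) -> float:
    """Percent of respondents who 'agreed' or 'strongly agreed'."""
    return 100 * sum(agree for _, agree in rows) / len(rows)

print(f"aggregate: {pct_agree(responses):.0f}%")
for case_type in sorted({ct for ct, _ in responses}):
    subset = [row for row in responses if row[0] == case_type]
    print(f"  {case_type}: {pct_agree(subset):.0f}%")
```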

Q & A: Outcome vs. Measure vs. Target vs. Standard

Q: Those of us who write and speak about court performance measurement rely on our share of jargon -- and it’s often confusing. What are the differences among the terms “outcome,” “measure,” “target” and “standard”?

A: Socrates said that the beginning of wisdom is definition. We’ll see. Outcomes are the benefits or changes for the intended beneficiaries of a court’s programs and services. They may relate to the knowledge, skills, attitudes, values, behavior, condition, or status of program participants or recipients of services. Examples include litigants’ satisfaction with a court’s courtesy and responsiveness, success of probation, time to case disposition, clarity of orders, integrity of case files, percent of mediation agreements, enforcement of orders, and perceived fairness of proceedings, as well as percent of expected expenditures and percent of revenues received. Outcomes should not be confused with the activities of those who run the court. (See the October 1…
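Since the excerpt breaks off before the remaining terms are defined, here is one conventional reading of the vocabulary, sketched as a simple data structure. The glosses for “target” (a level the court sets for itself) and “standard” (a level prescribed externally) are my own, and every example value is invented.

```python
# Hypothetical sketch of the post's vocabulary. The glosses and the
# example values are illustrative, not drawn from the CourTools.
from dataclasses import dataclass

@dataclass
class PerformanceMeasure:
    outcome: str     # benefit or change for the court's beneficiaries
    measure: str     # what is observed to gauge that outcome
    metric: str      # the numbers the measure uses to describe it
    target: float    # the level this court aims to reach (assumed gloss)
    standard: float  # an externally prescribed level (assumed gloss)

example = PerformanceMeasure(
    outcome="court users perceive the court as accessible and fair",
    measure="user satisfaction survey",
    metric="percent who 'agreed' or 'strongly agreed'",
    target=80.0,
    standard=75.0,
)
print(example)
```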

Making the Most of Performance Measures

People must sign on to the purpose of a performance measure, the key results it indicates, not just the metric. Performance measures are derived from the mission and strategic goals of a court and the factors important to its stakeholders. Decisions about what to measure are, to a large extent, collective judgments that reflect the intended use of the performance information (e.g., major reform, public accountability, program improvement, or resource allocation) and the needs and desires (e.g., efficiency, equity, quality, or improving public confidence in the courts) of the court’s stakeholders. Kathryn E. Newcomer, professor and chair of the Department of Public Administration at George Washington University, aptly noted in Using Performance Measurement to Improve Public and Nonprofit Programs (Jossey-Bass, 1997) that, ultimately, the performance of programs and organizations is a socially constructed, not an objective, reality. Even when we have identified a performance measure…