Standardized Measurement Reports Generated by Automated Systems: An Afterthought?

Automated case management systems -- as well as other automated systems for finance, jury utilization and management, fine and fee collection, and other court functions -- typically have a “reporting” functionality. Users are able to view various standardized reports generated by the automated systems.

An Example

For example, using the Odyssey case management system, developed by Tyler Technologies Inc. of Plano, Texas, the Clerk of the Twentieth Judicial Circuit in Ft. Myers, Florida (and, at least theoretically, any individual or agency with inquiry access to the system) can access over 60 standardized reports under the following headings:

(1) case analysis (e.g., cases without activity, a listing of cases without base events that have been filed for a specified period of time);

(2) case administration (e.g., case load activity report, a summary report of court activity indicating changes in counts and percentages of cases filed and disposed from the start to end dates of any date range);

(3) Florida State Reports (e.g., collection rate report, the percentage of collections based on the assessed amount over a series of quarters);

(4) workflow (e.g., scanning activity report, which displays daily or hourly totals, per employee, of documents and document pages scanned);

(5) financial reports (e.g., cashier accuracy, which shows cashier till counts and totals over a date range);

(6) financial activity (e.g., GASB 34, which calculates how much money was assessed during one date range and how much was collected against those assessments in another date range);

(7) registry and trust (e.g., account journal, a listing of one or more registry account type transactions during a user-specified time period); and

(8) bond and warrant reports (e.g., bond activity and outstanding warrants, detailed information about bonds with a particular status within a specified date range or a list of all currently active bonds).

A Shopping List

How are such standardized reports and their data elements identified and developed? The short answer: not very well. Too often the process is an afterthought that produces a virtual shopping list of reports, most of which are not used. Typically, here’s the way it’s done (with a little hyperbole to drive home the point).

At some time during the development of an automated system -- maybe as long as a couple of years before installation and launch -- the vendor convenes a group of prospective court users in a brainstorming “requirements” session to address the question, “What kind of reports would you like to see generated by the automated system?” The group typically has no organizing or conceptual scheme to guide it except, perhaps, an example of a list of reports other jurisdictions have generated. Moreover, no one in the group will have seen the automated system in operation, except perhaps during a short visit to a jurisdiction that is currently using or testing it. Because the reporting functionality of the system is seen as less important than functionalities related to production and operations, and because “reports” are seen as a specialized requirement, the members of the group tend to be operational-level managers and analysts (the bean counters). The strategic and even tactical decision makers of the court, its leaders and top managers, are conspicuous by their absence.

Nonetheless, the group generates a tentative list of reports, each described in a sentence or two, based largely on the interests of its individual members. An indirect nod to strategic measures may be given by requiring the vendor to include “state compliant” reports, meaning that the system should be able to generate the reports currently required by the state. Other than that, the list tends to consist mostly of operational-level listings and statistics -- a virtual shopping list of individual interests.

The group sends the tentative shopping list to court executives and managers for their review, comment, and approval. The response is meager and superficial. The group makes the final revisions, and the reporting functionality of the automated system is built and installed. Not surprisingly, a year after installation, only a fraction of the shopping list of standardized reports is ever downloaded for use.

A Better Way

While it is, of course, reasonable that the primary emphasis of automated management systems is the production system itself, i.e., the day-to-day creation of the documents and guides of the particular business process, it is unfortunate that the reporting functionality seems to be an afterthought divorced from strategic thinking. Do car makers think about the design and development of a car’s dashboard only after all the other pieces of the car are built?

It is surprising that even courts that embrace the Court Performance Standards and Measures or the CourTools have not used the concepts of core performance measures and measurement hierarchies to develop the reporting functionality of automated systems. Even courts that have not developed a comprehensive court performance measurement system (CPMS) based on the Standards or the CourTools can greatly facilitate the identification of desired reports generated by an automated system by using an organizational scheme of strategic, tactical, and operational levels of performance reports. This scheme is at the heart of core performance measures and measurement hierarchies.
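
To make the three levels concrete, here is a small sketch in Python. It is purely illustrative: the report names, the level assignments, and the idea of tagging a requirements-session shopping list this way are assumptions made for the example, not features of the Odyssey system or of any particular court’s report catalog.

```python
from collections import defaultdict

# Hypothetical illustration: tag each candidate report from a requirements
# session with the measurement level it serves. Report names and level
# assignments are invented for this sketch.
candidate_reports = {
    "Trial date certainty trend (core measure)": "strategic",
    "Time to disposition by case type, quarterly": "tactical",
    "Clearance rate by division, monthly": "tactical",
    "Cases without activity, daily listing": "operational",
    "Scanning activity per employee, daily": "operational",
}

# Group the list by level so court leaders can see, before the reporting
# functionality is built, whether anything on it speaks to strategic goals.
by_level = defaultdict(list)
for report, level in candidate_reports.items():
    by_level[level].append(report)

for level in ("strategic", "tactical", "operational"):
    print(f"{level.upper()}: {len(by_level[level])} report(s)")
    for report in by_level[level]:
        print(f"  - {report}")
```

A list dominated by the operational bucket, with little or nothing in the strategic one, is exactly the shopping-list pattern described above.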

Measurement hierarchies in which lower-level subordinate measures, and corresponding standardized reports, “cascade” down from core measures help align the overall goals of the court with the goals and objectives of its divisions, units, and programs. They make clear to all court employees precisely how their actions help fulfill the court’s mission and strategic goals. For example, a clerk in a court’s jury commissioner’s office will recognize that a reduction in the percentage of undeliverable mail -- a lower-level subordinate performance measure that may have its own report generated by an automated jury system -- drives juror qualification yield, which in turn drives the overall juror yield, which drives juror representativeness, a high-level core measure of fairness and equality.
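
The jury example can be made concrete with a little arithmetic. The sketch below uses entirely hypothetical figures, and the formulas are simplified assumptions for illustration (not the official CourTools definitions or any court’s actual calculations); it shows only how the lower-level measure feeds the higher-level yields.

```python
# Hypothetical figures and simplified formulas; for illustration only.
summonses_mailed = 10_000      # juror qualification questionnaires/summonses sent
undeliverable = 1_200          # returned by the post office as undeliverable
qualified = 5_500              # recipients found qualified to serve
reported_for_service = 4_400   # qualified jurors who actually appeared

# Lower-level subordinate measure (might have its own report in a jury system).
undeliverable_rate = undeliverable / summonses_mailed

# Intermediate and higher-level measures that the lower-level measure drives.
qualification_yield = qualified / (summonses_mailed - undeliverable)
overall_juror_yield = reported_for_service / summonses_mailed

print(f"Undeliverable mail rate:   {undeliverable_rate:.1%}")
print(f"Juror qualification yield: {qualification_yield:.1%}")
print(f"Overall juror yield:       {overall_juror_yield:.1%}")
# Juror representativeness, the core measure, is not computed here; it depends
# on comparing the demographic profile of jurors to that of the community.
```

Rerunning the sketch with fewer undeliverable summonses shows the yields rise, which is the point of the cascade: improving a subordinate measure visibly moves the measures above it.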

Using the concepts of core performance measures and measurement hierarchies as an organizing scheme to develop the reporting functionality of automated systems can transform a shopping list divorced from the strategic goals of a court into a powerful tool to manage performance.

© Copyright CourtMetrics 2006. All rights reserved
