
Measure Twice, Re-Engineer Once

Experienced carpenters will tell you, “measure twice, cut once.” This is always timely advice, especially for research sponsors and CROs who are re-examining their work processes. We often see companies that have undergone the wrenching and expensive experience of process re-engineering, only to do it again just a few years later. This is much like having to cut that piece of lumber all over again and throwing away the first board. Metrics, fashionable to talk about but usually poorly understood, are a way out of this wasteful use of resources.

In this column and elsewhere, it is often repeated that implementing a new technology, or improving clinical operations cycle time, requires a change in process. The question is, how do we know whether the change has been a good thing? We have to measure something. Most importantly, we have to measure what we do before the change, in order to know how the change has affected us. This may seem obvious, but it is not always done.

Generally, we see companies that plunge into technology adoption, or pursue high-level, abstract business goals, and sometime well into the project, management and staff alike have an uneasy feeling that maybe this was not worthwhile. In the worst cases, skepticism and resistance set in, even at relatively senior levels. We have found it is more critical to understand how you do business today than it is to anticipate in detail the changes that will (may) be incurred by new technologies. The latter you will learn by doing; you won’t know if what you’re doing is any good unless you fully understand where you started.

Using and Abusing Metrics

There are clear principles for using metrics correctly:

– Keep the number of things you measure small: focus on the “vital few”

– Ensure the metrics chosen are valid measures of your work

– Ensure collecting the necessary data is feasible

– Ensure the data is in fact collected, in a timely manner, by those who know the data

– Involve everyone in measuring them

– Show the data to everyone in the organization

– Ensure and demonstrate that management is committed to acting on the data

– Ensure the data is used to create a learning organization, not an atmosphere of fear

– Ensure that individual contributors, who are usually asked to generate the most critical data, get something back for their effort that is meaningful to their daily work.

Examples of using metrics incorrectly abound:

– Collecting data on so many parameters that (a) no one reads the reports and (b) no one can tell how one’s efforts to improve have affected the organization

– Mismatching measures and project objectives (such as using Internet browser page turn times as a measure of EDC effectiveness)

– Picking measures for which data can’t be easily gathered (such as CRA satisfaction with a clinical trial management system)

– Keeping the data only in the hands of top management, so that the providers of the data never see the results

– The absence of any commitment by management to use the data (so the data goes up, and silence rains down).

Measure Before You Start

The worst abuse of metrics, however, is failing to measure how you perform today. Very few clinical research organizations really know how long it takes them to clean a CRF, how long it takes to get from a draft protocol to an authorized protocol, how expensive a protocol amendment is, how fast their patient recruitment performance falls off from target, or how many CRAs they need per study type.

Measuring twice, before you change and afterwards, has two benefits: it will be instantly informative, in unexpected ways, and it will ensure you measure your impending changes objectively.

Start with understanding how you do business today. When organizations measure how they work before changing, they are likely to discover problems, issues and competencies that will alter the nature of the re-engineering or technology initiative originally planned. This is not a sidetrack: this is good. It ensures you are cutting the lumber the right way the first time.

Then, before you change, decide how you will measure the success of the change after it is completed. Otherwise, you will be affected by the change itself: your biases, pro or con, will influence your perception of the change ex post facto. When the CTMS has been rolled out, or the EDC pilot finished, or the clinical department reorganization is completed, take out those pre-defined metrics and measure how you’re doing now. The result will be a much more objective appraisal of what may have felt like a painful experience.

It has been said, “not everything that can be counted counts, and not everything that counts can be counted.” Use metrics correctly, and you can make your operations innovations count.
