
Mismeasured, Misunderstood (Monitor, 2009)

If we do not know why, correctly, we are using these tools, then it is much easier for someone to come along and stop us using these tools, incorrectly


Today’s financial conditions mean that software investments are examined and measured more closely than ever, which makes it essential to get the measurements right. To a great extent, biopharma companies use the wrong measures, and too many measures, to evaluate their clinical research information technology investments. The result is a fundamental misunderstanding of the benefits that can be expected, and of the real value these applications bring.


Misunderstanding through Mismeasurement

The challenges in measuring the value of clinical research software such as electronic data capture (EDC), clinical trial management systems (CTMS), electronic patient-reported outcomes (ePRO) tools, or adverse event systems (AES) have many similarities.


The first and most serious misunderstanding is a forest-for-the-trees phenomenon: in the search for precise metrics, we forget the context in which these applications are used:

• EDC, for example, affects nearly every aspect of clinical development. Consequently, what should a valid measure be? Some micro data management measure comparing eCRF and paper CRF data volume? What does that have to do with remote monitoring, source data verification, knowing patient recruitment data earlier, enabling adaptive trial design, or improving drug supply logistics?

• For a CTMS, is the correct measure of its effectiveness how many users enter their data on time? What data, to be used by whom, and why? And at what cost (to customize the application and its reporting tools to handle this data)? Most enterprise CTMSs are trying to give top management the proverbial “dashboard” view of their huge clinical development investment. Should we measure that by how many days have passed between the monitor’s site visit and the filing of her report in the CTMS?

• ePRO tools change the very nature of the kind of clinical research we can do on our therapeutic candidates. Many study teams question the unfamiliar added cost incurred by ePRO use, but how do we weigh that cost against the value of being able to prove a competitively superior endpoint effect that was hitherto unprovable?


On the opposite end of the continuum, poorly defined micrometrics measure things irrelevant even to daily operational impact or effectiveness. When EDC is left to data management and biostatistics alone (instead of being a shared enterprise with clinical operations), the risk of irrelevant micrometrics, and too many of them, is high. But the same applies to how we track CTMS use. I have heard of companies tracking literally dozens – nearly hundreds – of micrometrics, but to what end? No one can derive meaning from so many metrics, no matter how well designed, and by definition, having so many metrics means each one is too narrow. Examples abound: time from query generation to site response; CRF pages “closed” per day; patients recruited per week; ePRO device failures per site; and so on.


The classic EDC metric is the interval from LPLV (last patient last visit) to DBL (database lock) – probably still useful, if only for its ubiquity. But as years of experience accumulate, one can see vast disconnects in this reported interval among EDC-using companies. Why would that be? Because clinical research is not mechanical, EDC or no, and the human-based processes designed over, under and around EDC can accelerate, or screw up, the database locking time without regard to the eCRF intervention. In other words, we in the clinical research business, who are supposed to be expert in the design of measurement, allow a multitude of confounding variables and uncontrolled factors to tell us the value of our software investments.


We also misunderstand basic components of the evaluation: who are the real users of the software? Who are the true beneficiaries (not always the same as the users by any means)? Across what time period is a reasonable measurement taken (usually executives are expecting data unreasonably quickly)? What represents true workflow or throughput (especially important for adverse event systems)? Without understanding the fundamental business purpose of the application, we cannot measure its business value.


Further errors of understanding are introduced when criteria used for strict financial analyses (like Return on Investment – ROI), or for CRO services evaluation (like output per hour billed, or average daily rates), are applied to software. Software delivered as a service (SaaS) confounds these measures ipso facto, since the product/service identity is blended purposefully, and perhaps advantageously. As for ROI, it is hard to find any purely objective, financially based metrics applied to a scientific enterprise (which, we forget, clinical research still is) that tell the whole story, or any chapter thereof, in a way meaningful to corporate management.


The “maturity” of EDC and other well-established applications at several biopharmas has done little to avoid these misunderstandings, suggesting that the maturity label is still premature. Most companies that have committed to EDC or CTMS, for instance, can only show fundamentally irrelevant micrometrics, misunderstood metrics, or none at all. The use of the software has been institutionalized on gut feel, or out of a reluctance to examine an expensive commitment. These may be good cultural justifications, but they leave you vulnerable in tough financial conditions, in times of executive turnover from mergers and acquisitions, and when resistance to change among middle management grows louder.


But this is the problem: if we do not know why, correctly, we are using these tools, then it is much easier for someone to come along and stop us using these tools, incorrectly, because there is no proof to justify them. Misunderstanding through mismeasurement – like assuming EDC will reduce field monitoring time by XX% and failing to prove that – will undermine the likelihood of funding and resourcing the continuing support and enhancement these applications will always need.


Re-measuring Today; Understanding Tomorrow

There is a way out of this dilemma, and it is as simple as stopping mismeasurement immediately, right now, as soon as you’ve finished this column. Even without a correct measure to replace your mismeasures, no measuring would be better than wasted measuring. Then start work on meaningful metrics, a vital few, that are:

• Tied to your company’s strategy

• Traceable to the business purpose of the software

• Useful to those you depend on for the raw data

• Feasible to obtain

• Valid to measure, free of confounding variables

• Gathered over sufficient time to produce meaningful results.


Meaningful metrics will yield understanding, and will properly evaluate whether the way we are using our expensive tools is producing the results they most surely can.
