
XXX

Now that my title has grabbed your attention, let’s talk about something people don’t think much about when purchasing information technology: the cost of using it. When you start to implement the clinical research software package you purchased, you may find the implementation costs to be as obscene as a XXX website. There are good reasons for the high cost of implementation; you should be prepared for it beforehand, and you should factor these potential costs into your strategy for acquiring the enabling technology.

The title of this column is based on a pretty useful rule of thumb: the actual first-year cost of acquiring and implementing a new clinical research software application is about three times (3X) the price of the software license alone. Sometimes it’s less, sometimes it’s more. But if you keep that figure in your head, it’s a pretty good guideline. So if you pay $300,000 for a new CDMS (clinical data management system) or CTMS (clinical trials management system), you are well-served by budgeting $900,000 in total costs.
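For readers who like to see the rule of thumb as a calculation, here is a minimal, purely illustrative sketch; the 3X multiplier and the $300,000 example come straight from the column, while the function name and anything beyond that simple multiplication are my own assumptions.

```python
def first_year_budget(license_cost: float, multiplier: float = 3.0) -> float:
    """Rule-of-thumb estimate: first-year total cost is roughly 3x the license price."""
    return license_cost * multiplier

# The column's example: a $300,000 CDMS or CTMS license
license_fee = 300_000
print(f"License fee:       ${license_fee:,.0f}")
print(f"First-year budget: ${first_year_budget(license_fee):,.0f}")  # about $900,000
```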

“Wow!” I can hear you (and the vendors) screaming. This is a bit like revealing the secret of the pain of childbirth before a couple gets pregnant. Maybe now none of you will buy any software, not believing it can possibly be worthwhile. That is certainly not my intention; rather, I mean to alert you to what lies ahead and help you plan accordingly.

Sources of Cost
Where does this cost multiple come from? Most research professionals can immediately guess two key factors: validation and training. Validation of the software application involves many components, not the least of which in our industry is regulatory compliance. The sponsor’s responsibilities in using a software tool for EDC (electronic data capture) or CDM, namely ensuring that the clinical data as recorded by the investigator have not been altered in any way during capture, storage and analysis, mean that a significant validation effort is required for any such software. And this responsibility is not alleviated by some “validation pack” from the vendor. Sponsors, for whom this software will carry the data upon which their discovery and development investments depend, are well-advised to design and execute the validation plan themselves, or to find a reputable third party to assist them.

Training is just one of the aspects of change management that come with any enterprise software introduction. Training is always the last item in the operations budget and the first to be cut. Under-fund training at your peril. Nearly everyone has a story of how poorly trained staff led to underutilization, misuse, or simply no use of the software so expensively obtained. Training done right addresses each of the multiple audiences according to what they need to know, when they need to know it, and how they learn best. If you start to add up the number of people who need to be trained, how many sessions it will take, how many different courses it will require, where you will have to go to deliver them, and how often it will have to be repeated, you can see the dollars mounting.

But there’s more to this picture than validation and training. Every new piece of clinical research software changes the way people in your organization work. In our industry, this change needs to be thoroughly understood and documented. That in turn means new SOPs, working practices, clearly defined roles and responsibilities, perhaps documented process maps or workflows, and a change control process governing it all.

Then there is legacy migration: most sponsors and CROs are purchasing a new software application to replace an older one, and some or all of the old data needs to be moved to the new application, which can be a particularly complex technical task. This is one area where careful thought should be given to what truly needs to be done; while validation, training and SOPs are unavoidable, a well-crafted migration strategy may save thousands of dollars.

There are numerous other “soft costs” to software implementation, and importantly, these costs do not necessarily appear as budget line items. The staff time spent in meetings, teleconferences, workshops, exercises and briefings alone adds up to a considerable sum in salary and “opportunity cost” (the cost of being taken away from work that could be more productive to revenue or operations). Soft costs can be mitigated by excellent implementation planning and procedures.

There are “hard costs” too: computer hardware, network infrastructure, security infrastructure, and other technical expenses. You may have more of this already in place than you realize, but depending on what you are implementing, this may be an unexpected added expense. This is another point where alternative strategies can reduce your costs.

No Worries
Recently some vendors (and their pharmaceutical customers) have announced publicly that their products have been used with little, or even “no”, process change, implying that the 3X formula described here is not immutable, or perhaps is out of date. These claims are disingenuous at best. When the sources of these claims are investigated, you will find that while the vendors and executives may believe their processes didn’t change, those who actually do the work will tell quite a different tale, one of heroics performed to make the software deliver. Or you might discover that the announcement is premature, and that much work lies ahead to ensure compliance and reproducibility of the software’s success.

Mitigation
This 3X phenomenon has a number of implications for sponsors and CROs. The first strategy is to anticipate it and budget for it. But you can also mitigate the problem in a number of ways. Include implementation items in the list of requirements you use for your RFP or similar dialogue with the vendors. Evaluate their answers to see whether one vendor may in fact offer products or services that will burden your internal resources less.

Second, seek help from consultants who have specific experience with clinical research software implementations. This will save time and money by providing a jumpstart and a repository of “lessons learned” to apply to your own project. You can also gain such information by leaning on colleagues in other departments, or even at sister pharmaceutical companies, if they are willing to share what they have learned.

In addition, be sure to consider alternative means of acquiring and using the software. It may make more economic sense to outsource the application’s function (using a CRO for your CDM instead of buying your own CDMS, for instance). The tradeoff in outsourcing, of course, is the loss of control, and the fact that no outsourced supplier can ever care as much about your data as you do. Another alternative is the ASP (application service provider) model, in which the software vendor or a services company hosts the application for you, alleviating some hardware, network and maintenance costs, or at least spreading them out over time. Some vendors offer software operation (such as EDC trial design and setup) along with the application hosting. Again, this may be efficient for some period of time, but may become much too costly if you ramp up use of the software widely across your enterprise.

The implementation costs for clinical research software may seem high, but they are a necessary evil. What’s really obscene is what happens when you don’t budget properly for implementation, and you get caught with your pants down.

Measure Twice, Re-Engineer Once

Experienced carpenters will tell you, “measure twice, cut once.” This is always timely advice, especially for research sponsors and CROs who are re-examining their work processes. We often see companies that have undergone the wrenching and expensive experience of process re-engineering, only to do it again just a few years later. This is much like having to cut that piece of lumber over again and throw away the first board. Metrics, fashionable to talk about but usually poorly understood, are a way out of this wasteful use of resources.

In this column and elsewhere, it is often repeated that the implementation of a new technology, or improvement in clinical operations cycle time, requires a change in process. The question is, how do we know if the change has been a good thing? We have to measure something. And most importantly, we have to measure what we do before the change, in order to know how the change has affected us. This may seem obvious, but it is not always done.

Generally, we see companies plunge into technology adoption, or pursue high-level, abstract business goals, and then, sometime well into the project, management and staff alike develop an uneasy feeling that maybe it was not worthwhile. In the worst cases, skepticism and resistance set in, even at relatively senior levels. We have found it is more critical to understand how you do business today than to anticipate in detail the changes that will (or may) be brought by new technologies. The latter you will learn by doing; but you won’t know whether what you’re doing is any good unless you fully understand where you started.

Using and Abusing Metrics

There are clear principles for using metrics correctly:

– Keep the number of things you measure small: focus on the “vital few”

– Ensure the metrics chosen are valid measures of your work

– Ensure collecting the necessary data is feasible

– Ensure the data is in fact collected, in a timely manner, by those who know the data

– Involve everyone in measuring them

– Show the data to everyone in the organization

– Ensure and demonstrate that management is committed to acting on the data

– Ensure the data is used to create a learning organization, not an atmosphere of fear

– Ensure that individual contributors, who are usually asked to generate the most critical data, get something back for their effort that is meaningful to their daily work.

Examples of using metrics incorrectly abound:

– Collecting data on so many parameters that a) no one reads the reports; and b) no one can tell how one’s efforts to improve have affected the organization

– Mismatching measures and project objectives (such as using Internet browser page turn times as a measure of EDC effectiveness)

– Picking measures for which data can’t be easily gathered (such as CRA satisfaction with a clinical trial management system)

– Keeping the data only in the hands of top management, so that the providers of the data never see the results

– The absence of any commitment by management to use the data (so the data goes up, and silence rains down).

Measure Before You Start

The worst abuse of metrics, however, is not measuring how you perform today. Very few clinical research organizations really know how long it takes them to clean a CRF, how long it takes to get from a draft protocol to an authorized protocol, how expensive a protocol amendment is, how fast their patient recruitment falls off from target, or how many CRAs they need per study type.

Measuring twice, before you change and afterwards, has two benefits: it will be instantly informative, in unexpected ways, and it will ensure you measure your impending changes objectively.

Start with understanding how you do business today. When organizations measure how they work before a change, they are likely to discover problems, issues and competencies that will alter the nature of the re-engineering or technology initiative originally planned. This is not a sidetrack: this is good. It ensures you are cutting the lumber the right way the first time.

Then, before you change, decide how you will measure the success of the change after it is completed. Otherwise, you will be affected by the change itself: your biases, pro or con, will influence your perception of the change ex post facto. When the CTMS has been rolled out, or the EDC pilot finished, or the clinical department reorganization is completed, take out those pre-defined metrics and measure how you’re doing now. The result will be a much more objective appraisal of what may have felt like a painful experience.

It has been said, “not everything that can be counted counts, and not everything that counts can be counted.” Use metrics correctly, and you can make your operations innovations count.