
There’s something missing from the eSolutions discussion. In fact, there are two key missing links:

— eSolutions that are no more than bridges between silos will not meet the needs of a rapidly changing clinical development environment; and

— strategies, pilots, and initial use of eSolutions are skipping over the hard work of operational change management, just as most industry-specific IT innovations have before them.

 

Serving the Changing Environment

Just as new information technology in the world at large brings a myriad of innovation possibilities to all industries, new market, business, and science developments are generating the need to approach biopharma clinical development quite differently. If eSolutions strategies and tools do not enable the changing business structure, eSolutions will be expensive, wasted investments.

 

For instance, although CROs have been around for over 30 years, biopharma is again reinventing how it uses these services. The near future will feature complex relationships between sponsors and CROs, with high variability even within a single sponsor, and the traditional lines of responsibility and function will continue to blur, even as some sponsors begin to pull back from knee-jerk outsourcing.

 

Because this is happening at the same time that government and the public demand more accountability and transparency, the need to gather, integrate and report knowledge (not just information or simple data) places sophisticated pressures on the clinical development function. eSolutions need to be part of, or indeed lead, the design of new means of clinical development. If we do not re-imagine clinical development, we will fail to meet these challenges in a business environment of reduced financial resources.

 

Does it make sense for us to continue to organize clinical development in silos that reflect the paper-based, pharma-centric, linear workflows of the past? Or do we need to have our work (and the technologies which enable it) reflect the patient-centric, multi-dimensional, nimble realities of today? How could we re-imagine clinical development? Instead of thinking in a linear workflow, should we now organize our talent and technologies by who our customers are (internal or external)? By customer need? By business objective (“project”)? By distinctive competency? These choices are crucial, because each leads to dramatic changes in human resources, inter-business dependencies, and therefore in eSolutions designs.

 

If eSolutions to you means automatic reconciliation of the EDC and safety system databases, that’s not re-defining clinical development – although it is a solution to a real problem. Tying investigator performance to payment timelines without having seven sets of hands touch the process is a solution to a real problem, but it’s not re-defining clinical development. So as useful as these solutions are, and as challenging as their cross-silo integrations may be, in many ways these efforts are simply fighting the last war.

 

Implementing for Success

The gap between technology innovation and successful implementation is growing again. In many ways this is a repeat of the late 1990s, when the Internet brought great technical innovation but pharma had little understanding of, or cultural tendency toward, exploiting it. Just as the gap seemed to be closing sufficiently, technology and market forces have upended accepted wisdom again.

 

eSolutions undoubtedly will proceed, step by step, but we will stumble, losing years of progress, if implementation is not respected and understood. Have we learned from the early years of difficult EDC adoption? Have we learned from the extended, expensive CTMS projects? Have we learned from our underutilized adverse event systems? If we have learned, we know now that careful attention to organizational impact, sustained change governance, thoughtful process efficiency, and creative technology exploitation will be the keys to realizing the benefits of the eSolutions we will require.


 

As I was writing this column, we had a sign from heaven – or at least Rockville: the FDA had issued its Final Guidance for Industry on Patient-Reported Outcome Measures. If ever there was a “pro” for ePRO (Electronic Patient-Reported Outcomes), this is it. There is nothing that garners the attention of biopharma executives like a statement from one of its key regulators, and the Guidance is welcome news to those of us who have advocated for a wider use of PRO data with the reliability that an electronic means of PRO collection brings. I couldn’t have asked for a more timely coincidence.

 

Pros over Cons

Let’s look at the “pros” of ePRO in three ways: pros, pro’s, and prose. Let’s assume you have a product in clinical trials that needs data recorded directly by the patient, either at home or in the clinic. You will use this data to investigate a primary or secondary endpoint, and as such, this data is essential to your understanding of the disease, of your product, and of patient experiences of both. Simply put, if this data is needed for this kind of research, you cannot use paper-based methods of collection anymore. The worthlessness (inaccuracy, untimeliness) of paper-collected PRO data is now well accepted, beginning with anecdotal evidence experienced by researchers decades ago and proven in controlled examinations like the one published in the British Medical Journal in 2002 (Stone et al., “Patient non-compliance with paper diaries,” BMJ 324:1193, 18 May 2002). It is not inaccurate to say that using paper diaries in research today borders on the unethical, and if nothing else, it is a manifest waste of time and money. Use “e” methods for PRO or nothing at all.

 

But there’s more. What’s often missing from the discussion in favor of ePRO is the change in science it enables: you now have methods of data collection that let you ask, with scientific rigor, research questions you could never ask before. So while safety and disease efficacy will always be important quantitative goals of clinical trials, you can now explore whether and why your candidate may be superior to other treatments along the qualitative dimensions commonly lumped under umbrella terms like “quality of life,” “health outcomes,” “health economics,” or “evidence-based medicine.” In some respects, these non-specific terms have misled the listener and undersold the impact that an imaginative study design can now bring to the research program. New market opportunities, and new means of improving patient health, await.

 

While this column is focusing on pros, not cons, perhaps one of the biggest challenges to widespread substitution of paper PRO with ePRO is the perceived disproportionate cost of “adding” ePRO to the study budget. Without going into great detail, a simple characterization of this issue is that ePRO services today rely on (yet another) third-party vendor, with its own bid and its own quotation of cost, which is thereby easily identified by study teams as “incremental.” Besides missing the fact that paper PRO also costs something, I think there is a way around this: ePRO is another means of EDC in clinical trials (a characterization I have resisted in the details because I am too close to the trees to see the forest). In this sense, it becomes comparable to the (now forgotten) debates over electronic central laboratory data. So the chain of reasoning is the same: if ePRO data is really important to you, you should (and perhaps must) collect it electronically, just as you would, in 2010, collect most any other trial data. In other words, it is not a discretionary cost. If you need it, you need to spend it. If you don’t need it, don’t spend it (and live without that data).

 

The Pro’s at Work in ePRO

Are there “pros” (professionals) in ePRO? This question has been a legitimate concern of the biopharma industry as we have watched ePRO vendors struggle to create a new market from scratch, learn the processes and logistics of a new research methodology, incorporate the essential scientific component, and try to scale to the volume of studies where ePRO use is possible.

 

Although the vendor community is still maturing and does not compare in top-to-bottom professionalism to, say, the EDC vendor community (or to vendors for statistical analysis tools, document management, or safety monitoring), the ePRO vendor universe has several admirable, time- and trial-tested providers for biopharmas to lean on. As always with a technology market, older ePRO companies are getting much more professional in the “soft” side of their offerings (services, logistics, science consulting) while their software and hardware technology tends to lag, and the new ePRO companies sometimes have more exciting technology but insufficient experience in the soft side.

 

It’s important to keep the technology and the services separate. Unlike other clinical development IT spaces, where the technology has become very similar vendor to vendor, technology still matters in ePRO. And it is a bedevilment that we still don’t quite have the magic, one-size-fits-all, ideal hardware platform, and we may never. This is challenging for the vendors, but also for their customers. Sponsors looking to make long-term commitments to ePRO are understandably confused by the myriad of hardware choices and accompanying software platforms, and this leads to a reluctance to commit to any one approach, even while accepting ePRO conceptually.

 

The service component is particularly critical in ePRO because, for now, it needs so much support. This support primarily comes from the vendor, since sponsors have not absorbed, or cannot absorb, ePRO capabilities internally. Services are also critical because, as with most clinical development IT, sponsor processes are not yet mature, tailored to the tool, or optimized for individual sponsor efficiency. This affects all aspects of ePRO services, including logistics, supply, support, helpdesk, workflow process, and project management. Most vendors today show an uneven professionalism in services – some are true pros in logistics, some are rapid study designers, others have been lucky with their project managers.

 

On the other hand, one area where a large handful of providers are truly professional is in the science of ePRO – the development, tailoring, selection and validation of the “instruments” (questionnaires) that are chosen for administration on ePRO devices.

 

Where is the Prose?

Perhaps what is missing the most at the moment is the prose about ePRO. That is, we are still not talking enough about ePRO, to the right decision-makers, with the most appealing contentions and data, to have ePRO become as widely used and uncontroversial as central lab data, or even EDC. Clearly, not enough clinical development personnel understand the horrors of paper (since we are still doing paper PRO studies), nor do they understand the potential for new scientific explorations possible with valid patient-reported data. For some reason, ePRO remains at best the province of biopharma’s marketing guys, or “outcomes nerds,” and PRO is viewed as a “last resort” and not “real” research. This is an antique attitude, and it contributes to the resistance to funding ePRO services. The vendors have probably talked enough; sponsors must start writing and speaking more clearly and openly about their success with ePRO on the one hand, and any lingering concerns on the other hand.

 

The pros of ePRO are clear. The professionals of ePRO are improving. We need more prose on ePRO so that all are well-informed about when and how ePRO should be added to the clinical development program.


 

Herewith a radical proposition: the least useful requirement for a successful software acquisition project is a set of requirements. This seems pretty radical, doesn’t it? How can this be true, when developing requirements is in all the standard technology and consulting teachings?

 

I can support this proposition in two ways:

1) the usual method for gathering, documenting, quantifying, and evaluating requirements for clinical research software is deeply flawed and expensive; and

2) the focus on functional requirements distracts from other, equally or more important evaluation criteria, and drains resources away from the implementation efforts needed after software selection.

 

A Fatal Assumption

Typical software development or selection methods require that internal or external personnel go out and interview staff in the departments projected to use the new tool about what they “need.” From this, using various methodologies, a list of requirements (sometimes even in three tiers of detail) is delineated, often with admirable complexity and at great length. Some method is used to sift through these requirements, prioritize them, and document them elegantly. Almost always, such an effort takes many months, many meetings, and many iterations. Lots of money, in other words. And why does it take so long? Because so many opinions are being solicited and the myriad inputs must be reconciled. And this is just the tip of the iceberg of a traditional software description and evaluation effort.

 

Worse yet, neither the solutions nor the needs articulated by those working in the affected functions take into account the business strategy of the organization, the context of the work in the larger enterprise, or how conditions will change in the future. Instead the typical method jumps right down into “I’d like my safety tool to be able to spell-check the case narrative,” or “my status summary in my EDC tool should use checkmarks instead of stoplights.”

 

The development of software application requirements is not trivial, nor should it be, if one is developing software from scratch (which is where this method came from). But as one hears from every biopharmaceutical company, “we are not in the software business,” by which they mean they want to buy software “off the shelf” and not develop and maintain software internally. And in our narrow, limited marketplace of clinical research IT (as distinct from, say, office automation, bookkeeping systems, or iPhone apps), there is little to choose from in any one niche (EDC, ePRO, CTMS, AES, etc.). So here is the chain: we are not a software company = off-the-shelf tools for clinical research = little choice = requirements not required.

 

Why is this the logical chain? Because choosing software off the shelf is an entirely different process than software development. By definition, we have to make do with what is out there, and choose among the options. The task is to figure out how to differentiate among a handful of vendors. But requirements at the level of detail usually generated by traditional methods either:

• Are met by all relevant vendors;

• Aren’t met by any vendor; or

• Aren’t likely to be developed by any vendor soon enough to properly influence the off-the-shelf purchase.

 

In other words, you will not differentiate one vendor from another based on all that work you paid for, so why bother?

 

Worse still is the usual wrap-up to the standard requirements development process, when artificial quantification methods are applied with the aura of scientific rigor to conclude that Vendor A meets Requirement #64 to a score of 3.76, versus Vendor B’s score of 3.24. Are you really going to make a decision on this basis? What do these values mean? Did you have one of your biostatisticians in the room when you used this method? Why are we not embarrassed, in an industry whose livelihood is based on distinguishing false from true statistical measures of outcomes, when we do not apply the same rigor to other uses (or misuses) of statistics?
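
To make the critique concrete, here is a minimal sketch of the arithmetic behind such scorecards, in Python; the requirement names, weights, and ratings are entirely hypothetical, not any real methodology’s:

# Minimal sketch of the usual weighted-scoring arithmetic.
# All requirement names, weights, and ratings are hypothetical.

def weighted_score(weights, ratings):
    """Weighted average of 1-5 ratings: the standard scorecard formula."""
    total_weight = sum(weights.values())
    return sum(weights[req] * ratings[req] for req in weights) / total_weight

weights  = {"req_64_spell_check": 3, "req_65_status_icons": 1, "req_66_audit_trail": 5}
vendor_a = {"req_64_spell_check": 4, "req_65_status_icons": 3, "req_66_audit_trail": 4}
vendor_b = {"req_64_spell_check": 3, "req_65_status_icons": 5, "req_66_audit_trail": 3}

print(round(weighted_score(weights, vendor_a), 2))  # 3.89
print(round(weighted_score(weights, vendor_b), 2))  # 3.22

The two decimal places are pure arithmetic artifact: nudge one subjective weight and the ranking can flip, which is exactly why such scores deserve a biostatistician’s skepticism.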

 

A Waste of Focus

Traditional requirements development is equally deleterious for its impact on what is more important in our clinical research software projects. Requirements development is so expensive and time-consuming, it saps the budget and the enterprise’s energy to the point that we are gasping as we cross the requirements finish line. But that’s the problem – it is by no means the finish line for getting useful software in the hands of the users who need it.

 

First of all, functional and technical requirements ignore what is at least as important to our software selection, if not more so: is this vendor a good match for us? Do we like/trust working with them? Does their business strategy match our business strategy? What about our respective future directions?

 

For instance, what is the larger enterprise context for a user-expressed need such as real-time access to a monitor’s Site Visit Report? If the research sponsor is in a specialized therapeutic area with a limited pool of poorly performing investigators, this need is critical. If not, is this just a “nice to have”?

 

Most damaging to the successful use of the software being considered is that an extensive requirements definition effort leaves no patience or budget for the real hard work and the real key to success: a well-designed, thoroughly executed implementation plan, which should encompass the gamut from project leadership to process definition to training and organizational psychology.

 

Our responsibility as clinical researchers is to ensure not only that we have the tools to do our work accurately and competitively, but also that how we acquire these tools is properly focused. Traditional requirements gathering is not required; what is required is understanding why you are getting the software and how to implement it.

 


 

“Yes, we’re using it.” This is today’s universal answer to the question of EDC. And rare is the company who won’t claim to be “moving toward 100%.” At the same time, we’re told that the technologists have moved beyond EDC and that EMR integration is the next big thing. And so, this huge transition to electronic data capture, which Pharma has made over the course of a decade, is finally over… Or not.

 

The progress and the success in using EDC are real, and the EDC/EMR combination of the future is exciting, but you (and your management) should know that numerous, significant benefits of EDC remain untapped by many, if not all, sponsors. These benefits have the potential to provide sponsors with continued process improvement and substantial return on investment (ROI), long before any new technical advances in data collection are ready for large-scale use. Let’s look at five things you can still do with EDC to reap those benefits and make something good even better.

 

1. Ease the start-up pain

The length and complexity of EDC start-up timelines may be the most universal problem still encountered with this technology. It also provides the primary ammunition for today’s few remaining EDC skeptics. When experienced users are asked for the number one improvement they wish for, the answer will almost always contain a reference to time. “Too long,” “too slow,” and “too often” are the painful watchwords of the EDC startup experience.

 

The cause of this common complaint may surprise you, for it has little to do with technology. In a retrospective analysis of any start-up timeline, technical programming will not be the task that consumed the calendar. Rather, the time will have gone to study teams as they debate about what to program. Inefficient, ineffectual decision-making within clinical teams, masked in the past by paper-based processes, is landing this issue smack on the critical path of trials using EDC. Fixing this problem may finally convince those remaining skeptics and reduce the stress level of your study teams.

 

2. Do real data cleaning

Some of you may remember the Jetsons cartoon, where George’s job in the world of the future was reduced to sitting at a desk all day, pressing an unlabeled button over and over. George’s “value-added” may be the best analogy to today’s data-cleaning methods, methods that remain strangely unaltered by EDC. Huge infrastructures, developed to handle the mundane errors that paper CRFs produce, still remain in place, with legions of people checking and rechecking things that EDC edits have already prevented or caught.

 

What is the error rate of your data with EDC? That is, how many data points are ever changed from the site’s original entry (after those entries have passed EDC’s auto-checks)? You may be surprised how few errors remain to be treated. Sponsors who measure this have found that 95%, 97%, and even 99% of their data remain unchanged, even after the herculean efforts of the cleaning legions.
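
Measuring this is straightforward. Here is a minimal sketch in Python, assuming an audit-trail export with one row per data point and a flag for post-entry changes; the file layout and column name are hypothetical:

import csv

# Minimal sketch: estimate the post-entry change rate from an EDC
# audit-trail export. Assumes a CSV with one row per data point and
# a 'changed_after_entry' column holding 'Y' or 'N'; both the layout
# and the column name are hypothetical.

def unchanged_rate(audit_csv_path):
    total = changed = 0
    with open(audit_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            if row["changed_after_entry"] == "Y":
                changed += 1
    return (total - changed) / total if total else 0.0

# Example, once a real export is in hand:
# print(f"{unchanged_rate('audit_trail.csv'):.1%} of data points never changed")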

 

Reallocating those resources in more thoughtful ways can increase data quality even further by allowing closer looks at the whole forest (where important safety and efficacy trends await discovery), rather than staring harder at the trees. This reallocation can also provide opportunities for efficiency gains in an age where “doing more with less” is a frequent mandate.

 

3. Make monitoring more useful

The field-monitor role should be a crucial component of a trial’s success, both in terms of the study’s conduct and the preservation of its integrity. Yet this important role is often reduced, even in a world full of EDC, to the status of an on-site (double-)checker. EDC can change this model by enabling visibility of, and interaction with, the clinical data by monitors when they are away from the site.

 

The potential for monitors using EDC in this way, away from the site, is enormous. (The Monitor devoted an entire article to this in its June 2007 issue.) The ability to see the data, to see what has changed, to see what has not yet been entered (though it should have been), to dialog with the site through manual queries about specific data points: all of these open new vistas of value-added tasks and responsibilities that can turn the “double-checker” into a contributing study-team member. Optimal use of these capabilities can provide insights into protocol issues, EDC design flaws, and improved site performance. And all of this is available without using up one minute of that valuable on-site time with the investigator.

 

4. Find the efficiencies

EDC enables, really for the first time, the recording and time-stamping of diverse clinical activities across multiple roles. Sponsors can now track numerous metrics that indicate how well processes are working and how well people are performing: How soon is the site entering data? How quickly are monitors reviewing data? Are freezing and locking activities keeping up with the volume of entered data, or are you looking at the famous bolus of cleaning just prior to data lock? How about the efficiency of your editing efforts? Do your edits actually fire? Which ones?

 

It may surprise you to learn that much of that frantic edit-programming during start-up didn’t yield a lot. One sponsor found that 70% of their several hundred programmed edits never fired at all, while only a couple of edits accounted for 25% of firings and fifty accounted for 90%. All of this information is available, with an almost trivial amount of effort, from an EDC tool that tracks and measures an enormous number of clinical trial process variables.
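
This kind of concentration analysis is a few lines of work once the firing log is in hand. A minimal sketch in Python, with hypothetical edit-check IDs and log format:

from collections import Counter

# Minimal sketch: how concentrated are edit-check firings?
# Assumes a list of edit-check IDs, one entry per firing, pulled from
# the EDC system's query log; the IDs and data source are hypothetical.

def firing_concentration(firings, all_edit_ids):
    counts = Counter(firings)
    never_fired = [e for e in all_edit_ids if e not in counts]
    total = sum(counts.values())
    cumulative, covered = 0, []
    for edit_id, n in counts.most_common():  # busiest edits first
        cumulative += n
        covered.append((edit_id, round(cumulative / total, 2)))
    return never_fired, covered

firings = ["E07", "E07", "E02", "E07", "E02", "E13"]
all_edit_ids = ["E01", "E02", "E07", "E13", "E21"]
never, covered = firing_concentration(firings, all_edit_ids)
print(f"{len(never)} of {len(all_edit_ids)} edits never fired")
print(covered)  # cumulative share of firings, busiest first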

 

5. Lock all of it, not some of it

The now ubiquitous, fast database-lock times achieved by so many sponsors are encouraging, yet these times are often skewed by the failure to address all the data. Many celebrations of five-day locks ought to be accompanied by a footnote that says “EDC only; CDMS and SAS still to come.” This disconnect means, oddly, that the business gains little more value than it would from a paper trial.

 

The close-out of a trial deserves as much process improvement as the start-up. Work on those improvements, to be sure, will take you away from EDC and into the realm of external vendors: their data flows, their contractual deadlines, and their execution. But the chief benefit of EDC is easily frittered away by these very elements at the very end of all your efforts.

 

Sponsors who lock everything and lock it quickly do so by creating a last patient, last visit (LPLV) scenario where all previous data, regardless of source, has already been dealt with. For them, achieving final lock requires only handling the last set of eCRFs, the last set of analyzed blood-draw data, etc. These trivial amounts of data are easily coped with and incorporated into the final data set.

 

The good can still get better

Few sponsors today enjoy all the benefits mentioned here, not even those who think they are already at “100%.” These five areas are the remaining fruit that can still provide further value to clinical development through the use of EDC. As we keep an eye on the (still distant) future of EDC/EMR, these additional benefits from EDC are available now and offer sponsors both process improvement and substantial ROI.


 

Today’s financial conditions mean that software investments are more closely examined and measured, which makes it essential to get the measurements right. To a great extent, biopharma companies use the wrong measures, and too many measures, to evaluate their clinical research information technology investments. The result is a fundamental misunderstanding of the benefits that can be expected, and of the real value these applications bring.

 

Misunderstanding through Mismeasurement

The challenges in measuring the value of clinical research software like electronic data capture (EDC), clinical trial management systems (CTMS), electronic patient-reported outcomes methods (ePRO) or adverse event systems (AES) have many similarities.

 

The first and most serious misunderstanding is a forest-for-the-trees phenomenon: in the search for precise metrics, we forget the context in which these applications are used:

• EDC, for example, affects nearly every aspect of clinical development. Consequently, what should a valid measure be? Some micro data management measure comparing eCRF and paper CRF data volume? What does that have to do with remote monitoring, source data verification, knowing patient recruitment data earlier, enabling adaptive trial design, or improving drug supply logistics?

• For a CTMS, is the correct measure of its effectiveness how many users enter their data on time? What data, to be used by whom, and why? And at what cost (to customize the application and its reporting tools to handle this data)? Most enterprise CTMSs are trying to give top management the proverbial “dashboard” view of their huge clinical development investment. Should we measure that by how many days have passed between the monitor’s site visit and the filing of her report in the CTMS?

• ePRO tools change the very nature of the kind of clinical research we can do on our therapeutic candidates. Many study teams question the unfamiliar added cost incurred by ePRO use, but how do we measure that value against being able to prove a competitively superior endpoint effect hitherto unprovable?

 

On the opposite end of the continuum, poorly defined micrometrics measure things irrelevant even to daily operational impact or effectiveness. When EDC is left to data management and biostatistics alone (instead of being a shared enterprise with clinical operations), the risk of irrelevant micrometrics, and too many of them, is high. The same applies to how we track CTMS use. I have heard of companies tracking literally dozens – nearly hundreds – of micrometrics, but to what end? No one can derive meaning from so many metrics, no matter how well designed, and of course, by definition, having so many metrics means each one is too narrow. Examples abound: time from query generation to site response; CRF pages “closed” per day; patients recruited per week; ePRO device failures per site; and so on.

 

The classic EDC metric is the interval from LPLV (last patient, last visit) to DBL (database lock) – probably still useful, if only for its ubiquity. But as years of experience accumulate, one can see vast disconnects in this reported interval among EDC-using companies. Why would that be? Because clinical research is not mechanical, EDC or no, and the human-based processes designed over, under, and around EDC can accelerate, or screw up, the database locking time without regard to the eCRF intervention. In other words, we in the clinical research business, who are supposed to be expert in the design of measurement, allow a multitude of confounding variables and uncontrolled conditions to tell us the value of our software investments.
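
The metric itself is trivial arithmetic, which is part of both its appeal and its danger. A minimal sketch in Python, with hypothetical study names and dates:

from datetime import date

# Minimal sketch: the classic LPLV-to-DBL interval, per study.
# The studies and dates are hypothetical. The subtraction is the easy
# part; the confounding human processes around it are not captured.

studies = {
    "STUDY-001": (date(2010, 3, 15), date(2010, 3, 22)),  # (LPLV, DBL)
    "STUDY-002": (date(2010, 4, 1), date(2010, 6, 10)),
}

for study, (lplv, dbl) in studies.items():
    print(f"{study}: {(dbl - lplv).days} days from LPLV to database lock")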

 

We also misunderstand basic components of the evaluation: who are the real users of the software? Who are the true beneficiaries (not always the same as the users by any means)? Across what time period is a reasonable measurement taken (usually executives are expecting data unreasonably quickly)? What represents true workflow or throughput (especially important for adverse event systems)? Without understanding the fundamental business purpose of the application, we cannot measure its business value.

 

Further errors of understanding are introduced when criteria used for strict financial analyses (like return on investment, or ROI), or for CRO services evaluation (like output per hour billed, or average daily rates), are applied to software. Software delivered as a service offering (SaaS) confounds these measures ipso facto, since the product/service identity is blended purposefully, and perhaps advantageously. As for ROI, it is hard to find any purely objective, financially based metrics applied to a scientific enterprise (which, we forget, clinical research still is) that tell the whole story, or any chapter thereof, in a way meaningful to corporate management.

 

The “maturity” of EDC and other well-established applications at several biopharmas has done little to avoid the misunderstandings, suggesting that the maturity label is still premature. Most companies that have committed to EDC or CTMS, for instance, can show only fundamentally irrelevant micrometrics, misunderstood metrics, or none at all. The use of the software has been institutionalized on gut feel, or on a reluctance to examine an expensive commitment. These may be good cultural justifications, but they leave you vulnerable in tough financial conditions, in times of executive turnover from mergers and acquisitions, and when resistance to change among middle management grows louder.

 

But this is the problem: if we do not know why, correctly, we are using these tools, then it is much easier for someone to come along and stop us using these tools, incorrectly, because there is no proof to justify them. Misunderstanding through mismeasurement – like assuming EDC will reduce field monitoring time by XX% and failing to prove that – will undermine the likelihood of funding and resourcing the continuing support and enhancement these applications will always need.

 

Re-measuring Today; Understanding Tomorrow

There is a way out of this dilemma, and it is as simple as stopping mismeasurement immediately, right now, as soon as you’ve finished this column. Even without a correct measure to replace your mismeasures, no measuring would be better than wasted measuring. Then start work on meaningful metrics, a vital few (sketched in code after this list), that are:

• Tied to your company’s strategy

• Traceable to the business purpose of the software

• Useful to those you depend on for the raw data

• Feasible to obtain

• Valid to measure, free of confounding variables

• Gathered with sufficient time to produce meaningful results.
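
One way to make these criteria operational is to require every proposed metric to carry a definition record answering each of them before it is adopted. A minimal sketch in Python; the class, its fields, and the example metric are illustrative assumptions, not a prescribed standard:

from dataclasses import dataclass

# Minimal sketch: a definition record that forces each proposed metric
# to answer the criteria above. All example values are hypothetical.

@dataclass
class MetricDefinition:
    name: str
    strategic_tie: str        # which company strategy it serves
    business_purpose: str     # business purpose of the software measured
    data_suppliers: str       # who provides the raw data, and what they get back
    data_source: str          # where the raw data actually comes from
    known_confounders: str    # variables that could invalidate the measure
    measurement_window: str   # time needed for a meaningful result

lplv_to_dbl = MetricDefinition(
    name="LPLV-to-DBL interval",
    strategic_tie="Shorter submission timelines",
    business_purpose="EDC exists to shorten the path from last visit to lock",
    data_suppliers="Data management, who receive per-study trend reports",
    data_source="EDC system milestone timestamps",
    known_confounders="Cleaning processes designed around, not with, EDC",
    measurement_window="Several studies over 12 to 24 months",
)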

 

Meaningful metrics will yield understanding, and will properly evaluate whether the way we are using our expensive tools is producing the results they most surely can.