
Measure Twice, Re-Engineer Once

Experienced carpenters will tell you, “measure twice, cut once.” This is always timely advice, especially for research sponsors and CROs who are re-examining their work processes. We often see companies that have undergone the wrenching and expensive experience of process re-engineering, only to do it again just a few years later. This is much like having to cut that piece of lumber over again and throw away the first board. Metrics, fashionable to talk about but usually poorly understood, are a way out of this wasteful use of resources.

In this column and elsewhere, it is often repeated that the implementation of a new technology, or improvement in clinical operations cycle time, requires a change in process. The question is, how do we know if the change has been a good thing? We have to measure something. And most importantly, we have to measure what we do before the change, in order to know how the change has affected us. This may seem obvious, but it is not always done.

Generally, we see companies that plunge into technology adoption, or pursue high-level, abstract business goals, and sometime well into the project, management and staff alike have an uneasy feeling that maybe this was not worthwhile. In the worst cases, skepticism and resistance set in, even at relatively senior levels. We have found it is more critical to understand how you do business today than it is to anticipate in detail the changes that will (or may) be incurred by new technologies. The latter you will learn by doing; you won’t know if what you’re doing is any good unless you fully understand where you started.

Using and Abusing Metrics

There are clear principles on how to use metrics correctly:

– Keep the number of things you measure small: focus on the “vital few”

– Ensure the metrics chosen are valid measures of your work

– Ensure collecting the necessary data is feasible

– Ensure the data is in fact collected, in a timely manner, by those who know the data

– Involve everyone in measuring them

– Show the data to everyone in the organization

– Ensure and demonstrate that management is committed to acting on the data

– Ensure the data is used to create a learning organization, not an atmosphere of fear

– Ensure that individual contributors, who are usually asked to generate the most critical data, get something back for their effort that is meaningful to their daily work.

The examples of using metrics incorrectly abound:

– Collecting data on so many parameters that a) no one reads the reports; and b) no one can tell how one’s efforts to improve have affected the organization

– Mismatching measures and project objectives (such as using Internet browser page turn times as a measure of EDC effectiveness)

– Picking measures for which data can’t be easily gathered (such as CRA satisfaction with a clinical trial management system)

– Keeping the data only in the hands of top management, so that the providers of the data never see the results

– The absence of any commitment by management to use the data (so the data goes up, and silence rains down).

Measure Before You Start

The worst abuse of metrics, however, is to not measure how you perform today. Very few clinical research organizations really know how long it takes them to clean a CRF, how long it takes to get from a draft protocol to an authorized protocol, how expensive a protocol amendment is, how fast their patient recruitment performance falls off from target, how many CRAs they need per study type.
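
By way of illustration only, a baseline measure such as “how long it takes to clean a CRF” reduces to simple arithmetic once the underlying dates are captured; the hard part is capturing them consistently before any re-engineering begins. A minimal sketch, with hypothetical dates:

```python
from datetime import date
from statistics import median

# Hypothetical baseline data: (date CRF received, date CRF declared clean)
crf_cycles = [
    (date(2004, 3, 1), date(2004, 4, 12)),
    (date(2004, 3, 3), date(2004, 3, 29)),
    (date(2004, 3, 8), date(2004, 5, 2)),
]

# Elapsed days from receipt to clean, per CRF
days_to_clean = [(cleaned - received).days for received, cleaned in crf_cycles]

print(f"Baseline median days to clean a CRF: {median(days_to_clean)}")
```

Run the same calculation after the change, and the before/after comparison is objective rather than impressionistic.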

Measuring twice (before you change and afterwards) has two benefits: it will be instantly informative, in unexpected ways, and it will ensure you measure your impending changes objectively.

Start with understanding how you do business today. When organizations measure themselves on how they work before change, they are likely to discover problems, issues and competencies which will alter the nature of the re-engineering or technology initiative originally planned. This is not a sidetrack: this is good. It ensures you are cutting the lumber the right way the first time.

Then, before you change, decide how you will measure the success of the change after it is completed. Otherwise, you will be affected by the change itself: your biases, pro or con, will influence your perception of the change ex post facto. When the CTMS has been rolled out, or the EDC pilot finished, or the clinical department reorganization is completed, take out those pre-defined metrics and measure how you’re doing now. The result will be a much more objective appraisal of what may have felt like a painful experience.

It has been said, “not everything that can be counted counts, and not everything that counts can be counted.” Use metrics correctly, and you can make your operations innovations count.

Technology Today

Baby Steps

There is good news and bad news these days as we assess the state of the industry in implementing clinical research software applications. The good news is that more and more biopharmaceutical companies are recognizing that clinical research software is not “plug and play”: you don’t just unwrap the box, slip in the CD, and start up an eClinical drug development process. The bad news is that this realization is not matched consistently with good process practices. The result is some common, well-meaning, but ineffective process work related to technology implementation.

Tripping Over Our Own Feet

Let’s look at some examples of poor technology implementation behavior, more or less in sequential order. The first is when we fall in love with the vision without having a path to get there. An executive, or even a middle management team, may be totally committed to an exciting eClinical strategy, but the strategy is delineated in only the thinnest of detail. The implementation of that strategy is neither properly staffed nor funded. Since the goal is presumably innovative, a company must find talent that can see beyond the daily work and yet still remember that it is the daily work which is being revolutionized. Without the operational vision to add to the strategic vision, you can’t get from here to there. Without operational vision, you cannot estimate what the strategy may cost (in cash and in people’s time), and without cost estimates you have no budget, or an under-funded budget.

The tension grows as leaders push the vision but the practicalities of implementation lag behind. Visions are often announced with great fanfare. The executive usually feels they are now off the hook and the burden falls on those below. If middle managers are not empowered with the skill and money to implement, they will fail, and be blamed for the failure.

Another common failing of technology implementation these days is how companies approach the development of business requirements for their clinical research applications. There are two common mistakes here: one is the level at which requirements are developed, and the second is how companies determine requirement priorities.

A common error in requirements development, especially by those trained in traditional IT methodologies, or by those who are naïve to software development, is to jump into what is called “the solution space” at a very detailed level. Instead, we need to hang back in “the problem space” and take time to explore the operational challenges and business priorities. It does not help much to talk to an operational staffer and find out that she wants the next page button in the lower left of the screen instead of the lower right. What is important is to know how your staff spends their time, when they are most unproductive, and how that prevents them from meeting their department obligations to the corporation.

A second example of naïve requirements analysis is the use of artificial, complex quantitative schemes for prioritizing requirements and then scoring vendor solutions. This is very common and most egregious: by following these methods we insult the statistically rigorous environment of clinical research in which we work! By definition, these quantitative schemes have no scientific legitimacy; they cannot be “tested” or “validated,” because each requirements development effort is unique. So when requirement A scores 0.6 points higher than requirement B, how do we know if that difference is at all meaningful? And when vendor A’s total score is 163 points and vendor B’s is 122, how do we know the difference is as significant as it looks? The numbers alone are meaningless, and yet many companies place great stock in such analyses.
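
A toy calculation makes the point; the requirements, weights and 1-to-5 ratings below are invented purely for illustration. Two equally defensible weightings of the same vendor ratings produce opposite “winners,” and nothing in the method can tell you which weighting is right:

```python
# Invented vendor ratings against three requirements (1-5 scale)
vendor_ratings = {
    "Vendor A": [4, 2, 5],
    "Vendor B": [3, 5, 3],
}

def weighted_totals(weights):
    """Total score per vendor for a given requirement weighting."""
    return {vendor: round(sum(w * r for w, r in zip(weights, ratings)), 2)
            for vendor, ratings in vendor_ratings.items()}

print(weighted_totals([0.5, 0.2, 0.3]))  # {'Vendor A': 3.9, 'Vendor B': 3.4} -- A "wins"
print(weighted_totals([0.3, 0.5, 0.2]))  # {'Vendor A': 3.2, 'Vendor B': 4.0} -- B "wins"
```

The arithmetic is impeccable both times; what it cannot tell you is which weighting reflects operational reality.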

Training is another area where implementers fail to achieve their goals. As software projects move along and an application is ready to be used, most companies acknowledge that some kind of training is important. Almost always, the only training provided, either because of budget constraints or because of a lack of fundamental understanding, is technical training: how do I move around the page, what does this error message mean, how do I get a network connection to upload my data, etc. Even if the implementation team has spent time on the “softer side” of technology implementation – defining the myriad of business rules intended to dictate how the software will be used operationally – they usually decide that writing concomitant SOPs is sufficient. “Let the users read the SOPs,” is the attitude. This is a fatal failing.

Implementing eClinical software applications requires the participation and assistance of several “partners”: the vendors you use, your IT department, and whichever service providers (internal or external) will supply Help Desk services, network setup, hardware provisioning, software enhancements, and so on. Too often, biopharmas rely on interdepartmental commitments, or vendor-sponsor commitments, that are not much more than interpersonal pledges made at some weighty meeting. Without formal agreements, even if they are only inside your company between sister departments, mission-critical support functions will depend only on goodwill, individual memories, and lots of executive arm-twisting.

Last in this list of common process failings is a misplaced enthusiasm for metrics, which leads to dozens of measures being tracked about operational performance and software’s possible effect on it. You certainly want to measure what you do, and to examine in some key quantitative ways whether the eClinical initiatives are improving operations. But most companies who get “metrics religion” swing the pendulum too far. If you generate dozens of metrics, usually at a very low level of operational detail, your staff will rebel: they will feel they are spending more time on measures than on work, and even the managers the metrics are intended for will not wade through the resulting stacks of reports. Instead, companies should focus on a few meaningful metrics for which reliable data can be collected without burdening staff, and on which you are really likely to act. Nothing is worse than collecting and analyzing lots of metrics data, and then not being able to do anything with the results.

Steps Forward

These examples all point to some obvious fixes that biopharmas can use to ensure that the baby steps they are taking toward the eClinical future end up as long, confident strides:

— The work of the visionary does not end with expressing the vision. He or she must follow through by identifying those with the skill and the financial resources to implement the vision, and must then continue to mentor and encourage those staff throughout the work.

— Requirements development must be done within the context of key business imperatives. Techniques should be used to properly weigh requirements in the context of operational necessities, not through artificial number games.

— The extensive training efforts required for eClinical implementation must include true process training, and trainers must be well-grounded in the operations they are improving.

— Formal agreements must be used with vendors or internal support providers to avoid the dependency on fragile and fleeting interpersonal commitments.

— Implementers should examine their roster of metrics harshly, and repeatedly slash the list to a meaningful, actionable “vital few”.

Rigor Vivus

In the post-marketing period, when new drugs “come to life,” a significant and growing amount of clinical research continues to be performed on a drug or device. Indeed, recently published information indicates such research volume is rising more rapidly than pre-approval research, albeit from a much smaller base. This trend is all to the good: much important information about therapies and disease can be gained from post-marketing research among patients in real-life settings, information that cannot be obtained under the inherent scientific constraints imposed on investigational trials.

Sponsors are increasingly realizing, however, that more rigor needs to be applied to post-approval research – not that there are issues with subject safety, but rather that poor clinical operations process in post-marketing trials means inefficiency and lost opportunity.

There are many reasons why post-marketing studies are conducted differently from investigational trials. Regulations are fewer or non-existent in some circumstances. Research performed under full Good Clinical Practice is very costly. And by definition, once the therapy has passed regulatory muster, its safety and efficacy have already been proven through the exemplary rigor the industry follows. Much post-approval research is initiated and performed by individual investigators (in so-called investigator-initiated trials, or IITs), who simply have an interest in pursuing a personal hypothesis or want to keep track of patient experience in a manner more organized than is possible in a regular medical practice. Given such circumstances, it is natural for sponsors to take a pragmatic approach to research after a therapy has come to life.

The incidence of post-approval research is particularly high among medical device companies, where many new products are approved as variants of existing devices, and thus are not required to undergo long and complex investigational trials. In these circumstances, rigor in following an approved device seems unnecessary and unproductive to many device companies. It is also true that for those drug companies large enough to have dedicated post-marketing or Phase IV units, physical or organizational separation from the investigational side of the company can lead to a philosophical and tactical separation as well.

Defensiveness doesn’t help

When it is suggested that post-marketing units are not rigorous enough, they often (naturally) react with great defensiveness. But in doing so they are missing the point: it is not a question of patient safety or regulatory compliance, it’s a process problem – a business problem.

Reduced rigor in post-marketing trials runs up against two imperatives: one is increasing regulatory attention; the other is the integratability of all sponsored research. Regulators are increasingly asking sponsors to follow their drugs in a systematic manner post-approval, and as has been famously publicized recently in the Wall Street Journal, many of these studies are long delayed. Some of these studies were a condition of the FDA’s approval of the drug or device. These studies are delayed or incomplete for many reasons, in part because of the inherent difficulties of conducting research in the “live” world, but also because the units charged with conducting this research are not used to operating under investigational research standards.

The internally-driven imperative for more rigor in post-approval studies is more widespread. Increasingly, sponsors are seeking to combine the data, the processes, or the enabling operational technologies (or all three), of both investigational and post-marketing trials. There are several drivers for this. One is the desire to use outside operational assistance (CROs) more judiciously; one way to achieve that is to leverage internal investigational and post-marketing resources to help even out each other’s workload. Another similar driver is the desire to leverage the significant investment companies have made in clinical trial software systems; in doing so they find they need to align processes, or risk confusion and poor control. But the key driver in tearing down the wall between pre- and post-approval research process is to be able to integrate these two rich data sources.

If an IIT or a registry or Phase IV patient experience trial reveals something exciting from a therapeutic or competitive standpoint, it is enormously more cost- and time-efficient for a sponsor to be able to use that data directly, and not have to run a “good process” trial from scratch to reproduce the findings. Sponsors cannot predict when they may want to use post-marketing data for a label extension, secondary indication, or competitive claims. When the opportunity strikes, data units in post-marketing organizations have to scramble to apply rigor retrospectively – something notoriously difficult to do.

The worst thing about a sponsor having two different perceptions of clinical data (pre- and post-approval) is that it creates two groups of staff and two process universes that are incompatible. The skill sets and processes, and therefore the people themselves, are not fungible or shareable. The separate creation of SOPs, training programs, parallel networks of trial monitors, and incompatible overlapping data systems, is highly inefficient.

What does help

We contend that sponsors have nothing to lose and everything to gain by doing all trials to the highest reasonable standards the first time. Rigor might seem more costly, but it is a lot like quality: quality does not cost money, it saves money. The cost of rigor in post-marketing trials cannot compare to the cost of re-running a trial under stricter standards or of trying to clean a trial retrospectively. It cannot compare to the cost of losing out on a label extension or a competitive claim. It cannot compare to the cost of pressing unprepared staff, in crisis mode, into helping with a regulated trial under procedures they are unfamiliar with.

The irony is that what we are talking about here is mostly just good clinical practice, something every sponsor knows well. Post-marketing units need to learn more from their pre-marketing peers. (And while they are talking with each other, maybe investigational trial units can learn from the innovative trial design and technologies which are often first used in the post-marketing environment.)

Technology also helps. If applied properly, the ever more ubiquitous electronic clinical trial tools can help enable rigorous processes. And those tools can make it easier for smaller post-marketing research staffs to do more with less. Equally exciting are electronic tools that enable researchers to reach out into patients’ daily lives (electronic patient diaries and the like), so that the experience of patients outside the clinic, or outside the artificial constraints created by pre-approval trials, can be recorded and analyzed for insights into how to make a good product better, or how to take an alternate tack in a therapeutic strategy.

Not every trial, but more of them

Of course, not all trials post-approval need be run with investigational rigor. By the very nature of the “free enterprise” of science, many of them could not be run with investigational rigor even if you wanted them to be. We also understand full well the usefulness of marketing trials in building provider relationships and enabling “small research” that would never otherwise be funded. And some research, like long-running registries, can only be cost-effective if managed in a lean manner. In all these cases, some research is better than none. With minimum safety and compliance standards met, the only things being compromised in such examples are the potential scientific value and, perhaps, the integratability of the data.

But sponsors are recognizing they need to be more careful about this, and err on the side of rigor rather than laissez-faire. The cost of doing “small research” that is not re-usable, or interferes with the development of an efficient post-approval safety profile, is too high. If necessary, build a “wall” between those who work under pre-approval regulatory requirements and those who support investigator-initiated trials or patient registries. Recognize that these resources will not be fungible. Recognize that the data will not be easily merged, if at all. Recognize all this, and plan accordingly. When there is a hint of wider use of the research results, move it quickly back over the wall to the side of rigor.

Take a long look at how you handle your post-marketing trials. Better processes may be more cost-effective; better science may breathe new energy into those products of yours that have come to life.

Stardate 2004 (by Joe Anderson)

The Star Trek TV series promised to boldly take us “where no man has gone before.” And did it ever. Its three-year run on television exploded into a string of syndications, blockbuster movies, sequel series, and a fanatical cult following. The show did indeed introduce its viewers to a new universe that enthralled and fascinated.

Over the last five years, the pharma industry has adopted numerous technologies to take CRAs where they have never gone before. The modest steps in this direction from the early ’90s have now developed into a plethora of software and processes that impact the monitoring task. CRAs at many companies now find themselves using multiple electronic tools, both in the office and on the road. Let’s examine how to both cope and prosper in this environment.

“It’s Logical”

Among the Star Trek cast, Spock continually exasperated the emotional Dr. McCoy with the inevitability of his cool logic. McCoy’s frequent refusal to accept the facts as they were was countered by Spock’s acknowledgement of, and measured response to, the current situation. A similar response is now required of the clinical research professional. The accepted prognosis for our industry foresees the continued penetration of computers and software into virtually every aspect of the clinical trial process. This movement has an inevitability to it that all CRAs must acknowledge and prepare for.

Even if your particular company is currently a “lagging” adopter of such trends, the advantages and efficiencies to be gained through such well-implemented tools are clear. It is only a matter of time until your turn comes – the trend is moving in one direction only, irrevocably forward. Even when software tools are of suboptimal quality or not well implemented, and monitor users suffer because of it, the consistent response from the monitors themselves is “fix it,” not “take back the computer.” Why? Because in today’s working world, including clinical research, it’s logical.

Recognizing the Future

A first step in preparing is to learn the nomenclature of the technologies and to understand where they intersect with the monitoring task. Everyone reading this journal should recognize these acronyms: CDMS, CTMS, EDC, EPD, AES, CATD, eSub. If not, you have a lot of learning to catch up on. It is by no means acceptable in 2004 for you to think this is the province of clinical data management or your IT department and that you already have too much to learn.

Today’s CRA should be familiar with each of these terms and with where each tool fits in the overall clinical development process. Most of these categories have two or three leading products on the market in terms of market share. Their advertisements can be found in the industry literature, and most of these vendors exhibit at industry meetings. A small investment of time will make you familiar and conversant with what is out there, how it is used and where it is going.

Preparing by Learning the Basics

Numerous conversations with experienced CRAs and technical support personnel suggest that the best adopters of technology within the monitoring profession have grounded themselves in at least three areas of basic computer knowledge. (This is in addition to the tool-specific training that every CRA should expect and receive.) Mastery of these three will prepare you for maximizing the benefit derived from the new tools.

1. Yes, we do Windows. In one Star Trek movie, Scotty travels back to the 20th century and encounters a “personal computer.” Needing information, he announces his request and waits impatiently for the computer to answer. Prompted by his 20th-century host to “use the mouse,” Scotty picks it up like a microphone and repeats the request. Changing tactics, his host suggests using the other input device, to which Scotty exclaims, “Ah look! A keyboard. How quaint!”

Given the ubiquitous presence of computers in our lives, companies can and do expect monitors to understand the basic components of their computers. Technical support should be, and usually is, available, but those good folks will ask you “to open the Control Panel” or to “plug the cable into the USB port.” Your ability to understand such a response, and even to solve simple problems yourself by understanding computer basics, will save you innumerable hours and much frustration in the future.

2. Beam me up. With Internet access entering our homes, offices, airports, bookstores and coffee shops(!), being online is an enabling technology that is changing almost every part of our lives. This is not lost on technology vendors, who are busy offering access to almost any clinical data from almost every location. While the advantages to this are clear and the challenges (security, e.g.) are being worked out, the CRA is being asked to connect, to connect and to connect again, both to clinical data and to study metadata, from a variety of locations.

Understanding the rudiments of telecommunications (like knowing the difference between your modem and your IP address, or telling your Cat-5 cable from your proxy server) will rescue your precious time at the site or your evening of work in the hotel when the connection “doesn’t work like it is supposed to.” Access to data online is great, but not if you haven’t learned how to get to it.

3. I know Google. A common feature of clinical tools is the ability to ask questions of the data they contain. “How many out of range visit dates are there?”, “How many eCRFs have I gotten since Monday?” and “Are there any male Caucasians without a rash?” are ways of resolving, cleaning, and completing clinical information. Any CRA that can create simple queries directly, without asking the Data Management group for a report and then waiting days to get it, will see a dramatic leap in personal productivity.

The good news is that you are already doing this. Anyone who has searched for “FDA and GCP” on Google or clicked the “In the Last 30 Days” button while searching the CNN site is already creating queries to complete information. Using the Advanced Search options on these sites is even better. Your Data Management colleagues may call it “SQL” or “Boolean logic,” but the goal is the same: to learn the basic ways of combining elements in searches to find the desired results. It’s not just for the techie anymore.
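
The parallel to a data management query is direct. As a minimal sketch (the patient records below are made up), the same Boolean logic can be expressed in a few lines of Python:

```python
# Made-up patient records, for illustration only
patients = [
    {"id": "1001", "sex": "M", "race": "Caucasian", "rash": False},
    {"id": "1002", "sex": "F", "race": "Caucasian", "rash": True},
    {"id": "1003", "sex": "M", "race": "Asian", "rash": False},
]

# "Are there any male Caucasians without a rash?" (AND combined with NOT)
matches = [p["id"] for p in patients
           if p["sex"] == "M" and p["race"] == "Caucasian" and not p["rash"]]

print(matches)  # ['1001']
```

Whether the syntax is a search form, SQL, or a list comprehension, the underlying skill of combining AND, OR and NOT is the same.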

Consider How You Work

New Yorkers were treated to a rare sight in the fall of 2003 when they looked out at the East River and saw a supersonic Concorde moving upstream at 5 MPH. On its way to a local museum for display, the Concorde made little use of its overwhelming power and speed to get there and took several hours to make the journey from JFK airport.

We often encounter similar states of technology utilization as we work throughout the industry. It is still possible to find laptops out in the field, brimming with clinical data, query tools, direct E-mail contact with sites and other technology enablers, while, at the same time, the monitor herself continues to follow a process that has not changed in many years. Data is still not reviewed until the next site visit, items are hand-copied between the EDC tool and the monitoring reports (which are still written in Microsoft Word) and study enrollment is hidden in the monitor’s personal spreadsheet on the laptop’s “E:” drive.

Technology implementations require a rethinking of how people work. CRAs can and should serve as an invaluable resource in doing such rethinking with respect to clinical tools. But often, they are the last to be asked and, when asked, they say they have never thought about it and probably don’t have time to. No company can derive ultimate payback from its investment in clinical technology until these end users and domain experts have been invited (and required) to describe what difference the tool can really make.

Stardates: Bridging the Present and Future of Clinical Research

The original Star Trek series was placed in the 23rd century. But Trekkies know that the series itself dated its own episodes in terms of “Stardates.” Such things are necessary, of course, when the universe of the possible has expanded, and it no longer works to see things only in terms of the Sun (or paper). Most Star Trek characters, however, could still work in both modes, as evidenced by the numerous episodes that traveled back to the 20th century. The ability to continue to work with paper-based trials, while preparing for, coping with, and anticipating the electronic future is a skill set that every CRA must cultivate today. Doing so will produce an experience and a career for the CRA that truly is (as Spock always said), “Fascinating.”

With Benefits Like These, Who Needs the Costs?

Inevitably, as more and more companies invest in new information technologies for clinical research, they are beginning to ask: “Am I getting any benefit from this investment?” As it turns out, this is either a very simple or a very complex question. The simple answer, which many companies find to be quite legitimate, is what we call the value of necessity, i.e. “we could not have done it any other way.” The complex answer, when a company seeks to actually calculate a cost/benefit, or a formal Return on Investment (ROI) in terms accepted in the financial world, is much more difficult to arrive at, primarily because very few companies conducting clinical research understand their true costs or even how to start identifying them. If you do not understand your true costs, then obviously you will have a hard time knowing whether a change has affected them positively or negatively.

Leaving financial calculations aside (a topic for another column), I have become more concerned lately about fundamental misunderstandings of the benefits of new information technologies in clinical research. These misunderstandings can be quite dangerous, because they create unrealistic expectations and impossible goals for managers to achieve, and lead to inappropriate process change initiatives. With benefits like these, who needs costs?

Let’s look at four benefits typically expected by adopters of electronic data capture (EDC): faster database lock; real-time data; reduced outsourcing costs; and accelerated decision-making.

Faster Database Lock?

One of the most universally accepted benefits of EDC is faster database lock (DBL): by catching eighty percent or more of common data entry errors at the time of entry, the number of errors that need to be followed up in person is drastically reduced, and the study team (monitors and data managers) can keep up with the dataflow in such a timely manner that little needs to be cleaned up at the study close. Of course many factors contribute to whether this will be achieved or not, including site training, monitor participation, back-end database issues, adverse event reconciliation processes, third-party data sources such as central laboratories, and so on. But nonetheless, many companies have now reported database lock times using EDC as certainly a matter of days (instead of weeks), and sometimes even a matter of hours.
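
To make the mechanism concrete, here is a minimal sketch of the kind of entry-time edit check an EDC screen applies; the field, range and message are invented for illustration:

```python
# An entry-time edit check flags the discrepancy while the site coordinator
# is still at the keyboard, rather than surfacing weeks later as a query
# to be resolved in person.

def check_systolic_bp(value_mmhg):
    """Return any edit-check messages for a systolic blood pressure entry."""
    messages = []
    if not 60 <= value_mmhg <= 250:
        messages.append("Systolic BP outside plausible range (60-250 mmHg); please verify.")
    return messages

print(check_systolic_bp(300))  # flagged immediately at entry
print(check_systolic_bp(120))  # clean entry: []
```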

So, this is an easy one, right? Faster DBL is a benefit of EDC. But for whom? How is this benefit manifested in a clinical development organization? The key is this: if you are not ready to start the next study in the clinical development plan at the moment you have obtained the necessary results from the just-closed EDC trial, but instead start that next study when you always would (some months later), then you have not achieved any overall drug development time savings. Faster DBL yes, timeline compression no. And without the latter, the former is only mildly satisfying and modest in dollars saved.

Yet we rarely hear anyone talk about this — either sponsors or vendors. The problem will be when someone wants to justify the EDC expense by faster DBLs. An executive with sufficient vision, beyond one trial at a time, will look back and say, “my darn drug got to the FDA no faster with EDC, so what was all the hoopla about?” Faster DBLs are not enough; your clinical development program has to change its calendar, its fundamental way of thinking about time. EDC helps you do that, but only the process change will achieve the business benefit.

Real-Time Data?

The advent of the Internet for applying information technology has given us the great opportunity of “real-time” data, or so they say. This phrase is a pet peeve of mine (along with “hybrid systems”). First, let’s ignore the implication that there is “unreal time” (perhaps only in trials of hallucinogenic drugs). The real problem with the hype about real-time data is that no one seems to ask themselves if they need the data in real time. Do you need to know what patient 2576B’s blood pressure value was within minutes of it having been taken? Would it be alright if you knew tonight instead of “right now”? How about by the end of the week? Or two weeks from now?

In some very specific examples of particular trial designs, one can imagine how real-time data might be useful (perhaps in a Phase I of a potentially dangerous compound, or to track a developing trend in Phase II that raises a safety question, or is producing such a positive outcome as to justify a compassionate use exemption). Otherwise, every time I have asked this question of groups large and small, almost no one has a reason why they need the trial data right now.

On the other hand, there are many circumstances when we need trial data just in time, and that is a very different concept. If we are trying to make a judgment about a site’s recruitment performance, perhaps to pull further patients away from the site because it is taking too long, then at the time we need to make that decision we would like to have the most up-to-date information about that site. We don’t want to have to wait until we can send a CRA out there to see what the status is, but neither do we need to have known, hour by hour since the trial started, how that site was doing.

This is the crucial difference between real-time data and just-in-time data. The implication is that an EDC technology which can’t get you real-time data, but can get you data more or less instantly when you ask for it, is probably just fine for your needs. That’s the expectation you want to set, and that’s the benefit you want to pay for.

Reduced Outsourcing Costs?

Nothing is more loaded for setting executive expectations than to tell them that EDC will save on outsourcing costs. Whose costs did we have in mind, exactly? Usually people mean that EDC enables the sponsor to bring more of the data handling process (or projects) “back” in house, where it is assumed to be cheaper, getting it out of the hands of those expensive CROs. Or for the more sophisticated EDC users, in-licensing the software is assumed to be cheaper than relying on an ASP-delivered EDC solution (where a vendor provides all the necessary ancillary infrastructure and process required to make an EDC trial run).

Well, we may indeed have reduced our outsourcing costs, but what have we done to our internal costs in so doing? Very few sponsors understand the cost of their routine processes today, as we have said, much less understand the myriad cost implications to internalizing the new EDC process, especially if the software has been in-licensed. The operational implications of internalizing EDC are broad, and potentially costly. We usually advise sponsors to think of EDC as “cost neutral”, at least in the short run.

This is not to say, by any means, that EDC is not cost-effective. But throwing around phrases like “EDC will reduce outsourcing” or “EDC will save money,” without knowing how to back up those statements, creates the worst kind of expectations among management, financial staff and clinical researchers. It is highly unlikely, if someone who knows what they are doing comes in to do a financial audit, that the average EDC project could come out in the black: not because it isn’t in the black, but because those using EDC don’t know how to prove the cost differential properly.

Accelerated Decision-Making?

Another easily accepted benefit of EDC, incorporated into people’s expectations without much thought, is that because EDC allows for faster DBLs and for more accurate, faster interim analyses through real-time data, it can accelerate decision-making. With the clinical data coming in faster and cleaner, we can make in-trial decisions (close that under-performing site) or program decisions (kill this equivocal drug) faster, with all the attendant cost savings implied (eliminating future development of a drug going nowhere, or accelerating a good drug through a slimmer series of tightly knit trials).

This is indeed a potentially powerful benefit of EDC. The problem with this one is more subtle: are we, by killing off a compound faster, losing the chance to learn more about it at a more steady, thoughtful, scientific pace? Are we closing the door to serendipity, that highly productive R&D tool which has produced so many blockbusters? This strategic debate rages up and down the executive halls of most biopharmas, and does not necessarily have anything to do with software. But the extent to which EDC is “sold” on this concept, this “benefit” may backfire if/when management decides its decisions are accelerated, yes, but also too mechanical.

There are many benefits, provable objectively, to electronic data capture, so much so that few companies are seriously planning a long-term clinical development strategy which ignores EDC and sticks to NCR paper. Those seeking to speed its adoption, in an industry which needs computerization so urgently, need to use words carefully, and need to differentiate true benefits from false gods. Timeline compression is what speeds submissions, not faster database locks. Just-in-time data is what we need, not real-time data. The operational cost of using EDC may not be less than how you do research today. And accelerated decision-making, at the cost of lack of insight, is no benefit at all. Keep the true benefits in your sights, and you will mislead or disappoint no one. Instead, your investment will reap the returns it deserves.