Management Consulting for Clinical Research

An extensive requirements definition effort leaves no patience or budget for the real hard work and the real key to success.

 

Herewith a radical proposition: the least useful requirement for a successful software acquisition project is a set of requirements. This seems pretty radical, doesn’t it? How can this be true, when developing requirements is in all the standard technology and consulting teachings?

 

I can support this proposition in two ways:

1) the usual method for gathering, documenting, quantifying and evaluating requirements for clinical research software is deeply flawed and expensive

2) the focus on functional requirements distracts from other, equally or more important evaluation criteria, and drains resources away from the implementation efforts needed after software selection.

 

A Fatal Assumption

Typical software development or selection methods require that internal or external personnel go out and interview staff in the departments projected to use the new tool about what they “need”. From this, using various methodologies, a list of requirements (sometimes even in three tiers of detail) is delineated, often with admirable complexity and at great length. Some method is used to sift through these requirements, prioritize them and document them elegantly. Almost always, such an effort takes many months, many meetings and many iterations. Lots of money, in other words. And why does it take so long? Because so many opinions are being solicited and the myriad inputs must be reconciled. And this is just the tip of the iceberg of a traditional software description and evaluation effort.

 

Worse yet, neither solutions nor needs articulated by those working in the functions being supported by this effort take into account the business strategy of the organization, the context of the work in the larger enterprise, or how conditions will change in the future. Instead the typical method jumps right down into “I’d like my safety tool to be able to spell-check the case narrative,” or “my status summary in my EDC tool should use checkmarks instead of stoplights.”

 

The development of software application requirements is not trivial, nor should it be, if one is developing software from scratch (which is where this method came from). But as one hears from every biopharmaceutical company, “we are not in the software business,” by which they mean they want to buy software “off-the-shelf” and not develop and maintain software internally. And in our narrow, limited marketplace of clinical research IT (as distinct from, say, office automation, bookkeeping systems or iPhone apps), there is little to choose from in any one niche (EDC, ePRO, CTMS, AES, etc.). So here is the chain: we are not a software company = off-the-shelf tools for clinical research = little choice = requirements not required.

 

Why is this the logical chain? Because choosing software off-the-shelf is an entirely different process than software development. By definition, we have to make do with what is out there, and choose among them. The task is to figure out how to differentiate among a handful of vendors. But requirements to the detail usually generated by traditional methods either:

• Are met by all relevant vendors;

• Aren’t met by any vendor; or

• Aren’t likely to be developed by any vendor soon enough to properly influence the off-the-shelf purchase.

 

In other words, you will not differentiate one vendor from another based on all that work you paid for, so why bother?

 

Worse still is the usual wrap-up to the standard requirements development process, when artificial quantification methods are applied with an aura of scientific rigor to conclude that Vendor A meets Requirement #64 to a score of 3.76, versus Vendor B’s score of 3.24. Are you really going to make a decision on this basis? What do these values mean? Did you have one of your biostatisticians in the room when you used this method? Why are we not embarrassed, in an industry whose livelihood is based on distinguishing false from true statistical measures of outcomes, when we do not apply the same attitude to other uses (or misuses) of statistics?
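To see where such authoritative-looking decimals come from, here is a minimal sketch of the typical weighted-average scoring arithmetic (all requirement names, weights, and ratings below are hypothetical):

```python
# Hypothetical weighted scoring: subjective 1-5 ratings combined with
# "priority" weights, producing numbers that look like measurements.
weights = {"req_64": 0.40, "req_65": 0.35, "req_66": 0.25}

vendor_a = {"req_64": 4.0, "req_65": 3.8, "req_66": 3.3}  # raters' opinions
vendor_b = {"req_64": 3.2, "req_65": 3.4, "req_66": 3.0}

def weighted_score(ratings, weights):
    """Weighted average of subjective ratings: precise-looking, not precise."""
    return sum(ratings[req] * w for req, w in weights.items())

print(f"Vendor A: {weighted_score(vendor_a, weights):.2f}")
print(f"Vendor B: {weighted_score(vendor_b, weights):.2f}")
```

The two decimal places are only rescaled opinions; a different (equally defensible) weighting scheme can reorder the vendors just as easily.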

 

A Waste of Focus

Traditional requirements development is equally deleterious for its impact on what is more important in our clinical research software projects. Requirements development is so expensive and time-consuming, it saps the budget and the enterprise’s energy to the point that we are gasping as we cross the requirements finish line. But that’s the problem – it is by no means the finish line for getting useful software in the hands of the users who need it.

 

First of all, functional and technical requirements ignore what is at least as important to our software selection, if not more so: is this vendor a good match for us? Do we like/trust working with them? Does their business strategy match our business strategy? What about our respective future directions?

 

For instance, what is the larger enterprise context for a need users express, such as real-time access to a monitor’s Site Visit Report? If the research sponsor is in a specialized therapeutic area with a limited pool of poor-performing investigators, this need is critical. If not, is this just a “nice to have”?

 

Most damaging to the successful use of the software being considered is that an extensive requirements definition effort leaves no patience or budget for the real hard work and the real key to success: a well-designed, thoroughly executed implementation plan, which should encompass the gamut from project leadership to process definition to training and organizational psychology.

 

Our responsibility as clinical researchers is to ensure not only that we have the tools to do our work accurately and competitively, but to ensure that how we acquire these tools is properly focused. Traditional requirements gathering is not required; what is required is understanding why you are getting the software and how to implement it.

 

this huge transition to electronic data capture, which Pharma has made over the course of a decade, is finally over… Or not.

 

“Yes, we’re using it.” This is today’s universal answer to the question of EDC. And rare is the company who won’t claim to be “moving toward 100%.” At the same time, we’re told that the technologists have moved beyond EDC and that EMR integration is the next big thing. And so, this huge transition to electronic data capture, which Pharma has made over the course of a decade, is finally over… Or not.

 

The progress and the success in using EDC is real and the EDC/EMR of the future is exciting, but you (and your management) should know that numerous, significant benefits of EDC remain untapped by many, if not all, sponsors. These benefits have the potential to provide sponsors with continued process improvement and substantial return on investment (ROI), long before any new, technical advances in data collection are ready for large-scale use. Let’s look at five things you can still do with EDC to reap those benefits and make something good even better.

 

1. Ease the start-up pain

The length and complexity of EDC start-up timelines may be the most universal problem still encountered with this technology. It also provides the primary ammunition for today’s few remaining EDC skeptics. When experienced users are asked for the number one improvement they wish for, the answer will almost always contain a reference to time. “Too long,” “too slow,” and “too often” are the painful watchwords of the EDC startup experience.

 

The cause of this common complaint may surprise you, for it has little to do with technology. In a retrospective analysis of any start-up timeline, technical programming will not be the task that consumed the calendar. Rather, the time will have gone to study teams as they debate about what to program. Inefficient, ineffectual decision-making within clinical teams, masked in the past by paper-based processes, is landing this issue smack on the critical path of trials using EDC. Fixing this problem may finally convince those remaining skeptics and reduce the stress level of your study teams.

 

2. Do real data cleaning

Some of you may remember the Jetsons cartoon, where George’s job in the world of the future was reduced to sitting at a desk all day, pressing an unlabeled button over and over. George’s “value-added” may be the best analogy to today’s data-cleaning methods, methods that remain strangely unaltered by EDC. Huge infrastructures, developed to handle the mundane errors that paper CRFs produce, still remain in place, with legions of people checking and rechecking things that EDC edits have already prevented or caught.

 

What is the error rate of your data with EDC? That is, how many data points are ever changed from the site’s original entry (after those entries have passed EDC’s auto-checks)? You may be surprised how few errors remain to be treated. Sponsors who measure this have found that 95%, 97%, and even 99% of their data remain unchanged, even after the herculean efforts of the cleaning legions.
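Measuring this rate is straightforward once an audit trail can be exported. A minimal sketch, assuming each data point is represented by a record with a flag for post-entry changes (record structure and field names are hypothetical):

```python
# Hypothetical audit-trail export: one record per data point, flagging
# whether the value was ever changed after the site's original entry
# (i.e., after passing the EDC auto-checks).
audit_trail = [
    {"field": "SBP",        "changed_after_entry": False},
    {"field": "DBP",        "changed_after_entry": False},
    {"field": "AE_TERM",    "changed_after_entry": True},
    {"field": "VISIT_DATE", "changed_after_entry": False},
]

def unchanged_rate(records):
    """Fraction of data points never altered after the site's original entry."""
    unchanged = sum(1 for r in records if not r["changed_after_entry"])
    return unchanged / len(records)

print(f"Unchanged: {unchanged_rate(audit_trail):.0%}")
```

Run against a full study export, this single number tells you how much of the cleaning infrastructure is re-checking data that was already correct.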

 

Reallocating those resources in more thoughtful ways can increase data quality even further by allowing closer looks at the whole forest (where important safety and efficacy trends await discovery), rather than staring harder at the trees. This reallocation can also provide opportunities for efficiency gains in an age where “doing more with less” is a frequent mandate.

 

3. Make monitoring more useful

The field-monitor role should be a crucial component of a trial’s success, both in terms of the study’s conduct and the preservation of its integrity. Yet this important role is often reduced, even in a world full of EDC, to the status of an on-site (double-)checker. EDC can change this model by enabling monitors to see and interact with the clinical data when they are away from the site.

 

The potential for monitors using EDC in this way away from the site is enormous. (The Monitor devoted an entire article to this in its June 2007 issue). The ability to see the data, to see what has changed, to see what has not yet been entered (though it should have been), to dialog with the site through manual queries about specific data points; all of these open new vistas of value-added tasks and responsibilities that can turn the “double-checker” into a contributing study-team member. Optimal use of these capabilities can provide insights into protocol issues, EDC design flaws and improved site performance. And all of this is available without using up one minute of that valuable on-site time with the investigator.

 

4. Find the efficiencies

EDC enables, really for the first time, the recording and time-stamping of diverse clinical activities across multiple roles. Sponsors can now track numerous metrics that indicate how well processes are working and how well people are performing: How soon is the site entering data? How quickly are monitors reviewing data? Are freezing and locking activities keeping up with the volume of entered data, or are you looking at the famous bolus of cleaning just prior to data lock? How about the efficiency of your editing efforts? Do your edits actually fire? Which ones?

 

It may surprise you to learn that much of that frantic edit-programming during start-up didn’t yield a lot. One sponsor found that 70% of their several hundred programmed edits never fired at all, while only a couple of edits accounted for 25% of firings and fifty accounted for 90%. All of this information is available, with an almost trivial amount of effort, from an EDC tool that tracks and measures an enormous number of clinical trial process variables.
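A sketch of that firing analysis, assuming per-edit firing counts can be exported from the EDC tool (edit names and counts here are hypothetical):

```python
from collections import Counter

# Hypothetical per-edit firing counts exported from an EDC system.
firings = Counter({
    "EDIT_001": 400, "EDIT_002": 350, "EDIT_003": 120, "EDIT_004": 80,
    "EDIT_005": 50, "EDIT_006": 0, "EDIT_007": 0, "EDIT_008": 0,
})

total = sum(firings.values())
never_fired = [name for name, n in firings.items() if n == 0]
print(f"{len(never_fired)} of {len(firings)} edits never fired")

# Cumulative share of all firings, most-active edits first.
cumulative = 0
for name, n in firings.most_common():
    if n == 0:
        break
    cumulative += n
    print(f"{name}: cumulative {cumulative / total:.0%}")
```

Even this toy distribution shows the pattern the sponsor found: a handful of edits do nearly all the work, while a long tail never fires at all.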

 

5. Lock all of it, not some of it

The now ubiquitous, fast database-lock times achieved by so many sponsors are encouraging, yet these times are often skewed by the failure to address all the data. Many celebrations of five-day locks ought to be accompanied by a footnote that says “EDC only, CDMS and SAS still to come.” This disconnect fails, in an odd way, to create any more value for the business than a paper trial.

 

The close-out of a trial deserves as much process improvement as the start-up. Work on those improvements, to be sure, will take you away from EDC and into the realm of external vendors: their data flows, their contractual deadlines and their execution. But the chief benefit of EDC is easily frittered away by these very elements at the very end of all your efforts.

 

Sponsors who lock everything and lock it quickly do so by creating a last patient, last visit (LPLV) scenario where all previous data, regardless of source, has already been dealt with. For them, achieving final lock requires only handling the last set of eCRFs, the last set of analyzed blood-draw data, etc. These trivial amounts of data are easily coped with and incorporated into the final data set.

 

The good can still get better

Few sponsors today enjoy all the benefits mentioned here, not even those who think they are already at “100%.” These five areas are the remaining fruit that can still provide further value to clinical development through the use of EDC. As we keep an eye on the (still distant) future of EDC/EMR, these additional benefits from EDC are available now and offer sponsors both process improvement and substantial ROI.

If we do not know why, correctly, we are using these tools, then it is much easier for someone to come along and stop us using these tools, incorrectly

 

Today’s financial conditions make software investments more closely examined and measured, which makes it essential to get the measurements correct. To a great extent, biopharma companies use the wrong measures, and too many measures, to evaluate their clinical research information technology investments. The result is a fundamental misunderstanding of the benefits that can be expected, and of the real value these applications bring.

 

Misunderstanding through Mismeasurement

The challenges in measuring the value of clinical research software like electronic data capture (EDC), clinical trial management systems (CTMS), electronic patient-reported outcomes methods (ePRO) or adverse event systems (AES) have many similarities.

 

The first and most serious misunderstanding is a forest-for-the-trees phenomenon: in the search for precise metrics, we forget the context in which these applications are used:

• EDC, for example, affects nearly every aspect of clinical development. Consequently, what should a valid measure be? Some micro data management measure comparing eCRF and paper CRF data volume? What does that have to do with remote monitoring, source data verification, knowing patient recruitment data earlier, enabling adaptive trial design, or improving drug supply logistics?

• For a CTMS, is the correct measure of its effectiveness how many users enter their data on time? What data, to be used by whom, and why? And at what cost (to customize the application and its reporting tools to handle this data)? Most enterprise CTMSs are trying to give top management the proverbial “dashboard” view of their huge clinical development investment. Should we measure that by how many days have passed between the monitor’s site visit and the filing of her report in the CTMS?

• ePRO tools change the very nature of the kind of clinical research we can do on our therapeutic candidates. Many study teams question the unfamiliar added cost incurred by ePRO use, but how do we measure that value against being able to prove a competitively superior endpoint effect hitherto unprovable?

 

On the opposite end of the continuum, poorly defined micrometrics are measuring things irrelevant even to daily operational impact or effectiveness. When EDC is left to data management and biostatistics alone (instead of being a shared enterprise with clinical operations), the risk of irrelevant micrometrics, and too many of them, is high. But it applies to how we track CTMS use as well. I have heard of companies tracking literally dozens – nearly hundreds – of micrometrics, but to what end? No one can derive meaning from so many metrics, no matter how well designed, and of course, by definition, having so many metrics means each one is too narrow. Examples abound: time from query generation to site response; CRF pages “closed” per day; patients recruited per week; ePRO device failures per site; etc.

 

The classic EDC metric is the interval from LPLV (last patient last visit) to DBL (database lock) – probably still useful, if only for its ubiquity. But as years of experience are accumulated, one can see vast disconnects in this reported interval among EDC-using companies. Why would that be? Because clinical research is not mechanical, EDC or no, and the human-based processes designed over, under and around EDC can accelerate, or screw up, the database locking time without regard to the eCRF intervention. In other words, we in the clinical research business, who are supposed to be expert in the design of measurement, allow a multitude of confounding variables and uncontrolled findings to tell us the value of our software investments.
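The metric itself is nothing more than a date difference; a minimal sketch (both dates hypothetical):

```python
from datetime import date

lplv = date(2009, 3, 2)  # last patient, last visit (hypothetical)
dbl = date(2009, 3, 9)   # database lock (hypothetical)

interval_days = (dbl - lplv).days
print(f"LPLV to DBL: {interval_days} days")
```

Everything interesting lies in what those two dates conceal: the human processes on either side of them, which is why identical tools yield wildly different intervals.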

 

We also misunderstand basic components of the evaluation: who are the real users of the software? Who are the true beneficiaries (not always the same as the users by any means)? Across what time period is a reasonable measurement taken (usually executives are expecting data unreasonably quickly)? What represents true workflow or throughput (especially important for adverse event systems)? Without understanding the fundamental business purpose of the application, we cannot measure its business value.

 

Further errors of understanding are introduced when criteria used for strict financial analyses (like Return on Investment – ROI), or for CRO services evaluation (like output per hour billed, or average daily rates), are applied to software. Software delivered as a service offering (SaaS) confounds these measures ipso facto, since the product/service identity is blended purposefully, and perhaps advantageously. As for ROI, it is hard to find any purely objective and financially-based metrics applied to a scientific enterprise (which, we forget, clinical research still is) which tell the whole story, or any chapter thereof, in a way meaningful to corporate management.

 

The “maturity” of EDC and other well-established applications at several biopharmas has done little to avoid the misunderstandings, suggesting that the maturity label is still premature. Most companies who have committed to EDC or CTMS, for instance, can only show fundamentally irrelevant micrometrics, misunderstood metrics, or none at all. The use of the software has been institutionalized on gut feel, or on a reluctance to examine an expensive commitment. These may be good cultural justifications, but they leave you vulnerable in tough financial conditions, in times of executive turnover from mergers and acquisitions, and when resistance to change among middle management grows louder.

 

But this is the problem: if we do not know why, correctly, we are using these tools, then it is much easier for someone to come along and stop us using these tools, incorrectly, because there is no proof to justify them. Misunderstanding through mismeasurement – like assuming EDC will reduce field monitoring time by XX% and failing to prove that – will undermine the likelihood of funding and resourcing the continuing support and enhancement these applications will always need.

 

Re-measuring Today; Understanding Tomorrow

There is a way out of this dilemma, and it is as simple as stopping mismeasurement immediately, right now, as soon as you’ve finished this column. Even without a correct measure to replace your mismeasures, no measuring would be better than wasted measuring. Then start work on meaningful metrics, a vital few, that are:

• Tied to your company’s strategy

• Traceable to the business purpose of the software

• Useful to those you depend on for the raw data

• Feasible to obtain

• Valid to measure, free of confounding variables

• Gathered with sufficient time to produce meaningful results.

 

Meaningful metrics will yield understanding, and will properly evaluate whether the way we are using our expensive tools is producing the results they most surely can.

“Being a partner has given vendors permission to underperform.”

It is most fashionable these days to describe all business relationships as “partnerships.” It doesn’t matter if we are licensing a compound, handing off functional operations to a CRO or buying a piece of software, we are always doing it with “partners.” I suppose the paper distributor who supplies our paper towels is our “cleanliness assurance partner.” While we can dismiss this language as the marketing cover-up that originated its usage, we cannot so easily dismiss how misperceptions resulting from this language have become an obstacle to the theoretical concept behind the word.

Everyone wants to be our partner these days, even when they are simply selling us a product or service. I have to wonder what is wrong with being “just” a provider or a vendor? The glib answer is that somehow partners provide us “more”, while in my observation they actually provide us less. Being a partner has given vendors permission to underperform, especially against the unreasonably high expectations they have set for themselves with this partner language. By being our partner, more can be blamed on us and less on them. Being “all in this together” has somehow shifted more oversight, more responsibility, and more strategy burdens onto us and away from the vendor. It’s a marvelous marketing ploy. Woe to all of us if the IRS announces tomorrow that they have now become our partner! Or, to use the Latin, caveat socius!

Symptoms

The partnering game plays out in clinical research information technology in a myriad of ways. First off, all the software vendors are of course now our partners, not our vendors. This has become a business necessity because a) (see above) everyone is a partner, and b) they are under-performing as simply “vendors.” So claiming the partner mantle is an attempt to avoid attention on their:

—  inadequate support performance

—  stagnant product performance

—  need for a steady revenue stream

—  slow product migration /enhancements

—  serially monogamous innovation.

The state of clinical IT products and services in 2009 is a stagnant pool, and the algae are starting to grow on the edges. Most vendors of EDC, CTMS, ePRO, AES and related tools are evenly modest in their support capabilities: every sponsor has learned that the quality of service they receive from a vendor depends entirely on who the vendor project manager is that month, that help desks consist of thinly prepared outsourced staff, and that vendor training is increasingly automated and automatic.

Meanwhile, product lifecycles have matured. It is not an exaggeration to say that the functions and features of the leading tools in all clinical IT spaces today are barely distinguishable from those of 7-8 years ago.

Money of course is a key contributor. Vendors need to partner with us because they need a steady stream of revenue. One cannot sustain a software business in our industry by selling large boluses of perpetual licenses because you’ll run out of biopharma customers too quickly. So the game is to get sponsors to subscribe to a variable-dependent service (per trial, per program, per year, per something) so that the income will never end. Much better (from their marketing perspective) for us to think of this company you will never stop paying as being a partner, rather than a vendor.

Partnering is a great method for distracting the customer from what is missing in the product strategy (which is harder to hide if they admitted that, after all, they are a product supplier). As the core products stagnate, customers are making longer lists of desired (or required) functional enhancements. They want to see a practical vision for technology migration, and especially now, technology integration. But enhancements and integrations are not in the vendors’ fundamental interest. There is simply not enough money to be made in helping out with these things – the revenue to be made will never cover the engineering costs.

This is compounded by the vendors’ “serial monogamy” approach to functionality: vendors develop, or enhance, or acquire a functional solution one at a time, without particular regard to interoperability or to revisiting their customers’ needs every once in a while. Vendors are still approaching our market one silo at a time, even as the shape of sponsor clinical development is rapidly morphing into…well, something unknown, but definitely something not well served by automating the 1970s. So not only are sponsors not satisfied, but the vendors themselves are starting to drift, as the end of innovation sucks the energy out of the market.

Worsening

Gee, that sounds like a dark picture on today’s vendors. Perhaps, but it is realistic. And sponsors are feeling the situation getting slowly worse. Why would this be?

—  Takeovers, spun as “consolidation”, which is supposed to be good for us (mimicking sponsors perhaps?), leading to, at a minimum, enormous distraction, and usually to overlapping sets of software and therefore delayed product growth strategies or enhancements, as well as confused support services.

—  Tough financial times – true for everyone, but software vendors who never did understand the actual clinical research application of their products are now less likely than ever to hire or retain the “subject matter” professionals who otherwise are not programming or selling.

—  Sponsor disarray – the tough times and takeovers affecting sponsors means fewer sponsor resources for vendor management, informed contract management, internal process analyses and continuous improvement, and so on. This results in poor policing of the vendors and poor communication management between even the best vendors and deepest sponsors.

First, Change the Language

How do we get out of this mess? The first step lies in the language: we are not partners with our software vendors, we are buying something from them. We have expectations for what we are buying which they need to meet. This needs to be managed not as if we were lawyers in the same law firm, but rather as customer and provider. We are most definitely sitting across the table from each other, not side by side. As promising as the partnership mystique has seemed, the results have simply not been there, and a return to buyer and seller is not demeaning or a failure, but rather an ancient, proven method of achieving buyer satisfaction.

And More

The good news is that there are practical steps for sponsors, easily within our grasp, to reclaim control over a software market we depend on, but which has gone flat. These steps take a little money, but mostly they take willpower:

—  Dedicate internal resources fulltime to projects that manage, specify, or implement new information technologies into the clinical research process

—  Insist on transparency from the vendors – we have the right as customers to know their true plans, their staffing (and qualifications thereof), how and why they price their services, what is the likelihood of enhancements, and so on

—  Professionalize vendor management internally, and understand this is a fulltime job without which our software or service providers will fail us

—  Take back the initiative on vendor contracts from the vendors.

Despite the rise in power and scope of sponsor contracting groups, they generally have not helped this situation I have described because they do not sufficiently understand either the software business or clinical research operations. For instance, if software enhancements are not in vendors’ fundamental interest, then sponsors need to structure a relationship that makes it so, including considering taking back the strategic leadership of this vendor space.

If we can’t get what we want from vendors, it means we are not applying our vast resources of talent and money to get it. This is not rocket science. It’s not even drug discovery. Rather, maybe clinical development really is too small a market to leave to the vendors, and too important to us to leave to them.

What will make the vendors cooperate with such radicalism? Vendors must remember that it isn’t scary anymore for a sponsor to change EDC vendors, even enterprise commitments, on the fly. It is happening regularly now. Such changes have always frequently happened in the ePRO space. And it would not be that much out of the question to do so with CTMS (because none are satisfactory) or AES (because they are all the same). So caveat venditor as well.

Let’s take the energy we’re spending hugging our partners and turn it towards communicating with, and managing, our vendors effectively to deliver what clinical development needs in the coming years.

“In Hollywood now when people die they don’t say, ‘Did he leave a will?’ but ‘Did he leave a diary?’” — Liza Minnelli

 

The importance of collecting patient reported outcome data in biopharma research is increasing rapidly. The recent geometric explosion of technological innovation has brought about the introduction and use of electronic diaries to fulfill this purpose. We have reached a point when it is no longer cost-prohibitive or technologically infeasible to collect almost any type of data via electronic means, but this capability has outpaced our operational methodologies and capabilities for the successful large-scale implementations now required. This is one important reason why electronic diary adoption is so much slower than EDC at the moment.

 

The broad term used for the collection of patient-reported (as distinct from investigator-recorded) data is ePRO (Electronic Patient Reported Outcomes), which encompasses a wide range of so-called “instruments” (series of questions, not hardware) and instruments (hardware) administered in a variety of ways (completed in real-time, or otherwise; completed by patient directly, or otherwise; etc.). In this column, however, I want to focus on the most challenging (but often most important) format of ePRO – the electronic diary.

 

While the telephone is a common resource for ePRO when data collection requirements are simpler, a full-fledged patient diary requires technology in the handheld and tablet format, where it seems that every few months a new gadget emerges that can do something which simplifies one’s life and makes it that much more complicated at the same time. The data collection options are no longer limited to answering a question. We are now able to visualize the question textually and diagrammatically before providing an answer. We can easily record the exact time and date the answer was provided, limit when and how often the answer can be given, send reminders to prompt for that answer and, if we want to get really personal, record the exact location of where that answer was given and have a photograph taken of the person giving the answer. These types of mainstream technological enhancements may pave the way for potential ad hoc analyses of how location could affect an answer (temperature, climate, urban, rural), or provide additional biometric markers in relation to the emotion of the person at the time they were giving an answer. I will not attempt to address the scientific validity of any of these measures; rather, the point is to illustrate that enhanced capability and operational complication are directly related, and the latter is often overlooked in favor of the former.

 

It’s So Easy

 

What could be so difficult about answering a question on a device which asks you to push a button or select an option with a stylus? This is a common view, and a common misperception, of technology providers and sponsors using ePRO. We are so used to dealing directly with sites and personnel who are experienced in the conduct of clinical research that we forget our nice new gadget is now in the hands of the general public, who are by definition not in the research data management mainstream. Not only have we failed to adequately address this fact, but we have exacerbated the problem by not providing adequate training on how to use these devices for our trials. The training, as with other technologies (EDC), is often considered to be of ancillary importance. We hope that in one three-hour session at an investigator meeting, not only will sites become experts on the ePRO device and survey use, but they will also be able to convey this knowledge in a simplified manner to the inexperienced end user. As expedient as this is for sponsor and vendor, it is unfortunately not sufficient for the public user upon whom the data (and study results) now depend.

 

To further complicate matters, the tendency has been to take full advantage of the technology's capability to collect ultimately extraneous data. Once again we forget the end user and direct our attention to the almost limitless potential to collect multiple data points for theoretical ad hoc analyses, often losing sight of the primary datapoints we are interested in. It is a classic case of allowing the details and options to confound and overwhelm the intended purpose. I am reminded of the time a relative of mine from South Africa first walked into a supermarket in the USA in search of a loaf of bread. She was so used to having only two choices ("wheat or white") that she was overwhelmed by the number available to her and walked out without buying. The second time she went, she returned with fifteen loaves of different breads; one was eaten and the rest spoiled. So too it is easy to lose focus when presented with multiple possibilities and end up with lots of extraneous, unutilized data that took a large amount of effort to collect.

 

Where in the World is Carmen’s Diary?

 

Globalization is another recognized and significant challenge in the implementation of ePRO. Our diary writers now live all over the world, speak many different languages, and are recording data every hour of every day of the year. There are two primary challenges in getting patients to use the diary correctly: 1) actually getting the device to them in the first place, and 2) supporting them once they begin using it.

 

The provisioning challenge stems directly from the fact that, despite all the innovation, we are still using a local piece of hardware that has to be shipped, inventoried, and repaired or replaced if damaged. Using ePRO globally requires sufficient lead time for shipping, customs and local couriers, not to mention algorithmic calculations for inventory management that account for expected screen failures and completed patients. This inventory balance is especially difficult considering the replacement cost (device cost, image loading, QA) and the importance placed on missing pieces of data that are irretrievable and thus un-analyzable. Obviously a patient cannot record data in a device he or she does not have, and the technology loses its primary benefit of real-time recording if the contingency is to back-enter data once a new device becomes available.
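The kind of inventory calculation mentioned above can be sketched very simply. This is a hypothetical illustration only; the function name, parameters and rates below are my own assumptions, not figures from any vendor or trial:

```python
# Hypothetical sketch: sizing an ePRO device inventory for one region.
# All parameter values are illustrative assumptions, not vendor figures.
import math

def devices_needed(target_completers: int,
                   screen_failure_rate: float,
                   damage_loss_rate: float,
                   buffer_fraction: float = 0.10) -> int:
    """Estimate how many handhelds to provision.

    target_completers   - patients expected to finish the trial
    screen_failure_rate - fraction of provisioned patients who screen-fail
    damage_loss_rate    - fraction of devices lost or damaged beyond reuse
    buffer_fraction     - safety stock covering shipping/customs lead time
    """
    # Gross up for screen failures, then for device losses,
    # then add a safety-stock buffer and round up to whole devices.
    enrolled = target_completers / (1.0 - screen_failure_rate)
    with_losses = enrolled / (1.0 - damage_loss_rate)
    return math.ceil(with_losses * (1.0 + buffer_fraction))

# e.g. 100 completers, 30% screen failures, 5% device losses, 10% buffer
print(devices_needed(100, screen_failure_rate=0.30, damage_loss_rate=0.05))
```

Even this toy version shows why the balance is delicate: each gross-up multiplies the others, so modest failure and loss rates quickly inflate the number of costly devices that must be shipped and warehoused.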

 

One may ask, “Gee, sounds like EDC in the ’90s; the Internet took care of that, didn’t it?” For obvious reasons, the Internet does not yet supersede dedicated hardware for ePRO. If protocols call for true, frequent, real-time recording of data, dependable Internet access – even through web-enabled smartphones – is simply not available in most areas of the world (try running errands in your hometown all day while your hourly symptom recording depends on your smartphone finding a WiFi connection).

 

I Just Called to Say…I Don’t Understand You.

 

Another crucial aspect of globalizing ePRO is providing ongoing support to subjects for any questions or technical difficulties that may arise. The difficulty here cannot be overstated. Supporting the average trial subject in the use of technology is a very different proposition from supporting clinical research professionals (monitors and the like, as we are used to). Few if any large helpdesk providers are familiar enough with this technology to be effective, and supporting an open-ended number of languages within a manageable cost structure is next to impossible. Vendors have tried to solve this problem by locating their helpdesks in far-flung locations while claiming multi-language support, when in fact the focus is more on the cost savings of lower hourly rates. The result is a frustrated subject who spends the majority of the phone call trying to find a person s/he can understand. This frustration typically leads to lower compliance and a real hesitancy on the part of the sponsor to use the technology again.

 

Lead Me to the Landfill

 

Technological innovation, for all its positives, has left us with an important underlying problem: technology outpaces clinical trial timelines. This puts sponsors in the position of continually re-purchasing new hardware, because the device they used for their previous trial is no longer supported or has become obsolete. Many argue that this is the cost of doing business and that the capital expense is “the pill you swallow” for the benefit of real-time, accurate data. However, in the current economic environment, vendors should be thinking about options that are not hardware-dependent and that can be leveraged across trials. Not only would this go a long way toward solving part of the provisioning problem, it would also allow recognizable cost savings across large programs; such costs are currently a barrier to sponsor acceptance. There is no question that, for the most part, the technology learning curve will decline for the new generation of clinical research subjects. Web access, with its inherent detachment from any one piece of hardware, will undoubtedly ease the current hardware dependencies, but that solution is not immediately at hand, and sponsors need to use ePRO now. Much work remains to be done by vendors and their customers alike.

 

 

There is little question of the value of collecting patient-reported outcomes electronically, and it is clear that the industry in general is moving toward this methodology just as it has been doing with EDC for the past decade. Our excitement about, and acceptance of, the innovations of today and tomorrow should not distract us from the operational changes and procedures that play a significant role in realizing the benefit of this innovation. If we can achieve operational excellence in ePRO, we have a chance to overcome the temporary barriers to its acceptance.