
Clinical research professionals considering upcoming information technology investments are at a crossroads in 2010: complexity versus simplicity – which should we choose?

The decision is not at all obvious. The implications of the decision are profound: the resulting strategy will direct the spending of millions of dollars and thousands of hours, and decide the fate of dozens of vendors.

 

Simple Complexity

We all understand that information technology is an essential underpinning of every clinical development function. As a result, software applications have been installed for almost every function, with little regard for one another. Each purchase has generally been justified, and paid for, by the function the software serves, not by how well it fits with, or talks to, other applications.

 

Many of us have argued for years that this “silo” approach to building and buying individual software packages, which are increasingly sophisticated servants of their intended users, is reinforcing bad organizational behavior. It enables individual micro-professions to make their data so specialized as to be unusable by others. It means re-entering identical data multiple times. It makes ensuring the currency of data very difficult, to say nothing of the challenge of finding the necessary linkages between silos. The answer would be, as much as possible, a single system: an interweaving of applications so transparent, based on common means of entry, storage and visualization, that the entire enterprise breathes data as a single organism.

 

So on the one hand, we have a path for functionally focused, deeply featured independent software, integration and cross-talk be damned. And on the other hand, we have a path based on the belief that integration is everything, realism be damned. You can see that we cannot have both. In order to integrate effectively – to make the mammoth effort and frequent compromises worthwhile – we cannot maintain the rich depth of vertical profession-specific applications. It is too much to tie together. And vice versa. We cannot keep digging deeper for richer and richer vertical functionality and expect to tie these unique entities together usefully.

 

Complex Simplicity

We have a magnificent example of complex simplicity in every facet of daily life today – the Internet, or more precisely, the Web. (Indeed, what better metaphor for complex simplicity than a spider’s work.) Everyone with access to some kind of Web-enabled device can easily learn how to use this vast network, which enables the simplest and most sophisticated exploration, transactions, learning, discovery, entertainment and influence. It seems so simple on its face, but of course it is enormously complex, as only something that tens of thousands of people, and literally billions of dollars, could create. And beyond people and money, the Web has needed repeated flashes of brilliance to invent, sustain and expand it. Think of the search algorithms, fiber optics, high-capacity storage, wireless access, global ubiquity, and on and on. This is highly complex simplicity.

 

And that’s the rub. Clinical research is not a universe of a billion users justifying billions of dollars for IT innovation. How can biopharma R&D mimic what the Web did with information? We in R&D want a confluence of basic science, applied biochemistry and physiology, genomics, real patient experience, epidemiology, macro- and micro-marketing, economic forecasting, regulatory and provider/payor policymaking, and more. All of this information, together, is what we actually need to integrate. Integration in clinical research is not about having only one contact address per investigator, or even about being able to track a physiologic parameter from mice to Phase III. We want to know whether, if we invest millions of dollars in therapy X, it will help enough people at the right price to more than cover the cost of developing it. Again, this is highly complex simplicity.

 

One has to wonder if our industry would be ready for simple access to such a complex picture. Would we understand how to run an enterprise that knew so much? What flashes of brilliance would make this flower? Outsourcing and deal-making are not the stuff of genius. What would be the R&D equivalent of search algorithms or wireless access?

 

And On the Other, Other Hand

Let us return to the R&D IT choice in front of us. Should we invest in the complex vision of a single integrated universe of R&D software, be it powered by SAS or Oracle or Microsoft? How far can they take us today? How long will it take to get to the point where their integrated system, having been irrevocably chosen back in 2010, will justify the commitment?

 

On the other hand, continuing on the path of collecting sophisticated (and expensive) vertical applications is in fact a dead end. But there is one adjustment that might make the choice of paths more distinct and dramatic. If we want an alternative to the integration nirvana that may never be reached, a better path would be to design and acquire simple vertical applications, optimized for the barest minimum of business requirements, with the leanest of function sets and the easiest of user interfaces. We would carefully but doggedly dismantle the thick infrastructure of the vertical applications we have today, by a merciless zero-sum examination of what we really need, knowing that the simpler the tool, the more likely it will be used and the more relevant the output will be. Such a collection of applications would be cheaper to acquire and easier to learn. We would worry much less about the commitment to these applications because we would intentionally plan on their obsolescence, and ruthlessly strip their purposes to the bone. If only we could find vendors willing to build and sell such things.

 

Remember when watching TV was really simple? There was almost nothing to watch and very little choice. Perhaps your little sister had to ground the antenna with her hands, but hey, that’s what little sisters were for. The picture resolution (a word we didn’t even know) was terrible. And the world of media was divided like the continents: TV, radio, movies, records, books, newspapers, libraries and stores were entirely separate and knew their place. Who wants to go back there again, when today all of those media and more fit in the palm of your hand? So the question is, can anyone really (and I mean really) get the world of R&D decision-making into the palm of our hands, or should we count on just turning on a TV-simple CTMS to hear the latest trial news?

 

The choice is a conundrum: simple complexity or complex simplicity? Which can we afford? Which do we need? Which can we make happen?


 

It has been said that one sure way not to win the lottery is not to play it. Without condoning indiscriminate gambling, and notwithstanding the parallels between buying software and gambling, we should be increasingly concerned that the caution with which biopharma approaches information technology may be turning into cynicism, and may be keeping our clinical research organizations from practical and much needed benefits.

 

Delay is Not Benign

Delay in IT selection and implementation is not benign. It is not simply a matter of “luxury” deferred. And delay in no way guarantees thoughtfulness, as some wishful thinking would have it. The dilemma of IT delay is that operational needs, and the software that addresses them, are increasingly fleeting in our rapidly changing business and technical environments. A need expressed one year may be gone (or greatly diminished) in 18 months. I have seen executives who count on this phenomenon, thereby proudly “saving” money. But these are false savings: if the need was truly there to begin with, the delay only represents a period of sub-efficiency and missed performance.

 

Similarly, the rapid pace of technical change is the serpent in the garden of software use. Delay in software acquisition and implementation means that users’ needs are ultimately addressed by technology that may no longer be worth the investment (compared to newer, often cheaper, alternatives); at the same time, the newly enabled users will be disappointed that they are not using a platform, interface or function set they find commonly available in their non-business lives.

 

Delays in IT projects have many causes, driven by sponsors as well as vendors:

• Cultural caution (“I want to analyze this some more”)

• Financial restraint (“We’ll wait until next year”)

• Mistrust (“I remember how badly the upgrade went last time, so let’s hold off awhile”)

• The grass is greener over there (“Gee, look at that new vendor, or that new strategy”)

• Acquisitions (“Rumor is, we’ll get acquired in a couple years so let’s sit tight”)

• Personnel turnover (“The new VP doesn’t like that vendor” or “The vendor’s management changed and we have to wait and see”)

• Team management (“We have to have the input of all the stakeholders, and darn it’s hard to get them all together at the same time….”)

• Poor software development practices (“The new release is really buggy”)

• Excessive legalism (the downside of professionalizing, and isolating, contract administration)

• And so on.

 

But all these excellent reasons for inaction have to be balanced against the cost of delay. Besides not meeting the operational needs that prompted the project in the first place, delay has other insidious side-effects:

• Delay softens the demand (which may mean the needs weren’t important, but may also mean your organization is losing willpower)

• Delay softens the energy behind the project, the project champions move on to other things, and the staff perceive a lack of drive for improvement from management

• Focus and talent drift away; like fielders on a baseball team behind a slow-working pitcher, it’s hard to keep your edge. Both the mind and the project drift.

• In the worst cases (which are not uncommon), delay leads to cynicism and passivity. Staff who put in considerable effort in helping the project leader analyze needs and specify requirements see that little or nothing comes from their efforts, creating a dangerous negative feedback loop that will undermine future efforts and poison the corporate culture.

 

Less Time, More Benefit

What are the antidotes to delay? The primary solution is one of cultural change: a corporate environment where impact is favored over cost-savings, where performance matters more than caution, and where contributing to staff cynicism is a mortal sin.

 

Based on such a foundation, the research organization has to approach its IT projects with a sense of urgency, flexibility, and trust. Since trust is bred by success, quick successes should be sought out (the essential opposite of delay). All of those involved in identifying needs and solutions must understand that time is of the essence, and that some benefit is better than none. And as always, senior management support for this strategy must be vocal and prominent.

 

Speedier IT implementations can be obtained through:

• Focus (prioritize and re-prioritize, not just which projects to do, but what to do in each project; forgo nice-to-have features for those that drive key business objectives)

• Understand what is important (essential to the focus referred to above; every employee should always be clear as to the 3-4 things the research organization must achieve in the near term)

• Narrow team membership (nothing causes delay in the soup like too many cooks; up-end your corporate fetish for broadly populated project teams and include only the absolute minimum contributors who you can count on to participate knowledgeably)

• Set aggressive deadlines (pick a date which seems impossible, given your organization’s history, and then move it up a month or two)

• Ensure sustained senior management visibility and oversight (they have to stay involved and stay on message)

• Implement sustained, closely managed, tightly scheduled vendor management (do not confuse arm’s-length oversight with trust – you cannot set and re-set expectations with your vendors often enough)

• Trumpet your successes, and the features and benefits of the new software, through actual stories from your experience (the proof is in the pudding and everyone needs to hear a lot about it)

• Keep moving! (side-step obstacles, shorten your objectives list when necessary, and keep that sense of urgency).

 

Looking back at this list, I realize the same advice could be given to clinical trial study teams. Hmm. Does reducing delay have resonance for the essence of our work?

 

Speedier implementations will require important changes in the skills and perspective of research staff, in sponsor-vendor trust, and in project execution. An organization that tries to move in these directions may win the lottery after all.

 


 

“Begin with the end in mind” is one of those classic business phrases which is no less valuable for the number of times it is ignored. Software developers and clinical research sponsors alike are guilty of sometimes fatal forgetfulness of this key concept when planning the development, implementation and use of software applications.

 

Clinical research sponsors generally start a software acquisition process not with the end in their mind, but with some stimulus in their back: a department is complaining that everybody else gets new tools except them; our competitors all use this tool so we should; I met a salesperson on the airplane; the vendor just announced an upgrade and they won’t support our version anymore; we just hired a new vice president and she prefers vendor “x” over vendor “y”. While some or all of these situations may justify a re-evaluation of our need for software, they do not in themselves sufficiently define the “why” and “what”.

 

Starting Isn’t the Hard Part

Sponsors may also start from some business trigger which gives them the illusion that the end is in mind: we need to save headcount, so let’s use EDC (electronic data capture); or we’re frustrated with having multiple overlapping and out-of-date investigator databases, so let’s buy a CTMS (clinical trial management system); or our new translational medicine VP says we’re going to have a flood of pharmacogenomics data coming in, so let’s get one of those “data warehouses.”

 

What’s missing from these situations is the company’s consideration of the strategic benefit, and of how the software’s users will actually have to use it and what they will get from it. What is the tie-in between the initial impetus – the needle in the back or the business trigger – and the actual output the new software will provide? This disconnect is particularly critical in enterprise software projects because we all now know that the cost in money, time, headcount and disruption will be high. The benefit therefore must be high as well, or the cost must be reduced to be in line with the diminished (and realistic) results.

 

Analyzing a potential project’s end-user benefit against the initial impetus need not be fatally time-consuming, which is the usual reaction to the suggestion. But it can save a large amount of wasted time and money. We should recognize that it is very easy to fall into the disconnect trap. For instance, let’s consider the situation where clinical operations develops that frustration over the multiple investigator databases. The complaint is forwarded to the IT department (or worse, a naïf goes to a booth at a trade show), and the answer comes back: there is no “investigator database fixer” product out there, but there are these CTMS packages and boy, they do everything. Before you know it, you are installing a multi-million dollar application over multiple years, you’ve doubled the amount of training everyone has to go through, and you have all this rich functionality that no one can or wants to use, because it is relevant neither to the original trigger nor to actual user circumstances.
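
As an aside, it is worth seeing how small the triggering problem can be relative to the system bought to solve it. The sketch below, a hypothetical illustration and not any vendor’s product, is roughly the scale of tool the investigator-database complaint actually called for: it merges overlapping contact lists and flags likely duplicates. The file and column names are assumptions made for the example.

```python
# A minimal sketch (hypothetical, not any vendor's product) of the
# narrower fix the complaint actually called for: merge overlapping
# investigator contact lists and flag likely duplicates.
import csv
from collections import defaultdict

def match_key(name: str, email: str) -> tuple:
    """Crude matching key: lowercased surname plus email address."""
    surname = name.strip().lower().split()[-1] if name.strip() else ""
    return (surname, email.strip().lower())

def merge_lists(paths):
    """Group records from every source file by the matching key."""
    merged = defaultdict(list)
    for path in paths:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                merged[match_key(row.get("name", ""), row.get("email", ""))].append(row)
    return merged

if __name__ == "__main__":
    # Hypothetical exports from two of the overlapping silo databases
    for key, records in merge_lists(["ctms_contacts.csv", "grants_contacts.csv"]).items():
        if len(records) > 1:  # the same investigator appears in more than one source
            print(f"Possible duplicate {key}: {len(records)} records")
```

The point is not that a script replaces a CTMS; it is that the distance between this and a multi-year, multi-million dollar implementation is exactly the disconnect described above.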

 

I would suggest that even a good understanding of how the end user works, and what he or she needs, is not sufficient in today’s business environment. We have fewer and fewer in-house staff, we are narrowing our “distinctive competencies,” we have uncertain economic and reimbursement conditions, and we face unrelenting competitive pressures. All of this militates against expensive multi-year infrastructure projects unless we do more to predict and understand the future end-user business need. What are the future identity, purpose and constitution of our business, and therefore, what tools do we need to get there?

 

Even for projects where the pain and the solution appear clearer and more pragmatic, we are usually missing a robust and detailed visualization of how a tool will be used; without this, we will mis-configure and mis-spend our time and money. For instance, how does a shift to outsourcing change who the users are for a CTMS, document management system, EDC, and similar programs? How does a thoroughly “e”-oriented sponsor exploit tools like ePRO (electronic patient-reported outcomes), which are still fundamentally device-based? Or alternatively, how useful are e-tools if the “back end” of the workflow stays “paper-minded” in its policies and procedures, reflected in unchanged workflows, double-checks, and review practices?

 

And Vendors Too

The developers of software used in clinical research are equally guilty of forgetting the context of how customers use their tools. Vendors have a great opportunity to add significant value to their customers by helping sponsors see the possibilities that their tools open up, and by knowing the clinical research business as deeply and broadly as possible. This knowledge should translate into more focused and anticipatory designs, creating more powerful and efficient tools.

 

Typical software development, even the industry-specific kind we encounter, falls into the trap of chasing customer-driven enhancement requests that are often shortsighted for all the reasons cited above (responding to the “needle in the back”). The result is needlessly complex software, with features even the requesting sponsor may forget they wanted! More damaging than the needless complexity is that chasing enhancements takes money away from the vendor’s literal “end” – the output of the tools they are developing, which is all a tool is really good for.

 

This irony plagues each aspect of the research software universe. Vendors may see the whole gamut of functionality possible, but as professional engineers, they see it, and build it, linearly (they begin at the beginning and end with the end). As a consequence, they inevitably run out of time and money before they reach the output function (reporting). How many times do we hear vendors do their demos this way: they start with the very first point of data entry, move through to the point everyone is waiting for (getting something back for all that entry), and then they say, “well, there was no point in re-inventing a report writer, so use something standard, off-the-shelf.” It is the “data out” that matters in the actual business context, but to a software engineer it looks like a data processing problem, not a business use problem. If that reasoning were sound, and off-the-shelf reporting were adequate, then so too would be off-the-shelf entry – so actually, let’s forget the whole thing. And yet there really is utility in clinical-research-specific software products, if they are built with the end in mind.

 

Today’s software vendors need a knowledge base and a discipline not commonly found. The need for vendor domain knowledge is greater than ever, as is the need for an understanding and vision of where their customers are going. Certainly sponsors bear the bulk of the responsibility for teaching this. For the vendors, the discipline lies in rejecting enhancements for enhancements’ sake and in leading their customers toward being equipped to handle the future.

 

“Begin with the end in mind” is certainly the start of a solution. “Begin with an understanding of the end” is probably more profound. Identifying possible “ends” is one thing; understanding their meaning to the user and the enterprise requires more thought, breadth and management than most sponsors or vendors are used to supplying.

There’s something missing from the eSolutions discussion. In fact, there are two key missing links:

— eSolutions that are no more than bridges between silos will not meet the needs of a rapidly changing clinical development environment; and

— strategies, pilots, and initial use of eSolutions are skipping over the hard work of operational change management, as most industry-specific IT innovations do.

 

Serving the Changing Environment

Just as new information technology in the world at large brings a myriad of innovation possibilities to all industries, new market, business and science developments are generating the need to approach biopharma clinical development quite differently. If eSolutions strategies and tools do not enable the changing business structure, eSolutions will be expensive wasted investments.

 

For instance, although CROs have been around for over 30 years, biopharma is again reinventing how it uses these services. The near future will feature complex relationships between sponsors and CROs, with high variability even within a single sponsor, and the traditional lines of responsibility and function will continue to blur, even as some sponsors begin to pull back from knee-jerk outsourcing.

 

Because this is happening at the same time that government and the public demand more accountability and transparency, the need to gather, integrate and report knowledge (not just information or simple data) places sophisticated pressures on the clinical development function. eSolutions need to be part of, or indeed lead, the design of new means of clinical development. If we do not re-imagine clinical development, we will fail to meet these challenges in a business environment of reduced financial resources.

 

Does it make sense for us to continue to organize clinical development in silos that reflect the paper-based, pharma-centric, linear workflows of the past? Or do we need to have our work (and the technologies which enable it) reflect the patient-centric, multi-dimensional, nimble realities of today? How could we re-imagine clinical development? Instead of thinking in a linear workflow, should we organize our talent and technologies by who our customers are (internal or external)? By customer need? By business objective (“project”)? By distinctive competency? These choices are critical, because each leads to dramatic changes in human resources, inter-business dependencies, and therefore, eSolutions designs.

 

If eSolutions to you means automatic reconciliation of the EDC and safety system databases, that’s not re-defining clinical development – although it is a solution, to a real problem. Tying investigator performance to payment timelines without having seven sets of hands touching the process is a solution, to a real problem, but it’s not re-defining clinical development. So as useful as these solutions are, and as challenging as their cross-silo integrations may be, in many ways these efforts are simply fighting the last war.
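
To see why such a bridge is a solution without being a re-definition, consider what EDC-to-safety reconciliation reduces to: a bounded comparison of two record sets. The sketch below is a minimal illustration under an assumed, simplified export format (subject ID, adverse event term, onset date); real systems are messier, but the shape of the task is the same.

```python
# A minimal sketch of EDC-vs-safety reconciliation: compare adverse event
# records exported from each system and report mismatches. The flat
# (subject_id, ae_term, onset_date) format is an assumed simplification.

def index_events(rows):
    """Index events by (subject, lowercased term) for comparison."""
    return {(r["subject_id"], r["ae_term"].lower()): r["onset_date"] for r in rows}

def reconcile(edc_rows, safety_rows):
    edc, safety = index_events(edc_rows), index_events(safety_rows)
    return {
        "missing_in_safety": sorted(set(edc) - set(safety)),
        "missing_in_edc": sorted(set(safety) - set(edc)),
        "date_conflicts": sorted(k for k in set(edc) & set(safety)
                                 if edc[k] != safety[k]),
    }

if __name__ == "__main__":
    edc = [{"subject_id": "101", "ae_term": "Headache", "onset_date": "2010-03-01"}]
    safety = [{"subject_id": "101", "ae_term": "headache", "onset_date": "2010-03-02"}]
    print(reconcile(edc, safety))  # flags one onset-date conflict
```

Useful, certainly, and non-trivial to productionize across vendors and standards, but still the last war: it automates a hand-off between silos rather than asking why the silos exist.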

 

Implementing for Success

The gap between technology innovation and successful implementation is growing again. In many ways this is a repeat of the late 1990s, when the Internet brought great technical innovation but pharma had little understanding of, or cultural tendency toward, exploiting it. Just as the gap seemed to be closing sufficiently, technology and market forces have upended accepted wisdom again.

 

eSolutions undoubtedly will proceed, step by step, but we will stumble, losing years of progress, if implementation is not respected and understood. Have we learned from the early years of difficult EDC adoption? Have we learned from the extended, expensive CTMS projects? Have we learned from our underutilized adverse event systems? If we have learned, we know now that careful attention to organizational impact, sustained change governance, thoughtful process efficiency, and creative technology exploitation will be the keys to realizing the benefits of eSolutions we will require.


 

As I was writing this column, we had a sign from heaven – or at least Rockville: the FDA had issued its Final Guidance for Industry on Patient-Reported Outcome Measures. If ever there was a “pro” for ePRO (Electronic Patient-Reported Outcomes), this is it. There is nothing that garners the attention of biopharma executives like a statement from one of its key regulators, and the Guidance is welcome news to those of us who have advocated for a wider use of PRO data with the reliability that an electronic means of PRO collection brings. I couldn’t have asked for a more timely coincidence.

 

Pros over Cons

Let’s look at the “pros” of ePRO in three ways: pros, pro’s, and prose. Let’s assume you have a product in clinical trials that needs data recorded directly by the patient, either at home or in the clinic. You will use this data to investigate a primary or secondary endpoint, and as such, this data is essential to your understanding of the disease, of your product, and of patient experiences of both. Simply put, if this data is needed for this kind of research, you cannot use paper-based methods of collection anymore. The worthlessness (inaccuracy, untimeliness) of paper-collected PRO data is now well accepted, beginning with anecdotal evidence experienced by researchers decades ago and proven in controlled examinations like the one published in the British Medical Journal in 2002 (Stone et al., “Patient non-compliance with paper diaries,” BMJ 2002;324:1193). It is not inaccurate to say that using paper diaries in research today borders on the unethical, and if nothing else, it is a manifest waste of time and money. Use “e” methods for PRO or nothing at all.

 

But there’s more. What’s often missing from the discussion in favor of ePRO is the change in science it enables: you now have methods of data collection that let you ask, with scientific rigor, research questions you could never validly ask before. So while safety and efficacy will always be important quantitative goals of clinical trials, you can now explore whether, and why, your candidate may be superior to other treatments along the qualitative dimensions commonly lumped under umbrella terms like “quality of life”, “health outcomes”, “health economics”, or “evidence-based medicine”. In some respects, these non-specific terms have misled the listener and undersold the impact that an imaginative study design can now bring to the research program. New market opportunities, and new means of improving patient health, await.

 

While this column is focusing on pros, not cons, perhaps one of the biggest challenges to widespread substitution of ePRO for paper PRO is the perceived disproportionate cost of “adding” ePRO to the study budget. Without going into great detail, a simple characterization of this issue is that ePRO services today rely on yet another third-party vendor, with its own bid and its own quotation of cost, which is thereby easily identified by study teams as “incremental”. Besides overlooking the fact that paper PRO also costs something, this framing misses a way around the problem: ePRO is another means of EDC in clinical trials (a characterization I have resisted in the details because I am too close to the trees to see the forest). In this sense, it becomes comparable to the (now forgotten) debates over electronic central laboratory data. So the chain of reasoning is the same: if ePRO data is really important to you, you should (and perhaps must) collect it electronically, just as you would, in 2010, collect most any other trial data. In other words, it is not a discretionary cost. If you need it, you need to spend it. If you don’t need it, don’t spend it (and live without that data).

 

The Pro’s at Work in ePRO

Are there “pros” (professionals) in ePRO? This question has been a legitimate concern of the biopharma industry as we have watched ePRO vendors struggle to create a new market from scratch, learn the processes and logistics of a new research methodology, incorporate the important scientific component, and try to scale to the volume of studies where ePRO use is possible.

 

Although the vendor community is still maturing and does not compare in top-to-bottom professionalism to, say, the EDC vendor community (or to vendors of statistical analysis tools, document management, or safety monitoring), the ePRO vendor universe has several admirable, time- and trial-tested providers for biopharmas to lean on. As always in a technology market, older ePRO companies are getting much more professional on the “soft” side of their offerings (services, logistics, science consulting) while their software and hardware technology tends to lag, and newer ePRO companies sometimes have more exciting technology but insufficient experience on the soft side.

 

It’s important to keep the technology and the services separate. Unlike other clinical development IT spaces, where the technology has become very similar from vendor to vendor, technology still matters in ePRO. And it is a bedevilment that we still don’t quite have the magic, one-size-fits-all, ideal hardware platform, and we may never. This is challenging for the vendors, but also for their customers. Sponsors looking to make long-term commitments to ePRO are understandably confused by the myriad of hardware choices and accompanying software platforms, which leads to a reluctance to commit to any one approach, even while accepting ePRO conceptually.

 

The service component is particularly critical in ePRO because, for now, it needs so much support. This support primarily comes from the vendor, since sponsors will not, or cannot, absorb ePRO capabilities internally. Services are also critical because, as with most clinical development IT, sponsor processes are not yet mature, tailored to the tool, or optimized for individual sponsor efficiency. This affects all aspects of ePRO services, including logistics, supply, support, helpdesk, workflow process, and project management. Most vendors today show an uneven professionalism in services – some are true pros in logistics, some are rapid study designers, others have been lucky with their project managers.

 

On the other hand, one area where a large handful of providers are truly professional is in the science of ePRO – the development, tailoring, selection and validation of the “instruments” (questionnaires) that are chosen for administration on ePRO devices.

 

Where is the Prose?

Perhaps what is missing most at the moment is the prose about ePRO. That is, we are still not talking enough about ePRO, to the right decision-makers, with the most appealing contentions and data, to make ePRO as widely used and uncontroversial as central lab data, or even EDC. Clearly, not enough clinical development personnel understand the horrors of paper (since we are still doing paper PRO studies), nor do they understand the potential for new scientific explorations possible with valid patient-reported data. For some reason, ePRO remains at best the province of biopharma’s marketing guys, or “outcomes nerds,” and PRO is viewed as a “last resort” and not “real” research. This is an antique attitude, and it contributes to the resistance to funding ePRO services. The vendors have probably talked enough; sponsors must start writing and speaking more clearly and openly about their successes with ePRO on the one hand, and any lingering concerns on the other.

 

The pros of ePRO are clear. The professionals of ePRO are improving. We need more prose on ePRO so that all are well-informed about when and how ePRO should be added to the clinical development program.