
It’s baseball season and time for baseball metaphors. As we in clinical development go into the technology game these days, we need all of our players on the team. Too often, our lineup is limited to data management and information technology staff. We need clinical staff to “step up to the plate” and help win this essential, serious game.

 

The better clinical IT applications become (and they are getting better and better), the more they directly impact the daily work of clinical staff, as indeed they are supposed to do. By clinical staff, I mean study managers, CRAs, project managers, medical monitors and advisors, and so on. Much of the clinical IT universe (clinical trials management systems – CTMS; electronic data capture – EDC; adverse event systems – AES; electronic patient-reported outcomes – ePRO; even clinical data warehousing) has been built to be used by and for the benefit of clinical staff. Increasingly, in fact, data managers are relatively marginal users. And yet many sponsors still keep clinical staff on the sidelines, or ask them onto the team as an afterthought, when acquiring, specifying, or implementing clinical IT applications. Indeed, many of these projects remain the province of sponsors’ full-time IT staff, who are even further removed from the work of clinical development.

 

It is by no means always the fault of IT or data management that clinical is an afterthought. At many sponsors, clinical staff want the benefits without joining in the hard work of making IT successful for them. They will whine but they won’t work. Is that too harsh? If you are a clinical professional, how often have you begged off a clinical IT project team? Were you too busy, understaffed, unable to spare the focus for this “peripheral” task? Did it strike you as too “technical”? After all, that’s what the techies are here for – the data managers and the IT folks, right? But no one looks after your interests like yourself; no one knows what you know about your work like yourself; no one can represent the investigators and trial subjects like you can.

 

Play Your Position

How should clinical staff be contributing to clinical IT projects? The first and most meaningful way is to share in the governance of the project itself. Do not let it be run exclusively by data management or IT. Indeed, it is not inconceivable that clinical could run a project acquiring EDC or a CTMS. Taking governance means that your end-user needs will be met with full attention, rather than taking a back seat to the back-end (data management, statistics, executive management). It means that you can help create a timetable which is meaningful to your clinical development plan. And it means that you can significantly alter the prioritization of features and functions.

 

But with governance participation comes responsibility – not only to show up, on time, but also to know how to play this technical game. Participants from clinical should be selected for a proclivity or interest in technical matters. Even with the interest, they will need to learn about technology in some detail – not just about different pieces of software, but about technology platforms (.Net, Java, XML, etc.), basic building-block tools (like brand-name reporting tools), and the marketplace (which vendors are being used widely, and why; what the risks and benefits of innovation are; etc.). What you have, then, is a dynamic learning relationship between clinical and your more technically inclined staff, in which both “sides” are fully contributing.

 

Next comes full participation in the specification of the technology being acquired. And not in the manner in which clinical usually helps out (looking at a sample screen, telling the techies what your favorite report would include). Clinical needs to really draw the vision for how technology is to be used in clinical development, and to understand the potential benefits, risks, costs and burdens. Clinical is usually in a much better position to “think outside the box” on how information technology can help. But this vision must be grounded in some realities of what current and near-term technologies can do. For instance, if your vision of technology in clinical development is grounded entirely in harvesting data from investigator site electronic health records, well, let’s just say you’ll have a long wait. But one need not crawl out to the bleeding edge to have a vision. In fact, most sponsors do not begin to benefit fully from the technologies they already own. This is where clinical staff need to step up and learn the possibilities, so that even the current investments are properly profitable.

 

As with the specification phase, clinical staff need to be active participants in vendor research, vendor selection (such as participating in reference checks or usability testing), and in the design and execution of successful software implementation. This latter step is of course quite significant. It means taking a leadership role in process re-design, user acceptance testing (UAT), training, and enterprise communication. Throughout these steps, clinical not only represents the interests of internal staff such as study managers, project managers, medical monitors and pharmacovigilance staff, but is also the best – perhaps the only – representative of the needs and perspectives of those not in the home office: the regional monitors, the investigative sites, and the trial subjects themselves.

 

Throughout, clinical has to commit to this participation. You have to commit a part of your brain, a part of your calendar, a part of your budget. Without a consistency of commitment, from top to bottom in the clinical hierarchy, your contribution will be muted and the enterprise will suffer.

 

Share the Victory

What’s in it for clinical? Why invest precious time in learning and specifying things which we have technical staff around to do for us? The answer is that the benefits of information technology to clinical development can be so profound, and to date they have not been realized, in part because of clinical’s general passivity. If you step up to the plate, by learning how technology can change the way we think about clinical development design, you can share in the victories brought by:

 

– Compressing the “white space” (the calendar time) between individual trials

 

– Reducing the number of trials, and altering fundamental trial design, through use of interim analyses

 

– Meeting the challenge (and exploiting the possibilities) of measuring patient-reported outcomes

 

– Really knowing, in real time, how a study or a development program is going

 

– Reducing the workload required to obtain quality safety data and timely reporting

 

– …and much more.

IT and data management can try to win these games while clinical sits on the bench, but the probability and size of your victories will be much greater if clinical fully joins the team.

If pharmaceutical companies have a special Harry Potter-like Defense Against the Dark Arts class for their management team, one of the first techniques they must be learning is the Culture Defense. When confronted with evidence of their reluctance to change, they are apparently taught to point their wands out in front of them and say, “it ain’t me babe, it’s the culture here”. This turns out to be a marvelous, widely applicable spell – the easiest way out of an uncomfortable situation. There’s one problem: we are the culture.

 

We can’t all be the rebels, can we? If we were, how would the “culture” ever form beliefs different from our own? To claim that company culture is the reason technology innovation fails to take root is to deny your own place in the company where you work. Culture doesn’t kill technology, people do.

 

This common weakness of corporate organizations is particularly obstructive to the introduction of information technology because technology generates so much upheaval, especially in areas of clinical development still untouched, or merely grazed, by the productive use of software. Often standing in the way of that productivity is the Culture Defense.

 

Let’s look at the following examples of flawed technology employment where “culture” is often blamed as the cause of failure, and let’s ask ourselves if there might be other reasons lurking.

 

 

The Ubiquitous Culture Defense

We’re getting lousy data out of a great tool (an expensive enterprise CTMS, for instance, or a state-of-the-art Adverse Event System). How does this happen? The old IT acronym “GIGO” (garbage in, garbage out) applies. But why is it happening? Why are staff waiting until the last minute to enter trial status information that is supposed to be feeding a highly accurate real-time CTMS? Or in the case of the AES, why are antique paper-based dataflows being maintained, while the AES is an alien, unwelcome layer imposed on top? Why is this allowed to happen?

 

The Culture Defense says, “well, we’re not used to reporting data in real-time”, or “we want to review and double-check the information before anyone sees it”. Or in the safety case, “we won’t risk the importance of safety surveillance to software which may not work”. It’s a culture thing. Really?

 

Another example: we throw resources (human and monetary) at database lock of our pivotal trial, with no restraint. At that moment there is nothing more important to the company. Our EDC tool, or indeed even our trustworthy old CDMS, might be able to contribute to this moment in timesaving ways, but we don’t take the time to learn how, or change our process accordingly. “It’s the culture.” Perhaps it is, but is that a good thing? Does the Culture Defense make all other options moot?

 

Yet another: “we don’t measure here.” It’s our culture not to measure, or if we do, we don’t do it consistently, or with rigor, or learn from the results. There’s technology to help us (and if we are using technology at all, we will need metrics to justify its expense some day), but it’s not in the culture. Is that culture or laziness? Culture or fear?

 

And another: despite EDC’s inherent purpose of catching errors at the site at the time of entry, and drastically reducing data cleaning at the back end, many sponsors still insist on multiple layers of data review (data managers, in-house CRAs, medical reviewers, and back to the data managers again), just like in the paper days. “It’s our culture, we want to get it right.” Wrong.

 

More pervasively, it is common to see clinical development executives across the industry turn a blind eye to what really happens at the operational level. Executives announce an impassioned commitment to a particular process improvement initiative, often technology-enabled, and tiptoe out of the room – leaving the implementation to middle management. In many companies, without the executive watching your back, there is little incentive for middle managers to execute on the vision. Is this disconnect a culture problem, or a management problem?

 

It Is You, Babe

If individual study teams or even entire therapeutic areas don’t follow company-wide SOPs (but instead make up their own regulatory-compliant “standards”), is that culture, or the acts of individual managers? (It may be a justifiable action on the manager’s part, but that’s logic, not culture, at the source.)

 

If we put training for the new EDC tool in an e-learning environment, but I (and most of my fellow monitors) don’t really pay attention (we click through it and get “certified” but don’t remember much), can I blame my culture for being anti-training? I’m the one who chose not to pay attention.

 

If we rely on individuals’ cooperation in using technology appropriately, and people fail to do so, isn’t that a series of individual decisions? If I fail to fill out all the fields in a templated Site Visit Report in my CTMS, isn’t that my choice? The culture didn’t make me do it, I chose not to do it.

 

The damaging side-effects of the Culture Defense are legion: it enables us to drag our feet when it comes to changing the way we are used to working; it gives us permission to abdicate responsibility without penalty; it enables us to stand in the way of progress with impunity for whatever our personal motivation may be (we’re overworked, we’re jealous, we want our pet project to get all the attention, we’re afraid of learning too much software).

 

Psychologists will tell us that the most powerful realization victims of damaging habits can have is that they have a choice. The Culture Defense is designed to prevent choice, to prevent individual responsibility, even to preclude individual initiative. The Culture Defense is defeated by individuals choosing not to go along with the easy path, to see the executive direction as good for themselves as well as the company, to embrace change as the inevitable condition of modern business, to risk using tools that may reveal true operating conditions sooner because it is better to do so, and to risk measuring because objective data about how we work can make us better workers.

 

We as individual pharmaceutical company staff, middle managers, and executives can choose to act in a manner that enables information technology to flourish. We can face down the Culture Defense so that our CTMSs actually produce accurate, actionable data on clinical trial program performance. So that our Adverse Event System is allowed to replace our fatally flawed reliance on paper. So that our EDC studies can be authored quickly, and used by monitors to catch errors and underperforming sites early. So that our technology investments are worth the effort.

 

Walt Kelly, in his famous cartoon strip Pogo, memorably exclaimed, “we have met the enemy and he is us.” Culture isn’t the enemy, we are. Facing up to this fundamental truth will begin to enable technology innovation to meet our expectations.

What does this scene sound like? Thousands of pages being faxed from country to country. Papers being printed on multi-part forms and signed in ink by the boss. Pleading with programmers to prepare a basic report from your database. Dozens of people doing what a handful of people could do. Key information about where things are, stored only in someone’s head. Everyone checking and re-checking each other’s work.

 

Does it sound like banking in the 1960’s? Does it sound like your typical office of the 1970’s? Does it sound like clinical data management in the 1980’s? Are you old enough to have experienced a workplace like this? Unfortunately, these examples are from the 21st century, and culled from a range of biopharmaceutical companies. Call it the last frontier: drug safety operations.

 

Let’s be clear up front, so the lawyers can all sit back down: we are not talking about a public safety issue in any way. What we are talking about is a question of internal business efficiency only: drug safety operations optimization. With all of the appropriate focus on operational cost reduction in pharma these days, one of the areas which too often remains untouched is drug safety – not because it doesn’t need to be more efficient, but because executives are afraid to go near it, for obvious reasons. And unless the safety executive is innovative enough to volunteer for process analysis and improvement, it will likely never be forced on them.

 

Scenes from the Wilderness

When we have looked at drug safety operations at different biopharmas, we are struck by the huge range in case load (i.e., the number of adverse event cases processed per drug safety staff member) – varying literally by an order of magnitude from one company to the next: ten times the personnel to handle a similar case load. These variations are not explained by the variables one might guess – therapeutic area complexity, geographic diversity, staff preparedness, stage of development, or any other obvious explanation. Instead, they are a direct result of inefficient case processing and other process wastefulness.

 

You can still find this kind of over-staffing in some other pharma departments, like CDM (clinical data management) and even monitoring, where the company culture dictates that a surfeit of human effort will protect against error. But this is a concept most industries have long since rejected, mostly because they could not afford to hold on to it.

 

Some safety managers will still cry that they need more staff, not less. But except in the most unusual circumstances (perhaps a budding biotech), this is the time to examine process efficiencies before signing those personnel requisitions.

 

Why does this happen? Inefficient processes, lots of paper, lots of checking each other’s work, and, frankly, a lack of pressure or will to work differently. Just one example: a reported adverse event consists of 3-5 pages of source documents. At one company, processing this information causes those few pages to balloon to 250 pages, since every change to any piece of data requires a new printout of the whole case for the archive, no matter how trivial the modification. Moving, reviewing and storing this kind of paper load is obviously inefficient, especially when thousands of cases are processed each year.

 

Second-Guessing

Often we will see an atmosphere of “quality control” that borders on mistrust or job security: checking and re-checking each other’s work ’til the cows come home. This is directly tied to a dependency on paper, but it also speaks to culture. Several companies have developed “quality” into a punishing, mistrustful exercise, where I tell you what to check and then I check later whether you checked it! Quality (and the concomitant protection of public health) is achievable without these multiple layers of cross-checking. This is something CDM has been addressing for years – all CDM departments have to some extent (or to a great extent) streamlined their quality control (i.e., discrepancy management) processes to reduce time, effort and resources. Thus they avoid what safety departments still experience: a physician who will only review a case off-line (on paper), who will mark up the paper copy with corrections to be made by clerical staff in the system, requiring that the case be printed out yet again so the originating physician can see whether the system change corresponds with the original mark-up!

 

We see lots of paper in places in drug safety where you would think paper is long gone – even though nearly every company has already bought safety software tools precisely to eliminate that paper! For instance, we see companies admirably learning how to do ICSR submissions via E2B (paperless, by definition) to regulatory authorities, but still find those exact same companies faxing thousands of the same cases from one country affiliate to another. Why are the advantages of E2B ignored, just because the recipient is not an agency? And what happens to those faxes? The data is entered again by hand into the receiving office’s (separate) software system, introducing new errors and starting a whole new cycle of quality control.

 

Finally, we also see safety-specific technology all around but we do not see the safety departments taking responsibility for using it well, the way data managers, for example, do in a CDM department using a CDMS or EDC. In safety, instead, we see departments that remain beholden to their IT departments, like all of us were in days of yore, waiting for simple reports to be programmed by programmers, even when the technology is simple enough to be used by safety staff themselves. Who is complicit in this arrangement? Is IT holding on to the feeling of being useful? Does drug safety simply not have the time and/or technical understanding to use the tools they bought? In the memorable words of one country safety officer, “I simply don’t trust my computer.” Charming, but intolerable in modern times.

 

What this all most resembles is CDM in the 1980’s. It is remarkable that drug safety can sometimes still exhibit these qualities. The fix for CDM was executive intolerance for the cost and delay which such behavior caused in clinical trials, and the increasing automation and professionalization of CDM itself. It brings to mind that perhaps the answer for drug safety optimization is similarly two-fold: an executive spotlight on the issue, and the creation of a “DSDM,” or Drug Safety Data Manager, role which, like the clinical data manager, serves as the interface between the user and the technology (between clinical operations and IT, in CDM’s case). As overstaffed as some drug safety departments are, moving selected staff into a DSDM role could help eliminate the excess staffing through process streamlining.

 

Meanwhile, don’t look to the safety application vendors as the answer to process optimization. Historically they have not offered enough help, proactively, at reasonable cost, with the big picture in mind. These are, after all, cultural problems first, process problems second, and ultimately a matter of will. The tools the vendors sell are fine, and have been for years; safety departments need to use them fully, and as enablers of process efficiency. This is the sponsor’s responsibility, not the software vendor’s.

 

And so, it comes back to a willingness to change. No sponsor wants to face a situation where external pressure forces a function to become more efficient. The best solution is always to anticipate areas for improvement and pursue them proactively. Assuming a sponsor can examine itself objectively, then basic process analysis skills, combined with safety domain expertise, should enable sponsors to eliminate these safety operations inefficiencies.

 

In the story of human progress, frontiers are reached, breached and conquered. We may lament the passing of some frontiers (the Amazon, the Arctic, restrooms without cellphone conversations), but drug safety inefficiency is not one to be mourned. Saddle up and tame this frontier so biopharma’s dollars can be used to the greatest good.

Buddy Holly, and then Linda Ronstadt, sang enthusiastically that “It’s So Easy” (in their case, to fall in love). Lately, leading EDC vendors have been singing the same song about implementing their technologies. Unfortunately, both of these songs are very misleading.

 

The pitfalls of falling in love I presumably do not need to tell you about. The potential pitfalls in implementing EDC, on the other hand, apparently need some re-iteration. Listening to EDC vendors today, one would conclude that EDC is as easy to adopt as a mildly complex desktop tool like MS Project. This is the worst kind of déjà vu, reminding me of the mid-1990’s, when EDC technology and vendor support was very immature, and yet EDC vendors would routinely tell their prospective customers that EDC was “simply an electronic CRF”, that their software interface was “intuitive”, that study startup would accelerate, and that study closeout would be “overnight”. The failure of EDC in the 1990’s to deliver these features, at least as perceived by pharma customers, was a huge contributor to the derailment of EDC progress in that decade. It would be a great disservice to pharma clinical development if the EDC train derailed again.

 

Yes, a lot has happened in the past decade. Technology and vendors are stronger. Sponsors know what the abbreviation EDC means, and are modestly more aware of what EDC implies. But the fundamental change which EDC triggers – which indeed can and should extend far beyond making the CRF computerized – is only recently being realized and conceptualized, and its implications understood.

 

I am not sure why “easy EDC” is becoming such a popular marketing pitch. Are the vendors frustrated at the pace of EDC adoption? Are their investors impatient? This would be ironic, since EDC adoption is finally taking off. Do they think this will overcome a major selling obstacle? This too would be ironic, because misleading the customer is the best way to create a selling obstacle, as the ’90s proved.

 

Clearly some of the very small EDC vendors are using this approach to try to break into the market, differentiate themselves from more capable vendors, and appeal to biopharmas with low budgets or low tolerance for complex reality. This is understandable as a time-honored marketing tactic. What is more distressing is to see market leaders – capable vendors working on a large scale – who now want to “move to the next level” by assuring customers EDC is not as hard as they may think.

 

When vendors say that implementing EDC can or should be easy, they are thinking about it from their own perspective, naturally. So they would say that these things should not be complicated:

 

-learning the tool (by which they mean training the sites and monitors in the software interface)

-following a checklist of technical steps to set up a study, or derive a report

-setting some software switches (yes, query writing has gotten easier)

-writing a SAS export routine

-basic aspects of how staff roles change, or that the time for study startup has to be planned more carefully.

These are all good things to learn and maybe one can learn them quickly.

 

But implementing EDC is much more about things that no seminar can “transfer knowledge” about:

 

-governance of the EDC change effort (who’s in charge, who has the money, how conflicting interests among CDM, clinical operations, IT and others get accommodated, etc.)

-the business model and budgeting (there are so many ways to “buy” or “rent” EDC technology and support services, including these days “EDC monitors”, plus there is the question of who should be paying: is this coming from the study budget, is it a capital expense, etc.)

-support issues across the board, for the site, monitor, patient (in the case of ePRO), IT, study manager, and more

-examining the cleaning processes in your company and long-entrenched roles (this is so diverse that no standard seminar can begin to address it specifically)

-raising “site consciousness” (responsiveness to site needs, workflow, usability) which is nearly non-existent in most companies

-“change management” in the classic sense of attitude changes and acceptance of new roles

-and coping with the tough personnel problems that come from change which is not (never will be) universally embraced.

Let’s look at just one example in more detail, an example that points to the diversity and detail that have to be coped with in implementing EDC – data cleaning choices. Even though we are all in the same business, and trials are nominally the same from a data processing standpoint, sponsors vary enormously in their policies and approaches to data cleaning. For instance, sponsors vary in how they define the role of the monitor in data cleaning: it runs the gamut from on-site data scrubber to completely hands-off. Sponsors using EDC, therefore, vary accordingly in what they allow their monitors to do with the EDC software – some take advantage of the ability of monitors to review data online before a site visit, while others prohibit it! Sponsors vary notoriously in their source data verification policies (100% of datapoints, or a subset). From trial to trial, even within the same sponsor, teams have different styles of querying the data, from complex queries to simple. And teams differ in who queries the data (data managers, monitors, statisticians, physicians) and when (early and often, or late in the game). Sponsors also vary widely in how standards (for CRF design, data structures, common questions) are applied – by whom, how rigorously, and why.

 

With all this variation, it is not enough for any third party to say either “this is the way we do these things with our tool” or “decide all that later, let’s get started”. For a study team that is conscientious, or rigorous, or conservative (pick your adjective), this drive for simplicity is misleading and sure to generate disappointment in the end.

 

It’s not that there is anything wrong with vendor-driven quickie seminars about EDC-induced process change. Just getting vendors to admit there is process change around EDC adoption, and that there is more for sponsors to consider than purchasing software, is definitely progress. The value added in such seminars, assuming quality content, is unquestionable. It is the assertion of sufficiency, that all the issues will be covered, that EDC has been “made too complicated”, which is so dangerous, and simply inaccurate.

 

Why does this matter? Why can’t EDC be simple? The answer has at least two levels:

 

– The Clinical Development process overall is not simple now. Perhaps it could be, and EDC will make it simpler, but you have to get from here to there, and understand the path. Ample experience at sponsors over the past decade shows us that teams and staff will stumble, duplicate work, maintain unnecessary work, and even unwittingly resist change without careful re-design of roles and processes, participation of those needing to change, agreement on how and when to change, and some trial and error.

– Secondly, over-simplification of EDC matters because, without careful preparation, the resulting unnecessary inefficiencies, delays (or lack of time compression), and other shortfalls will produce widespread disappointment at the failure to deliver the promised impact (speed, reduced resources, higher quality with less effort, etc.). And this is deadly, because biopharmas cannot afford to sit out another decade without widespread clinical development innovation, of which EDC is only one piece among many.

It’s easy to fall in love, and it’s easy to believe that putting a CRF page up on a computer screen should make clinical trials equally simple. Some day it will be, but it won’t be tomorrow. Saying it doesn’t make it so, only hard work does.

There can be over a thousand people in a big pharma’s Clinical Development department, and yet the most important people in the process are not on our payroll. It can take 8 years to bring a drug through clinical trials, and yet the most important events in the timeline are not in our control. We may have hundreds of offices in dozens of office buildings, and yet the most important office is not on our campus.

The missing pieces in each case are our Sites, without whom no clinical research is possible. Some biopharma research departments with an inkling of this problem have cited “Site satisfaction” as a strategic goal. But in fact, almost all sponsors find a way to consistently aggravate our critical partners – to bite the hand that feeds us the information we cannot live without. One particular way we aggravate Sites is by demanding they use underperforming technologies which we are ill-prepared to support.

While we know that a happy Site, like a happy employee, is more likely to recruit faster and more correctly, record data more accurately and on time, and think more favorably about the trial Sponsor, most sponsors do not consider the Site as a seamless extension of the clinical trial workflow. If we did, we would be involving them in each step of the process we follow in evaluating, selecting, designing, implementing and supporting site-based technology tools.

It’s not about clever

What do we give our Sites of more lasting memory than our logo’ed paperweights at the Investigator Meeting? We give them mistrust and hostile contracting. We try to be clever in our contracting and our patient volunteer advertising. Meanwhile we take the Sites’ nurses and phone lines away. We pressure them to perform, without the support to enable them to succeed. We make them use our flawed software tools, and fail to train them properly. In sum, we make them make up for our poor management. We dwell on the negative (chronic under-enrollment and dirty data) without regard to how what we do as sponsors directly affects Site performance.

It’s about trial management excellence
Trial management excellence has a number of important components: building close professional relationships with principal investigators, communicating our plan and delivering on what we promised, creating a service orientation toward our Sites, and ensuring the tools that support these actions (EDC, CTMS, IVRS, ePRO, etc.) are helpful and not burdensome. Let’s look at several of these items and how technology can and cannot support Site Satisfaction.

Relate Locally
While we often spend long hours, appropriately so, trying to improve our ability to manage globally, what we mostly fail to do is relate to our Sites locally – to build an effective and knowledgeable personal relationship. Several biopharmas are beginning to speak about “territory management” as a task of their regional monitoring networks. This term, borrowed from salesforce management, represents an excellent concept: regional monitors should get to know the physician pool in their region, and be proactive in identifying potential future investigators. Even if monitors get to know specialists in areas the company doesn’t “need” yet, you never know when the company will enter that therapeutic area, and those physicians can refer the monitors to others who are useful now.

Another term borrowed from salesforce management is more unfortunate: “IRM” (Investigator Relationship Management). This phrase is adapted from “CRM” (Customer Relationship Management) and refers as much to technology used for this purpose as it does to a strategic operational concept. While the metaphor of IRM to CRM is superficially useful, it is another clever idea taken too much to heart. The key to “IRM” is an intelligent human connection, not a new database tool. As soon as technology enters the picture (and of course there is a place for it), the tendency is to rely on the tool and forget the human connection. IRM, as a technology, in practice ends up objectifying the Site and further mechanizing what should be a personal relationship. To be effective, IRM technology needs to be subservient, even invisible, to the effort of sponsors to build effective, creative Site relationships.

Professionally Manage the Trial
Trial management skills across the industry are modest at best. One key to Site Satisfaction is to demonstrate professionalism in the timeliness of achieving milestones and responding to issues. Another is to minimize those protocol amendments! Both of these objectives can be helped or hurt by technology, depending on how we use it.

A CTMS (Clinical Trial Management System) can be used to greatly assist the trial manager in understanding trial progress and problems, and in timely communication with the Site. But many “CTMS” applications are focused completely inward, on what the biopharma enterprise needs, without including the Site’s needs in the equation. A CTMS, or other application, which provides immediate and direct access by Site personnel to critical trial conduct data, and which is easy for the Site to use to supply information to the sponsor, is truly enabling of trial management excellence.

Similarly, even the goodwill created between Site and sponsor, over time, by repeated use of EDC is consistently undermined by the annoyance of repeated protocol amendments. These changes have always been irritating to Sites, expensive to sponsors, and often avoidable. When using EDC, an amendment is additionally annoying, because the electronic CRF and its edit checks must be reworked mid-study. This is how good trial practice is directly related to the Site’s perception of the technology they are required to use.

Provide Tools that Work
Too often, we inflict technology tools on our Sites rather than empower them. We select these tools without regard for, or consultation with, Sites, choose vendors because of price or through suboptimal selection processes, and then send the software out to the Sites and good luck to ’em. Clinical Operations staff (those directly interacting with our Sites) also often feel as if they are victims of information technology, something inflicted on them by the CDM or IT department, or by an individual executive who leaves the implementation to others. In both cases, the fault lies in software tools being driven by those furthest away from the point of use (Sites and monitors). Reverse that fact and application quality (and Site satisfaction) will rise.

All parties – sponsor side and Site side – also can be victims of the software vendors. Despite the fact that it is 2005 and the third-party clinical research software industry is now 20 years old, we still find some vendors who are quite unprepared for the responsibilities of supporting clinical research, including providing knowledgeable and timely support, software that is predictable and reliable, tools that provide some benefit for the lowest level user, and applications that are scalable to our ever-expanding trial load and complexity.

Create a Service Orientation
A fourth key element of trial management excellence, and thereby Site Satisfaction, is for Clinical Operations and Data Management groups to have a service mentality toward Sites. We need to think of the Site as our customer, someone we serve instead of someone we berate. This single yet dramatic change of thought would be a remarkable exercise for our organizations to go through. What we have decided is important or critical in our daily work would change dramatically. A customer service orientation, for instance, would never allow a data cleaning process that helped our back-office offshore entry clerks but interfered with a study coordinator trying to get through her clinic day.

Addressing these shortcomings in how we employ technology, rather than the functions and features of the software, is the key to trial management excellence, and to satisfying instead of aggravating our Sites. Sites are critical to the success of our trials, but we treat them like vendors instead of scientific partners. While there are many poor Sites, there is an equal proportion of poor sponsor trial managers. Trial management excellence, and properly designed and supported technology tools to support that excellence, will ensure we don’t go hungry from biting the hands that feed us mission-critical patient experience.

Ronald S. Waife is President of Waife & Associates, Inc. and can be reached at ronwaife@waife.com or +1 (781) 449-7032.