
IT & Clinical Development Strategy: The BioExec’s Gordian Knot

(published in BioExecutive International, December 2006)

 

Biopharma executives regularly face decisions beyond their original professional competency. Mastering them is a requirement for success, and I imagine therein lies much of the appeal of the job. In all 21st-century management, decisions are accompanied by choices in the use of information technology, and this is no less true in biopharma clinical development. This supplement covers a sampling of the myriad issues in using IT to support the process of human testing of new drugs.

 

IT decisions should be like any other decisions you are making – they must first be cast in the mold of your business strategy and business conditions. Both strategy and conditions vary widely from company to company, even when the companies may seem so similar in purpose and objectives. (This is why “benchmarking” is so dangerous.) As these circumstances vary, so do the parameters upon which IT decisions must be based. The issues are different and so too are the resolutions, depending on how your company is funded, staffed, experienced, pipelined, partnered, organized and led. Indeed, IT is entwined with company strategy and business conditions in a Gordian Knot of inseparable implications.

 

Biopharma execs are often impeded by their own backgrounds – commonly either academic or laboratory-based. Clinical research is unfamiliar territory and may appear to be misleadingly “simple” compared to bench science and the “genius of discovery”. While the cry of “eureka!” may be hard to hear during the long years of clinical trials, the science and rigor in clinical research are no less important to bringing a discovery into medical practice. So the first challenge for biopharma execs is to put the right people in charge of clinical development – those who understand and respect it. It may seem odd, but this is the first step to effective use of IT in clinical research.

 

Decisions, decisions; Choices, choices

Managing IT for any purpose encompasses infrastructure, platforms (hardware/software), networking and security, quality management and validation, user support and maintenance, and the software applications which your staff actually use. In clinical development, a wide range of applications can be employed, and a biopharma must determine which of these it really needs, and when. A sampling of such applications includes:

 

Data handling (EDC, ePRO, CDMS)

 

Trial conduct (CTMS, IVRS, study portals)

 

Safety surveillance and reporting (AES)

 

Submission preparation (submission manufacturing systems)

 

“Infrastructure” (document management, data warehouse).

 

Too many biopharmas jump right into this list, like do-it-yourselfers at a big-box hardware store, and start watching vendor demos and freaking out at the price tags. The place to start instead is the business strategy: how are you going to run clinical trials, when, and why?

 

The first constellation of choices revolves around how you will resource your clinical development. Are you going to outsource most or all of the functions (a common approach for young companies)? Are you going to selectively outsource by function (keep data management in house but outsource site monitoring, or vice versa?), and what about project management? If you choose to operate functions internally, are you staffed appropriately? Are you willing to bear the cost and maintenance of these staff? Can you find the staff you need?

 

What do your partners use in the way of IT? Most biopharmas have all kinds of partners — companies you are licensing to, licensing from, using for key development services (radiographic readings, core labs, patient recruitment, clinical supply packaging), and so on. Do your partners offer technology systems you can leverage, or do you have a more efficient strategy? How do you pull together these multiple sources of data?

 

And where is your business at this moment, or next year, or in five years? Are you heading for submission? Quickly? Ever?

 

Each of these questions, each of these choices, dramatically alters the appropriateness, ROI and operational impact of any particular clinical IT application choice. Ultimately it comes down to a practical, essential business question: how do you control your clinical development process? And some executives would add, how do I have control and flexibility simultaneously? How do I have both rigor (compliance) and the creativity of entrepreneurial nimbleness? And of course, how do I do this on a limited budget?

 

Three Areas to Focus On

It is probably helpful for a biopharma executive to focus at first on three main areas of clinical research IT, what I will call control, product data, and safety.

 

Control, in this context, means knowing how your trial(s) — not your subjects or your product — are doing: are they on time, on budget, experiencing bottlenecks? Are they experiencing site performance issues, compliance issues, supply issues? Are your partners performing as expected? What changes need to be made? These questions are naturals for IT support. In the pharma world the application used is some kind of CTMS (clinical trials management system). Often, small young companies will avoid this arena, because the best known applications are big and expensive, and the small ones may not be robust or mature enough, or may have been developed for a customer’s situation too dissimilar from your own.

 

But in our experience, obtaining control over clinical trial conduct through information is as important, or may be more so, to a young company than the traditional focus on patient data handling. What is particularly challenging, besides the complexity of managing information from diverse partner data sources, is that the design for the kind of system your company needs must come from your clinical staff (not data managers or IT staff), and your clinical staff may be your least pharma-experienced.

 

I use the term product data handling to be as generic as possible in referring to your patient/subject data as it relates to the effects of your product (drug, biologic, device, combination thereof). This encompasses traditional CRF data, but these days increasingly includes relevant “non-clinical” data, PRO data (“patient-reported outcomes”), images (radiographic, pictorial, motion video), and more. This is often where a biopharma starts its clinical IT journey, particularly since this is where people with “data” in their title seem to reside, and where most executives are more willing to spend dollars on technology.

 

Handling product data for all biopharmas is increasingly focused on usability – both for the end user (the site) and the business (i.e., for accelerated decision-making). This means access, rapid startup, and ease of reporting. When seeking to control and analyze product data, it is harder and harder in 2007 to accept a paper-based, backend-heavy application strategy. Thus a traditional CDMS gets hard to justify, especially considering the time to start up and staff the necessary support, and to run the accompanying paper processing. But are newer approaches (EDC plus an analytical backend, versus a storage-oriented backend) too risky for a new company? These newer approaches may actually be more appropriate for a new company: a) they are easier to implement in a “blank slate” environment; and b) the risk, such as it exists, is likely to be more than offset by timely data and the facilitation of interim analyses. Regardless, a number of critical staffing, process and infrastructure decisions have to be made to implement an effective data handling approach. Again, a business’ priorities should guide these choices.

 

Conservatism finds a home in young biopharmas when considering the monitoring and reporting of patient safety. Fortunately, a handful of similar software applications are available to choose from in this area, and because the number of your staff who will be using them is likely to be small, the cost of these applications is quite reasonable. Here the choices are much easier: pick an application, buy it and use it. Complex resourcing algorithms are not necessary; ROI pales in comparison to the cost of a safety crisis.

 

Nonetheless it is surprising how often biopharma executives (who have the most to lose, personally and professionally, by a safety crisis) will balk at the cost and perceived complexity of owning a safety monitoring and reporting tool. This is particularly ironic for companies that are counting on multiple indications for their compound or biologic, and must have the means to detect safety signals across the development stream to ensure an acceptable safety profile. Once again, the business strategy and the supporting IT needs are intertwined.

 

Just a Taste

This overview of control, product data, and safety is just the beginning of the IT issues which require decisions in support of clinical development. There is much else to consider, including where and how you equip your infrastructure, on what platforms, under appropriate quality management systems and with compliant validation. The key is that biopharma executives should not abdicate their involvement in these decisions because the issues seem too technical or too narrow. Precisely because of their inextricable connection to the business decisions executives are responsible for, clinical IT choices must be made with the help of senior management.

 

Innumerable issues must be considered and resolved as you prioritize your IT needs, schedule IT adoption, select your vendors and shoulder the mighty work of implementing these tools with your staff (or new staff), under the correct governance model, with efficiency, flexibility and compliance. Trying to cut through the Gordian Knot will lead to your operations falling in pieces. Embrace the conundrum and you will learn much about the complexity of clinical development, and through such learning will come excellence in biopharma leadership.

“These days man knows the price of everything, and the value of nothing,” so said Oscar Wilde over a hundred years ago (referring to cynics), and so might complain many vendors of software written for the clinical trials industry. The yawning gap in perception between software vendor and research sponsor as to what clinical trials IT should cost is one of the most significant barriers to both widespread adoption and technology innovation.

 

Pricing of software has always been imprecise, in all industries. Software pricing has come under repeated, cyclical pressures as hardware platforms, code size, price of storage media, explosive growth of users, the Internet, connectivity innovations, and business model innovations have all upturned the assumptions of their time about how much software should cost. The starting point of the software pricing dilemma is that reproduction of software (assuming no customization) is for all practical purposes free, thus eliminating the classic basis for product pricing (“cost of goods sold”). So since the dawn of “packaged” or standard software, pricing has been a Wild West frontier.

 

In most markets other than clinical research software, pricing of software has become more normalized because the markets are large (meaning that there are a lot of buyers), the functionality develops to a point of understandable expectations, variability among offerings gets reduced around the mean, and in the end the market determines a typical or expected price point.

 

None of these factors help us in the clinical research market: the market is very small, there are not many buyers, functionality expectations (as distinct from functionality offered) can vary dramatically from buyer to buyer, and pricing for comparable functionality varies dramatically from vendor to vendor.

 

Pricing variability can be seen in all sectors of the small but highly differentiated clinical research IT market. In some of the niches for enterprise software, pricing can vary literally by an order of magnitude depending on the vendor, on the moment in time (Is the vendor having a good year?), on market conditions (Is the vendor trying to capture market share or are they desperate for new business?), and on vendor maturity (Is the vendor naïve about how to stay in business?). Pricing variability is often a direct function of what expectations software vendors have set among their investors or stockholders (expectations of revenue metrics, of recurring cashflow, and of market size). Pricing variability is also rampant among vendors offering their software on a per-study basis or with a high proportion of services attached (such as in the EDC or ePRO markets). Often such bids from competing vendors are nearly indecipherable by customers. And this is where the frustration sets in.

 

The Impact of Pricing Problems

So what’s the problem with high variability in pricing? First there is the manifest confusion about what this “should” cost. With such high variability, how do biopharma customers of these software tools come to understand what is a “correct” or “fair” or “reasonable” price? There is a very real opportunity to overpay for this software. There is also the danger of paying so little that the software community cannot survive to provide what the customer needs. Worst of all, this variability brings attention to itself, and draws attention away from where biopharma customers should be focusing when making vendor selection – on vendor-sponsor fit, on service quality, on technical reliability, on functional suitability. Instead of staying focused on precise and efficient analysis of these latter factors, too often a clinical research software acquisition will get sidetracked by the wild distribution of pricing.

 

This phenomenon is compounded by the increasing involvement and power of contracting/purchasing staff in the selection of research IT software. When only back-end enterprise scale software was being used, mostly the acquisition process (rightly or wrongly) would stay within the biopharma’s IT organization and budget. In today’s world of per-study and service-laden software licensing, clinical IT tool acquisition is often falling under the oversight of those who handle CRO contracts and the hiring of outsourced services. These folks (rightly or wrongly) will almost always let price dominate their decision-making – indeed they would assert it is their corporate function to do so.

 

The damaging influences of the pricing confusion are manifold:

 

Rampant cynicism, if not mistrust, of vendor pricing

 

Severe difficulties in projecting IT budgets, operational budgets, and/or trial costs (indeed, clinical development costs overall)

 

A stunting of vendor maturity growth as a by-product of sponsor manipulation

 

Perhaps worst of all, inhibition of software development innovation because of the manifest uncertainty vendors face in predicting their future cashflow.

 

The Vendor Answer: What Price Value?

In the Vendor Valhalla, there is a consummate answer to this dilemma: the price their tools command should equal the value they deliver to the customer. Value-based pricing has been the elusive goal of product and service vendors perhaps since the earliest beginnings of commerce. In some markets, economists would say value-based pricing operates quite efficiently: people buy small cars because the value of transportation to them lies in the efficiency or affordability with which the car moves them from point A to point B versus the alternatives (city bus, walking); people buy luxurious sporty cars because of the value they bring in personal satisfaction or status; people buy minivans because of the value of carrying family and friends in large numbers in relative comfort.

 

Value-based pricing for clinical research IT tools is hard to achieve. Gone (I hope) are the days when vendors sold their tools to biopharma on how many days faster a drug will get to market because of the use of their product. And yet, isn’t it true that switching from paper to EDC can (in theory) accelerate clinical development to the point that literally billions of dollars will be made sooner, and more billions to boot, as a result of faster submission? If this can be traced to the use of an EDC tool, what should the inventor, developer and provider of that tool be paid? If the robustness, ease-of-use, and cleverness of an adverse event tracking and reporting system enables a sponsor to identify post-marketing safety issues faster and more accurately, improving patient safety and fending off a damaging market withdrawal, what should that sponsor have paid for that software?
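The scale of such acceleration claims is easy to sketch with back-of-the-envelope arithmetic. The figures below are purely hypothetical assumptions for illustration, not data from any actual program:

```python
# Hypothetical arithmetic behind the "days faster to market" value claim.
# All figures are illustrative assumptions, not industry data.

peak_annual_revenue = 1_000_000_000   # $1B/year once marketed (assumed)
days_accelerated = 90                 # development time saved by EDC (assumed)

# Crude value of acceleration: revenue earned sooner, ignoring
# discounting, sales ramp-up, and patent-life effects.
value_of_acceleration = peak_annual_revenue / 365 * days_accelerated
print(f"${value_of_acceleration:,.0f}")  # roughly a quarter-billion dollars
```

Even this crude sketch shows why a per-study license fee in the tens or hundreds of thousands of dollars bears no obvious relationship to the value a vendor believes it delivers.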

 

As much as vendors salivate at such “value propositions”, rare indeed are the instances where tools have been rewarded by the hands which used them. It doesn’t happen. And yet, we know the software is worth more than the CD it has been burned onto.

 

A Possible Response

Sponsors have at least two ways to contribute to the resolution of these dilemmas in a positive manner. One is relatively quick and simple, and ultimately limited, but still an advance: sponsors can impose some normalization on this pricing, somewhat artificially, by developing and applying basic “bid grids” and similar templates, especially when purchasing software through service-based models. This is directly akin to CRO bid processing, and should allow sponsors to make the best use of the experience of their purchasing departments. More than that, it will enable clinical development staff to structure the pricing evaluation component of vendor selection into something closer to an apples-to-apples comparison, and help them understand where (vendor) costs come from.
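A bid grid need not be elaborate. A minimal sketch, with entirely hypothetical vendors, line items and dollar figures, is simply a fixed decomposition that every quote must be forced into before totals are compared:

```python
# A minimal "bid grid" sketch: every vendor quote, however it was
# originally bundled, is decomposed into the same line items.
# All vendor names and figures are hypothetical.

LINE_ITEMS = ["license", "study_build", "hosting", "training", "support"]

bids = {
    "Vendor A": {"license": 150_000, "study_build": 40_000,
                 "hosting": 25_000, "training": 10_000, "support": 30_000},
    # Vendor B quotes a bundled "per-study" price; the grid forces it
    # to be broken out into the same categories.
    "Vendor B": {"license": 0, "study_build": 120_000,
                 "hosting": 45_000, "training": 5_000, "support": 60_000},
}

def normalize(bid: dict) -> dict:
    """Return the bid with every line item present (missing items = 0)."""
    return {item: bid.get(item, 0) for item in LINE_ITEMS}

def grid(bids: dict) -> str:
    """Render the bids as a simple comparison grid with totals."""
    rows = [f"{'Item':<12}" + "".join(f"{name:>12}" for name in bids)]
    for item in LINE_ITEMS:
        rows.append(f"{item:<12}" +
                    "".join(f"{normalize(b)[item]:>12,}" for b in bids.values()))
    rows.append(f"{'TOTAL':<12}" +
                "".join(f"{sum(normalize(b).values()):>12,}" for b in bids.values()))
    return "\n".join(rows)

print(grid(bids))
```

The point is not the arithmetic but the discipline: once every bid occupies the same rows, the purchasing department can apply its CRO-bid experience, and the wild distribution of pricing stops hijacking the selection process.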

 

But more importantly, sponsors are woefully behind in understanding how much their day-to-day operations truly cost, and the components of that cost, without which they cannot begin to judge the value of a tool which replaces, displaces or enhances those components. This is the heart of the problem on the biopharma side of the conundrum, and has always been a hindrance to improving clinical development operations of all sorts.
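The cost-component awareness argued for here can be sketched very simply. Every category and figure below is a hypothetical assumption, meant only to show the shape of the exercise:

```python
# Sketch of a cost-component baseline: decompose what a process costs
# today, then estimate the same components with a tool in place.
# All categories and figures are hypothetical assumptions.

baseline = {  # annual cost of a paper-based data flow (assumed, $)
    "data_entry_staff":  400_000,
    "query_resolution":  250_000,
    "paper_logistics":   120_000,
    "monitoring_travel": 500_000,
}

with_tool = {  # the same components after a (hypothetical) EDC rollout ($)
    "data_entry_staff":   80_000,   # sites enter data directly
    "query_resolution":  100_000,   # edit checks fire at time of entry
    "paper_logistics":     5_000,
    "monitoring_travel": 350_000,   # remote review reduces site visits
}

annual_savings = sum(baseline.values()) - sum(with_tool.values())
print(f"Annual component savings: ${annual_savings:,}")
```

Only against such a baseline, however rough, can a sponsor judge whether a given license fee is an overpayment or a bargain.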

 

Some Truths

Ultimately, there are two fundamental truths in small-market pricing that apply in particular to clinical research software. First, the “correct” price for software must meet two requirements:

It must seem fair to the biopharma sponsor.

It must be able to keep the vendor in business.

Obviously, and painfully, to reach these twin goals requires a degree of openness, honesty, financial awareness and sophistication that is mostly missing in our industry.

At the other end of the continuum from the quantifiable to the qualitative, there is another approach to pricing value which may be, in the short term, the most compelling: the value of necessity. In describing the role that EDC played in processing a huge amount of data from a large number of patients in a pivotal Phase III trial in a highly competitive product market, one sponsor memorably said that while using this vendor was enormously painful in many ways (and presumably not worth what they were paid), the sponsor could not have completed this trial and the subsequent submission any other way. The tool had been essential to meeting the business need.

This is value, in its purest form. If vendors can deliver this kind of benefit without the pain, and if sponsors can learn enough about their costs to understand what the tool did for them financially, the price of value will be clear to all, and willingly paid.

It’s baseball season and time for baseball metaphors. As we in clinical development go into the technology game these days, we need all of our players on the team. Too often, our lineup is limited to data management and information technology staff. We need clinical staff to “step up to the plate” and help win this essential, serious game.

 

The better clinical IT applications become (and they are getting better and better), the more they directly impact the daily work of clinical staff, as indeed they are supposed to do. By clinical staff, I mean study managers, CRAs, project managers, medical monitors and advisors, and so on. Much of the clinical IT universe (clinical trials management systems – CTMS; electronic data capture – EDC; adverse event systems – AES; electronic patient-reported outcomes – ePRO; even clinical data warehousing) has been built to be used by and for the benefit of clinical staff. Increasingly, in fact, data managers are relatively marginal users. And yet many sponsors still keep clinical staff on the sidelines, or ask them onto the team as an afterthought, when acquiring, specifying, or implementing clinical IT applications. Indeed many of these projects remain the province of sponsors’ fulltime IT staff, who are even further removed from the work of clinical development.

 

It is by no means always the fault of IT or data management that clinical is an afterthought. At many sponsors, clinical staff want the benefits without joining in the hard work of making IT successful for them. They will whine but they won’t work. Is that too harsh? If you are a clinical professional, how often have you begged off of a clinical IT project team? Were you too busy, understaffed, couldn’t afford to focus on this “peripheral” task? Did it strike you as too “technical”? After all, that’s what the techies are here for – the data managers and the IT folks, right? But no one looks after your interests like yourself; no one knows what you know about your work like yourself; no one can represent the investigators and trial subjects like you can.

 

Play Your Position

How should clinical staff be contributing to clinical IT projects? The first and most meaningful way is to share in the governance of the project itself. Do not let it be run exclusively by data management or IT. Indeed, it is not inconceivable that clinical could run a project acquiring EDC or a CTMS. Taking governance means that your end-user needs will be met with full attention, rather than taking a backseat to the back-end (data management, statistics, executive management). It means that you can help create a timetable which is meaningful to your clinical development plan. And it means that you can significantly alter the prioritization of features and functions.

 

But with governance participation comes responsibility – not only to show up, on time, but to know how to play this technical game. Participants from clinical should be selected for a proclivity or interest in technical matters. Even with the interest, they will need to learn about technology in some detail – not just about different pieces of software, but about technology platforms (.Net, Java, XML, etc.), basic building-block tools (like brand-name reporting tools), and the marketplace (which vendors are being used widely, and why; what are the risks and benefits of innovation, etc.). What results is a dynamic learning relationship between clinical and your more technically inclined staff, in which both “sides” fully contribute.

 

Next comes full participation in the specification of the technology being acquired. And not in the manner in which clinical usually helps out (looking at a sample screen, telling the techies what your favorite report would include). Clinical needs to really draw the vision for how technology is to be used in clinical development, and to understand the potential benefits, risks, costs and burdens. Clinical is usually in a much better position to “think outside the box” on how information technology can help. But this vision must be grounded in some realities of what current and near-term technologies can do. For instance, if your vision of technology in clinical development is grounded entirely in harvesting data from investigator site electronic health records, well let’s just say you’ll have a long wait. But one need not crawl out to the bleeding edge to have a vision. In fact most sponsors do not begin to benefit fully from the technologies they already own. This is where clinical staff need to step up and learn the possibilities, so that even the current investments are properly profitable.

 

As with the specification phase, clinical staff need to be active participants in vendor research, vendor selection (such as participating in reference checks or usability testing), and in the design and execution of successful software implementation. This latter step is of course quite significant. It means taking a leadership role in process re-design, user acceptance testing (UAT), training, and enterprise communication. Throughout these steps, clinical not only represents the interests of internal staff such as study managers, project managers, medical monitors and pharmacovigilance staff, but also clinical is the best – perhaps only – representative of the needs and perspectives of those not in the home office: the regional monitors, the investigative site and trial subjects themselves.

 

Throughout, clinical has to commit to this participation. You have to commit a part of your brain, a part of your calendar, a part of your budget. Without a consistency of commitment, from top to bottom in the clinical hierarchy, your contribution will be muted and the enterprise will suffer.

 

Share the Victory

What’s in it for clinical? Why invest precious time in learning and specifying things which we have technical staff around to do for us? The answer is because the benefits of information technology to clinical development can be so profound, and to date have not been realized, in part because of clinical’s general passivity. If you step up to the plate, by learning how technology can change the way we think about clinical development design, you can share in the victories brought by:

 

– Compressing the “white space” (the calendar time) between individual trials

 

– Reducing the number of trials, and altering fundamental trial design, through use of interim analyses

 

– Meeting the challenge (and exploiting the possibilities) of measuring patient-reported outcomes

 

– Really knowing, in real time, how a study or a development program is going

 

– Reducing the workload required to obtain quality safety data and timely reporting

 

– …and much more.

IT and data management can try to win these games while clinical sits on the bench; the probability and size of your victories will be so much improved if clinical fully joins the team.

If pharmaceutical companies have a special Harry Potter-like Defense Against the Dark Arts class for their management team, one of the first techniques they must be learning is the Culture Defense. When confronted with evidence of their reluctance to change, they are apparently taught to point their wands out in front of them and say, “it ain’t me babe, it’s the culture here”. This turns out to be a marvelous, widely applicable spell – the easiest way out of an uncomfortable situation. There’s one problem: we are the culture.

 

We can’t all be the rebels, can we? If so, how would the “culture” ever form with beliefs different from our own? To claim that company culture is the reason technology innovation fails to take root is to deny your own place in the company where you work. Culture doesn’t kill technology, people do.

 

This common weakness of corporate organizations is particularly obstructive to the introduction of information technology because technology generates so much upheaval, especially in areas of clinical development still untouched, or merely grazed, by the productive use of software. Often standing in the way of that productivity is the Culture Defense.

 

Let’s look at the following examples of flawed technology employment where “culture” is often blamed as the cause of failure, and let’s ask ourselves if there might be other reasons lurking.

 

 

The Ubiquitous Culture Defense

We’re getting lousy data out of a great tool (an expensive enterprise CTMS for instance, or a state-of-the-art Adverse Event System). How does this happen? The old IT acronym “GIGO” (garbage in, garbage out) applies. But why is it happening? Why are staff waiting until the last minute to enter trial status information that is supposed to be feeding a highly accurate real-time CTMS? Or in the case of the AES, why are antique paper-based dataflows being maintained, while the AES is an alien, unwelcome layer imposed on top? Why is this allowed to happen?

 

The Culture Defense says, “well, we’re not used to reporting data in real-time”, or “we want to review and double-check the information before anyone sees it”. Or in the safety case, “we won’t risk the importance of safety surveillance to software which may not work”. It’s a culture thing. Really?

 

Another example: we throw resources (human and monetary) at database lock of our pivotal trial, with no restraint. At that moment there is nothing more important to the company. Our EDC tool, or indeed even our trustworthy old CDMS, might be able to contribute to this moment in timesaving ways, but we don’t take the time to learn how, or change our process accordingly. “It’s the culture.” Perhaps it is, but is that a good thing? Does the Culture Defense make all other options moot?

 

Yet another: “we don’t measure” here. It’s our culture not to measure, or if we do, we don’t do it consistently, or with rigor, or learn from the results. There’s technology to help us (and if we are using technology at all, we will need metrics to justify its expense some day), but it’s not in the culture. Is that culture or laziness? Culture or fear?

 

And another: despite EDC’s inherent purpose in catching errors at the site at time of entry, and drastically reducing data cleaning at the backend, many sponsors still insist on multiple layers of data review (data managers, in-house CRAs, medical reviewers, and back to the data managers again) just like in the paper days. “It’s our culture, we want to get it right.” Wrong.

 

More pervasively, it is common to see clinical development executives across the industry turn a blind eye to what really happens at the operational level. Executives announce an impassioned commitment to a particular process improvement initiative, often technology-enabled, and tiptoe out of the room – leaving the implementation to middle management. In many companies, without the executive watching your back, there is little incentive for middle managers to execute on the vision. Is this disconnect a culture problem, or a management problem?

 

It Is You, Babe

If individual study teams or even entire therapeutic areas don’t follow company-wide SOPs (but instead make up their own regulatory-compliant “standards”), is that culture, or the acts of individual managers? (It may be a justifiable action on the manager’s part, but that’s logic, not culture, at the source.)

 

If we put training of the new EDC tool in an e-learning environment, but I (and most of my fellow monitors) don’t really pay attention (we click through it and get “certified” but don’t remember much), can I blame my culture for being anti-training? I’m the one who chose not to pay attention.

 

If we rely on individuals’ cooperation in using technology appropriately, and people fail to do so, isn’t that a series of individual decisions? If I fail to fill out all the fields in a templated Site Visit Report in my CTMS, isn’t that my choice? The culture didn’t make me do it, I chose not to do it.

 

The damaging side-effects of the Culture Defense are legion: it enables us to drag our feet when it comes to changing the way we are used to working; it gives us permission to abdicate responsibility without penalty; it enables us to stand in the way of progress with impunity for whatever our personal motivation may be (we’re overworked, we’re jealous, we want our pet project to get all the attention, we’re afraid of learning too much software).

 

Psychologists will tell us that the most powerful realization victims of damaging habits can have is that they have a choice. The Culture Defense is designed to prevent choice, to prevent individual responsibility, even to preclude individual initiative. The Culture Defense is defeated by individuals choosing not to go along with the easy path, to see the executive direction as good for themselves as well as the company, to embrace change as the inevitable condition of modern business, to risk using tools that may reveal true operating conditions quicker because it is better to do so, to risk measuring because objective data about how we work can make us better workers.

 

We as individual pharmaceutical company staff, middle managers, and executives can choose to act in a manner that enables information technology to flourish. We can face down the Culture Defense so that our CTMSs actually produce accurate, actionable data on clinical trial program performance. So that our Adverse Event System is allowed to replace the fatally flawed reliance on paper with automation. So that our EDC tool can be authored quickly, and used by monitors to catch errors and underperforming sites quickly. So that our technology investments are worth the effort.

 

Walt Kelly, in his famous cartoon strip Pogo, memorably exclaimed, “we have met the enemy and he is us.” Culture isn’t the enemy, we are. Facing up to this fundamental truth will begin to enable technology innovation to meet our expectations.

What does this scene sound like? Thousands of pages being faxed from country to country. Papers being printed on multi-part forms and signed in ink by the boss. Pleading with programmers to prepare a basic report from your database. Dozens of people doing what a handful of people could do. Key information about where things are stored only in someone’s head. Everyone checking and re-checking each other’s work.

 

Does it sound like banking in the 1960’s? Does it sound like your typical office of the 1970’s? Does it sound like clinical data management in the 1980’s? Are you old enough to have experienced a workplace like this? Unfortunately, these examples are from the 21st century, and culled from a range of biopharmaceutical companies. Call it the last frontier: drug safety operations.

 

Let’s be clear up front, so the lawyers can all sit back down: we are not talking about a public safety issue in any way. What we are talking about is a question of internal business efficiency only: drug safety operations optimization. With all of the appropriate focus on operational cost reduction in pharma these days, one of the areas which too often remains untouched is drug safety – not because it doesn’t need to be more efficient, but because executives are afraid to go near it, for obvious reasons. And unless the safety executive is innovative enough to volunteer for process analysis and improvement, it will likely never be forced on them.

 

Scenes from the Wilderness

When we have looked at drug safety operations at different biopharmas, we are struck by the huge range in case load (i.e., number of adverse event cases processed per drug safety staff member) – varying literally by an order of magnitude from one company to the next: ten times the personnel to handle a similar case load. These variations are not explained by the variables one might guess – therapeutic area complexity, geographic diversity, staff preparedness, stage of development, or any other obvious factor. Instead, they are a direct result of inefficient case processing and other process wastefulness.

 

You can still find this kind of over-staffing in some other pharma departments, like CDM (clinical data management) and even monitoring, where the company culture dictates that a surfeit of human effort will protect against error. But this is a concept most industries have long since rejected, mostly because they could not afford to hold on to it.

 

Some safety managers will still cry that they need more staff, not less. But except in the most unusual circumstances (perhaps a budding biotech), this is the time to examine process efficiencies before signing those personnel requisitions.

 

Why does this happen? Inefficient processes, lots of paper, lots of checking each other’s work, and, frankly, a lack of pressure or will to work differently. Just one example: a reported adverse event consists of 3-5 pages of source documents. At one company, processing this information causes those few pages to balloon to 250, since every change to any piece of data requires a new printout of the whole case for the archive, no matter how trivial the modification. Moving, reviewing and storing this kind of paper load is obviously inefficient, especially when thousands of cases are processed each year.

 

Second-Guessing

Often we will see an atmosphere of “quality control” that borders on mistrust or job security: checking and re-checking each other’s work ’til the cows come home. This is a direct tie to a dependency on paper, but it also speaks to culture. Several companies have developed “quality” into a punishing, mistrustful exercise, where I tell you what to check and then I check later whether you checked it! Quality (and the concomitant protection of public health) is achievable without these multiple layers of cross-checking. This is something CDM has been addressing for years – all CDM departments have to some extent (or to a great extent) streamlined their quality control (i.e., discrepancy management) processes to reduce time, effort and resources. Thus they avoid what safety departments still experience: a physician who will only review a case off-line (on paper), marking up the paper copy with corrections to be made by clerical staff in the system, which in turn requires that the case be printed out again so the originating physician can verify that the system change matches the original mark-up!

 

We see lots of paper in places in drug safety where you would think paper is long gone – even though nearly every company has already bought safety software tools precisely to eliminate that paper! For instance, we see companies admirably learning how to do ICSR submissions via E2B (paperless, by definition) to regulatory authorities, but still find those exact same companies faxing thousands of the same cases from one country affiliate to another. Why are the advantages of E2B ignored, just because the recipient is not an agency? And what happens to those faxes? The data is entered again by hand into the receiving office’s (separate) software system, introducing new errors and starting a whole new cycle of quality control.

 

Finally, we also see safety-specific technology all around but we do not see the safety departments taking responsibility for using it well, the way data managers, for example, do in a CDM department using a CDMS or EDC. In safety, instead, we see departments that remain beholden to their IT departments, like all of us were in days of yore, waiting for simple reports to be programmed by programmers, even when the technology is simple enough to be used by safety staff themselves. Who is complicit in this arrangement? Is IT holding on to the feeling of being useful? Does drug safety simply not have the time and/or technical understanding to use the tools they bought? In the memorable words of one country safety officer, “I simply don’t trust my computer.” Charming, but intolerable in modern times.

 

What this all most resembles is CDM in the 1980’s. It is remarkable that drug safety can sometimes still exhibit these qualities. The fix for CDM was executive intolerance for the cost and delay which such behavior caused in clinical trials, and the increasing automation and professionalization of CDM itself. It brings to mind that perhaps the answer for drug safety optimization is similarly two-fold: an executive spotlight on the issue, and the creation of a “DSDM,” or Drug Safety Data Manager, role which, like the clinical data manager, serves as the interface between the user and the technology (between clinical operations and IT, in CDM’s case). As overstaffed as some drug safety departments are, moving selected staff into a DSDM role could help eliminate unnecessary positions through process streamlining.

 

Meanwhile, don’t look to the safety application vendors as the answer to process optimization. Historically they have not offered enough help, proactively, at reasonable cost, with the big picture in mind. These are, after all, cultural problems first, process problems second, and ultimately a matter of will. The tools the vendors sell are fine and have been for years; safety departments need to use them fully, and as enablers of process efficiency. This is the sponsor’s responsibility, not the software vendor’s.

 

And so, it comes back to a willingness to change. No sponsor wants to face a situation where external pressure forces a function to become more efficient. The best solution is always to anticipate areas for improvement and pursue them proactively. Assuming a sponsor can examine itself objectively, then basic process analysis skills, combined with safety domain expertise, should enable sponsors to eliminate these safety operations inefficiencies.

 

In the story of human progress, frontiers are reached, breached and conquered. We may lament the passing of some frontiers (the Amazon, the Arctic, restrooms without cellphone conversations), but drug safety inefficiency is not one to be mourned. Saddle up and tame this frontier so biopharma’s dollars can be used to the greatest good.