Management Consulting for Clinical Research

If you are not locking your database within 5 days after your “last subject is out,” then something is wrong.

 

It is well recognized in clinical research that succeeding or failing quickly is critical to the ability to bring new therapies to market. Many aspects of clinical research are beyond our control: how will the drug compete against other therapies in efficacy, how will it interact with the physiology of the subjects, and will it yield a favorable safety profile? These are the questions we hope to answer by conducting clinical trials, but we have no direct control over them until we get the data.

 

What we do have control over is how we collect and analyze the data, and more importantly, how quickly we are able to obtain that data to arrive at the critical and expensive decision of whether or not to proceed further. Over many years working in the biopharma research industry, I have noticed how much discussion is spent on getting that first subject into the trial. But many would argue that a heavier focus should be placed on getting the trial completed, so that you have the data you need to make critical development decisions. The database lock timepoint, then, is the key step in arriving at a development go/no-go decision.

 

I continue to be shocked when I ask at conferences and of clients, “How many days, on average, after last patient out, do you lock your database?” and the responses are overwhelmingly “weeks to months.” There are those few companies that are doing this within 5 days, and to those I say congratulations – you need not read any further. To the rest: if you are using electronic data capture (EDC) and you are not locking your database within 5 days of last subject out, then something is broken in your process. This applies to both outsourced and insourced models, and is to a large degree independent of the EDC software you are using. To be fair, some software solutions may make this process a little easier with various built-in tools, but most of you can achieve this goal with your current EDC technology.

So you should ask yourselves whether your procedures are preventing you from completing your trials in an expedited fashion; this is one of those components of clinical research over which you have full control, and there are many proven strategies and techniques for achieving a faster database lock process. Often, as in most process analysis and correction, the solution is very specific to your particular circumstances of talent, policies, politics, compliance and history. If you are interested in updating your processes to take full advantage of the technology you have purchased and implemented, please reach out to Waife & Associates at www.waife.com, or to me directly at shevel@waife.com. We have a proven track record over 25 years specializing in biopharma clinical development to address these types of issues. A 5-day database lock is a very real and achievable milestone, and if you are not achieving this performance in your Phase II and Phase III programs, you are leaving opportunities and money on the table.

Identifying possible “ends” is one thing; understanding their meaning to the user and the enterprise requires more thought.

“Begin with the end in mind” is one of those classic business phrases which is no less valuable for the number of times it is ignored. Clinical research sponsors are guilty of often fatal forgetfulness of this key concept when planning the development, implementation and use of new software applications or major organizational change.

Clinical research sponsors generally start an organizational change or a software acquisition not with the end in their mind, but with a needle in their back: a department is complaining that everybody else gets new tools except them; our competitors all changed their outsourcing model and so we should too; I met a salesperson on the airplane; the vendor just announced an upgrade and won’t support our version anymore; we just hired a new vice president and she prefers vendor “x” over vendor “y”. While some or all of these situations may justify change, they do not in themselves sufficiently define the “why” and “what”.

Starting Isn’t the Hard Part

Sponsors may also start from some business trigger which gives them the illusion that the end is in mind: we need to save headcount, so let’s use EDC (electronic data capture); we’re frustrated with having multiple overlapping and out-of-date investigator databases, so let’s buy a CTMS (clinical trial management system); we just acquired Teeny Biotech and don’t have anyone in-house in their therapeutic area; or our new translational medicine VP says we’re going to have a flood of pharmacogenomics data coming in, so let’s get one of those “data warehouses.”

What’s missing from these situations is the company’s consideration of the strategic benefit: what the daily operational impact will be, what the software’s users will have to change to use it properly, and overall what the benefit will be two years from now. What is the tie-in between the initial impetus – the needle in the back or the business trigger – and the actual output the change will provide? This disconnect is particularly critical in enterprise software projects or major business acquisitions because we all know that the cost in money, time, headcount and disruption will be high. The benefit therefore must be high as well, or the cost must be reduced to be in line with diminished (and realistic) results.

Analyzing a potential project’s end-user benefit compared to the initial impetus need not be fatally time-consuming, which is the usual reaction to the suggestion. But it can save a large amount of wasted time and money. We should recognize that it is very easy to fall into the disconnect trap. For instance, let’s consider the situation where clinical operations develops that frustration over the multiple investigator databases. The complaint is forwarded to the IT department (or worse, a naïf goes to a booth at a trade show), and the answer comes back: there is no “investigator database fixer” product out there, but there are these CTMS packages and boy, they do everything. Before you know it, you are installing a multi-million-dollar application over multiple years, you’ve doubled the amount of training everyone has to go through, and you have all this rich functionality that no one can or wants to use because it’s not relevant – neither to the original trigger nor to actual user circumstances.

I would suggest that even a good understanding of how the end user works, and what he or she needs, is not sufficient in today’s business environment. We have fewer and fewer in-house staff, we are narrowing our “distinctive competencies,” we have uncertain economic and reimbursement conditions, and we have unrelenting competitive pressures. All of this militates against expensive multi-year infrastructure projects unless we do more to predict and understand the future end-user business need. What are the future identity, purpose and constitution of our business, and therefore, what changes and tools do we need to get there?

Even for projects where the pain and the solution appear clearer and more pragmatic, we are usually missing a robust and detailed visualization of how a tool will be used, and without this, we will mis-configure and mis-spend our time and money. For instance, how does a shift to outsourcing change who the users are for a CTMS, document management system, EDC, and similar programs? How useful are e-tools if the “back end” of the workflow stays “paper-minded” in its policies and procedures, reflected in unchanged workflows, double-checks, and review practices?

And Vendors Too

The developers of software used in clinical research are equally guilty of forgetting the context of how customers use their tools. Vendors have a great opportunity to add significant value to their customers by helping sponsors see the possibilities that their tools open up, and by knowing the clinical research business as deeply and broadly as possible. This knowledge should translate into more focused and anticipatory designs, creating more powerful and efficient tools. Too often, however, vendors and CROs see educating their clients as a danger to future sales, and try to over-simplify change.

Typical software development, even the industry-specific kind we in clinical research usually encounter, tends to chase after customer-driven enhancement requests that are often shortsighted for all the reasons cited above (responding to the “needle in the back”). The result is needlessly complex software with features even the requesting sponsor may forget they wanted! More damaging than needless complexity is that the effort to chase enhancements takes money away from the literal “end” – the output, reporting and visualization of information which is all a tool is really good for.

This irony plagues each aspect of the research software universe. Vendors may see the whole gamut of functionality possible, but as professional engineers, they see it, and build it, linearly (they begin at the beginning and end with the end). As a consequence, they inevitably run out of time and money before they reach the output function (reporting). How many times do we hear vendors do their demos this way: they start with the very first point of data entry, move through to the point everyone is waiting for (getting something back for all that entry), and then they say, “well, there was no point in re-inventing a report writer, so use something standard, off-the-shelf.” It is the “data out” that matters in the actual business context, but to a software engineer it looks like a data processing problem, not a business use problem. If this were true, and off-the-shelf reporting were adequate, then so too would be off-the-shelf entry – so actually, let’s forget the whole thing. And yet there really is a utility for clinical research-specific software products, if built with the end in mind.

Today’s software vendors need a knowledgebase and a discipline not commonly found. The need for vendor domain knowledge is greater than ever, plus an understanding and vision of where their customers are going. Certainly sponsors have the bulk of the responsibility in teaching this. For the vendors, the discipline is in rejecting enhancements for enhancements’ sake and leading their customers towards being enabled to handle the future.

Is There an End?

Another way that clinical research sponsors get ahead of themselves is to assume that once the first wave of interest and urgency is sated, the project is done. This is hardly the case. Yes, processes may have been re-written, software configured, and newly re-organized staff trained in their altered jobs. But the work does not, and cannot, stop there. The second and third waves of change wash over the organization as the “lower priority” staff need to be oriented, and as the new processes need to be iterated to reflect actual experiences versus the original assumptions.

It sounds like continuous improvement, except that even at sponsors who have process improvement staff, those folks themselves are moving from project to project – working continuously, perhaps, but not necessarily improving. They too get bored with the first wave of the project (or run out of resources), and are not there to reconsider the impact and effectiveness of new work models or software applications. So in some senses there is no end, but rather a steady re-examination of purpose, needs and solutions.

“Begin with the end in mind” is certainly the start of a solution. Begin with an understanding of the end is probably more profound. Identifying possible “ends” is one thing; understanding their meaning to the user and the enterprise requires more thought, breadth and management than most sponsors or vendors are used to providing.

Centricity in Search of its Center
Ronald S. Waife, as printed in Clinical Researcher
“Any unnecessary delays in drug development are a disservice to our patients (and few delays are truly necessary).”
Jargon comes from three sources:
  • to quickly express a concept in shorthand that would otherwise take too many words;
  • to protect professional knowledge and impress the ignorant;
  • to serve as a placeholder, filling the air until real thought and knowledge can fill the gap.
Jargon is only justified if it comes from the first impulse, and let’s assume for the moment that “patient centricity” is meant to be a justifiable shorthand.
 
Patient centricity has a cloud of obfuscation hanging over it akin to other neologisms like “pre-boarding” and “post-marketing”. (How does one board a plane before one boards a plane? If we are in post-marketing, haven’t we stopped selling the product?) Are there drugs we are developing that are not for patients? Are there trials we are doing that don’t involve patients? If not, then shame on us. But perhaps what is meant is a matter of degree.
 
Jargon for Justification
Various players in clinical research are seizing on the patient centricity phrase to justify or promote concepts and products that, for the most part, have been with us for decades:
  • Patient centricity is the latest in a long line of frustrating attempts at justifying what should be obvious – gathering data on drug effects as close to the patient as possible. The original electronic patient diaries movement (constantly renamed as ePRO, eCOA, mHealth) continues to struggle for industry acceptance (inexplicably) and it is unlikely that a new turn of phrase will do the trick.
  • Patient centricity is a euphemism for another perpetual bugaboo – slow rates of subject enrollment in trials. Maybe this is a case of justifiable shorthand, if patient centricity means improving the practicality (from the subject’s viewpoint) of the trial protocol, or improved outreach to potential subjects, or more compelling reasons for trial participation.
  • Patient centricity is also being used, less admirably, as a way to express the chronic frustration sponsors have with their investigators, the implication being if we were only more patient-centric we could skip over those pesky investigators altogether. This is either a cynical method of selling new software or a shortcut around an important philosophical and scientific debate.
Patient centricity is supposed to mean we should care more about patients in protocol design, in data collection methods, in information sharing, and so on. Ok, sure. But some of this does not ring true: for instance, if we feel our investigators aren’t respecting, communicating, informing and sharing enough with the subjects in our trials, is this all the investigators’ fault? The history of clinical trial operations over the past 40 years has been the sponsors’ steady march away from the sites (not coincidentally simultaneous with the rise of CROs to do the work), and therefore away from the patients they see, so why are we surprised that there is a disconnect? 
We can’t ask investigators to be more connected to our subjects if we have disconnected from our investigators. Some of the efforts in the name of patient centricity seem to suggest we bypass those frustrating old-fashioned sites and get right to the patients. Why are we likely to do a better job centering on patients than we did centering on sites? In fact, sites have the best opportunity, knowledge and training to connect with patients. If we have failed the sites, let’s fix that and not just run after another elusive technology-plus-jargon fix.
 
The Center of Centricity
There is great value in patient centricity if we can find the true meaning of the term. If we have drifted from patient focus to profit focus, we need to correct that. If we have forgotten why we do clinical research, we must remember. If we have ignored the patient in pursuit of elegant statistics, or in fear of regulatory unpredictability, we have to fix this.
 
I propose a jargon-free understanding of patient centricity. It probably doesn’t mean you need new software or need to hire a “chief centricity officer”. It means re-examining, or even re-thinking, how we do clinical research to better serve our patients:
  • Any unnecessary delays in drug development are a disservice to our patients (and few delays are truly necessary). Clinical research remains widely inefficient at all sponsor companies and supporting CROs. Our tolerance for this inefficiency over decades remains baffling, and inexcusable. Centering on our patients includes eliminating the actions and activities that delay our drugs getting to market.
  • We should be more responsive to the needs of investigative sites, and more proactive in improving their performance in recruitment and quality data, or more thorough in questioning their continued participation. We should do the same in how we handle the data we receive and what we do with it. Some companies are beginning to realize the richness of already-collected data in their possession, which both places more value on the contribution subjects have made by being the source of the data, and provides more knowledge to our companies and to medicine.
  • Everyone seems to recognize that study protocols are too often onerous for our patient volunteers, in time and travel requirements, in the number of procedures, and in paperwork. Like most improvements in study conduct, the realization of the need for simpler and more respectful protocols is trickling through the industry very slowly, despite the ubiquitous lip service.
  • Overall, as I have previously written, we need to ratchet up our collective sense of urgency. This may be the most useful and sincere way for the industry to express patient centricity. If we all care more about accelerating the timeline from discovery to marketing, and act as soon as we can on the next step in the process, we will do more for the patients who are waiting for our innovations than any other bundle of trendy concepts.
 

Patient centricity should mean doing a better job for patients, and doing our job better. Let’s not let jargon drain the meaning out of language: focusing on patients, if done correctly, could not be more worthwhile.

Needham, MA, June 8, 2017 – The fourth annual Benjamin and Sholom Waife Memorial Scholarship in Scientific Journalism was awarded at Needham (Mass.) High School Class Day ceremonies this spring. The scholarship, created by Waife & Associates, Inc., supports collegiate studies toward a career in writing or journalism about science and medicine. This year’s recipient is Audrey Wey Pratt, selected by the faculty of Needham High School.

The Scholarship is in memory of Benjamin Waife (1895-1972) and his son Sholom O. Waife, MD (1919-2011). Waife & Associates, Inc. is based in Needham and was founded by Dr. Waife’s son, Ronald S. Waife. Benjamin Waife, writing under the pen name B. Z. Goldberg, was a newspaper editor and columnist for over fifty years for New York and Israeli newspapers. For much of his career he was managing editor of Der Tog, one of the two main Yiddish newspapers in New York City in the 20th century. He earned one of the first psychology PhDs from Columbia and wrote two books, in addition to his weekly column, which appeared in various newspapers until his death.

Dr. Sholom Waife continued the family writing heritage by combining it with his medical profession. He started one of the first in-service hospital CME Programs, at Philadelphia General Hospital, developed an award-winning series of textbooks while at Eli Lilly & Co. which were distributed to all US medical school graduates, wrote a column for the Physicians Bulletin, co-founded the American Journal of Clinical Nutrition, and was an early national officer of the American Medical Writers Association (AMWA).

Both Benjamin and Sholom Waife dedicated their professional lives to clarity in writing and using the written word to make complex subjects easier to comprehend. The Scholarship is intended to further these principles, in these times when science and medicine are increasingly affecting our daily lives, yet are moving further from common understanding.

Waife & Associates, Inc. awards this scholarship annually. The company provides management consulting services to biopharma organizations conducting clinical research.

If pharmaceutical companies have a special Harry Potter “Defense Against the Dark Arts” class for their management team, one of the first techniques they must be learning is the Culture Defense. When confronted with evidence of their reluctance to change, they are apparently taught to point their wands out in front of them and say, “It ain’t me, it’s the culture here.” This turns out to be a marvelous, widely applicable spell—the easiest way out of an uncomfortable situation. There’s one problem: we are the culture.

We can’t all be the rebels, can we? If we were, how would the “culture” ever form with beliefs different from our own? To claim that company culture is the reason that operational innovation fails to take root is to deny your own place in the company where you work. Culture doesn’t kill efficiency, people do.

This common weakness of corporate organizations is particularly obstructive to the introduction of information technology because technology generates so much upheaval, especially in areas of clinical development still untouched, or merely grazed, by the productive use of software. Often standing in the way of that productivity is the Culture Defense.

Let’s look at the following examples of flawed process improvement where culture is often blamed as the cause of failure, and let’s ask ourselves if there might be other reasons lurking.

The Ubiquitous Culture Defense

We’re getting lousy data out of a great tool (an expensive enterprise clinical trial management system [CTMS], for instance, or a state-of-the-art adverse event system). How does this happen? The old IT acronym, “GIGO” (garbage in, garbage out), applies. But why is it happening? Why are our staff waiting until the last minute to enter trial status information that is supposed to be feeding a highly accurate real-time CTMS? Or in the case of the adverse event system (AES), why are antique paper-based data flows being maintained, while the AES is an alien, unwelcome layer imposed on top? Why is this allowed to happen? The Culture Defense says, “Well, we’re not used to reporting data in real-time,” or “We want to review and double-check the information before anyone sees it.” Or in the safety case, “We won’t risk the importance of safety surveillance to software which may not work.” It’s a culture thing. Really?

Another example: A major process improvement project is organized into the ubiquitous “workstreams” and comes up with a flood of recommended changes. Several of the most important changes require re-organizing staff, and while the net headcount will stay the same, some people will probably not fit the new skills required. Impossible! Why? Because “we don’t (or can’t) fire people here – it’s our culture.”

And another example: We throw resources (human and monetary) at the database lock of our pivotal trial, with no restraint. At that moment, there is nothing more important to the company. If the data management processes are examined, however, you will likely find that the electronic data capture (EDC) tools you have used for years are being used sub-optimally and inefficiently. It’s the culture. Perhaps it is, but is that a good thing? Does the Culture Defense make all other options moot?

Yet another example: “We don’t measure here.” It’s our culture not to measure, or if we do, we don’t do it consistently, or with rigor, or learn from the results. There’s probably loads of data – indeed too much data – for you to measure from, but it’s not in the culture to act on this information. Is that culture or laziness or fear?

More pervasively, it is common to see clinical development executives across the industry turn a blind eye to what really happens at the operational level. Executives announce an impassioned commitment to a particular process improvement initiative, and tiptoe out of the room—leaving the implementation to middle management. In many companies, without the executive watching your back, there is little incentive for middle managers to execute on the vision. Is this disconnect a culture problem or a management problem?

 

It Is You, Babe

If individual study teams, or even entire therapeutic areas, don’t follow company-wide SOPs (but instead make up their own regulatory-compliant “standards”), is that culture or the acts of individual managers? (It may be a justifiable action on the manager’s part, but that’s logic, not culture, at the source.)

If we put training of the new CTMS tool in an e-learning environment (although most monitors won’t really pay attention and only click through it to get certified), can we blame our culture for being anti-training? It’s the individual who chose not to pay attention. If we rely on individuals’ cooperation in using new tools appropriately, and people fail to do so, isn’t that a series of individual decisions? If I fail to fill out all the fields in a template-based site visit report in my clinical trial management system (CTMS), isn’t that my choice? The culture didn’t make me do it, I chose not to do it.

The damaging side-effects of the Culture Defense are legion: it enables us to drag our feet when it comes to changing the way we are used to working; it gives us permission to abdicate responsibility without penalty; it enables us to stand in the way of progress with impunity for whatever our personal motivation may be (e.g., we’re overworked, we’re jealous, we want our pet project to get all the attention, we’re afraid of learning too many new process details).

Psychologists will tell us that the most powerful realization victims of damaging habits can have is that they have a choice to change. The Culture Defense is designed to prevent choice, to prevent individual responsibility, even to preclude individual initiative. The Culture Defense is defeated by individuals who choose not to go along with the easy path, to see the executive direction as good for themselves as well as the company, to embrace change as the inevitable condition of modern business, to risk getting information that may reveal true operating conditions quicker because it is better to do so, and to risk measuring because objective data about how we work can make us better workers.

We as individual pharmaceutical company staff, middle managers, and executives can choose to act in a manner that enables operational improvement to flourish. We can face down the Culture Defense so that our process redesigns are easily learned and pragmatic; so that our CTMS systems actually produce accurate, actionable data on clinical trial program performance; so that our CRO vendors are well managed; so that our technology investments are worth the effort to implement them; and so that our diverse and broadly skilled staff can be focused on productive work with urgency.

 

Walt Kelly, in his famous cartoon strip Pogo, memorably exclaimed, “We have met the enemy and he is us.” Culture isn’t the enemy, we are. Facing up to this fundamental truth will begin to enable operational innovation to meet our expectations.