Management Consulting for Clinical Research

Needham, MA, May 23, 2016 — The Sholom Aleichem Centenary Pen, a fountain pen commemorating the 100th anniversary of the famed Yiddish writer’s death, was awarded the Readers’ Choice Award from Pen World magazine. The award will be presented in early June in Las Vegas. The pen was designed by the writer’s great-grandson, Ronald S. Waife, and handmade by Urso Luxury S.A. in Italy. The pen is produced in a Limited Edition of 57 fountain pens and 57 rollerball pens, in sterling silver or vermeil over specially selected acrylic, with a mother of pearl cap. It is hand engraved with the writer’s distinctive signature. Urso’s distributor in the USA can be reached at luigigirotto@gmail.com.


Big Data Doesn’t Necessarily Mean Big Quality

The increasing volume of data collected and the corresponding analytical and logistical challenges around it are hot topics throughout the global clinical research enterprise, including our recent 28th Annual EuroMeeting. In this podcast, Detlef Nehrdich, Senior Associate for Waife and Associates in Germany, and co-lead of the EuroMeeting Theme on eHealth and Big Data, discusses the current and potential value of big data and eHealth with DIA Global Forum Deputy Editor Dr. Alberto Grignolo.

Why is Risk-Based Monitoring (RBM) falling short of expected gains in productivity? Implementing RBM processes is proving harder than most companies anticipated, although considering the history of adopting significant process changes in clinical research, this should not be surprising.


It has now been almost three years since the FDA released its guidance on a “Risk-Based Approach to Monitoring,” and although the concept has been discussed in industry for many years, ever since electronic data capture (EDC) adoption spread, the implementation of RBM into clinical operations is arguably not meeting its true potential. Since long before the guidance, industry has recognized that trying to discover every mishap or error in clinical trial data collection is enormously costly and time-consuming, versus focusing on the important issues and data in the development program. This is the point of the regulatory guidance. But despite this broad understanding and agreement, most companies are no further along in implementing a true risk-based monitoring environment. Why is this? Didn’t the agency give industry the green light to establish a focused approach? The answer lies in misplaced focus and a lack of effective processes designed to apply RBM to individual company situations.


Consensus vs Effectiveness

Let’s start with consensus. Consensus is not a bad thing; in fact, we all wish we could agree on everything, since that would make our lives much easier, less stressful, and certainly less confrontational. Of course this is unrealistic, and in life and business consensus becomes a matter of compromise. Compromise in and of itself would be fine as well, except when you start factoring in the number of interested parties that need to compromise. At that point the compromise becomes self-defeating, as it becomes less about the issue itself and more about horse-trading (he voted for my risk indicator, so I will vote for his).


This is not simply a behavioral annoyance. Take the example of the RBM-critical element of risk indicators. The consensus approach has driven the too-large number of risk indicators that have evolved out of industry initiatives. At last check, there were approximately 141 risk indicators identified within the TransCelerate risk factor library, of which around 65 were categorized as high value. These numbers are self-defeating and unworkable, no matter how valid any one RI may be for any one company, and no matter how many reviewers reviewed them. This result is repeated over and over again when individual companies are asked to perform the same exercise of identifying risk indicators. With this many indicators to pick from, the company culture subtly shifts back to the more common “identify anything that could be risky” approach, which is a useless and regressive behavior, undercutting the original point of RBM.


With too many indicators, the time and effort spent just analyzing and responding to those indicators will offset any targeted gains in efficiency or cost savings. So how can one address the consensus piece? There are a few common RIs that most people in the industry agree on with little debate, for example “# of AEs or SAEs”. There are probably 10-20 of these common indicators that are widely useful and can be measured objectively and analyzed with some level of statistical rigor. These indicators would be a good place to start.
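To illustrate what “measured objectively and analyzed with some level of statistical rigor” can mean in practice, here is a minimal sketch. The site identifiers, enrollment counts, SAE counts, and the 1.5-standard-deviation flagging rule are all invented placeholders, not recommendations:

```python
from statistics import mean, stdev

# Hypothetical per-site data: (site id, subjects enrolled, SAEs reported)
sites = [
    ("S01", 40, 2), ("S02", 35, 1), ("S03", 50, 12),
    ("S04", 45, 3), ("S05", 38, 2), ("S06", 42, 2),
]

# The risk indicator: SAE rate per enrolled subject at each site
rates = {site: saes / enrolled for site, enrolled, saes in sites}

mu, sigma = mean(rates.values()), stdev(rates.values())

# Flag sites whose rate deviates from the study mean by more than
# 1.5 standard deviations (an arbitrary placeholder cutoff)
flagged = [s for s, r in rates.items() if abs(r - mu) > 1.5 * sigma]
print(flagged)  # → ['S03']
```

The point of the sketch is only that an objective indicator yields a reproducible flag; which deviation rule to use, and on which indicator, is exactly the decision each company must make for itself.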


Another example where cross-industry initiatives fail is the implied imperative that commonality (“best practices”) must be better than whatever an individual company is doing. After this many years, company cultures are vastly different in their individual tolerance for risk. Each company should address their own RBM approach with some industry perspective, but focus on what they know to be the most important aspects of their data, rather than relying on other people to suggest to them what that data is. Often, companies benefit from organized third-party facilitated workshops that help company personnel navigate through the myriad of risk indicators to arrive at the select few they determine to be the most important, so as to target them for the initial implementation.


Getting the Right Data for RBM Design

Detail is the next item preventing successful implementation of RBM. When we discuss detail in this context, we are referring to how triggering decisions are made and what is analyzed to arrive at those decisions. After defining the risk indicators, we then have to measure them and decide the value or number at which the site or sites in question require additional scrutiny or action. The primary failures in this aspect of RBM are twofold: 1) subjectivity and 2) lack of historical context. These two items are intricately related to one another, but let’s address them individually first. The decision process for determining thresholds is often purely subjective. It starts with a group of people sitting in a room deciding the number or threshold at which a signal needs to be generated. There is often very little supporting data to validate these decisions, and the groups end up spending weeks and months debating the merits of their decisions and attempting to rationalize the numbers. Many of these thresholds will either never be triggered, or be triggered so often that they cease to be accurate reflections of where risk lies. It should also be noted that these measurements should not be evaluated individually, but assessed holistically, with the knowledge that some indicators may carry more weight than others when deciding what remedial action needs to be taken. While some subjectivity is unavoidable, it lends credence to the argument that RBM is not a “one size fits all” approach and thus cannot be templated.
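The idea that indicators should be assessed holistically, with some weighted more heavily than others, can be sketched as a simple composite score. The indicator names, weights, normalized values, and the 0.5 action threshold below are all hypothetical placeholders:

```python
# Hypothetical normalized indicator values for one site
# (0 = no concern, 1 = maximum concern); names are invented
indicators = {"sae_rate": 0.7, "query_rate": 0.3, "enrollment_pace": 0.1}

# Placeholder weights reflecting that some indicators carry more
# weight than others in the holistic assessment
weights = {"sae_rate": 3.0, "query_rate": 1.0, "enrollment_pace": 0.5}

# One weighted score for the site, rather than judging each indicator alone
score = sum(indicators[k] * weights[k] for k in indicators) / sum(weights.values())

ACTION_THRESHOLD = 0.5  # placeholder cutoff set by the governance group
needs_review = score > ACTION_THRESHOLD
print(round(score, 3), needs_review)  # → 0.544 True
```

Note that a site with one alarming indicator and several benign ones may or may not cross the line, depending entirely on the weights chosen, which is precisely why those choices deserve data-driven scrutiny rather than a show of hands.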


Some people will argue that the reason RBM has not gained the foreseen traction is that it is a process of trial and error, and that thresholds and risk indicators will be re-assessed after they are put into practice. This argument is credible, but the scope and scale of that trial and error can be limited significantly by statistical evaluation of historical data. The lack of historical context in RBM adoption is a key component of its current limitations and failures. Most companies have implemented their RBM programs as a “go forward strategy” and in doing so have left significant value on the table. Not all companies have a unified historical database of study data to draw on, but many do, and most could construct one from their archives if need be. Herein lies the true opportunity. Analyzing your chosen risk indicators against a robust historical database of your own company’s data will provide a much richer and more accurate measurement of critical risk indicators and meaningful thresholds. The data gleaned from history will either support or refute previous assumptions made about risk indicators and associated thresholds, and thereby significantly shorten the trial and error period. In addition, this historical data will provide companies with the sizable dataset needed to make informed decisions about additional risk indicators and thresholds. This cannot be achieved by looking at current data alone – there are just not enough datapoints to elicit a statistically robust result – and by the time that scale has been reached, the trial or trials will already be too far along for any RBM approach to have an impact.
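Deriving a threshold from historical data rather than from a meeting-room guess can be as simple as taking a high percentile of past experience. The query rates below are fabricated stand-ins for what would, in practice, come from a company’s own historical trial database:

```python
from statistics import quantiles

# Hypothetical query rates (queries per CRF page) observed across 40 sites
# in past studies; in practice this comes from the company's own archives
historical_rates = [0.8, 1.1, 0.9, 1.4, 1.0, 1.2, 0.7, 1.3, 1.1, 0.95] * 4

# Set the trigger at the 95th percentile of historical experience, so the
# threshold reflects observed data rather than pure subjectivity
threshold = quantiles(historical_rates, n=20)[-1]  # 95th percentile cut point

current_site_rate = 1.5  # hypothetical value from an ongoing trial
print(threshold, current_site_rate > threshold)
```

The same historical dataset can then be replayed against candidate thresholds to estimate how often each would have fired, shortening the trial-and-error period the skeptics describe.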


Adding Focus on Process

Finally we arrive at “lack of focus”. This term should not be misconstrued to insinuate that people are not treating RBM as a priority; quite the contrary, many days and months of resource time are being applied to RBM initiatives. The lack of focus in this case refers to where, and how, people are devoting their time in implementing RBM. As discussed previously, an enormous amount of effort is being expended defining risk indicators and thresholds. Unfortunately, not nearly enough effort is being spent on the process changes critical to a successful RBM implementation. A true RBM initiative for most companies involves a significant change in both mindset and practice, especially within clinical operations. Many companies have decided to turn to technology to solve this problem, along the lines of “we just plug the numbers into our software and when a threshold is reached, we visit the site or take some other remedial action”. Unfortunately the process is far more complicated: it involves looking at the data differently, and remotely. It involves the need for different expertise in roles that historically did not require such expertise. It requires a clear governance method and definitions around decisions about when, or when not, to visit a site. And yes, it also requires some technology, at the very least to document when risk indicators fired and how they were addressed. Furthermore, and most importantly, it requires that the organization as a whole buy into RBM as a strategy and embrace the message from the regulatory agencies to refrain from expending tremendous amounts of time and resource to check everything. All of these requirements, especially the last one, are not easily achieved given the entrenchment the industry has fostered in pursuing the unattainable goal of eliminating risk. It is ironic that we should take this approach when drug development is by nature fraught with risk.
This is where companies can benefit most from third-party facilitation, by knowledgeable people who are unencumbered by bias, alliances or other interdepartmental dynamics.


RBM can be a success and can add significant value to today’s companies, but it needs to be implemented differently than it has been over the last three years. Purchasing the next piece of fancy software off the shelf and inputting some risk factors is not the place to start or focus, and will not result in a successful implementation. RBM is predominantly a process and mindset change that will need some form of technological support. Without addressing the process implications, and by relying only on current rather than historical data, RBM will remain a phantom project in the biopharma world: large amounts of money and resources will have been spent, but after peeling away the layers we will find that we really are not doing things much differently than we were before we decided to take a risk-based approach to monitoring. Instead, we need to avoid consensus paralysis, exploit our own historical data trends, and tailor a process to embrace RBM.

W&A Staff will be presenting at major industry meetings in the next few months:

Ron Waife will be leading off a session at the DIA EuroMeeting, April 6th in Hamburg, Germany, on Enhancing Clinical Trials Efficacy: Operational Excellence and Continuous Improvement of Clinical Research Processes. He will be speaking on Pragmatic Approaches to Improving Productivity in Clinical Development.

On September 12th, Steve Shevel, Senior Associate, will be chairing a Panel at the SCDM Annual Meeting in San Diego, California entitled What Clinical Personnel Wish Data Managers Knew about Clinical Operations and Vice Versa.

We look forward to seeing you at these conferences.

“RACI has met the common fate of other time-worn jargon: it is now misused, misunderstood, and misleading.”

One of the operating assumptions widely used in biopharma process analysis is the “RACI” model. RACI stands for Responsible, Accountable, Consulted and Informed, referring to what role any particular person, job or department has in a particular project or process. The point of RACI is to provide a handy structure for teams or complex organizations to sort out, and document clearly, who is going to do what. But RACI has met the common fate of other time-worn jargon: it is now misused, misunderstood, and misleading.

Unless you are a sapphire-belt sixty-sigma facilitator, the RACI model has long outlived its usefulness. Flawed at birth, its failings are ever more manifest. And yet the RACI model lives on like Tang on the Space Station.

Perhaps you have never heard of the RACI model, in which case you have been spared. Each component of the model, in actual use today (not as it was originally conceived), is problematic. It is not enough to say, “well, people just aren’t using it correctly.” If the original definitions are forgotten or are no longer intuitive, then it’s the model, the language, that has to change. This is important for two reasons. First, the purpose of the RACI model is still a compelling notion – not everyone involved in a process has the same responsibilities (lower case “r”!). Second, the misuse of the individual R, A, C, I words contributes to the opposite effect: people misunderstanding their responsibilities, not least because the labels are made somehow holy by the jargon. And the cost of this mistake is that reams of SOPs and other control documentation are created using the RACI model, which are then auditable, and, more importantly, add complexity and time to the very processes we are trying to make more efficient.

Let’s look first at the “R” and the “A”. “R” is supposed to be, in the model, the person who does the work – a worker, a doer. Almost no one understands this correctly. The letter R is defined as standing for “Responsible,” but the word responsible, to almost everyone, means the person who is in charge, who is supposed to lead the work, whose head will roll if things go wrong. Sorry: in RACI, that is the definition of the “A” word, “Accountable”. Everyone we have ever worked with who has tried to use RACI, or has had RACI imposed on them, confuses the R and the A, to the point where deciding who is R and who is A becomes arbitrary, and therefore meaningless. Most importantly, things that are confusing, contradictory or illogical become unmemorable, and that makes the whole RACI effort a costly waste.

The “C” and the “I” are also flawed. Can there be any less sincere roles than “Consulted” and “Informed”? The time spent delineating the C and the I in the standard RACI workshop is not only time wasted, it is the opportunity for more misleading behavior. Too often, people labeled “C” are people who actually should be doing something but don’t want the responsibility. They are mollified with the C, as are those who don’t want to do anything but want to be able to express an opinion about what others are doing. Should we be officially codifying such wasteful and passive-aggressive behavior? And what about “Informed”? Unless you’re working at the NSA, is there anyone who shouldn’t be informed, and is there anyone who needs to be officially told they qualify for this obvious, passive position? The “C” and the “I” are simply a fancy justification for the phenomenon I call “everybody into the loop!”: if you aren’t actually responsible for anything, we don’t want you to feel left out, so we will keep you “informed,” and if you’re someone we’re afraid of, we will make sure you are “consulted”. This is much like everyone on the kids’ soccer team getting a trophy for “participation”. Maybe we could give everyone on the project team a trophy at the first meeting and then disinvite them for the rest of the project! I can see that my replacement for RACI should be “RDT” – Responsible, Doing something, gets a Trophy.

Because of these misunderstandings, the worst aspect of using RACI in real life is that no one is actually assigned to do any work! You can be the one who is blamed (RA), you can be the one who gets to kibbitz (CI), but no one is assigned to do anything specific, which was the original point.


There are only two roles worth delineating when designing clinical development processes: the person who governs the work, and the person who does the work. If you are re-defining or creating new processes in your research organization, there are many other techniques other than RACI that will clarify responsibilities. Stick to the two categories: Govern, and Do. If someone or some function falls into neither bucket, they get the trophy and can go home. Finance? See you at budget time. Quality Assurance? You have your own chance to Govern and Do in QA processes. IT? Make sure the intranet is working.

It’s very important to clarify roles in the multiplexed world of clinical development. The key is to clarify for the sake of simplicity, not for the sake of inclusion. Productivity over Ego; Govern and Do. Erase the RACI and get back to working smarter.