
Why is Risk Based Monitoring Falling Short?

Why is Risk Based Monitoring (RBM) falling short of expected gains in productivity? Implementing RBM processes is proving harder than most companies anticipated, although considering the history of adopting significant process changes in clinical research, this should not be surprising.

 

It has now been almost three years since the FDA released its guidance on a “Risk-Based Approach to Monitoring,” and the concept had been discussed in industry for many years before that, ever since electronic data capture (EDC) adoption spread. Yet the implementation of RBM in clinical operations is arguably not meeting its true potential. Since long before the guidance, industry has recognized that trying to discover every mishap or error in clinical trial data collection is enormously costly and time-consuming compared with focusing on the important issues and data in the development program. This is the point of the regulatory guidance. But despite this broad understanding and agreement, most companies are no further along in implementing a true risk-based monitoring environment. Why is this? Didn’t the agency give industry the green light to establish a focused approach? The answer lies in misplaced focus and a lack of effective processes designed to apply RBM to individual company situations.

 

Consensus vs Effectiveness

Let’s start with the consensus. Consensus is not a bad thing; in fact, we all wish we could agree on everything, since that would make our lives much easier, less stressful and certainly less confrontational. Of course this is unrealistic, and in life and business consensus becomes a matter of compromise. Compromise in and of itself would be fine as well, except when you start factoring in the number of interested parties that need to compromise. At that point the compromise becomes self-defeating, as it becomes less about the issue itself and more about horse-trading (he voted for my risk indicator, so I will vote for his).

 

This is not simply a behavioral annoyance. Let’s take the example of the RBM-critical element of risk indicators. The consensus approach has driven the unworkably large number of risk indicators that have evolved out of industry initiatives. At last check, there were approximately 141 risk indicators identified within the TransCelerate risk factor library, of which around 65 were categorized as high value. These numbers are self-defeating and unworkable, no matter how valid any one risk indicator (RI) may be for any one company, and no matter how many reviewers reviewed them. This result is repeated over and over again when individual companies are asked to perform the same exercise of identifying risk indicators. With this many indicators to pick from, or to define, the company culture subtly shifts back to the more common “identify anything that could be risky” approach, which is a useless and regressive behavior, undercutting the original point of RBM.

 

With too many indicators, the time and effort spent just analyzing and responding to them will offset any targeted gains in efficiency or cost savings. So how can one address the consensus piece? There are a few common RIs that most people in the industry agree on with little debate, for example “# of AEs or SAEs”. There are probably 10 to 20 of these common indicators that are widely useful, can be measured objectively, and can be analyzed with some level of statistical rigor. These indicators would be a good place to start, as sketched below.
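As a rough illustration only, assuming a subject-level extract with hypothetical column names, a few of these common, objectively measurable indicators could be summarized per site in a handful of lines:

```python
import pandas as pd

# Hypothetical subject-level extract; the column names are illustrative only.
subjects = pd.DataFrame({
    "site_id":        ["S01", "S01", "S02", "S02", "S03"],
    "n_aes":          [2, 0, 5, 4, 1],
    "n_saes":         [0, 0, 1, 2, 0],
    "n_open_queries": [3, 1, 8, 6, 2],
})

# A few widely agreed, objectively measurable indicators, summarized per site.
per_site = subjects.groupby("site_id").agg(
    enrolled=("n_aes", "size"),                          # subject (row) count at the site
    ae_per_subject=("n_aes", "mean"),                    # average AEs reported per subject
    sae_per_subject=("n_saes", "mean"),                  # average SAEs reported per subject
    open_queries_per_subject=("n_open_queries", "mean"), # average open queries per subject
)

print(per_site)
```

The point is not the code itself but the property these indicators share: they can be computed objectively from data every sponsor already collects.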

 

Another example of where cross-industry initiatives fail is the implied imperative that commonality (“best practices”) must be better than whatever an individual company is doing. After all these years, company cultures remain vastly different in their individual tolerance for risk. Each company should address its own RBM approach with some industry perspective, but focus on what it knows to be the most important aspects of its data, rather than relying on others to suggest what that data is. Often, companies benefit from organized, third-party-facilitated workshops that help company personnel navigate the myriad of risk indicators and arrive at the select few they determine to be the most important, so as to target them for the initial implementation.

 

Getting the Right Data for RBM Design

Detail is the next item preventing successful implementation of RBM. When we discuss detail in this context, we are referring to how triggering decisions are made and what is analyzed to arrive at those decisions. After defining the risk indicators, we then have to measure them and decide the value or threshold at which the site or sites in question require additional scrutiny or action. The primary failures in this aspect of RBM are twofold: 1) subjectivity and 2) lack of historical context. These two items are intricately related, but let’s address them individually first.

The decision process for determining thresholds is often purely subjective. It starts with a group of people sitting in a room deciding the number or threshold at which a signal needs to be generated. There is often very little supporting data to validate these decisions, and the groups end up spending weeks and months debating the merits of their choices and attempting to rationalize the numbers. Many of these thresholds will either never be triggered, or be triggered so often that they cease to be accurate reflections of where risk lies. It should also be noted that these measurements should not be evaluated individually, but assessed holistically, with the knowledge that some indicators may carry more weight than others when deciding what remedial action needs to be taken. While some subjectivity is unavoidable, it only lends credence to the argument that RBM is not a “one size fits all” approach and thus cannot be templated.
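To make the “holistic” point concrete, here is a minimal sketch, with entirely hypothetical weights and thresholds, of how indicators might be combined into a single weighted site score rather than being acted on one flag at a time:

```python
# Hypothetical weights and thresholds for illustration only; real values would be
# derived from each company's own data and its tolerance for risk.
weights = {
    "ae_per_subject": 0.5,
    "sae_per_subject": 1.0,              # safety signals weighted more heavily
    "open_queries_per_subject": 0.25,
}
thresholds = {
    "ae_per_subject": 3.0,
    "sae_per_subject": 0.5,
    "open_queries_per_subject": 5.0,
}

def site_risk_score(site_metrics: dict) -> float:
    """Sum the weights of all indicators whose values exceed their thresholds."""
    return sum(
        weights[name]
        for name, value in site_metrics.items()
        if name in thresholds and value > thresholds[name]
    )

# A site that breaches the SAE and query indicators, but not the AE indicator:
score = site_risk_score({"ae_per_subject": 2.1,
                         "sae_per_subject": 0.8,
                         "open_queries_per_subject": 6.0})
print(score)  # 1.25 -- one weighted score to prioritize, rather than three isolated flags
```

The weighting is exactly where company-specific judgment belongs, which is another reason a templated, one-size-fits-all scheme falls short.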

 

Some people will argue that the reason RBM has not gained the foreseen traction is that it is a process of trial and error, and that thresholds and risk indicators will be re-assessed after they are put into practice. This argument is credible, but the scope and scale of that trial and error can be limited significantly by statistical evaluation of historical data. The lack of historical context in RBM adoption is a key component of its current limitations and failures. Most companies have implemented their RBM programs as a “go forward” strategy and, in doing so, left significant value on the table. Not all companies have a unified historical database of study data to draw on, but many do, and most could construct one from their archives if need be. Herein lies the true opportunity. Analyzing your chosen risk indicators against a robust historical database of your own company’s data will provide a much richer and more accurate measurement of critical risk indicators and meaningful thresholds. The data gleaned from history will either support or refute previous assumptions made about risk indicators and associated thresholds, and thereby significantly shorten the trial-and-error period. In addition, this historical data will provide companies with the sizable dataset needed to make informed decisions about additional risk indicators and thresholds. This cannot be achieved by looking at current data alone; there are simply not enough data points to elicit a statistically robust result, and by the time that scale has been reached, the trial or trials will already be too far along for any RBM approach to have an impact.
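As a sketch of what that statistical grounding might look like, assuming a pooled site-level extract from completed studies (the file and column names here are hypothetical), thresholds can be anchored to the empirical distribution of each indicator rather than to opinions in a meeting room:

```python
import pandas as pd

# Assumed: a unified, site-level extract pooled from the company's historical study archive.
historical = pd.read_csv("historical_site_metrics.csv")

indicators = ["ae_per_subject", "sae_per_subject", "open_queries_per_subject"]

# Anchor each threshold to the 90th percentile of the indicator's historical
# distribution: a site exceeding it sits in the top 10% of past experience.
derived_thresholds = historical[indicators].quantile(0.90)

print(derived_thresholds)
```

The chosen percentile is only a starting point to be tuned as experience accumulates, but it replaces an opinion with a measurement and shortens the trial-and-error period accordingly.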

 

Adding Focus on Process

Finally we arrive at “lack of focus”. This term should not be misconstrued to insinuate that people are not treating RBM as a priority. Quite the contrary: many days and months of resource time are being applied to RBM initiatives. The lack of focus here refers to where, and how, people are devoting their time when implementing RBM. As discussed previously, an enormous amount of effort is being expended defining risk indicators and thresholds. Unfortunately, not nearly enough effort is being spent on the process changes critical to a successful RBM implementation.

A true RBM initiative for most companies involves a significant change in both mindset and practice, especially within clinical operations. Many companies have decided to turn to technology to solve this problem, along the lines of “we just plug the numbers into our software and when a threshold is reached, we visit the site or take some other remedial action”. Unfortunately the process is far more complicated: it involves looking at the data differently, and remotely. It involves the need for different expertise in roles that historically did not require such expertise. It requires a clear governance method and definitions around decisions about when, or when not, to visit a site. And yes, it also requires some technology, at the very least to document when risk indicators fired and how they were addressed. Most importantly, it requires that the organization as a whole buy into RBM as a strategy and embrace the message from the regulatory agencies to refrain from expending tremendous amounts of time and resources to check everything. All of these requirements, especially the last one, are not easily achieved given the entrenchment the industry has fostered in pursuing the unattainable goal of eliminating risk. It is ironic that we should take this approach when drug development is by nature fraught with risk. This is where companies can benefit most from third-party facilitation, by knowledgeable people who are unencumbered by bias, alliances or other interdepartmental dynamics.

 

RBM can be a success and can add significant value to today’s companies, but it needs to be implemented differently than it has been over the last three years. Purchasing the next piece of fancy software off the shelf and inputting some risk factors is not the place to start or focus, and will not result in a successful implementation. RBM is predominantly a process and mindset change that will need some form of technological support. Without addressing the process implications, and without looking beyond current data to our own history, RBM will remain a phantom project in the biopharma world: large amounts of money and resources will have been spent, but after peeling away the layers we will find that we are not doing things much differently than before we decided to take a risk-based approach to monitoring. Instead, we need to avoid consensus paralysis, exploit our own historical data trends, and tailor a process to embrace RBM.
