“IRT needs to be considered as essential clinical research technology that is procured and implemented on an enterprise basis”
Today’s question is: do we learn from the past? In clinical research process terms, the learning seems very slow. In particular, I am thinking about information technology adoption, and today’s situation regarding adoption of IRT (interactive response technology, i.e., what we used to call IVR). Using technology to facilitate subject randomization and drug supply accountability and distribution has become an essential element of clinical operations. As more and more regulatory and sponsor attention is focused on this area, it is surprising that IRT is generally approached with two key old-fashioned attributes: it is mostly still arranged for by individual study teams (i.e., clinical personnel), and it is still mostly delivered under the complete control of outside vendors. Both of these characteristics are antique in the twenty-first century world of clinical research informatics, of which IRT should be a part.
Briefly put, IRT needs to be considered as essential clinical research technology that is procured and implemented on an enterprise basis, just like electronic data capture (EDC) and clinical trial management systems (CTMS). The history of these two earlier technology adoptions teaches us that sponsors suffered (and suffer still) from poor strategies, low process comprehension, and underestimated technology implementation efforts. IRT implementations in 2013 need to benefit from these experiences. Learning from the past, we need to get it right the third time.
Not Again!
To a remarkable degree, we can see the same mistakes being made with IRT as were made with EDC and CTMS. We may observe that there is an inevitable arc to technology maturation, but it would be unwise to take comfort in that. The pace of quality and compliance imperatives is much faster now than in the ’90s, and we should not be satisfied to watch IRT mosey along.
EDC and CTMS also started as technologies selected by individual study teams, with different vendors and processes used from group to group and year to year. Being new to the technology, sponsors accepted what the software vendors told them about how best to use it, and willingly paid the vendors to do the then-mysterious steps of design, configuration and programming necessary to get the tools to work for them. Sponsors were at the mercy of vendor quality, vendor schedules, and vendor opacity. Sponsors had to adapt their processes to the vendor, and had to trust that whatever training and help desk services the vendor offered would be sufficient for their sites and staff. With so little internal focus, the fact that these technologies required internal change management was simply not recognized or prioritized.
All of these factors created considerable inefficiencies in EDC and CTMS adoption, created widespread user confusion and frustration, and undermined the benefits that were supposed to be derived. Sponsors generally have fixed these problems now, after many years of hard lessons learned. Thus it is sad and surprising that the above paragraph can be applied to IRT use today. Why is it important to treat IRT as an enterprise tool and apply comparable rigor to it? The answer should be obvious to all, and yet the dilemma is that randomization and drug supply optimization have been sequestered on the clinical side of the fence, away from the data folks who traditionally handled EDC and from the informatics folks off to the side. As with all clinical research technology adoption, it is only the occurrence of compliance audit issues that has triggered even the recognition, much less the action, that IRT perhaps should be addressed in a more organized fashion.
Key Points of IRT Failure
The risks of IRT failure should be apparent: one line off in a randomization table can trash the study data. Errors can lead to unintentional unblinding. Treating randomization and drug supply issues simultaneously (with the same technologies, vendors and internal staff) can hopelessly muddy the correct processes. And as with many clinical research technology issues, confusion or disagreement about internal governance and responsibility prevents a timely solution.
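The fragility of a randomization table can be made concrete with a toy sketch. The following is illustrative only (the permuted-block scheme, block size, arm labels, and seed are assumptions, not anything from a real IRT system): it builds a simple two-arm table, then shows how dropping a single line shifts every subsequent subject onto a potentially wrong assignment.

```python
import random

# Toy permuted-block randomization table (blocks of 4, arms "A"/"B").
# Real IRT systems generate and validate such tables under strict
# controls; this sketch only demonstrates how brittle the file is.
def make_table(n_blocks, block_size=4, seed=42):
    rng = random.Random(seed)
    table = []
    for _ in range(n_blocks):
        # Each block is balanced: half A, half B, shuffled within the block.
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)
        table.extend(block)
    return table

table = make_table(n_blocks=25)  # 100 subject assignments

# Simulate one line being lost in transfer: every later subject is
# now read against the wrong row of the table.
shifted = table[1:]
mismatches = sum(1 for a, b in zip(table, shifted) if a != b)
print(f"{mismatches} of {len(shifted)} later assignments would change")
```

Roughly half the downstream assignments flip arms after the shift, which is exactly the kind of silent corruption that can trash study data or unblind treatment.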
Unfortunately, for the most part, IRT technology vendors have presented IRT to their clients in an overly technical fashion, which has confused clinical personnel’s understanding of what configuration settings mean and what the system will do. The older vendors also have a deep vested interest in labor-intensive, opaque processes. This has compounded the lack of knowledge at many sponsors about the inner workings of IRT, what it is capable of, and what it should not be used for. The danger is that misunderstandings on both sides of the vendor/sponsor aisle are only realized downstream, when people actually begin using the system.
Factors that have contributed to this are as follows:
§ Extremely long and overly complex system specifications that are more technical in nature than user-based.
§ “Scope creep” on the utilization of IRT for data capture that would be more appropriately handled in other systems such as EDC.
§ Not having a clear picture of the intended final product until just before first patient in (lack of visualization during the development process).
§ Treating each new trial as a custom build (usually caused by out-of-date technology and financially beneficial to vendors).
§ Lack of centralized expertise at the sponsor that can speak to all of the primary components of IRT: medication dispensation and management; randomization and stratification; and unblinding.
§ Overly complex protocols which are extremely difficult to implement at the operational level.
Getting It Right
Each of these points has a clear solution. It starts with clear delineation of responsibility among the various departments involved and a singular focus of governance. Sponsors should consider IRT as another enterprise research technology and select it, design processes, and manage the change accordingly. Vendors have to recognize a new, leaner model of providing service or face being iced out by newer and more nimble competitors with superior technology. Sponsors must consider the impact of their protocol designs on the IRT arena, as they are starting to do with EDC or eCOA. There are technical and tactical improvements that can strengthen the IRT process once better roles and tools are in place. All of this must start with a much improved understanding of IRT among clinical staff.
What are the business consequences of ignoring how IRT is used in our companies today? They will likely include embarrassing audit findings and trial delays. What are the personal consequences if we screw up again? The baseball metaphor would be “three strikes and you’re out.” Much as I love them, baseball metaphors don’t always apply to business, and sadly, since no one in clinical research ever seems to be held accountable for process failure, it is likely that no one will lose their job over an inefficient IRT implementation. Let’s think positively: another saying is “third time’s a charm.” We can only hope so.