Finding Efficiency in Clinical Research

If pharmaceutical companies have a special Harry Potter “Defense Against the Dark Arts” class for their management teams, one of the first techniques they must be learning is the Culture Defense. When confronted with evidence of their reluctance to change, they are apparently taught to point their wands out in front of them and say, “It ain’t me, it’s the culture here.” This turns out to be a marvelous, widely applicable spell—the easiest way out of an uncomfortable situation. There’s one problem: we are the culture.

We can’t all be the rebels, can we? If so, how would the “culture” ever form with beliefs different from our own? To claim that company culture is the reason that operational innovation fails to take root is to deny your own place in the company where you work. Culture doesn’t kill efficiency, people do.

This common weakness of corporate organizations is particularly obstructive to the introduction of information technology because technology generates so much upheaval, especially in areas of clinical development still untouched, or merely grazed, by the productive use of software. Often standing in the way of that productivity is the Culture Defense.

Let’s look at the following examples of flawed process improvement where culture is often blamed as the cause of failure, and let’s ask ourselves if there might be other reasons lurking.

The Ubiquitous Culture Defense

We’re getting lousy data out of a great tool (an expensive enterprise clinical trial management system (CTMS), for instance, or a state-of-the-art adverse event system). How does this happen? The old IT acronym “GIGO” (garbage in, garbage out) applies. But why is it happening? Why are our staff waiting until the last minute to enter trial status information that is supposed to be feeding a highly accurate real-time CTMS? Or, in the case of the adverse event system (AES), why are antique paper-based data flows being maintained, while the AES sits on top as an alien, unwelcome layer? Why is this allowed to happen? The Culture Defense says, “Well, we’re not used to reporting data in real time,” or “We want to review and double-check the information before anyone sees it.” Or, in the safety case, “We won’t risk the importance of safety surveillance to software which may not work.” It’s a culture thing. Really?

Another example: A major process improvement project is organized into the ubiquitous “workstreams” and comes up with a flood of recommended changes. Several of the most important changes require re-organizing staff, and while the net headcount will stay the same, some people will probably not fit the new skills required. Impossible! Why? Because “we don’t (or can’t) fire people here – it’s our culture.”

And another example: We throw resources (human and monetary) at the database lock of our pivotal trial, with no restraint. At that moment, there is nothing more important to the company. If the data management processes are examined, however, you will likely find that the electronic data capture (EDC) tools you have used for years are being used sub-optimally and inefficiently. It’s the culture. Perhaps it is, but is that a good thing? Does the Culture Defense make all other options moot?

Yet another example: “We don’t measure here.” It’s our culture not to measure, or if we do, we don’t do it consistently, or with rigor, or learn from the results. There’s probably loads of data – indeed too much data – for you to measure from, but it’s not in the culture to act on this information. Is that culture or laziness or fear?

More pervasively, it is common to see clinical development executives across the industry turn a blind eye to what really happens at the operational level. Executives announce an impassioned commitment to a particular process improvement initiative, and tiptoe out of the room—leaving the implementation to middle management. In many companies, without the executive watching your back, there is little incentive for middle managers to execute on the vision. Is this disconnect a culture problem or a management problem?

 

It Is You, Babe

If individual study teams, or even entire therapeutic areas, don’t follow company-wide SOPs (but instead make up their own regulatory-compliant “standards”), is that culture or the acts of individual managers? (It may be a justifiable action on the manager’s part, but that’s logic, not culture, at the source.)

If we put training for the new CTMS tool in an e-learning environment (knowing that most monitors won’t really pay attention and will only click through it to get certified), can we blame our culture for being anti-training? It’s the individual who chose not to pay attention. If we rely on individuals’ cooperation in using new tools appropriately, and people fail to do so, isn’t that a series of individual decisions? If I fail to fill out all the fields in a template-based site visit report in my CTMS, isn’t that my choice? The culture didn’t make me do it; I chose not to do it.

The damaging side-effects of the Culture Defense are legion: it enables us to drag our feet when it comes to changing the way we are used to working; it gives us permission to abdicate responsibility without penalty; it lets us stand in the way of progress with impunity, whatever our personal motivation may be (e.g., we’re overworked, we’re jealous, we want our pet project to get all the attention, we’re afraid of learning too many new process details).

Psychologists will tell us that the most powerful realization victims of damaging habits can have is that they have a choice to change. The Culture Defense is designed to prevent choice, to prevent individual responsibility, even to preclude individual initiative. The Culture Defense is defeated by individuals who choose not to take the easy path, who see the executive direction as good for themselves as well as the company, who embrace change as the inevitable condition of modern business, who risk getting information sooner because it may reveal true operating conditions and it is better to know, and who risk measuring because objective data about how we work can make us better workers.

We as individual pharmaceutical company staff, middle managers, and executives can choose to act in a manner that enables operational improvement to flourish. We can face down the Culture Defense so that our process redesigns are easily learned and pragmatic; so that our CTMS systems actually produce accurate, actionable data on clinical trial program performance; so that our CRO vendors are well managed; so that our technology investments are worth the effort to implement them; and so that our diverse and broadly skilled staff can be focused on productive work with urgency.

 

Walt Kelly, in his famous cartoon strip Pogo, memorably exclaimed, “We have met the enemy and he is us.” Culture isn’t the enemy, we are. Facing up to this fundamental truth will begin to enable operational innovation to meet our expectations.

One of my friends in the biotech industry explained the business with this metaphor: working in biotech was like running full speed at a brick wall, and at the last possible second, the brick wall would disappear, only to be replaced by another brick wall farther ahead. Those brick walls, of course, represented critical milestones: another round of venture funding, a research result, a regulatory filing, and so on. It was the idea of running full speed that stayed with me. While common enough in small entrepreneurial companies, that sense of speed, focus and anxiety is rarely found in pharma, despite lip service to the contrary. Where is the sense of urgency in clinical development?

This is not to say that we do not all work hard. It is not to say we don’t care about the progress of our work. But it is to say that at most pharmaceutical companies, day to day, we lack the energy, direction, and discipline to conduct our operations urgently. And there are so many reasons to do so! Deadlines, stock options, competition, everyday failures, demanding bosses – not to mention the patients with few, unsatisfactory options waiting for our new therapies.

Some of us (people and companies) certainly may start with enthusiasm. But particularly at the clinical stage, so many factors build up to weigh us down: the myriad inherent delays, the disappointing scientific results, the bureaucracy of corporations and regulations, the unavoidable time intervals of research itself. This is all true, but that’s what we are here for – “that’s why they call it work.”

Most companies have institutionalized processes for complacency rather than for urgency. Some are so familiar that they have become standard behavior:

  • Slow contracting with CROs
  • Slow payments to vendors and investigators
  • Slow IT projects that are completed years after originally estimated
  • Slow adoption of already-approved process changes
  • Slow responses to poor performance metrics
  • Slow reporting of information requested by operational staff from report programmers
  • Slow protocol development
  • Slow document review and approval
  • Slow study start-up.

How many of these do you take for granted, and assume they are inevitable? But they are not inevitable; they are all human-driven! These are not immutable laws of nature; these activities are slow because we allow them to be! There is nothing standing in the way of speed except the lack of will, the lack of urgency.

Another anecdote: during the beginning of one of my first consulting assignments, I mentioned to my client (a junior vice president) that my invoice hadn’t been paid. He stood up, told me to wait there, and left his office for about 20 minutes. He came back with a paper check and handed it to me, apologizing. Ok, I was spoiled for life, but the point is, of course it’s possible to get a check cut, a report run, a contract signed, a meeting scheduled! It just takes a person to do it.

Not all delays – maybe not even most – are caused by perverse obstinacy. Think of the many things that fill our days instead of urgent work – emails, back-to-back and triple-scheduled meetings, teleconferences where you can’t hear what most of the people are saying. It’s all too easy for our days to slip away. What most of us are not doing is comparing our tasks, our to-do lists, our schedules, to the most important work list of all: what are the goals of my organization, my department, my project? How is what I am doing right now serving those goals? What does deciding this issue, or reading this email, have to do with moving closer to these goals?

Changing an environment from complacency to urgency requires some bravery and lots of leadership. Let’s look at some examples:

  • You’ve been in a team meeting all morning, getting close to the end of a long project which is supposed to develop a new set of evaluation criteria for your CROs. The leader asks if all are in agreement, and one key member says, “Maybe, but I have to check with my boss. We’ll get back to you.”
  • You’re working with a statistician on completing the FSR. It’s not due until next month, but you’re very nearly done and it would be advantageous to get it submitted early. You call her up for the third time that day, and find out she’s gone home, and will be on vacation for two weeks – something she neglected to tell you about.
  • You got approval to add someone to your staff at the beginning of the year, but HR still hasn’t sent you qualified resumes. When you pick someone to interview, it takes weeks to schedule her (or she has already found another job). When you try to take matters into your own hands, you are scolded for not following procedures.
  • You’ve finally scheduled a teleconference with a key opinion leader who is very hard to reach. You need the data manager in on the call but he is in another building on campus and says it’s too far to walk. You could tie him into the telecon, but he points out (correctly) that his accent is too thick to be well understood over the phone.
  • Marketing has been warning for years that you need real-world patient experience data to be competitive with your new allergy medication. But despite what your competitors are accomplishing, regulatory is still skeptical about approval. Instead of engaging with data management on the issue, they keep asking to see one more demo from one more vendor.

I am sure you can provide many examples from your own organization. What’s missing in each of these situations is someone to speak up – not to argue the issue but to remind all involved that we are holding up the improvements, the decision, the work. And that our work is urgent: we needed to hire that new person yesterday, we needed that new software yesterday, we needed that data yesterday, we needed those sites ready for FPI yesterday. And once having spoken up, we need to pursue the resolution to a quick closure, using whatever channels of authority are necessary. Equally essential is the commitment and vocal backing of executive leadership to make clear that urgency is an organizational value and priority.

To a healthcare team in your local Emergency Department, questions of priority, focus and speed are regularly and clearly answered. They know how to triage, how to follow emergency care protocols, how to choose and listen and analyze and solve with calm, professional urgency. We all need this same discipline – to triage our work lives and cut through the low priorities. And we need to encourage our colleagues to do the same, so we can bring our collective focus and precious energy to the meaningful work our companies and organizations are doing. It’s why we chose this profession; let’s do it with urgency.

“A particularly sad consequence of operational mediocrity is its impact on innovation.”

What happened to “Operational Excellence”? It is a beautiful phrase and a worthy goal. But as biopharmas and CROs start to dismantle or de-fund their Operational Excellence departments, we should ask what is happening. Do we no longer desire excellence? Do we think we have achieved it?

Operational Excellence arrived as a melodious piece of jargon because of disillusionment with what came before it: TQM first, then Process Improvement, then various branded methodologies (hungry-eight-omega, you know who you are). The “excellence” efforts have suffered fates similar to those of the earlier incarnations – underfunding, insincere management commitments, skepticism, fatigue and fundamental misunderstandings about what process improvement can and should be. Changing the branding does not change the results because of these key flaws, and they all contribute to a negative feedback loop: missed expectations lead to skepticism, poor techniques lead to change fatigue, underfunding prevents sustained effort, and insincere commitments make re-prioritizing all too easy.

Improving clinical development’s methods is still very much needed. The fundamental inefficiency of biopharma clinical development is driven by many external factors, true, but we don’t do well with the hand we are dealt. And we’ve seen that simply outsourcing the problem (by far the most common solution today) has only created variable-cost inefficiency instead of fixed-cost inefficiency.

The irrelevance of outsourcing to improving efficiency is another column in itself. Sponsors like CROs to use methods they recognize, no matter how suboptimal, and CROs know they will be paid regardless, so the system has no meaningful incentives for efficiency beyond competing billing rate charts. For all the many failings of biopharma outsourcing procurement departments, their failure to make an impact on overall industry methods may be the most damning.

Process improvement is ripe for action in all aspects of clinical development: protocol design, subject enrollment, data management, study team conduct, trial operations oversight, safety surveillance, use of information technology, investigative site communication and performance, monitoring and more. Your company probably has had multiple initiatives in most of these areas already, but meaningful results are rare and usually fleeting. We live in operational mediocrity instead of operational excellence. Nonetheless, we can no more give up on process change because it fails often than we can give up on early stage drug research because it fails often. Improving processes is still worthwhile; indeed it is an unavoidable imperative.

A particularly sad consequence of operational mediocrity is its impact on innovation. If we look at the current appealing innovations in clinical development – things like risk-based monitoring, fully electronic Trial Master Files, exploiting mHealth technologies, next-generation EDC, professionalized CRO oversight, and so on – each involves significant workflow and responsibility changes that must be as innovative as the technology used. The industry’s long experience with trying to exploit EDC and eCOA technologies has taught us this: underlying every innovation is a change in the way we work. Otherwise there is no innovation. And to make that change, organizations and internal thought leaders must understand and respect the nitty-gritty process changes which need to be defined, agreed to, tested and trained.

How do we steer back towards something approximating excellence? I have seen considerable success in what I call a “pragmatic” approach – one that takes on change step by step. It is grounded in a few essentials:

  • Committed and visible executive management
  • Traceability to key enterprise goals
  • Breaking the task into manageable, iterative pieces which, once achieved, serve as positive examples that break the skepticism cycle
  • Following each completed piece immediately with additional improvements, to maintain momentum and convince staff it is “real this time.”

One way of thinking of this is that it is akin to “evidence-based” medicine: as used here, it is an evidence-based method of improvement. EBM is a useful contrast to JBM (jargon-based methods), which are employed all too frequently instead. It can be generalized that jargon is the refuge of those with little else to offer.

The building blocks of pragmatic process improvement will certainly sound familiar (identifying key business drivers, interviewing stakeholders, designing processes in workshop settings, documenting and implementing the changes and monitoring first use). This is like saying that basketball is dribbling down the court and putting the ball in that hoop up there. The hard part is overcoming all the typical obstacles that can so easily undermine improvement projects, some of which we have alluded to.

Let’s take the ubiquitous “workshop” as an example. Everybody in pharma has been to many workshops. What are the characteristics of those you remember as being productive? The workshop needs to have a crystal-clear purpose achievable in the time allotted. It needs a domain-knowledgeable facilitator. It requires some organizing mechanical technique to make the discussion and results tangible. Most important is the selection of the participants: 18 people chosen for their political affiliation does not a workshop make. That is a better definition of a circus. Instead, a small group of stakeholders who can truly devote the necessary time to the task is essential. It all sounds familiar, but the subtlety of applying pragmatism to each step is the heart of the matter.

Underlying the success of pragmatic process improvement is the correct governance – who is in charge, who funds, who decides, who staffs, who is accountable? The answer is always a little different from company to company. Should the people who do the work being improved be responsible for improving it? (Seems logical and essential to me.) Can process improvement cost less by creating a central dedicated department (which risks separating domain knowledge from the process knowledge)? Should it be outsourced like everything else? Should it be lumped in with the IT, HR or training departments? Every company will try it differently, but tying performance accountability to the management of the process in question is the most powerful solution.

 

Change fatigue, change skepticism, wasteful projects and unmet expectations are all real challenges to improving the way we work. They can all be overcome by a pragmatic approach to process improvement that is properly governed, backed by visible management commitment, taken in manageable steps that demonstrate success, and grounded permanently in our work environment. This steers us back toward excellence, which is the only direction worth traveling.

Why is Risk Based Monitoring (RBM) falling short of expected gains in productivity? Implementing RBM processes is proving harder than most companies anticipated, although considering the history of adopting significant process changes in clinical research, this should not be surprising.

 

It has now been almost three years since the FDA released its guidance on a risk-based approach to monitoring, and the concept has been discussed in industry for many years, ever since electronic data capture (EDC) adoption spread. Yet the implementation of RBM in clinical operations is arguably not meeting its true potential. Since long before the guidance, industry has recognized that trying to discover every mishap or error in clinical trial data collection is enormously costly and time-consuming, compared with focusing on the important issues and data in the development program. That is the point of the regulatory guidance. But despite this broad understanding and agreement, most companies are no further along in implementing a true risk-based monitoring environment. Why is this? Didn’t the agency give industry the green light to establish a focused approach? The answer lies in misplaced focus and a lack of effective processes designed to apply RBM to individual companies’ situations.

 

Consensus vs Effectiveness

Let’s start with consensus. Consensus is not a bad thing; in fact, we all wish we could agree on everything, since that would make our lives much easier, less stressful and certainly less confrontational. Of course this is unrealistic, and in life and business consensus becomes a matter of compromise. Compromise in and of itself would be fine as well, except when you start factoring in the number of interested parties that need to compromise. At that point the compromise becomes self-defeating, as it becomes less about the issue itself and more about horse-trading (he voted for my risk indicator, so I will vote for his).

 

This is not simply a behavioral annoyance. Let’s take the example of the RBM-critical element of Risk Indicators. The consensus approach has driven the too-large number of risk indicators that have evolved out of industry initiatives. At last check, there were approximately 141 risk indicators identified within the TransCelerate risk factor library, of which around 65 were categorized as high value. These numbers are self-defeating and unworkable, no matter how valid any one RI is for any one company, and no matter how many reviewers reviewed them. This result is repeated over and over again when individual companies are asked to perform the same exercise of identifying risk indicators. With this many indicators to pick from, or to define, the company culture subtly shifts back to the more common “identify anything that could be risky” approach, which is a useless and regressive behavior, undercutting the original point of RBM.

 

With too many indicators, the time and effort spent just analyzing and responding to those indicators will offset any targeted gains in efficiency or cost savings. So how can one address the consensus piece? Well, there are a few common RIs that most people in the industry agree on with little debate, for example “# of AEs or SAEs”. There are probably around 10-20 of these common indicators that are widely useful and can be measured objectively and analyzed with some level of statistical rigor. These indicators would probably be a good place to start.
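To make the idea of a short, objectively measurable indicator list concrete, here is a minimal sketch of how such indicators might be written down as simple site-level definitions. The indicator names, descriptions, and the use of Python are illustrative assumptions, not an industry-standard library.

```python
# Illustrative sketch only: a few hypothetical, objectively measurable risk
# indicators expressed as simple site-level definitions. Names and metrics
# are examples for discussion, not a validated or standard library.
from dataclasses import dataclass

@dataclass
class RiskIndicator:
    name: str              # short label used in reports and dashboards
    description: str       # what is measured, per site
    higher_is_riskier: bool = True

COMMON_INDICATORS = [
    RiskIndicator("sae_rate", "Serious adverse events per enrolled subject"),
    RiskIndicator("ae_rate", "Adverse events per enrolled subject"),
    RiskIndicator("query_rate", "Open data queries per 100 data points"),
    RiskIndicator("screen_fail_rate", "Screen failures per subject screened"),
    RiskIndicator("entry_lag_days", "Median days from visit to data entry"),
]
```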

 

Another example of where cross-industry initiatives fail is the implied imperative that commonality (“best practices”) must be better than whatever an individual company is doing. After this many years, company cultures are vastly different in their individual tolerance for risk. Each company should address its own RBM approach with some industry perspective, but focus on what it knows to be the most important aspects of its data, rather than relying on other people to suggest what that data is. Often, companies benefit from organized, third-party-facilitated workshops that help company personnel navigate the myriad of risk indicators to arrive at the select few they determine to be the most important, and target them for the initial implementation.

 

Getting the Right Data for RBM Design

Detail is the next item preventing successful implementation of RBM. When we discuss detail in this context, we are referring to how triggering decisions are made and what is analyzed to arrive at those decisions. After defining the risk indicators, we then have to measure them and decide the value at which the site or sites in question require additional scrutiny or action. The primary failures in this aspect of RBM are twofold: 1) subjectivity and 2) lack of historical context. These two items are intricately related, but let’s address them individually first. The decision process for determining thresholds is often purely subjective. It starts with a group of people sitting in a room deciding the number or threshold at which a signal needs to be generated. There is often very little supporting data to validate these decisions, and the groups end up spending weeks and months debating the merits of their decisions and attempting to rationalize the numbers. Many of these thresholds will either never be triggered, or be triggered so often that they cease to be accurate reflections of where risk lies. It should also be noted that these measurements should not be evaluated individually, but assessed holistically, with the knowledge that some indicators may carry more weight than others when deciding what remedial action needs to be taken. While some subjectivity is unavoidable, it only lends credence to the argument that RBM is not a “one size fits all” approach and thus cannot be templated.
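To illustrate what weighting indicators and assessing them holistically, rather than one threshold at a time, could look like, here is a minimal sketch of a composite site-level risk score. The 0-to-1 normalization, the specific weights, and the function name are assumptions for illustration, not a validated scoring model.

```python
# Minimal sketch (assumed 0-1 normalization and example weights, not a
# validated model): combine several indicator signals into one site-level
# score so that no single threshold is judged in isolation.
def site_risk_score(signals: dict[str, float], weights: dict[str, float]) -> float:
    """signals: indicator name -> value scaled 0..1 (0 = no concern).
    weights: indicator name -> relative importance (need not sum to 1)."""
    total_weight = sum(weights.get(name, 1.0) for name in signals)
    weighted_sum = sum(value * weights.get(name, 1.0) for name, value in signals.items())
    return weighted_sum / total_weight if total_weight else 0.0

# Example: the SAE signal carries more weight than data-entry lag.
score = site_risk_score(
    {"sae_rate": 0.8, "query_rate": 0.3, "entry_lag_days": 0.6},
    {"sae_rate": 3.0, "query_rate": 1.0, "entry_lag_days": 0.5},
)
print(f"Composite site risk score: {score:.2f}")
```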

 

Some people will argue that the reason RBM has not gained the foreseen traction is that it is a process of trial and error, and that thresholds and risk indicators will be re-assessed after they are put into practice. This argument is credible, but the scope and scale of that trial and error can be limited significantly by statistical evaluation of historical data. The lack of historical context in RBM adoption is a key component of its current limitations and failures. Most companies have implemented their RBM programs as a “go forward” strategy, and in doing so have left significant value on the table. Not all companies have a unified historical database of study data to draw on, but many do, and most could construct one from their archives if need be. Herein lies the true opportunity. Analyzing your chosen risk indicators against a robust historical database of your own company’s data will provide a much richer and more accurate measurement of critical risk indicators and meaningful thresholds. The data gleaned from history will either support or refute previous assumptions made about risk indicators and associated thresholds, and thereby significantly shorten the trial-and-error period. In addition, this historical data will provide companies with the sizable dataset needed to make informed decisions about additional risk indicators and thresholds. This cannot be achieved by looking at current data alone – there are just not enough datapoints to elicit a statistically robust result – and by the time that scale has been reached, the trial or trials will already be too far along for any RBM approach to have an impact.
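As one way to picture what statistical evaluation of historical data might look like in practice, here is a small sketch that derives a flagging threshold for a single indicator from per-site values observed in completed studies. The nearest-rank percentile rule, the 90th-percentile choice, and the example numbers are assumptions for illustration only, not recommendations.

```python
# Sketch: derive a threshold from historical site-level data rather than
# guessing in a meeting. The 90th-percentile flagging rule and the sample
# numbers below are illustrative assumptions.
import math

def historical_threshold(historical_values: list[float], percentile: float = 0.9) -> float:
    """Nearest-rank percentile: the value at or below which `percentile`
    of historical sites fell for this indicator."""
    ordered = sorted(historical_values)
    rank = max(1, math.ceil(percentile * len(ordered)))
    return ordered[rank - 1]

# Example: per-site query rates (queries per 100 data points) from prior studies.
past_query_rates = [2.1, 3.4, 1.8, 5.0, 2.9, 4.2, 3.1, 2.5, 6.7, 3.8]
flag_above = historical_threshold(past_query_rates)
print(f"Flag sites whose query rate exceeds {flag_above:.1f}")
```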

 

Adding Focus on Process

Finally we arrive at “lack of focus”. This term should not be misconstrued to insinuate that people are not treating RBM as a priority. In fact, quite the contrary: many days and months of resource time are being applied to RBM initiatives. The lack of focus here refers to where, and how, people are devoting their time in implementing RBM. As discussed previously, an enormous amount of effort is being expended defining risk indicators and thresholds. Unfortunately, not nearly enough effort is being spent on the process changes critical to a successful RBM implementation. A true RBM initiative for most companies involves a significant change in both mindset and practice, especially within clinical operations. Many companies have decided to turn to technology to solve this problem, along the lines of “we just plug the numbers into our software and when a threshold is reached, we visit the site or take some other remedial action”. Unfortunately the process is far more complicated: it involves looking at the data differently, and remotely. It involves the need for different expertise in roles that historically did not require such expertise. It requires clear governance and definitions around decisions about when, or when not, to visit a site. And yes, it also requires some technology, at the very least to document when risk indicators fired and how they were addressed. Furthermore, and most importantly, it requires that the organization as a whole buy into RBM as a strategy and embrace the message from the regulatory agencies to refrain from expending tremendous amounts of time and resources checking everything. All of these requirements, especially the last one, are not easily achieved, given the entrenchment the industry has fostered in pursuing the unattainable goal of eliminating risk. It is ironic that we should take this approach when drug development by nature is fraught with risk. This is where companies can benefit most from third-party facilitation, by knowledgeable people who are unencumbered by bias, alliances or other interdepartmental dynamics.

 

RBM can be a success and can add significant value to today’s companies, but it needs to be implemented differently than it has been over the last three years. Purchasing the next piece of fancy off-the-shelf software and inputting some risk factors is not the place to start or focus, and will not result in a successful implementation. RBM is predominantly a process and mindset change that will need some form of technological support. Without addressing the process implications, and by focusing only on current data, RBM will remain a phantom project in the biopharma world: large amounts of money and resources will have been spent, but after peeling away the layers we will find that we are not really doing things much differently than we were before we decided to take a risk-based approach to monitoring. Instead, we need to avoid consensus paralysis, exploit our own individual historical data trends, and tailor a process to embrace and exploit RBM.

“RACI has met the common fate of other time-worn jargon: it is now misused, misunderstood, and misleading.”

One of the operating assumptions widely used in biopharma process analysis is the “RACI” model. RACI stands for Responsible, Accountable, Consulted and Informed, referring to what role any particular person, job or department has in a particular project or process. The point of RACI is to provide a handy structure for teams or complex organizations to sort out, and document clearly, who is going to do what. But RACI has met the common fate of other time-worn jargon: it is now misused, misunderstood, and misleading.

Unless you are a sapphire-belt sixty-sigma facilitator, the RACI model has long outlived its usefulness. Flawed at birth, its failings are ever more manifest. And yet the RACI model lives on like Tang on the Space Station.

Perhaps you have never heard of the RACI model, in which case you have been spared. Each component of the model, in actual use today (not as it was originally conceived), is problematic. It is not enough to say, “well, people just aren’t using it correctly.” If the original definitions are forgotten or are no longer intuitive, then it’s the model, the language, that has to change. This is important for two reasons. First, the purpose of the RACI model is still a compelling notion – not everyone involved in a process has the same responsibilities (lower case “r”!). But further, the misuse of the individual R, A, C, I words contributes to the opposite effect: people misunderstanding their responsibilities, not least because the labels are made somehow holy by the jargon. And the cost of this mistake is that reams of SOPs and other control documentation are created using the RACI model, which then becomes auditable and, more importantly, adds complexity and time to the very processes we are trying to make more efficient.

Let’s look first at the “R” and the “A”. “R” is supposed to be, in the model, the person who does the work – a worker, a doer. Almost no one understands this correctly. The letter R is defined as standing for “Responsible,” but the word responsible, to almost everyone, means the person who is in charge, who is supposed to lead the work, whose head will roll if things go wrong. Sorry, in RACI that is the definition of the “A” word – “Accountable”. Everyone we’ve ever worked with who has tried to use RACI, or has had RACI imposed on them, confuses the R and the A, to the point where deciding who is R and who is A becomes arbitrary, and therefore meaningless. Most importantly, things that are confusing, contradictory or illogical become unmemorable, and that makes the whole RACI effort a costly waste.

The “C” and the “I” are also flawed. Can there be any less sincere roles for people than who is “Consulted” and who is “Informed”? The time spent delineating the C and the I in the standard RACI workshop is not only time wasted, it is the opportunity for more misleading behavior. Too often, people labeled “C” are people who actually should be doing something but don’t want the responsibility. They are mollified with the C, as are those who don’t want to do anything but want to be able to express an opinion about what others are doing. Should we be officially codifying such wasteful and passive-aggressive behavior? And what about “Informed”? Unless you’re working in the NSA, is there anyone who shouldn’t be informed, and is there anyone who needs to be officially informed they qualify for this obvious, passive position? The “C” and the “I” are simply a fancy justification for the phenomenon I call “everybody into the loop!”, i.e., if you aren’t actually responsible for anything, we don’t want you to feel left out, so we will keep you “informed,” and if you’re someone we’re afraid of, we will make sure you are “consulted”. This is much like everyone on the kids’ soccer team getting a trophy for “participation”. Maybe we could give everyone on the project team a trophy at the first meeting and then disinvite them for the rest of the project! I can see that my replacement for RACI should be “RDT” – Responsible, Doing something, gets a Trophy.

Because of these misunderstandings, the worst aspect of using RACI in real life is that no one is actually assigned to do any work! You can be the one who is blamed (RA), you can be the one who gets to kibbitz (CI), but no one is assigned to do anything specific, which was the original point.

 

There are only two roles worth delineating when designing clinical development processes: the person who governs the work, and the person who does the work. If you are re-defining or creating new processes in your research organization, there are many techniques other than RACI that will clarify responsibilities. Stick to the two categories: Govern, and Do. If someone or some function falls into neither bucket, they get the trophy and can go home. Finance? See you at budget time. Quality Assurance? You have your own chance to Govern and Do in QA processes. IT? Make sure the intranet is working.

It’s very important to clarify roles in the multiplexed world of clinical development. The key is to clarify for the sake of simplicity, not for the sake of inclusion. Productivity over Ego; Govern and Do. Erase the RACI and get back to working smarter.