
Why is Risk-Based Monitoring (RBM) falling short of expected gains in productivity? Implementing RBM processes is proving harder than most companies anticipated, although, considering the history of adopting significant process changes in clinical research, this should not be surprising.

 

It has now been almost three years since the FDA released its guidance on a “Risk-Based Approach to Monitoring,” and the concept has been discussed in industry for many years, ever since electronic data capture (EDC) adoption spread. Yet the implementation of RBM in clinical operations is arguably not meeting its true potential. Since long before the guidance, industry has recognized that trying to discover every mishap or error in clinical trial data collection is enormously costly and time-consuming, compared with focusing on the important issues and data in the development program. This is the point of the regulatory guidance. But despite this broad understanding and agreement, most companies are no further along in implementing a true risk-based monitoring environment. Why is this? Didn’t the agency give industry the green light to establish a focused approach? The answer lies in misplaced focus and a lack of effective processes designed to apply RBM to individual company situations.

 

Consensus vs Effectiveness

Let’s start with the consensus. Consensus is not a bad thing; in fact, we all wish we could agree on everything, since that would make our lives much easier, less stressful and certainly less confrontational. Of course this is unrealistic, and in life and business consensus becomes a matter of compromise. Compromise in and of itself would be fine as well, except when you start factoring in the number of interested parties that need to compromise. In this situation the compromise becomes self-defeating, as it becomes less about the issue itself and more about horse-trading (he voted for my risk indicator, so I will vote for his).

 

This is not simply a behavioral annoyance. Take the example of the RBM-critical element of risk indicators. The consensus approach has produced the unworkably large number of risk indicators that have evolved out of industry initiatives. At last check, there were approximately 141 risk indicators identified within the TransCelerate risk factor library, of which around 65 were categorized as high value. These numbers are self-defeating and unworkable, no matter how valid any one RI is for any one company, and no matter how many reviewers reviewed them. This result is repeated over and over again when individual companies are asked to perform the same exercise of identifying risk indicators. With this many indicators to pick from, or to define, company culture subtly shifts back to the more common “identify anything that could be risky” approach, a useless and regressive behavior that undercuts the original point of RBM.

 

With too many indicators the time and effort spent in just analyzing and responding to those indicators will offset any targeted gains in efficiency or cost savings. So how can one address the consensus piece? Well, there are a few common RIs that most people agree on in the industry with little debate, for example “# of AEs or SAEs”. There are probably around 10-20 of these common indicators that are widely useful and can be measured objectively and analyzed with some level of statistical rigor. These indicators would probably be a good place to start.

 

Another example where cross-industry initiatives fail is the implied imperative that commonality (“best practices”) must be better than whatever an individual company is doing. Even after all these years, company cultures remain vastly different in their tolerance for risk. Each company should approach its own RBM design with some industry perspective, but focus on what it knows to be the most important aspects of its data, rather than relying on others to suggest what those data are. Often, companies benefit from organized, third-party facilitated workshops that help company personnel navigate the myriad of risk indicators to arrive at the select few they determine to be the most important, and target them for the initial implementation.

 

Getting the Right Data for RBM Design

Detail is the next item preventing successful implementation of RBM. When we discuss detail in this context we are referring to how triggering decisions are made and what is analyzed to arrive at those decisions. After defining the risk indicators, we then have to measure them and decide the value or threshold at which the site or sites in question require additional scrutiny or action. The primary failures in this aspect of RBM are twofold: 1) subjectivity and 2) lack of historical context. These two items are intricately related, but let’s address them individually first. The decision process for determining thresholds is often purely subjective. It starts with a group of people sitting in a room deciding the number or threshold at which a signal needs to be generated. There is often very little supporting data to validate these decisions, and the groups end up spending weeks and months debating the merits of their decisions and attempting to rationalize the numbers. Many of these thresholds will either never be triggered, or be triggered so often that they cease to be accurate reflections of where risk lies. It should also be noted that these measurements should not be evaluated individually, but should be assessed holistically, with the knowledge that some indicators may carry more weight than others when deciding what remedial action needs to be taken. While some subjectivity is unavoidable, it only lends credence to the argument that RBM is not a “one size fits all” approach and thus cannot be templated.
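To make the weighting idea concrete, here is a minimal sketch, in Python, of how several indicator measurements might be rolled up into one holistic site risk score. The indicator names, weights and thresholds are hypothetical placeholders, not recommendations; each company would have to derive its own from its own data.

    # Hypothetical per-site indicator values observed during a trial
    site_indicators = {
        "query_rate_per_subject": 3.2,
        "sae_rate_per_subject": 0.15,
        "days_to_data_entry": 9.0,
    }

    # Some indicators carry more weight than others in the holistic view
    weights = {
        "query_rate_per_subject": 1.0,
        "sae_rate_per_subject": 2.5,
        "days_to_data_entry": 0.5,
    }

    # Threshold at which each indicator alone would signal a concern
    thresholds = {
        "query_rate_per_subject": 5.0,
        "sae_rate_per_subject": 0.25,
        "days_to_data_entry": 10.0,
    }

    def site_risk_score(values, weights, thresholds):
        """Weighted average of how close each indicator is to its threshold (1.0 = at threshold)."""
        total_weight = sum(weights.values())
        weighted = sum(weights[name] * (values[name] / thresholds[name]) for name in values)
        return weighted / total_weight

    score = site_risk_score(site_indicators, weights, thresholds)
    print(f"Composite site risk score: {score:.2f}")  # e.g., review the site if the score exceeds 1.0

The point of the sketch is not the arithmetic but the structure: a small set of indicators, explicit weights, and a single composite signal, rather than dozens of independently firing triggers.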

 

Some people will argue that the reason RBM has not gained the foreseen traction is that it is a process of trial and error, and that thresholds and risk indicators will be re-assessed after they are put into practice. This argument is credible, but the scope and scale of that trial and error can be limited significantly by statistical evaluation of historical data. The lack of historical context in RBM adoption is a key component of its current limitations and failures. Most companies have implemented their RBM programs as a “go forward strategy” and, in doing so, left significant value on the table. Not all companies have a unified historical database of study data to draw on, but many do, and most could construct one from their archives if need be. Herein lies the true opportunity. Analyzing your chosen risk indicators against a robust historical database of your own company’s data will provide a much richer and more accurate measurement of critical risk indicators and meaningful thresholds. The data gleaned from history will either support or refute previous assumptions made about risk indicators and associated thresholds, and thereby significantly shorten the trial-and-error period. In addition, this historical data will provide companies with the sizable dataset needed to make informed decisions about additional risk indicators and thresholds. This cannot be achieved by looking at current data alone – there are just not enough datapoints to elicit a statistically robust result – and by the time that scale has been reached, the trial or trials will already be too far along for any RBM approach to have an impact.
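As an illustration of that historical approach, here is a minimal sketch, again in Python and with invented numbers, of deriving a data-driven threshold for one indicator from historical site-level data instead of setting it by debate in a meeting room.

    import statistics

    # Hypothetical historical values of one indicator (e.g., queries per subject)
    # gathered from sites across a company's completed studies
    historical_values = [1.2, 0.8, 2.5, 1.9, 3.1, 0.6, 2.2, 1.4, 4.8, 1.1,
                         2.9, 1.7, 0.9, 3.6, 2.0, 1.3, 2.6, 1.5, 5.2, 1.8]

    mean = statistics.mean(historical_values)
    stdev = statistics.stdev(historical_values)

    # One simple, transparent rule: flag sites more than two standard deviations
    # above the historical mean. A real program would test candidate rules against
    # sites known to have had problems before settling on one.
    threshold = mean + 2 * stdev
    print(f"Historical mean: {mean:.2f}, proposed threshold: {threshold:.2f}")

    # Apply the derived threshold to a current study's sites
    current_sites = {"Site 101": 1.6, "Site 102": 5.4, "Site 103": 2.3}
    flagged = [site for site, value in current_sites.items() if value > threshold]
    print("Sites exceeding threshold:", flagged)

Whether the right rule is two standard deviations, a percentile, or something more sophisticated is exactly the question the historical data can answer; the sketch only shows where the numbers should come from.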

 

Adding Focus on Process

Finally we arrive at “lack of focus”. This term should not be misconstrued to insinuate that people are not treating RBM as a priority. Quite the contrary: many days and months of resource time are being applied to RBM initiatives. The lack of focus here refers to where, and how, people are devoting their time in implementing RBM. As discussed previously, an enormous amount of effort is being expended defining risk indicators and thresholds. Unfortunately, not nearly enough effort is being spent on the process changes critical to a successful RBM implementation. A true RBM initiative for most companies involves a significant change in both mindset and practice, especially within clinical operations. Many companies have decided to turn to technology to solve this problem, along the lines of “we just plug the numbers into our software and when a threshold is reached, we visit the site or take some other remedial action”. Unfortunately the process is far more complicated: it involves looking at the data differently and remotely. It involves the need for different expertise in roles that historically did not require such expertise. It requires clear governance and definition around decisions about when, or when not, to visit a site. And yes, it also requires some technology, at the very least to document when risk indicators fired and how they were addressed. Furthermore, and most importantly, it requires that the organization as a whole buy into RBM as a strategy and embrace the message from the regulatory agencies to refrain from expending tremendous amounts of time and resources to check everything. All of these requirements, especially the last one, are not easily achieved given the entrenchment the industry has fostered in pursuing the unattainable goal of eliminating risk. It is ironic that we should take this approach when drug development is by nature fraught with risk. This is where companies can benefit most from third-party facilitation, by knowledgeable people who are unencumbered by bias, alliances or other interdepartmental dynamics.

 

RBM can be a success and can add significant value to today’s companies, but it needs to be implemented differently than it has been over the last three years. Purchasing the next piece of fancy software off the shelf and inputting some risk factors is not the place to start or focus, and will not result in a successful implementation. RBM is predominantly a process and mindset change that will need some form of technological support. Without addressing the process implications, and by continuing to focus only on current data, RBM will remain a phantom project in the biopharma world: large amounts of money and resources will have been spent, but after peeling away the layers we will find that we really are not doing things much differently than we were before we decided to take a risk-based approach to monitoring. Instead, we need to avoid consensus paralysis, exploit our own historical data trends, and tailor a process to embrace RBM.

“RACI has met the common fate of other time-worn jargon: it is now misused, misunderstood, and misleading.”

One of the operating assumptions widely used in biopharma process analysis is the “RACI” model. RACI stands for Responsible, Accountable, Consulted and Informed, referring to what role any particular person, job or department has in a particular project or process. The point of RACI is to provide a handy structure for teams or complex organizations to sort out, and document clearly, who is going to do what. But RACI has met the common fate of other time-worn jargon: it is now misused, misunderstood, and misleading.

Unless you are a sapphire-belt sixty-sigma facilitator, the RACI model has long outlived its usefulness. Flawed at birth, its failings are ever more manifest. And yet the RACI model lives on like Tang on the Space Station.

Perhaps you have never heard of the RACI model, in which case you have been spared. Each component of the model, in actual use today (not as it was originally conceived), is problematic. It is not enough to say, “well, people just aren’t using it correctly.” If the original definitions are forgotten or are no longer intuitive, then it’s the model, the language, that has to change. This is important for two reasons. First, the purpose of the RACI model is still a compelling notion – not everyone involved in a process has the same responsibilities (lower case “r”!). But second, the misuse of the individual R, A, C, I words contributes to the opposite effect: people misunderstanding their responsibilities, not least because the labels are made somehow holy by the jargon. And the cost of this mistake is that reams of SOPs and other control documentation are created using the RACI model, which are then auditable and, more importantly, add complexity and time to the very processes we are trying to make more efficient.

Let’s look first at the “R” and the “A”. “R” is supposed to be, in the model, the person who does the work – a worker, a doer. Almost no one understands this correctly. The letter R is defined as standing for “Responsible,” but the word responsible, to almost everyone, means the person who is in charge, who is supposed to lead the work, whose head will roll if things go wrong. Sorry, in RACI that is the definition of the “A” word – “Accountable”. Everyone we’ve ever worked with who has tried to use RACI, or has had RACI imposed on them, confuses the R and the A, to the point where deciding who is R and who is A becomes arbitrary, and therefore meaningless. Most importantly, things that are confusing, contradictory or illogical become unmemorable, and that makes the whole RACI effort a costly waste.

The “C” and the “I” are also flawed. Can there be any less sincere roles for people than who is “Consulted” and who is “Informed”? The time spent delineating the C and the I in the standard RACI workshop is not only time wasted, it is the opportunity for more misleading behavior. Too often, people labeled “C” are people who actually should be doing something but don’t want the responsibility. They are mollified with the C, as are those who don’t want to do anything but want to be able to express an opinion about what others are doing. Should we be officially codifying such wasteful and passive-aggressive behavior? And what about “Informed”? Unless you’re working in the NSA, is there anyone who shouldn’t be informed, and is there anyone who needs to be officially informed they qualify for this obvious, passive position? The “C” and the “I” are simply a fancy justification for the phenomenon I call “everybody into the loop!”, i.e., if you aren’t actually responsible for anything, we don’t want you to feel left out, so we will keep you “informed,” and if you’re someone we’re afraid of, we will make sure you are “consulted”. This is much like everyone on the kids’ soccer team getting a trophy for “participation”. Maybe we could give everyone on the project team a trophy at the first meeting and then disinvite them for the rest of the project! I can see that my replacement for RACI should be “RDT” – Responsible, Doing something, gets a Trophy.

Because of these misunderstandings, the worst aspect of using RACI in real life is that no one is actually assigned to do any work! You can be the one who is blamed (RA), you can be the one who gets to kibbitz (CI), but no one is assigned to do anything specific, which was the original point.

 

There are only two roles worth delineating when designing clinical development processes: the person who governs the work, and the person who does the work. If you are re-defining or creating new processes in your research organization, there are many techniques other than RACI that will clarify responsibilities. Stick to the two categories: Govern, and Do. If someone or some function falls into neither bucket, they get the trophy and can go home. Finance? See you at budget time. Quality Assurance? You have your own chance to Govern and Do in QA processes. IT? Make sure the intranet is working.

It’s very important to clarify roles in the multiplexed world of clinical development. The key is to clarify for the sake of simplicity, not for the sake of inclusion. Productivity over Ego; Govern and Do. Erase the RACI and get back to working smarter.

After the Compromise

Ronald S. Waife


“Indeed the safest road to hell is the gradual one–the gentle slope, soft underfoot, without sudden turnings, without milestones, without signposts…”

C.S. Lewis

The biopharmaceutical industry is very much on trend with its emphasis on collaboration in daily work. Collaboration can take many forms, and the intent to use it may not match its execution. For instance, collaboration is not inevitably achieved by consensus, but consensus inevitably involves compromise. Each of these terms is used, and misused, as a sacred mantra in the meeting rooms of clinical research. Admirable, perhaps, but what happens after the compromise?

Too often, collaboration is seen as an end in itself. This is even reflected in very high-stakes gambles: big pharma is racing to build new labs in concentrated areas like Massachusetts’ Kendall Square and Harvard Medical Area on the sole premise that being next to each other will spark collaboration, which in turn will spark innovation. New office buildings across the industry all feature interior designs with the same purpose, essentially filling available meeting space with “inside coffee shops,” which ironically are doing little but creating meeting room availability crises. Ah, the law of unintended consequences.

But even if we are meeting more “collaboratively” (whatever that might mean), what are we getting out of it? The pressure of the collaborative culture is to get along, and yet show progress. Can these co-exist? The corollary to collaboration – requiring consensus – is wrong-headed in itself (why do we all have to agree in order to collaborate or be innovative?), and achieving that consensus predictably requires compromise. These are so highly valued that we specifically judge managers and staff on their ability to succeed in a consensus environment.

But here’s the thing: achieving consensus, and crafting a compromise, is not the end of the story. Someone, somehow, has to implement the compromised solution, without compromising (pun intended) the purpose of the project/action/solution/remediation. Indeed, by definition, a compromised solution is usually a political minefield, containing illogical or contradictory components to achieve the compromise. The devil is in the details.

Compromises

Let’s look at some common modern day compromises in clinical development. In each case, executive or functional leadership have to compromise to endorse the required change, but the implications are left unsettled:

  • RBM. The company commits to using Risk-Based Monitoring in the abstract, but has not planned for the changes in field monitoring, site relationships, and data management responsibilities.
  • Patient Centricity. In clinical development at least, this new name for a 20-year old concept (ePRO/eCOA) requires new looks at technology and new relationships between development, postmarketing and vendors.
  • eTMF. Moving ahead with an Electronic Trial Master File is an activity riddled with compromise among the competing opinions and needs of regulatory affairs, clinical operations, quality assurance and more, and the disagreements are about more or less everything: who, when, where and what.
  • Organizing Clinical Development. Since there is no right answer to a “best practice” for this, a high-level solution is inevitably chosen for political, personal, geographic and financial reasons. To reach agreement on your “best practice,” compromise is essential. But then what?
  • Strategies for Enabling IT. Ensuring that the use of information technology to enable clinical development is efficient, up-to-date and right-sized is nearly always an unhappy compromise between technology users and corporate IT owners.
  • Someone Else’s EDC. A classic compromise in this century is the dilemma sponsors or CROs face when they want to use only one electronic data capture tool but find that tools oriented to Phase II/III are painfully unsuited to early phase or postmarketing studies. Now what?

Obstacles

So life is full of compromises. Why don’t we just get to work? The problem is that the compromise is just the beginning, not the end, but the institutional energy was spent on the compromise and the execution is too easily put off. What is standing in the way of execution?

  • Passive resistance, the most common and effective form of corporate obstruction. You made the compromise because of all the attention in the moment; that doesn’t mean you have to help implement it. Just lay low and it will go away.
  • Executive disappearance. Leadership may have forced the compromise, but they don’t stay around for the dirty work. Without sustained executive focus, it is too easy for the compromise to be watered down or rescinded.
  • One compromise wasn’t enough. How often have you found yourself sweating through a highly visible set of meetings to reach a difficult compromise, only to find out that the differences are entrenched and will continue to be fought out in the details?
  • No budget for execution. Implementing a compromise usually costs time, resources, tools and vendor costs. Rarely is this ever budgeted for, especially when (legitimately), the need for compromise was not anticipated.
  • It’s not your compromise. A significant obstacle to executing a compromise solution is that the compromise came from another place and time – your boss’ previous job, your consultants’ formulaic answer, your CRO’s insistence – without being arrived at in the context of your actual circumstances.
  • The compromise was illogical or unworkable. Often there might be a very good reason why a compromise was hard to achieve – it did not make sense. You may be stuck with the consequences of the decision, but the facts don’t go away.

“There is no such thing as consensus science. If it’s consensus, it isn’t science. If it’s science, it isn’t consensus. Period.”

Michael Crichton

 

Saving the Compromise

If you are responsible for executing the compromise, beware. You will “pay” for the satisfaction of those who reached the compromise, if the follow-through was not planned for. The original purpose of the compromise may never be achieved and you, the implementer, may be blamed (if anybody remembers your project after a while).

Sometimes compromise is necessary, even essential, and sometimes it is wrong. It is wrong if it was arrived at for its own sake, if it was meant to mollify someone or some function, or if the implied solution is unworkable. Sometimes nothing is better than something; maybe it just isn’t the right time to do the right thing. Sometimes a compromise can be saved if given enough room (time, money, cooperation) to fix it.

Several things are paramount to ensuring compromised business decisions in clinical development are beneficial and not harmful:

  • Develop a realistic estimate of the cost and consequences of the compromised solution
  • Re-evaluate the compromise in the light of your company’s strategic objectives
  • Ensure sustained executive involvement
  • Be open, sincerely, to re-considering compromises that are either unworkable or no longer compelling.

In clinical development, with lives and health on the line, there is no room for collaboration and consensus for its own sake. We only have time for what works, be it solutions discovered through compromise, or single-minded actions that bring beneficial results.

 

Operating Assumptions, December 2014

Ronald S. Waife

 

Honesty is the Only Policy

When was the last time you told someone the whole truth? When was the last time you thought someone was telling you the whole truth? Have you told that under-performing monitor that he is, in fact, under-performing? Did you tell your sponsor about the problems you started having with that investigative site? Did you tell your customer about the true delivery date for the next software release? Did you tell your employees how much bonus you are making from their long hours of work? These are all examples of how, every day, we treat each other dishonestly. You can argue that these “white lies” make work easier, but I would argue they strongly contribute to the highly inefficient state of clinical research. Honesty isn’t the best policy; it’s the only policy.

 

Widespread and Too Easy

Let me emphasize from the start that when I discuss “dishonesty” in this column I am not talking about criminal behavior – dishonest handling of trial data, noncompliance with regulations, cutting corners in analytical rigor, insider trading, and so on. I’ve never seen them and they are not my point. I’m talking about “process dishonesty” – dishonesty in the way we treat each other every day.

There are so many examples of dishonesty in everyday research life that any reader can quickly provide their own list. Here are just a few:

  • Sponsors lie to CROs about when the trial is going to start.
  • CROs lie to sponsors about the qualifications of who is going to work on their trial.
  • Clinicians lie when they say “this is the final protocol.”
  • Quality Assurance is often lying when they say “the FDA requires this.”
  • Every department lies to each other when they set a deadline, knowing the deadline will be missed and leaving themselves room to maneuver.
  • Sponsors lie to their own staff about plans for job elimination or outsourcing.
  • Consultants lie when they tell clients what they want to hear, instead of what they found.
  • Sponsors lie when they tell consultants they want help with “x”, when they actually want justification for “y”.
  • Upper management lies when they tell their direct reports how much money is in the budget for next year, and their direct reports lie about how much money they need.

So okay, this is hardly unique to clinical research and is universal and as old as time. It has survived as standard business practice for so long because it doesn’t seem to matter. But I suggest it does matter, a lot, not just on ethical grounds, but to sponsors who are trying to change their approach to research and make the development of new therapies faster and less costly.

 

Inefficient Dishonesty

How does all this hurt clinical development, if it is such a standard business practice? Dishonesty and efficiency don’t mix:

  • If you put somebody on a team because you have to, instead of because they are qualified, that leads to inefficiency.
  • If you can’t tell someone they are not good enough at their job, you’re employing inefficiency.
  • If you avoid the opportunity to provide constructive criticism, that’s permitting inefficiency.
  • If requestor and bidder play a game of “chicken” when negotiating a budget, resulting in weeks of delay in the name of “best practice contracting” or Sarbanes-Oxley, the only best practice we are achieving is expert time-wasting.
  • We all know the truism that it takes three or even ten times more resources to fix a problem downstream than to fix it when it happens. But because being honest upfront is not safe, we’d rather “kick the can down the road”, wasting more time.
  • You will not have a chance to really know if your in-house staff has the ideas, skills and flexibility to perform better if you don’t tell them that their performance issues are so dire that you’ve already decided to outsource their jobs.

I think that white lies are so ingrained, they are second nature. We congratulate ourselves for avoiding confrontation, which usually means we avoided the truth. The financial, personal and professional pressures to bypass hard truths are all too real and way too strong. Our slippery sidesteps are all too understandable. And yet if we start to believe our own white lies, no one can save us from ourselves.

 

Mistrust and Passivity as Symptoms

Our problem with honesty in our research process creates deeper and more subtle negative effects. Mistrust and passivity can be the direct byproducts of a failure of honesty. Indeed they contribute to dishonesty in a nasty feedback loop. We mistrust our CROs and they mistrust their customers because disinformation has become the mode of communication. Our staff mistrust our managers when outsourcing or acquisitions are announced out of the blue. And nobody believes the software vendor’s delivery date, or the first specifications of a protocol, because it is not acceptable for us to acknowledge unpredictability or unreasonable deadlines.

Passivity is a cousin of mistrust and dishonesty. If I don’t trust you, and I’ve learned to speak in white lies, my best course of action is to lay low, do only as I’m told, and otherwise stay passive. No wonder research executives today decry the rampant lack of urgency and energy in their organizations!


Getting to Honest

Diogenes is described as searching high and low in ancient Athens, in vain, for an honest man. Diogenes was also a philosopher of Cynicism, whose modern meaning may be too harsh and excessively pessimistic for this topic. Not only can we find honest people, we can make them.

How do we get more honest in our treatment of each other?

  • Make it safe to be honest: I think few of us think we can be honest with our bosses, staff, providers or customers without negative consequences. We need to make honesty permissible and desirable.
  • Make it a cultural imperative: we should not only make it safe to be honest, we need to recognize it as a necessity for our organizations’ efficiency, productivity, and respect.
  • Educate each other about our jobs or “walk a mile in their shoes”. The more we understand the work, the motivations, the potential and the limitations of the work of others, the more we will understand and accept honestly delivered information, and the more honest we can be in our own communication.
  • Demonstrate and model honesty in our own behaviors. We’ve described how this can be risky – who’s going to go first? I don’t think we can wait for the other guy to blink. Instead, we need the courage to trust and respect others enough to treat them with honesty and bear the consequences. In time, we can start a safe and productive feedback loop.

I think I hear the Golden Rule echoing nearby.

 

“Generic process improvement principles and tools… have more in common with political rallies than serious analysis”

All good terms must come to an end. “Change management” is a phrase that has been so misused, I believe it may be time to retire it. It’s not that change management is no longer necessary – indeed it is needed now more than ever. But because the term has been diluted, misunderstood and used as a sinister euphemism, we need to kill the name and save the meaning.

What It Has Become

At first blush, change management seems like an oxymoron. How can one “manage” change? If it’s manageable, your company doesn’t need a department or consultants to handle it. The change we seek to manage is by definition threatening – in its speed, extent, personal impact or uncertainty. How can such a threatening change be managed? In fact it can be, and must be, but not how clinical research sponsors do it now.

Change management has been too often co-opted by human resources departments, IT consultants and senior management as a cover for downsizing, outsourcing or defending politically-driven, illogical reorganizations or force-fed mergers. In this context, the change to be managed is staff acceptance of events that are not in their best personal interest.

Change management is used to underpin broad and thin executive initiatives that produce little more than PowerPoint slides and posters in the hallways. The resulting mistrust and cynicism is deadly when true change management is needed: who is going to listen, or commit, to a project that looks and smells like the one last month that outsourced their colleagues?

In such an environment, the change management activities that are applied are equally superficial. Employing generic process improvement principles and tools, and led by generically-trained facilitators, the workshops and training and newsletters and self-congratulatory celebrations have more in common with political rallies than serious analysis and commitment to improving process efficiency.

 

What It Should Be

True change management (let’s call it “TCM”) is characterized by a set of values and actions that reflect seriousness of need, long-term commitment, and demonstrable results. As presently staffed and prepared, it is unlikely clinical research sponsors can pursue TCM on their own without some major attitudinal adjustment.

TCM can and should be about, in part:

  • Discovering facts
  • Disseminating information with clarity
  • Speaking the truth
  • Asserting clear command
  • Simplifying cross-currents and contradictions
  • Recognizing and considering self-interest
  • Taking small, frequent steps
  • Seeking quick proofs of success (i.e., value)
  • Sustaining the effort.

Focusing on these values would be an enormous step forward. It is critical that the emphasis not be placed on tools and training – these are essential, but are the means, not the end. Currently, those clinical research sponsors who claim that they are serious about TCM have only the investments in these mechanics to show for it.

Instead, in any time of change, we have to ask:

  • What do we keep?
  • What do we throw away?
  • What do we add?
  • What do we innovate?

It is said that when comparing humans to chimpanzees, 99% of the genome is the same. Quite a difference, that one percent! But the lesson is that change is essentially conservative (i.e., conserving), and therefore should be less scary and more embraced.

It may be useful to think of TCM in Stages familiar to a pharmaceutical enterprise: Discovery, Diagnosis, Therapy and Maintenance. Much as in drug development, TCM cannot be complete without all Stages being covered. It seems that we are always in a hurry when there is a problem, and we jump to the Therapy Stage, skipping all others. When we think we have no problems, why bother with any TCM at all? This is a bit like saying we do not have to do anything for our health until after the first heart attack. The best time to begin the Stages of TCM is now, regardless of your perceived state of health. And all of the Stages are necessary – you have to analyze what you have discovered rather than jump to assumed therapies; you have to maintain the therapy after the initial enthusiasm or fear energized you.

Whatever the right term is – TCM or otherwise – the old “change management” is not leading us to demonstrable solutions to real research operations problems. Rather than using change management as window dressing, we need to institutionalize it as a permanent and sincere effort to better use our resources for the important work we do.