
Identifying possible “ends” is one thing; understanding their meaning to the user and the enterprise requires more thought.

“Begin with the end in mind” is one of those classic business phrases which is no less valuable for the number of times it is ignored. Clinical research sponsors are guilty of often fatal forgetfulness of this key concept when planning the development, implementation and use of new software applications or major organizational change.

Clinical research sponsors generally start an organizational change or a software acquisition not with the end in their mind, but with some stimulus in their back: a department is complaining that everybody else gets new tools except them; our competitors all changed their outsourcing model and so we should too; I met a salesperson on the airplane; the vendor just announced an upgrade and they won’t support our version anymore; we just hired a new vice president and she prefers vendor “x” over vendor “y”. While some or all of these situations may justify change, they do not in themselves sufficiently define the “why” and “what”.

Starting Isn’t the Hard Part

Sponsors may also start from some business trigger which gives them the illusion that the end is in mind: we need to save headcount, so let’s use EDC (electronic data capture); we’re frustrated with having multiple overlapping and out-of-date investigator databases, so let’s buy a CTMS (clinical trial management system); we just acquired Teeny Biotech and we don’t have anyone in-house in their therapeutic area, or our new translational medicine VP says we’re going to have a flood of pharmacogenomics data coming in, so let’s get one of those “data warehouses.”

What’s missing from these situations is the company’s consideration of the strategic benefit, the daily operational impact, what the software’s users will have to change to use it properly, and, overall, what the benefit will be two years from now. What is the tie-in between the initial impetus – the needle in the back or the business trigger – and the actual output the change will provide? This disconnect is particularly critical in enterprise software projects or major business acquisitions because we all know that the cost in money, time, headcount and disruption will be high. The benefit therefore must be correspondingly high, or else the cost must be reduced to be in line with the diminished (and realistic) results.

Analyzing a potential project’s end-user benefit compared to the initial impetus need not be fatally time-consuming, which is the usual objection to the suggestion. But it can save a large amount of wasted time and money. We should recognize that it is very easy to fall into the disconnect trap. For instance, let’s consider the situation where clinical operations feels that frustration over the multiple investigator databases. The complaint is forwarded to the IT department (or worse, a naïf goes to a booth at a trade show), and the answer comes back: there is no “investigator database fixer” product out there, but there are these CTMS packages and boy, they do everything. Before you know it, you are installing a multi-million dollar application over multiple years, you’ve doubled the amount of training everyone has to go through, and you have all this rich functionality, and no one can or wants to use it because it is relevant neither to the original trigger nor to actual user circumstances.

I would suggest that even a good understanding of how the end user works, and what he or she needs, is not sufficient in today’s business environment. We have fewer and fewer in-house staff, we are narrowing our “distinctive competencies,” we have uncertain economic and reimbursement conditions, and we have unrelenting competitive pressures. All of this militates against expensive multi-year infrastructure projects unless we do more to predict and understand the future end user business need. What are the future identity, purpose and constitution of our business, and therefore, what changes and tools do we need to get there?

Even for projects where the pain and the solution appear clearer and more pragmatic, we are usually missing a robust and detailed visualization of how a tool will be used, and without this, we will misconfigure the tool and misspend our time and money. For instance, how does a shift to outsourcing change who the users are for a CTMS, document management system, EDC, and similar programs? How useful are e-tools if the “back end” of the workflow stays “paper-minded” in its policies and procedures, reflected in unchanged workflows, double-checks, and review practices?

And Vendors Too

The developers of software used in clinical research are equally guilty of forgetting the context of how customers use their tools. Vendors have a great opportunity to add significant value to their customers by helping sponsors see the possibilities that their tools open up, and by knowing the clinical research business as deeply and broadly as possible. This knowledge should translate into more focused and anticipatory designs, creating more powerful and efficient tools. Too often, however, vendors and CROs see educating their clients as a danger to future sales, and try to over-simplify change.

Typical software development, even the industry-specific kind we in clinical research usually encounter, tends to chase after customer-driven enhancement requests that are often shortsighted for all the reasons cited above (responding to the “needle in the back”). The result is needlessly complex software with features even the requesting sponsor may have forgotten they wanted! More damaging than needless complexity is that the effort to chase enhancements takes money away from the literal “end” – the output, reporting and visualization of information, which is all a tool is really good for.

This irony plagues each aspect of the research software universe. Vendors may see the whole gamut of functionality possible, but as professional engineers, they see it, and build it, linearly (they begin at the beginning and end with the end). As a consequence, they inevitably run out of time and money before they reach the output function (reporting). How many times do we hear vendors do their demos this way: they start with the very first point of data entry, move through to the point everyone is waiting for (getting something back for all that entry), and then they say, “well, there was no point in re-inventing a report writer so use something standard, off-the-shelf.” It is the “data out” that matters in the actual business context, but to a software engineer it looks like a data processing problem, not a business use problem. If this were true, and off-the-shelf reporting were adequate, so too would be off-the-shelf data entry – so actually, let’s forget the whole thing. And yet there really is utility in clinical research-specific software products, if they are built with the end in mind.

Today’s software vendors need a knowledge base and a discipline not commonly found. The need for vendor domain knowledge is greater than ever, as is the need for an understanding of, and vision for, where their customers are going. Certainly sponsors have the bulk of the responsibility in teaching this. For the vendors, the discipline is in rejecting enhancements for enhancements’ sake and in leading their customers toward being equipped to handle the future.

Is There an End?

Another way that clinical research sponsors get ahead of themselves is to assume that once the first wave of interest and urgency is sated, the project is done. This is hardly the case. Yes, processes may have been re-written, software configured, and newly re-organized staff trained in their altered jobs. But the work does not, and cannot, stop there. The second and third waves of change wash over the organization as the “lower priority” staff need to be oriented, and as the new processes need to be iterated to reflect actual experiences versus the original assumptions.

It sounds like continuous improvement, except that even at sponsors who have process improvement staff, those folks are themselves moving from project to project – working continuously, perhaps, but not necessarily improving. They too get bored with (or run out of resources for) the first wave of the project, and are not there to reconsider the impact and effectiveness of new work models or software applications. So in some senses there is no end, but rather a steady re-examination of purpose, needs and solutions.

“Begin with the end in mind” is certainly the start of a solution. “Begin with an understanding of the end” is probably more profound. Identifying possible “ends” is one thing; understanding their meaning to the user and the enterprise requires more thought, breadth and management than most sponsors or vendors are used to providing.

Centricity in Search of its Center
Ronald S. Waife, as printed in Clinical Researcher
“Any unnecessary delays in drug development are a disservice to our patients (and few delays are truly necessary).”
Jargon comes from three sources:
  • to quickly express a concept in shorthand that would otherwise take too many words;
  • to protect professional knowledge and impress the ignorant;
  • and to serve as a placeholder that fills the air until real thought and knowledge can fill the gap.
Jargon is only justified if it comes from the first impulse, and let’s assume for the moment that “patient centricity” is meant to be a justifiable shorthand.
 
Patient centricity has a cloud of obfuscation hanging over it akin to other neologisms like “pre-boarding” and “post-marketing”. (How does one board a plane before one boards a plane? If we are in post-marketing, haven’t we stopped selling the product?) Are there drugs we are developing that are not for patients? Are there trials we are doing that don’t involve patients? If not, then shame on us. But perhaps what is meant is a matter of degree.
 
Jargon for Justification
Various players in clinical research are seizing on the patient centricity phrase to justify or promote concepts and products that, for the most part, have been with us for decades:
  • Patient centricity is the latest in a long line of frustrating attempts at justifying what should be obvious – gathering data on drug effects as close to the patient as possible. The original electronic patient diaries movement (constantly renamed as ePRO, eCOA, mHealth) continues to struggle for industry acceptance (inexplicably) and it is unlikely that a new turn of phrase will do the trick.
  • Patient centricity is a euphemism for another perpetual bugaboo – slow rates of subject enrollment in trials. Maybe this is a case of justifiable shorthand, if patient centricity means improving the practicality (from the subject’s viewpoint) of the trial protocol, or improved outreach to potential subjects, or more compelling reasons for trial participation.
  • Patient centricity is also being used, less admirably, as a way to express the chronic frustration sponsors have with their investigators, the implication being if we were only more patient-centric we could skip over those pesky investigators altogether. This is either a cynical method of selling new software or a shortcut around an important philosophical and scientific debate.
Patient centricity is supposed to mean we should care more about patients in protocol design, in data collection methods, in information sharing, and so on. Ok, sure. But some of this does not ring true: for instance, if we feel our investigators aren’t respecting, communicating, informing and sharing enough with the subjects in our trials, is this all the investigators’ fault? The history of clinical trial operations over the past 40 years has been the sponsors’ steady march away from the sites (not coincidentally simultaneous with the rise of CROs to do the work), and therefore away from the patients they see, so why are we surprised that there is a disconnect? 
We can’t ask investigators to be more connected to our subjects if we have disconnected from our investigators. Some of the efforts in the name of patient centricity seem to suggest we bypass those frustrating old-fashioned sites and get right to the patients. Why are we likely to do a better job centering on patients than we did centering on sites? In fact, sites have the best opportunity, knowledge and training to connect with patients. If we have failed the sites, let’s fix that and not just run after another elusive technology-plus-jargon fix.
 
The Center of Centricity
There is great value in patient centricity if we can find the true meaning of the term. If we have drifted from patient focus to profit focus, we need to correct that. If we have forgotten why we do clinical research, we must remember. If we have ignored the patient in pursuit of elegant statistics, or in fear of regulatory unpredictability, we have to fix this.
 
I propose a jargon-free understanding of patient centricity. It probably doesn’t mean you need new software or need to hire a “chief centricity officer”. It means re-examining, or even re-thinking, how we do clinical research to better serve our patients:
  • Any unnecessary delays in drug development are a disservice to our patients (and few delays are truly necessary). Clinical research remains widely inefficient at all sponsor companies and supporting CROs. Our tolerance for this inefficiency over decades remains baffling, and inexcusable. Centering on our patients includes eliminating the actions and activities that delay our drugs getting to market.
  • We should be more responsive to the needs of investigative sites, and more proactive in improving their performance in recruitment and quality data, or more thorough in questioning their continued participation. We should do the same in how we handle the data we receive and what we do with it. Some companies are beginning to realize the richness of the already-collected data in their possession, which both places greater value on the contribution subjects have made by being the source of that data and provides more knowledge to our companies and to medicine.
  • Everyone seems to recognize that study protocols are too often onerous for our patient volunteers, in time and travel requirements, in the number of procedures, and in paperwork. Like most improvements in study conduct, the realization of the need for simpler and more respectful protocols is trickling through the industry very slowly, despite the ubiquitous lip service.
  • Overall, as I have previously written, we need to ratchet up our collective sense of urgency. This may be the most useful and sincere way for the industry to express patient centricity. If we all care more about accelerating the timeline from discovery to marketing, and act as soon as we can on the next step in the process, we will do more for the patients who are waiting for our innovations than any other bundle of trendy concepts.
 

Patient centricity should mean doing a better job for patients, and doing our job better. Let’s not let jargon drain the meaning out of language: focusing on patients, if done correctly, could not be more worthwhile.

If pharmaceutical companies have a special Harry Potter “Defense Against the Dark Arts” class for their management team, one of the first techniques they must be learning is the Culture Defense. When confronted with evidence of their reluctance to change, they are apparently taught to point their wands out in front of them and say, “It ain’t me, it’s the culture here.” This turns out to be a marvelous, widely applicable spell—the easiest way out of an uncomfortable situation. There’s one problem: we are the culture.

We can’t all be the rebels, can we? If so, how would the “culture” ever form with beliefs different from our own? To claim that company culture is the reason that operational innovation fails to take root is to deny your own place in the company where you work. Culture doesn’t kill efficiency, people do.

This common weakness of corporate organizations is particularly obstructive to the introduction of information technology because technology generates so much upheaval, especially in areas of clinical development still untouched, or merely grazed, by the productive use of software. Often standing in the way of that productivity is the Culture Defense.

Let’s look at the following examples of flawed process improvement where culture is often blamed as the cause of failure, and let’s ask ourselves if there might be other reasons lurking.

The Ubiquitous Culture Defense

We’re getting lousy data out of a great tool (an expensive enterprise clinical trial management system [CTMS], for instance, or a state-of-the-art adverse event system). How does this happen? The old IT acronym, “GIGO” (garbage in, garbage out), applies. But why is it happening? Why are our staff waiting until the last minute to enter trial status information that is supposed to be feeding a highly accurate real-time CTMS? Or in the case of the adverse event system (AES), why are antique paper-based data flows being maintained, while the AES is an alien, unwelcome layer imposed on top? Why is this allowed to happen? The Culture Defense says, “Well, we’re not used to reporting data in real-time,” or “We want to review and double-check the information before anyone sees it.” Or in the safety case, “We won’t risk the importance of safety surveillance to software which may not work.” It’s a culture thing. Really?

Another example: A major process improvement project is organized into the ubiquitous “workstreams” and comes up with a flood of recommended changes. Several of the most important changes require re-organizing staff, and while the net headcount will stay the same, some people will probably not fit the new skills required. Impossible! Why? Because “we don’t (or can’t) fire people here – it’s our culture.”

And another example: We throw resources (human and monetary) at the database lock of our pivotal trial, with no restraint. At that moment, there is nothing more important to the company. If the data management processes are examined, however, you will likely find that the electronic data capture (EDC) tools you have used for years are being used sub-optimally and inefficiently. It’s the culture. Perhaps it is, but is that a good thing? Does the Culture Defense make all other options moot?

Yet another example: “We don’t measure here.” It’s our culture not to measure, or if we do, we don’t do it consistently, or with rigor, or learn from the results. There’s probably loads of data – indeed too much data – for you to measure from, but it’s not in the culture to act on this information. Is that culture or laziness or fear?

More pervasively, it is common to see clinical development executives across the industry turn a blind eye to what really happens at the operational level. Executives announce an impassioned commitment to a particular process improvement initiative, and tiptoe out of the room—leaving the implementation to middle management. In many companies, without the executive watching your back, there is little incentive for middle managers to execute on the vision. Is this disconnect a culture problem or a management problem?

 

It Is You, Babe

If individual study teams, or even entire therapeutic areas, don’t follow company-wide SOPs (but instead make up their own regulatory-compliant “standards”), is that culture or the acts of individual managers? (It may be a justifiable action on the manager’s part, but that’s logic, not culture, at the source.)

If we put training for the new CTMS tool in an e-learning environment (knowing that most monitors won’t really pay attention and will only click through it to get certified), can we blame our culture for being anti-training? It’s the individual who chose not to pay attention. If we rely on individuals’ cooperation in using new tools appropriately, and people fail to do so, isn’t that a series of individual decisions? If I fail to fill out all the fields in a template-based site visit report in my CTMS, isn’t that my choice? The culture didn’t make me do it; I chose not to do it.

The damaging side-effects of the Culture Defense are legion: it enables us to drag our feet when it comes to changing the way we are used to working; it gives us permission to abdicate responsibility without penalty; it enables us to stand in the way of progress with impunity for whatever our personal motivation may be (e.g., we’re overworked, we’re jealous, we want our pet project to get all the attention, we’re afraid of learning too many new process details).

Psychologists will tell us that the most powerful realization victims of damaging habits can have is that they have a choice to change. The Culture Defense is designed to prevent choice, to prevent individual responsibility, even to preclude individual initiative. The Culture Defense is defeated by individuals who choose not to go along with the easy path, to see the executive direction as good for themselves as well as the company, to embrace change as the inevitable condition of modern business, to risk getting information that may reveal true operating conditions quicker because it is better to do so, and to risk measuring because objective data about how we work can make us better workers.

We as individual pharmaceutical company staff, middle managers, and executives can choose to act in a manner that enables operational improvement to flourish. We can face down the Culture Defense so that our process redesigns are easily learned and pragmatic; so that our CTMS systems actually produce accurate, actionable data on clinical trial program performance; so that our CRO vendors are well managed; so that our technology investments are worth the effort to implement them; and so that our diverse and broadly skilled staff can be focused on productive work with urgency.

 

Walt Kelly, in his famous cartoon strip Pogo, memorably exclaimed, “We have met the enemy and he is us.” Culture isn’t the enemy, we are. Facing up to this fundamental truth will begin to enable operational innovation to meet our expectations.

One of my friends in the biotech industry explained the business with this metaphor: working in biotech was like running full speed at a brick wall, and at the last possible second, the brick wall would disappear, only to be replaced by another brick wall farther ahead. Those brick walls, of course, represented critical milestones, such as another round of venture funding, a research result, a regulatory filing, and so on. It was the idea of running full speed that stayed with me. While common enough in small entrepreneurial companies, that sense of speed, focus and anxiety is rarely found in pharma, despite lip service to the contrary. Where is the sense of urgency in clinical development?

This is not to say that we do not all work hard. It is not to say we don’t care about the progress of our work. But it is to say that at most pharmaceutical companies, day to day, we lack the energy, direction and discipline to conduct our operations urgently. And there are so many reasons to do so! Deadlines, stock options, competition, everyday failures, demanding bosses – not to mention the patients with few, unsatisfactory options waiting for our new therapies.

Some of us (people and companies) certainly may start with enthusiasm. But particularly at the clinical stage, so many factors build up to weigh us down: the myriad inherent delays, the disappointing scientific results, the bureaucracy of corporations and regulations, the unavoidable time intervals of research itself. This is all true, but that’s what we are here for – “that’s why they call it work.”

Most companies have institutionalized processes for complacency rather than for urgency. Many of these have become standard behavior simply because they are so familiar:

  • Slow contracting with CROs
  • Slow payments to vendors and investigators
  • Slow IT projects that are completed years after originally estimated
  • Slow adoption of already-approved process changes
  • Slow responses to poor performance metrics
  • Slow reporting of information requested by operational staff from report programmers
  • Slow protocol development
  • Slow document review and approval
  • Slow study start up.

How many of these do you take for granted, and assume they are inevitable? But they are not inevitable; they are all human-driven! These are not immutable laws of nature; these activities are slow because we allow them to be! There is nothing standing in the way of speed except the lack of will, the lack of urgency.

Another anecdote: during the beginning of one of my first consulting assignments, I mentioned to my client (a junior vice president) that my invoice hadn’t been paid. He stood up, told me to wait there, and left his office for about 20 minutes. He came back with a paper check and handed it to me, apologizing. Ok, I was spoiled for life, but the point is, of course it’s possible to get a check cut, a report run, a contract signed, a meeting scheduled! It just takes a person to do it.

Not all delays – maybe not even most – are caused by perverse obstinacy. Think of the many things that fill our days instead of urgent work – emails, back-to-back and triple-scheduled meetings, teleconferences where you can’t hear what most of the people are saying. It’s all too easy for our days to slip away. What most of us are not doing is comparing our tasks, our to-do lists, our schedules, to the most important work list of all: what are the goals of my organization, my department, my project? How is what I am doing right now serving those goals? What does deciding this issue, or reading this email, have to do with moving closer to these goals?

Changing an environment from complacency to urgency requires some bravery and lots of leadership. Let’s look at some examples:

  • You’ve been in a team meeting all morning, getting close to the end of a long project which is supposed to develop a new set of evaluation criteria for your CROs. The leader asks if all are in agreement, and one key member says, “maybe, but I have to check with my boss. We’ll get back to you.”
  • You’re working with a statistician on completing the FSR. It’s not due until next month, but you’re very nearly done and it would be advantageous to get it submitted early. You call her up for the third time that day, and find out she’s gone home, and will be on vacation for two weeks – something she neglected to tell you about.
  • You got approval to add someone to your staff at the beginning of the year but HR still hasn’t sent you qualified resumes. When you pick someone to interview, it takes weeks to schedule her (or she has already found another job). When you try to take matters into your own hands, you are scolded for not following procedures.
  • You’ve finally scheduled a teleconference with a key opinion leader who is very hard to reach. You need the data manager in on the call but he is in another building on campus and says it’s too far to walk. You could tie him into the telecon, but he points out (correctly) that his accent is too thick to be well understood over the phone.
  • Marketing has been warning for years that you need real-world patient experience data to be competitive with your new allergy medication. But despite what your competitors are accomplishing, regulatory is still skeptical about approval. Instead of engaging with data management on the issue, they keep asking to see one more demo from one more vendor.

I am sure you can provide many examples from your own organization. What’s missing in each of these situations is someone to speak up – not to argue the issue but to remind all involved that we are holding up the improvements, the decision, the work. And that our work is urgent: we needed to hire that new person yesterday, we needed that new software yesterday, we needed that data yesterday, we needed those sites ready for FPI yesterday. And once having spoken up, we need to pursue the resolution to a quick closure, using whatever channels of authority are necessary. Equally essential is the commitment and vocal backing of executive leadership to make clear that urgency is an organizational value and priority.

To a healthcare team in your local Emergency Department, questions of priority, focus and speed are regularly and clearly answered. They know how to triage, how to follow emergency care protocols, how to choose and listen and analyze and solve with calm, professional urgency. We all need this essence – to triage our work lives and cut through the low priorities. And we need to encourage our colleagues to do the same, so we can bring our collective focus and precious energy to the meaningful work our companies and organizations are doing. It’s why we chose this profession; let’s do it with urgency.

“A particularly sad consequence of operational mediocrity is its impact on innovation.”

What happened to “Operational Excellence”? It is a beautiful phrase and a worthy goal. But as biopharmas and CROs start to dismantle or de-fund their Operational Excellence departments, we should ask what is happening. Do we no longer desire excellence? Do we think we have achieved it?

Operational Excellence arrived as a melodious piece of jargon because of disillusionment with what came before it: TQM first, then Process Improvement, then various branded methodologies (hungry-eight-omega, you know who you are). The “excellence” efforts have suffered fates similar to those of the earlier incarnations – underfunding, insincere management commitments, skepticism, fatigue and fundamental misunderstandings about what process improvement can and should be. Changing the branding does not change the results because of these key flaws, and they all contribute to a negative feedback loop. Missed expectations lead to skepticism, poor techniques lead to change fatigue, underfunding prevents sustained effort, and insincere commitments make re-prioritizing all too easy.

Improving clinical development’s methods is still very much needed. The fundamental inefficiency of biopharma clinical development is driven by many external factors, true, but we don’t do well with the hand we are dealt. And we’ve seen that simply outsourcing the problem (by far the most common solution today) has only created variable-cost inefficiency instead of fixed-cost inefficiency.

The irrelevance of outsourcing to improving efficiency is another column in itself. Sponsors like CROs to use methods they recognize, no matter how suboptimal, and CROs know they will be paid regardless, so the system has no meaningful incentives to efficiency besides competing billing rate charts. For all the many failings of biopharma outsourcing procurement departments, their failure to make an impact on overall industry methods may be the most damning.

Process improvement is ripe for action in all aspects of clinical development: protocol design, subject enrollment, data management, study team conduct, trial operations oversight, safety surveillance, use of information technology, investigative site communication and performance, monitoring and more. Your company probably has had multiple initiatives in most of these areas already, but meaningful results are rare and usually fleeting. We live in operational mediocrity instead of operational excellence. Nonetheless, we can no more give up on process change because it fails often than we can give up on early stage drug research because it fails often. Improving processes is still worthwhile; indeed it is an unavoidable imperative.

A particularly sad consequence of operational mediocrity is its impact on innovation. If we look at the current appealing innovations in clinical development – things like risk-based monitoring, fully electronic Trial Master Files, exploiting mHealth technologies, next-generation EDC, professionalized CRO oversight, and so on – each involves significant workflow and responsibility changes that must be as innovative as the technology used. The industry’s long experience with trying to exploit EDC and eCOA technologies has taught us this: underlying every innovation is a change in the way we work. Otherwise there is no innovation. And to make that change, organizations and internal thought leaders must understand and respect the nitty-gritty process changes which need to be defined, agreed to, tested and trained.

How do we steer back towards something approximating excellence? I have seen considerable success in what I call a “pragmatic” approach – one that takes on change step by step. It is grounded in several essential elements:

  • Committed and visible executive management
  • Traceability to key enterprise goals
  • Breaking the task into manageable, iterative pieces which, once achieved, serve as positive examples that break the skepticism cycle
  • Following those pieces immediately with additional improvements, to maintain momentum and convince staff it is “real this time.”

One way of thinking of this is that it is akin to “evidence-based” medicine: as used here, it is an evidence-based method of process improvement. EBM is a useful distinction from JBM (jargon-based methods), which all too frequently is used instead. It can be generalized that jargon is the refuge of those with little else to offer.

The building blocks of pragmatic process improvement will certainly sound familiar (identifying key business drivers, interviewing stakeholders, designing processes in workshop settings, documenting and implementing the changes and monitoring first use). This is like saying that basketball is dribbling down the court and putting the ball in that hoop up there. The hard part is overcoming all the typical obstacles that can so easily undermine improvement projects, some of which we have alluded to.

Let’s take the ubiquitous “workshop” as an example. Everybody in pharma has been to many workshops. What are the characteristics of those you remember as being productive? The workshop needs to have a crystal-clear purpose achievable in the time allotted. It needs a domain-knowledgeable facilitator. It requires some organizing mechanical technique to make the discussion and results tangible. Most important is the selection of the participants: 18 people chosen for their political affiliation does not a workshop make. That is a better definition of a circus. Instead, a small group of stakeholders who can truly devote the necessary time to the task is essential. It all sounds familiar, but the subtlety of applying pragmatism to each step is the heart of the matter.

Underlying the success of pragmatic process improvement is the correct governance – who is in charge, who funds, who decides, who staffs, who is accountable? The answer is always a little different from company to company. Should the people who do the work being improved be responsible for improving it? (Seems logical and essential to me.) Can process improvement cost less by creating a central dedicated department (which risks separating domain knowledge from the process knowledge)? Should it be outsourced like everything else? Should it be lumped in with the IT, HR or training departments? Every company will try it differently, but tying performance accountability to the management of the process in question is the most powerful solution.

 

Change fatigue, change skepticism, wasteful projects and unmet expectations are all real challenges to improving the way we work. They all can be overcome by a pragmatic approach to process improvement that is properly governed, backed by visible management commitment, taken in manageable steps that demonstrate success, and grounded permanently in our work environment. This steers us back toward excellence, which is the only direction worth traveling.