
Judging from the inquiries we have received in the past year, biopharma companies both experienced and naïve are looking beyond traditional clinical data management systems (CDMS), or even CDMS plus electronic data capture (EDC). They are looking to something new, which may be a new dawn over the mountaintops of data processing in clinical trials. Or, it may look like Wyoming after the strip miners have left.

I am talking about the much misunderstood phrase, “clinical data warehousing” (CDW). And like any other clinical IT initiative, CDW has been hijacked by technologists, and/or undermined by a lack of follow-through from the sincere business-side visionaries who leave the job of implementing innovation to others.

To briefly set the scene, we know that CDMS have been used to store clinical trial data for several decades now. Although this was the first area in clinical development to be widely computerized, many other uses of computers have since developed, and thus we have a plethora of individual, or “silo’ed”, applications: trials management systems, EDC, electronic patient diaries, document management systems, and more. This multiplex of applications has also been a common development in other industries, and even in other sectors of biopharma companies, and one solution in recent years has been to try to pull these threads together by building a common place to store the data these applications generate – a “warehouse”.

It’s not the warehouse, it’s what you get out of it
Casting this change in terms of a “warehouse” places the technical emphasis on data storage, which, while technically important, is totally subservient to what you need the data for – to how you are going to use it. Yet most people start by designing the shelf space, forgetting about the aisles and the doors needed to get the data out, and more importantly, where the data is going to be sent and why. This is not to say that the warehouse architecture question is trivial, but rather that it is premature until a dozen other questions have been answered by the business first.

Getting the data out is often called “data mining”, which mixes the metaphor immediately, and gives me permission to mix the metaphors for the rest of this column! Tools (or building blocks) for data warehousing and mining have been around for years, and successful data mining in healthcare has been used for a decade in health claims analysis and even pharma marketing. The key point is that mining data does not have to happen in a warehouse; indeed successful (i.e., meeting a business need) mining can be done on the databases you have right now, even those scattered about the enterprise.
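To make this concrete, here is a minimal sketch of “mining without a warehouse”: querying two scattered databases directly and joining across them on demand. The file names, tables and columns are all invented for illustration.

```python
# A hypothetical sketch: join data across two silo'ed SQLite databases
# without first consolidating them into a warehouse.
import sqlite3

conn = sqlite3.connect("cdms.db")                   # assumed CDMS extract
conn.execute("ATTACH DATABASE 'ctms.db' AS ctms")   # assumed trials-management extract

# Rank sites by open data queries, pulling enrollment from the CTMS silo
# and query status from the CDMS silo in a single cross-database join.
rows = conn.execute("""
    SELECT s.site_id,
           s.enrolled_subjects,
           COUNT(q.query_id) AS open_queries
    FROM ctms.sites AS s
    LEFT JOIN crf_queries AS q
           ON q.site_id = s.site_id AND q.status = 'OPEN'
    GROUP BY s.site_id, s.enrolled_subjects
    ORDER BY open_queries DESC
""").fetchall()

for site_id, enrolled, open_queries in rows:
    print(f"site {site_id}: {enrolled} enrolled, {open_queries} open queries")
```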

You have to ask why
So the first question we ask our clients inquiring about a CDW is “why?”. It should be the first question asked of every new process or technology change, and yet at least half the time it is never asked: instead we start from a new technology or process fad, and look around for a reason for it much later. Even in this column, instead of talking about all the possibilities of CDW (which we will later on), I want to talk about why one would do warehousing in clinical development first.

Exploring this question with clients has often revealed very limited perspectives on what the felt need really is:
“I wish I had better reports from my clinical data.”
“I wish SAS export was easier, or earlier, or more direct, in the trial flow.”
“I wish all my documents were in one place.”
“Everybody else is doing it so I think maybe we should too.”
In other words, people are very much in need of better reporting, which may mean some data mining, but is unlikely to mean something requiring the sophistication of a data warehousing effort.

As I write this column, the scandal surrounding the FBI’s huge new software tool that doesn’t work is making the news. The following quote from an FBI spokesman sums up the data warehousing dilemma beautifully:
“The Investigative Data Warehouse, while perhaps a useful tool, does not manage case workflow and does not substitute for an effective case management system. Consequently, the FBI continues to lack critical tools necessary to maximize the performance of both its criminal investigative and national security missions…”
You need to change only a couple of words and this could be some pharma’s sad CDW story three years from now unless it’s done right and done well.

Dawn on the Mountaintop
So now let’s look at why a clinical data warehouse may be a terrific idea for biopharmas if done correctly. We have proposed for many years that acquiring applications in individual actions, without regard to each other, may maximize the value to a very narrow function but is costly to the enterprise and highly inefficient. In addition, the dominance of the CDMS orientation has a) held back EDC adoption for years, b) distracted clinical development personnel from the importance of trial management, as opposed to only data management, and c) kept an entire profession (data managers) from seeing above the trees.

For any biopharma examining how they do data management in 2005 (and many companies are), it is very appropriate to be considering an “EDC + repository” strategy rather than a traditional CDMS. The extensive traditional CDMS functions have little or no usefulness when an effective EDC “front end” is applied (assuming the EDC tool is working well). But how should the “back end” repository be architected, what should we expect from it, and what else can it do?

Some of the repository architecture guidelines come from work being done by CDISC, and are also influenced by regulatory compliance issues, the guidelines for which are also in flux. But staying above the “data model” level for the moment, what some kind of repository does is begin to enable the “write once, read many” paradigm that technologists have preached (and some industries have come close to) for many years. In clinical research, this means significant efficiencies on both the trial management side (how many times do you enter the investigator’s address today, and how do you know it is correct?) and the data integration side (getting central lab data, ePRO data, drug safety and CRF data into one location, or at least reportable together, so that an integrated view is available for the human mind to collate). Once you start thinking this way, the best benefit of CDW is to break down the acronym silos of CDMS, CTMS, EDMS, EDC, EPD, IVRS, AES.
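As a minimal sketch of what “write once, read many” could mean at the data level, consider assembling one subject-level view from separate silo extracts. The file and column names below are assumptions for illustration; a real repository would key on standard identifiers (a CDISC-style USUBJID, for example) and far richer structures.

```python
# A hypothetical sketch: build one integrated, subject-level view from
# silo extracts (central lab, ePRO, CRF), each keyed on a subject ID.
import pandas as pd

lab  = pd.read_csv("central_lab.csv")   # assumed columns: USUBJID, LBTEST, LBORRES
epro = pd.read_csv("epro_diary.csv")    # assumed columns: USUBJID, diary_score
crf  = pd.read_csv("crf_extract.csv")   # assumed columns: USUBJID, visit, ae_count

# Each source is written once; every downstream report reads the same
# integrated view instead of re-collecting or re-keying the data.
integrated = (
    crf.merge(lab, on="USUBJID", how="left")
       .merge(epro, on="USUBJID", how="left")
)
print(integrated.head())
```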

The Path to the View
So if we want to get to the mountaintop, which path should we take? Some CDMS vendors will say they can offer it all to you, so “stay within our world”. Other choices may be more productive sooner, but are by definition innovative and therefore riskier – “risky” not usually being an appealing word in pharma operations. As we used to preach about EDC for years, however, if you wait until it isn’t risky any more, you will be far behind your rivals. We see this now in EDC, with earlier adopters flying ahead of those who waited (although some early adopters are no farther along than they were in the mid-90s). Finding the sweet spot between “bleeding edge” and “failing follower” is always the challenge.

One thing is clear: the path begins with understanding why, before any vendor walks in the door, before the project itself gets the wrong name. Maybe what you really need is simply better reporting from the databases you already have. Maybe you need a more innovative approach to analysis. And just maybe, you need (and almost certainly can benefit from) a new approach to clinical data processing.

For clinical operations folks, there are two important implications to prepare for:

– Changes like this will be a new distraction to you and your colleagues.

– You need to be proactive and get involved in these decisions: don’t let Informatics or CDM do this in a vacuum, which continues to happen way too often.
A new day will dawn when data processing is re-invented and the silos torn down. Our approach so far seems to be to build more buildings on the farm instead, with only a steam shovel available to tear the sides off when we need to get something out. We need to avoid strip mining the future by first considering the reasons behind business change.

In keeping with this month’s issue on Best Practices in clinical trial conduct, the editors asked me to reflect on Best Practices in clinical development process, as seen in real-life situations. Being somewhat of a non-believer in Best Practices (who’s to say any practice is “best”? and why is someone else’s “best” necessarily best for you?), I did conclude that I could cite a number of “pretty good” process examples which are worthy of emulation by other companies. All of the following are actual examples; the company identities have been omitted to protect their competitive advantage. If we were giving Best Process Practices Awards this year, here are the winners.

Merging Boldly

Mergers are rampant in our industry, amongst companies large and small, and as far as most investors can tell, they run smoothly, even if they don’t produce dramatic improvement in the flow of new drugs. But as anyone who has been inside a merger knows, the operational impact can be devastating, and at the very least, highly disruptive. Merger failures include poor understanding of where the strengths lie in the new organization, missing the opportunity to make real operational improvements, trying to keep people happy without regard to business effectiveness, keeping unproductive functions alive, and so on.

One company took a different approach. They decided to act swiftly to analyze strengths and weaknesses in the merged operations, and where necessary, create a new organization chart with re-articulated roles and responsibilities before any final management changes were announced. The analysis was rapid but intense, and informed with extra-company experience. Major re-assignments were made, even of line managers, and in some cases new units were formed for tasks the new company would be much in need of. Instead of the usual post-merger behavior, where a company just looks at the management players and leaves the rest up to them, often resulting in years of slow, distracting upheaval, this company sought to get as much change as possible accomplished quickly. And not change for change’s sake, but change that was incisive and potentially transforming. An excellent approach to exploit the operational opportunity mergers present.

Investing in Change Governance

We have written before how critical it is to govern change effectively – that there must be empowered leadership, with money and executive backing, and an infrastructure to handle the myriad of tasks necessary to make process improvement successful. Too often in clinical development, we retreat to leaderless teams or matrixed relationships; major change is cast as an “initiative,” which leaves individuals and managers free to choose whether they will commit. And without resources, both human and monetary, even strong visionaries cannot succeed.

A large pharma introducing a major change in clinical development made a significant investment in change governance this year. They wanted to move quickly but knew this change was mission-critical and high stakes. They created a new department, which would operate globally to manage this multi-year process transition. It is a permanent unit, with permanent members formally transferred in. Former line operations staff are now fulltime change agents. Top executives are fully informed and committed to its success. The department is fully budgeted and is resourcing ahead of the curve of need, instead of behind – in itself a major accomplishment at most companies. They have sought to staff all the roles necessary for change management, from study team mentors to investigative site trainers, and they expect to achieve successful, compliant process change in much less time than most of their big pharma peers. The investment in governance is paying off.

Learning from Mistakes, Sincerely and Concretely

Many clinical development groups routinely talk about “lessons learned”; they properly seek to understand what went right or wrong with a project or particular trial, so they can avoid repeating past mistakes and remember to do again what they did right. Unfortunately, such efforts are usually little more than one long meeting, a flipchart pad full of observations, and a bunch of action items that individuals are supposed to remember for next time. Too often, the “learning” dies on the spot. There is no formal means of documenting the discussion, no one keeping track of actions taken, and no protection from that learning just walking out the door in the head of a departing employee. Worse, the learning is often incomplete: messengers of bad news are afraid of being shot, or the net cast for information may not be wide enough. Equally common is that the lessons are articulated too vaguely, in words and opinions too familiar to be truly heard or acted upon.

One biopharma with many lessons from a just-concluded pivotal trial decided that learning as usual was not enough. Instead they used a formal exercise designed to comprehensively analyze and document what they had learned about this trial’s conduct. They looked for “root causes” of problems they encountered, which enabled them to identify broad but meaningful themes whose improvement will have wide impact on future trial conduct. These root causes are actionable, not theoretical, and provide a foundation for a diverse set of clinical operations process improvement steps.

Measuring the Vital Few

We have written many times that measuring our performance can be highly useful to companies in understanding where and how to devote process change energies. As we have described, those companies who get “metrics religion” often go into overdrive, and end up with a vast array of data, and pages of reports, which managers ignore and staff resent. Another flaw in most companies’ metrics programs is a focus only on time intervals (last patient visit to database lock, for example), which are determined by so many contributing variables that correct conclusions are impossible to derive.

An international biopharma has demonstrated that in one area of their operation, clinical data management, they can apply metrics for resource planning, performance monitoring and technology assessment without falling into either of these traps. By focusing on the “vital few” metrics which are both reasonable to collect and operationally significant to know, they have been able to minimize the practical burden of performance data collection while increasing the usefulness of the results. As importantly, they are focusing on the units of work which best describe operations, rather than abstract time intervals. Using units of work translates much more accurately into management decision-making.
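The contrast is easy to see in miniature. In the sketch below (invented numbers throughout), the interval metric is confounded by everything that differs between trials, while the unit-of-work metric converts directly into a staffing decision.

```python
# A hypothetical sketch: an interval metric versus a unit-of-work metric.
trials = [
    # (trial, days from last patient visit to lock, CRF pages processed, DM FTEs)
    ("A", 42, 18000, 4),
    ("B", 35, 6000, 2),
]

for name, lock_days, pages, ftes in trials:
    # Interval metric: confounded by trial size, tools, site quality, etc.
    print(f"trial {name}: {lock_days} days to database lock")
    # Unit-of-work metric: pages per FTE-day, a direct input to resource planning.
    print(f"trial {name}: {pages / (ftes * lock_days):.0f} pages per FTE-day")
```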

Knowing It’s Never Too Soon for Compliance

Emerging biopharmas who are just beginning to see their first drug candidate approach Phase III usually continue the pattern of operations which got them there: use a lot of CROs, focus on the science, and ignore clinical development or postmarketing infrastructure. They follow the same philosophy they followed through discovery: I’ll buy that next piece of equipment or hire that next person the day I need it and not a moment sooner. That strategy can work in discovery, but having the organizational infrastructure for a successful submission and postapproval support is not something one can buy in a moment – it takes time, anticipation, skill and practice.

Just such an emerging biopharma has demonstrated visionary practice in its small clinical group as the company gets close to submission. They have instituted a broad and deep review of their clinical SOPs and fleshed them out in the many areas where they were thin. These SOPs also serve to ensure that their clinical program is robust right now, as they expand the scope of their candidate’s indications, instead of continuing to rely on outsiders’ standards. And they have also recognized the importance of creating a legitimate pharmacovigilance function, with the appropriate tools, so that they are prepared to handle internally the safety monitoring needs an approved drug will require. In both cases – SOPs and pharmacovigilance – the company was able to internalize a compliant infrastructure at modest cost with many long-term benefits.

Using the Right Reasons for Vendor Selection

My last example for the year’s Pretty Good Practices is a company that has gone about technology vendor selection more efficiently than most companies do, because they focused on the right reasons for choice. Instead of spending months of intense effort to develop, define and document the functional requirements for a clinical IT software application (i.e., how the commands should appear on tab 12), they recognized that this would be reinventing the wheel. Clinical IT applications are so well defined in purpose and use that developing functional requirements for them de novo is akin to specifying how a word processor should work.

This company instead focused on business requirements: that is, what did the company need from the software and its vendor, rather than how the software should look or act. They focused on the imperatives they were facing, the skills of their staff, and the financial and time constraints of their particular development program at this point in time and the foreseeable future. These characteristics are not the same among all biopharmas; indeed they can vary greatly. And having identified what was important to them, they found specific, meaningful differences amongst the products and vendors that a functional review would never have revealed.

Four Common Qualities

There are common elements in these Pretty Good Practices: boldness, speed, simplicity, honesty. These are qualities sorely lacking in most pharmaceutical development operations. When you see them, good things are likely to follow. Whether or not the specifics of these stories apply to your company at this moment in time, these qualities will always serve you well.

For many years now, it has been fashionable to work on corporate processes by encouraging a “customer-vendor” relationship between departments. I have probably been as guilty of this as other consultants have been. In listening to the problems facing clinical development departments recently, however, I have come to realize that this concept is either mis-applied, mis-understood, or simply a mistake. Unless great discipline is exerted on all sides, “customers” are poorly served and “vendors” are exploited.

You are My Customer, I Suppose
The original concept of treating each other like we are each other’s customers is drawn from many good intentions, and from the convergence of many process improvement philosophies and techniques. At my company, we use a “voice of the customer” technique, for instance, which emphasizes this style of thinking. TQM (Total Quality Management) teachings of various flavors use this terminology. Process Mapping – the visualization of complex workflows – is sometimes presented in this fashion: Who is your customer? Whose customer are you?

In many ways, talking about each other as customers is not unlike the Golden Rule: treat your organizational colleagues the way you would want to be treated. Using terms like “customer,” “vendor,” and “service” puts a warmer face to “inputs/outputs,” or talking about “deliverables”.

The customer orientation as a corporate process concept was long overdue when it first appeared. Most corporations – pharmaceutical companies among them – desperately needed to overcome the detrimental effects of “silos” (departments with their blinders on), eliminate the pointing fingers of blame, and get people to see the interdependencies they share in a complex organization.

Examples of such poor relationships at pharmaceutical companies were (and still are) legion: between clinical operations and clinical data management; biostatistics and medical affairs; regulatory affairs and product marketing; investigative sites and monitors (CRAs); chemists and pharmacologists; informatics and everybody; and so on.

But there are problems with this kind of language and way of teaching process improvement. For instance, if everyone is everybody else’s customer, is anybody “making” anything, or are we all just “buyers”, expecting to be served? In fact, not everyone is a customer of someone else. Sometimes, for instance, people are other people’s bosses. Sure, you can treat your boss as a customer, but it may be more important to follow her leadership rather than think about delivering her a service. Sometimes our colleagues are simply sources of money, or information – neither customer nor vendor. And sometimes, we have nothing to do with each other, but if the mantra of customer-vendor is too ubiquitous, the corporation may be insisting that these folks develop a time-wasting connection in the blind pursuit of popular jargon.

Happy Customers, Missing the Boat
More importantly, there are two fatal flaws in the everyone-is-a-customer approach. First, even given the legitimacy of the concept, the true “customer” of our work may in fact be at a much higher level than who we are trained to serve. For instance, the customer who is worthy of that name may not be the next person in the handoff chain, but rather the Clinical Development function as a whole, or your biopharmaceutical corporation, or the regulatory agencies, or even the patients you hope to help. Unfortunately, the customer focus approach almost exclusively encourages a pragmatic, heads-down, make-the-next-guy-happy mentality. This may not actually help your organization at all, if the focus should be at a much higher plane.

Secondly, there is the mis-use of the concept of “service”. If we have customers, it is assumed that we are supposed to “serve” them. This is where the customer concept can get really deadly. When we start to think about “providing a service,” and then turn this into “being of service” to our customers, if we are not careful we can end up being “subservient”. These words all sound pretty similar, but subservience means being a servant; it implies that the customer is better than we are, or more importantly, subservience means the customer’s needs are more urgent than ours. It doesn’t have to be this way, but when we overlay the customer focus superficially and without diligent rigor on age-old interdepartmental rivalries, that’s what we end up with.

Let’s look at some clinical research examples:

Clinical Data Management takes the customer focus philosophy to heart and concentrates on serving the clinical study manager. In doing so, they are forced to sacrifice their carefully built-up standards, structures and efficiency for the sake of serving the study team with flexibility.

Monitors focus on sites (their customers) to such an extent that they allow consistently poor site performance in enrollment and data quality for the sake of “satisfying” that Key Opinion Leader.

Clinical operations is intimidated by their site ‘customers’ into paying them more to use EDC (electronic data capture) — a tool which will in fact lower site workload if everyone sticks it out to the end (and uses the right tool to begin with, with the correct processes).

Biostatisticians give their physician customers in Medical Affairs everything they ask for, despite what this does to the timeline, or despite the fact that the data may not yet be properly cleaned or of sufficient volume, or despite the fact that the questions the physicians are chasing are taking the company’s eye off the ball.

Clinical operations tries “serving” CDM, and in trying to meet CDM’s stringent view of data cleaning, ends up with an inefficient monitoring schedule and that classically unnecessary task: 100% SDV (source data verification).

In fact, having customers and serving them are not inextricably linked. It is possible to be customer-focused and yet not be subservient to your customers. “The customer is always right” does not mean that the vendor is always wrong.

Customers as Equals
The other way of looking at this, at preserving the idea of “customer” without the idea of “subservience,” is to see each other as equals, not servants one to another. After all, if everybody is somebody’s customer, we are equal. And equality brings several characteristics into play which are very healthy for a corporation: mutual respect, boundaries, expectations with limits, and a common multilateral goal, not a linear production line goal. It allows the Clinical Development department to focus above the level of the next handoff, and to focus instead on the plan, the submission or the patient. And it ensures that each profession can determine which of its values are corporate-critical, and hold onto them, while, yes, “serving” the needs of other professions within clinical research.

How do we get to be equals? Good will and strong leadership are certainly essential, but human behavior and entrenched practices often require a little extra help to change. Fortunately there is a well-established concept in this regard: the Service Level Agreement (SLA).

SLAs are an invention of our service economy, and while they have been used in many industries for many years, the concept is not widely applied yet in pharmaceutical companies, especially in Clinical Development. An SLA looks very much like a contract. It spells out responsibilities, timelines, expectations, deliverables, authority, change procedures and problem resolution. As such, it documents what one may have documented in a Process Mapping exercise or similar, but does so in the language that societies have developed over centuries to bind people to a common purpose.
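For illustration only, here is a minimal sketch of the fields such an agreement might capture, written as a structured record. The parties, deliverables and numbers are invented; a real SLA is a negotiated document, not code, but the structure shows how concrete the expectations must become.

```python
# A hypothetical sketch of the contents of an interdepartmental SLA.
from dataclasses import dataclass

@dataclass
class ServiceLevelAgreement:
    parties: tuple[str, str]
    responsibilities: dict[str, list[str]]
    deliverables: list[str]
    timeline_days: dict[str, int]    # expectations, with limits
    escalation_path: list[str]       # problem resolution
    change_procedure: str

sla = ServiceLevelAgreement(
    parties=("Clinical Operations", "Clinical Data Management"),
    responsibilities={
        "Clinical Operations": ["deliver monitored CRFs on schedule"],
        "Clinical Data Management": ["turn around data queries"],
    },
    deliverables=["locked database"],
    timeline_days={"query_turnaround": 5, "lock_after_LPLV": 30},
    escalation_path=["team leads", "department heads", "program sponsor"],
    change_procedure="written amendment agreed by both parties",
)
print(sla.parties, "->", sla.deliverables)
```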

Initially, people often react badly to this concept. Why do we need a contract when we have worked together for so long? Why don’t you trust me/like me anymore? Spiteful behavior can ensue: “well, if you’re going to write that down, I’m going to write down this!” “If you want this done on time, how about you doing this for me on time!” And as petulant as these sound, this is exactly the point. The SLA becomes a tool to generate an open dialogue that, without it, simmers below the surface. The SLA’s formality uncovers flaws, resentments, or missed expectations that were never articulated otherwise.

In the end, by going through this process, you may not need the formal document – the process of trying to write one may be therapeutic enough. But because we are so increasingly interdependent across Clinical Development, and indeed across all pharmaceutical company departments, the documentation of how that interdependency will be managed to the corporate good, and to the successful treatment of patients, may be necessary, at least for a few years, until old habits die off, and mutual respect and understanding become second nature.

When your waiter tells you “I will be your server tonight,” he or she represents in fact a range of products and services, a range of outputs — the chef’s creativity, the cook’s execution, even the delivery truck that brought the raw materials. And you have your responsibilities as well – to arrive at the restaurant on time, to know what you like, to treat the staff well. The delicate flavors, the perfect ambience, a lovely wine are all the result of a combination of lots of people providing you a terrific meal, plus your ability to be ready to enjoy it. They are not your servants, and you are not burdens for them to bear. And as equals, server and served, customers all, we can go home well satisfied with a meal, and a job, well done.

As the old joke goes, the way to get to Carnegie Hall is practice, practice, practice. In the clinical research industry, as we constantly struggle to improve work processes widely known for their inefficiency, it is very fashionable to talk about “best practices”: if we can learn how the best pianists play their instrument, and copy them, we can be great pianists too. Entire businesses are built around selling best practices to biopharmaceutical researchers. I object. We are falling for best practices instead of learning more about our own.

The concept of best practices is based on the assumption that in any particular field, many organizations do the same tasks, and make similar mistakes, and can learn from these successes and mistakes with varying degrees of benefit. The assumption goes further, that one can derive the “best” practices by surveying how all organizations that do a similar process perform.

While learning from others is a great idea, too many clinical research executives have fallen in love with the best practices concept without examining it very carefully. There are several flaws in the best practice concept:

– that performance is a constant continuum, and better performance must mean best practice
– that what is best practice for the pianist is best practice for the violinist
– that we can recognize a best practice when we see one.

There are two further problems:

– a practice is only “best” if it fits with your business strategy, and not all biopharmas have the same operational or commercial strategies
– and most importantly, that being enraptured with best practice takes our eyes off what we should be looking at: how to improve our own practices.

Your Performance is My Rehearsal
Best practices are usually claimed on the basis of some achievement: a rapid database lock, a significant cost savings, a reduction in headcount, a speedy drug approval. Such achievements are rightly applauded, but their relevance to your company is nearly non-existent, unless you know how closely the circumstances of that company’s past performance match those of your upcoming project. And worse yet, a company’s “best” database lock may not even be better than yours, when the circumstances underlying the performance are examined.

The main problem with best practices as applied in biopharma is the huge variability amongst the organizational, resource and process parameters in each clinical research department. The pseudo-scientific assertions of best practices and benchmarks cannot hold up under the scrutiny we would apply to clinical trials results: are the comparators controlled, are we using common denominators, are we even using the same definitions? The answers are all “no”.

If a company reports that they can lock a database, using traditional paper processes, in a week, is that a best practice, or excellence in working overtime? If a company says they saved millions of dollars using a new technology, but did so only because they were using expensive contractors to do the work, and you don’t use contractors, will you reap the same savings following this best practice? If a company claims best practice in regulatory approval strategy by achieving simultaneous multi-country registrations, is any of that relevant to your drug, in its particular therapeutic area, at the time it attempts market entry vis-à-vis its competitors? The variables are endless.

This is not to say we don’t have much to learn from others. The challenge is in obtaining reliable (truthful or accurate) information from others, and then knowing ourselves well enough to recognize if what we are hearing from others is applicable to our situation. Self-ignorance undermines any meaningful learning from best practices, assuming there are such things.

A Better Practice
I propose a different, more useful definition of best practice: “A best practice is a process which enables a group to meet its employer’s properly defined business objective.” In other words, a best practice is not a universal truth. What is best is what fits your business strategy, not someone else’s, and you can find best practices if you look within your own relatively homogeneous operational circumstances.

Knowing how long it took you to register a first-to-market oncology drug in three prime markets is very relevant the second time you seek to register a first-to-market oncology drug in three prime markets. If you beat your previous time, and controlled for any other variables, you have achieved a best practice: one to measure yourself against the third time you try it. Similarly, if you can review retrospectively and accurately the database lock times on a series of trials whose dozens of parameters (number of data fields and edit checks, process and tools used, hours worked, quality of sites and time to reconcile adverse events, etc., etc.) are nearly identical, you can establish your best practice, and know what target to aim for to beat it.
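A minimal sketch of that retrospective review, with invented trials and parameters: filter to the trials whose circumstances genuinely match, and only then compare lock times.

```python
# A hypothetical sketch: internal benchmarking against comparable trials only.
trials = [
    {"name": "T1", "fields": 12000, "edc": True,  "lock_days": 28},
    {"name": "T2", "fields": 12500, "edc": True,  "lock_days": 21},
    {"name": "T3", "fields": 40000, "edc": False, "lock_days": 60},  # not comparable
]

# Keep only trials of similar size using the same tooling as the reference.
comparable = [t for t in trials if abs(t["fields"] - 12000) <= 1000 and t["edc"]]

best = min(comparable, key=lambda t: t["lock_days"])
print(f"internal best practice: {best['name']} at {best['lock_days']} days to lock")
```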

An Example
Let’s take one simple example. Company A is examining how it organizes and uses its in-house monitoring staff. At an industry conference, Company B claims it has developed a best practice in organizing monitors, based on regionalization plus an expanded in-house documentation staff. Everyone on the conference panel applauds and says Company B did a great job. The idea that this is a best practice is endorsed. Consultant X then writes a White Paper based on Company B’s experience, and the solution is officially canonized. Company A, suitably impressed, moves quickly to adopt the same organizational structure and process.

What can go wrong? Lots of things. Company B could have more resources than Company A and therefore afford a more efficient geographic distribution of monitors. Company B could have already established better knowledge of high-performing investigators. Company B could have different personnel policies and pay scales that allow the efficient build-up of low-cost in-house workers. Company B’s regulatory SOPs may be more amenable to this particular division of labor. And so on, ad infinitum. This best practice is just a practice; what is “best” is in the heart and head of each of us.

The answer to this situation is for executives to stop coveting their competitors’ processes and work much harder on understanding their own. Every biopharma already has best practices (and worst practices), waiting to be uncovered, analyzed and learned from.

Truly understanding how your own company works is the first step to the Carnegie Hall of clinical development performance. Refining your own best practice — the way you play the piano, not how your colleague plays it — will determine whether your clinical research will deserve a standing ovation.

And on the seventh day, the Lord rested. If only we could have her schedule! For many of us in clinical research, we find ourselves working part of every day. Our regular work days are filled with meetings and teleconferences, our nights and weekends are filled with “real work”: reading, writing, planning, thinking. And all too little of the latter, because we are too busy following through on “action items” from all those meetings.

Meaningful process improvement in clinical research is hindered by a cultural cataclysm: a Julian calendar in a Digital age. We are still scheduling our time by the rhythms of the planets, while communicating in digital time. This cultural dissonance results in two fatal flaws: the unnecessary weekly meeting, and the under-use of enterprise information which could take its place.

The Myth of the Weekly Meeting
The reason why our work days are filled with meetings is the tyranny of arbitrary frequency. When we get our department or team together, what is the first thing we talk about? We decide to meet once a week! Why? What is the meaning of seven solar days to the needs of the work at hand? Maybe we should be meeting every 3 days. Maybe we only need to meet every eleven days. We never consider these possibilities; instead we book yet another weekly meeting into our PDAs. The result is that we are either meeting too often, or not enough — rarely “just right”.

Unnecessary meetings are perpetuated for several flawed reasons. One is the tyranny of the team, something I have written about previously. Indeed, it is often hard to tell which is worse — the teams, or the meetings they generate. We also suffer through “meetings of habit.” Think about the spectrum of your weekly meetings, and ask yourself: when did this meeting first begin? There may have been a good reason for it originally, but is there still? Or are you meeting out of habit, because it’s the way we have always done it?

When I was once running a large organization with a number of senior managers as direct reports to me, we of course started meeting once a week. After a while, the meetings were getting thin in content; mostly we talked about the latest gossip or personal news. I realized the management team was running well enough that we didn’t need to meet, and changed it to a “meeting on demand” schedule: any of the managers could call a management meeting when each other’s counsel was needed. As long as I did my job correctly, of staying in touch closely with each of them individually, this new system worked very effectively, and an hour of the work week had been liberated.

We also suffer from “meetings of inclusion”, not dissimilar from the ubiquitous team meeting. These are the meetings we have when we don’t want to leave anybody out, or hurt someone’s feelings. We want to keep up with corporate political correctness, or we’re trying to be inclusive of others. Inclusion is only worthwhile if it is sincere, and if so, it can be insightful and mind-bending. If we are including people for the wrong reasons, you can be sure they will feel it very quickly, resent the waste of their time, and thereby undercut our original cynical purpose.

The worst sin of course, and most common, is having meetings where people look at each other and have nothing to say or learn. Many observers have advised cogent fixes to this problem. In the words of one successful manager, “if people need an agenda for a meeting, they don’t belong there.” One of the famous and most effective ways to make meetings efficient is to hold them in rooms without chairs. It’s amazing how fast those meetings go.

The Frequency Flaw
But beyond making meetings more efficient, we have to question their frequency. Our work rhythm need not be dictated by the rotation of the earth and moon any more than by any other natural phenomenon. So why are we meeting every week?

What is a week? A biblical invention, perhaps. It is at best an arbitrary subdivision of the lunar cycle, adjusted to the frequency with which the sun rises and sets, the two cycles of which do not line up mathematically. And anyone familiar with the tortured history of the creation of the Julian calendar will remember that our months are even more arbitrary (indeed the calendar looks very much like the product of a committee meeting!).

Similarly, when we ask for reports, we ask for them monthly. Why monthly? Is that frequent enough? Is it too often? Who knows? We let the moon decide how frequently we will summarize and communicate information. What’s important is that meetings and reports (i.e., information) are tightly interrelated. If we had more timely information (reports), would we need the meetings?

Well, sure, you are saying, but weeks and months are what everyone is used to and it’s easier this way. Weekly meetings for instance, are the safest way to fight that fiercest of corporate battles — booking the conference room! But to accept these arbitrary schedules as inevitable is a cop-out.

From Daily Work to Weekends and Back Again
Before the rise of the corporation in the twentieth century as the dominant form of employment in industrialized societies, we all worked every day. We had to, to keep animals fed, wood chopped, water carried. But we had a lot fewer meetings! And each day had a rhythm which naturally included some fresh air, exercise, family and quiet. In the last century we became boxed into the structure of “the work week”, creating the phenomenon of “the weekend”. No longer did we have a fluid continuum of daily tasks with little discrimination; instead the lines were clearly drawn between work and leisure, and an artificiality was introduced.

With the advent of ubiquitous, intrusive and all too easily accessible communication technologies, our weekends have now all but disappeared. The globalization of clinical research, with its resultant round-the-clock phone calls and air travel demands, has further eaten into what’s left of “free” time. Such is the price of the Digital Age. But if our world is digital, why are we still meeting once a week?

If technology has ruined our free time, it is because we are keeping both behavioral archetypes in place: the Julian and the Digital (the weekly meeting and the cellphone). The bottom line is that I suspect most meetings do not have to happen weekly, and that most reports are needed more often than monthly.

In Business, Digital Wins
In clinical research, all of our operational processes are about generating information, some of it of a defensive nature (regulatory record-keeping), and some of it mission-critical to enterprise decision-making (keep developing the drug or kill it?). Most organizations have long recognized that information technology — listen to the semantics of that phrase! — can help “manage” this information. Unfortunately, as anyone who has tried to acquire or design a good CTMS knows, these software applications have focused much more on getting the data in than on getting useful information out. Once we get better at this, we can envision information when we need it, not prepared on a schedule determined by a cold white orbiting celestial body.

Perhaps technologies can be put into service to restore free time, and a near-agrarian rhythm to our lives, by enabling “just-in-time” meetings and “real-time” reports. Instead of double-booking unnecessary weekly meetings (whose agenda is often filled with speculation about information unavailable because we don’t have our monthly report yet!), we can start meeting only as often as needed. And the meetings we will have will be so much more informed, and therefore shorter, because operational data will be at our fingertips.
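In that spirit, a minimal sketch of a “just-in-time” meeting: rather than a standing weekly slot, convene only when live operational data crosses a threshold. The metric, counts and threshold below are invented for illustration.

```python
# A hypothetical sketch: call a meeting only when the data says one is needed.
def lagging_sites(counts: dict[str, int], target: int) -> list[str]:
    """Return the sites currently behind their enrollment target."""
    return [site for site, enrolled in counts.items() if enrolled < target]

def needs_meeting(counts: dict[str, int], target: int) -> bool:
    # Meet only when more than a third of sites are behind target.
    return len(lagging_sites(counts, target)) > len(counts) / 3

live_counts = {"site-001": 12, "site-002": 4, "site-003": 9}  # pulled on demand
if needs_meeting(live_counts, target=10):
    print("Convene enrollment review:", lagging_sites(live_counts, 10))
else:
    print("No meeting needed this cycle.")
```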

For the soundtrack of The Lion King, Elton John wrote:

* From the day we arrive on the planet
* And blinking step into the sun
* There’s more to see than can ever be seen
* More to do than can ever be done.
Let’s not waste that time in unnecessary meetings, scheduled by the arbitrary rhythm of our planet. There is so much to be done.