As the old joke goes, the way to get to Carnegie Hall is practice, practice, practice. In the clinical research industry, as we constantly struggle to improve work processes widely known for their inefficiency, it is very fashionable to talk about “best practices”: if we can learn how the best pianists play their instrument, and copy them, we can be great pianists too. Entire businesses are built around selling best practices to biopharmaceutical researchers. I object. We are falling for best practices instead of learning more about our own.
The concept of best practices is based on the assumption that, in any particular field, many organizations do the same tasks, achieve similar successes, make similar mistakes, and can learn from these successes and mistakes with varying degrees of benefit. The assumption goes further: that one can derive the “best” practices by surveying how all the organizations that do a similar process perform.
While learning from others is a great idea, too many clinical research executives have fallen in love with the best practices concept without examining it very carefully. There are several flaws in the best practice concept:
– that performance is a single continuum, and that better performance must mean best practice
– that what is best practice for the pianist is best practice for the violinist
– that we can recognize a best practice when we see one.
There are two further problems:
– that a practice is only “best” if it fits with your business strategy, and not all biopharmas have the same operational or commercial strategies
– and, most importantly, that being enraptured with best practice takes our eyes off what we should be looking at: how to improve our own practices.
Your Performance is My Rehearsal
Best practices are usually claimed on the basis of some achievement: a rapid database lock, a significant cost savings, a reduction in headcount, a speedy drug approval. Such achievements are rightly applauded, but their relevance to your company is nearly non-existent unless you know how closely the circumstances of that company’s past performance match those of your upcoming project. And worse yet, a company’s “best” database lock may not even be better than yours, when the circumstances underlying the performance are examined.
The main problem with best practices as applied in biopharma is the huge variability amongst the organizational, resource and process parameters in each clinical research department. The pseudo-scientific assertions of best practices and benchmarks cannot hold up under the scrutiny we would apply to clinical trials results: are the comparators controlled, are we using common denominators, are we even using the same definitions? The answers are all “no”.
If a company reports that they can lock a database, using traditional paper processes, in a week, is that a best practice, or excellence in working overtime? If a company says they saved millions of dollars using a new technology, but did so only because they were using expensive contractors to do the work, and you don’t use contractors, will you reap the same savings following this best practice? If a company claims best practice in regulatory approval strategy by achieving simultaneous multi-country registrations, is any of that relevant to your drug, in its particular therapeutic area, at the time it attempts market entry vis-à-vis its competitors? The variables are endless.
This is not to say we don’t have much to learn from others. The challenge is in obtaining reliable (truthful or accurate) information from others, and then knowing ourselves well enough to recognize if what we are hearing is applicable to our situation. Self-ignorance undermines any meaningful learning from best practices, assuming there are such things.
A Better Practice
I propose a different, more useful definition of best practice: “A best practice is a process which enables a group to meet its employer’s properly defined business objective.” In other words, a best practice is not a universal truth. What is best is what fits your business strategy, not someone else’s, and you can find best practices if you look within your own relatively homogeneous operational circumstances.
Knowing how long it took you to register a first-to-market oncology drug in three prime markets is very relevant the second time you seek to register a first-to-market oncology drug in three prime markets. If you beat your previous time, and controlled for any other variables, you have achieved a best practice: one to measure yourself against the third time you try it. Similarly, if you can review retrospectively and accurately the database lock times on a series of trials whose dozens of parameters (number of data fields and edit checks, process and tools used, hours worked, quality of sites and time to reconcile adverse events, etc.) are nearly identical, you can establish your best practice, and know what target to aim for to beat it.
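To make the point concrete, here is a minimal sketch, in Python, of what such an internal benchmark might look like. It groups a company’s own completed trials by a few of the parameters mentioned above and reports the best database lock time within each comparable group; the trial data and field names are hypothetical, and real comparability would control for many more variables.

```python
from collections import defaultdict

# Hypothetical records of your own completed trials; real comparability
# would control for many more parameters than these.
trials = [
    {"id": "A-101", "process": "paper", "data_fields": 1200, "lock_days": 28},
    {"id": "A-102", "process": "paper", "data_fields": 1250, "lock_days": 21},
    {"id": "B-201", "process": "EDC",   "data_fields": 4000, "lock_days": 12},
]

def comparability_key(trial, bucket=500):
    """Group trials whose key parameters are roughly similar."""
    return (trial["process"], trial["data_fields"] // bucket)

groups = defaultdict(list)
for trial in trials:
    groups[comparability_key(trial)].append(trial)

# Your internal "best practice" is the best performance among comparable trials.
for key, group in groups.items():
    best = min(group, key=lambda t: t["lock_days"])
    print(f"{key}: best database lock so far is {best['lock_days']} days ({best['id']})")
```

In this toy data set, the second paper trial beats the first, so 21 days becomes the target to beat the next time that kind of study is run; the point is that the benchmark comes from your own comparable work, not from someone else’s conference claim.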
An Example
Let’s take one simple example. Company A is examining how it organizes and uses its in-house monitoring staff. At an industry conference, Company B claims it has developed a best practice in organizing monitors, based on regionalization plus an expanded in-house documentation staff. Everyone on the conference panel applauds and says Company B did a great job. The idea that this is a best practice is endorsed. Consultant X then writes a White Paper based on Company B’s experience, and the solution is officially canonized. Company A, suitably impressed, moves quickly to adopt the same organizational structure and process.
What can go wrong? Lots of things. Company B could have more resources than Company A and therefore afford a more efficient geographic distribution of monitors. Company B could have already established better knowledge of high-performing investigators. Company B could have different personnel policies and pay scales that allow the efficient build-up of low-cost in-house workers. Company B’s regulatory SOPs may be more amenable to this particular division of labor. And so on, ad infinitum. This best practice is just a practice; what is “best” is in the heart and head of each of us.
The answer to this situation is for executives to stop coveting their competitors’ processes and work much harder on understanding their own. Every biopharma already has best practices (and worst practices), waiting to be uncovered, analyzed and learned from.
Truly understanding how your own company works is the first step to the Carnegie Hall of clinical development performance. Refining your own best practice (the way you play the piano, not how your colleague plays it) will determine whether your clinical research deserves a standing ovation.
And on the seventh day, the Lord rested. If only we could have her schedule! Many of us in clinical research find ourselves working part of every day. Our regular work days are filled with meetings and teleconferences; our nights and weekends are filled with “real work”: reading, writing, planning, thinking. And all too little of the latter, because we are too busy following through on “action items” from all those meetings.
Meaningful process improvement in clinical research is hindered by a cultural cataclysm: a Julian calendar in a Digital age. We are still scheduling our time by the rhythms of the planets, while communicating in digital time. This cultural dissonance results in two fatal flaws: the unnecessary weekly meeting, and the under-use of enterprise information which could take its place.
The Myth of the Weekly Meeting
The reason why our work days are filled with meetings is the tyranny of arbitrary frequency. When we get our department or team together, what is the first thing we talk about? We decide to meet once a week! Why? What is the meaning of seven solar days to the needs of the work at hand? Maybe we should be meeting every three days. Maybe we only need to meet every eleven days. We never consider these possibilities; instead we book yet another weekly meeting into our PDAs. The result is that we are either meeting too often, or not enough — rarely “just right”.
Unnecessary meetings are perpetuated for several flawed reasons. One is the tyranny of the team, something I have written about previously. Indeed, it is often hard to tell which is worse — the teams, or the meetings they generate. We also suffer through “meetings of habit.” Think about the spectrum of your weekly meetings and ask yourself: when did this meeting first begin? There may have been a good reason for it originally, but is there still? Or are you meeting out of habit, because that’s the way you have always done it?

When I was running a large organization with a number of senior managers as direct reports, we of course started meeting once a week. After a while, the meetings were getting thin in content; mostly we talked about the latest gossip or personal news. I realized the management team was running well enough that we didn’t need to meet, and changed to a “meeting on demand” schedule: any of the managers could call a management meeting when each other’s counsel was needed. As long as I did my job of staying in close touch with each of them individually, this new system worked very effectively, and an hour of the work week was liberated.
We also suffer from “meetings of inclusion”, not dissimilar from the ubiquitous team meeting. These are the meetings we have when we don’t want to leave anybody out or hurt someone’s feelings, when we want to keep up with corporate political correctness, or when we are trying to be inclusive of others. Inclusion is only worthwhile if it is sincere, and if so, it can be insightful and mind-bending. If we include people for the wrong reasons, you can be sure they will feel it very quickly, resent the waste of their time, and thereby undercut our original cynical purpose.
The worst sin of course, and most common, is having meetings where people look at each other and have nothing to say or learn. Many observers have advised cogent fixes to this problem. In the words of one successful manager, “if people need an agenda for a meeting, they don’t belong there.” One of the famous and most effective ways to make meetings efficient is to hold them in rooms without chairs. It’s amazing how fast those meetings go.
The Frequency Flaw
But beyond making meetings more efficient, we have to question their frequency. There is no reason our work rhythm should be dictated by the rotation of the earth and moon any more than by any other natural phenomenon. So why are we meeting every week?
What is a week? A biblical invention, perhaps. It is at best an arbitrary subdivision of the lunar cycle, adjusted to the frequency with which the sun rises and sets, two cycles which do not line up mathematically. And anyone familiar with the tortured history of the creation of the Julian calendar will remember that our months are even more arbitrary (indeed, the calendar looks very much like the product of a committee meeting!).
Similarly, when we ask for reports, we ask for them monthly. Why monthly? Is that frequently enough? Is it too often? Who knows? We let the moon decide how frequently we will summarize and communicate information. What’s important is that meetings and reports (i.e., information) are tightly interrelated. If we had more timely information (reports), would we need the meetings?
Well, sure, you are saying, but weeks and months are what everyone is used to, and it’s easier this way. Weekly meetings, for instance, are the safest way to fight that fiercest of corporate battles — booking the conference room! But to accept these arbitrary schedules as inevitable is a cop-out.
From Daily Work to Weekends and Back Again
Before the rise of the corporation in the twentieth century as the dominant form of employment in industrialized societies, we all worked every day. We had to, to keep animals fed, wood chopped, water carried. But we had a lot fewer meetings! And each day had a rhythm which naturally included some fresh air, exercise, family and quiet. In the last century we became boxed into the structure of “the work week”, creating the phenomenon of “the weekend”. No longer did we have a fluid continuum of daily tasks with little discrimination; instead the lines were clearly drawn between work and leisure, and an artificiality was introduced.
With the advent of ubiquitous, intrusive and all too easily accessible communication technologies, our weekends have now all but disappeared. The globalization of clinical research, with its round-the-clock phone calls and air travel demands, has further eaten into what’s left of “free” time. Such is the price of the Digital Age. But if our world is digital, why are we still meeting once a week?
If technology has ruined our free time, it is because we are keeping both behavioral archetypes in place: the Julian and the Digital (the weekly meeting and the cellphone). The bottom line is that I suspect most meetings do not have to happen weekly, and that most reports are needed more often than monthly.
In Business, Digital Wins
In clinical research, all of our operational processes are about generating information, some of it of a defensive nature (regulatory record-keeping), and some of it mission-critical to enterprise decision-making (keep developing the drug or kill it?). Most organizations have long recognized that information technology — listen to the semantics of that phrase! — can help “manage” this information. Unfortunately, as anyone who has tried to acquire or design a good CTMS knows, these software applications have focused much more on getting the data in than on getting useful information out. Once we get better at this, we can have information when we need it, not prepared on a schedule determined by a cold white orbiting celestial body.
Perhaps technologies can be put into service to restore free time, and a near-agrarian rhythm to our lives, by enabling “just-in-time” meetings and “real-time” reports. Instead of double-booking unnecessary weekly meetings (whose agenda is often filled with speculation about information unavailable because we don’t have our monthly report yet!), we can start meeting only as often as needed. And the meetings we will have will be so much more informed, and therefore shorter, because operational data will be at our fingertips.
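As a thought experiment, the “meeting on demand” idea could be reduced to a rule as simple as the sketch below: pull a few operational metrics whenever they are refreshed, and call a meeting only when one of them crosses a threshold, rather than every seven days. The metric names, thresholds, and data snapshot are invented for illustration, not drawn from any particular system.

```python
# Hypothetical sketch of "just-in-time" meetings driven by operational data
# rather than by the calendar; metric names and thresholds are invented.
thresholds = {
    "open_queries": 200,               # meet if data queries are piling up
    "days_since_last_enrollment": 14,  # or if enrollment has stalled
}

def meeting_triggers(metrics):
    """Return the metrics that have crossed their thresholds."""
    return [name for name, limit in thresholds.items() if metrics.get(name, 0) > limit]

# Today's snapshot from the (hypothetical) operational data store
snapshot = {"open_queries": 245, "days_since_last_enrollment": 3}

triggers = meeting_triggers(snapshot)
if triggers:
    print("Call the team together now, prompted by:", ", ".join(triggers))
else:
    print("No meeting needed today.")
```

The point is not the code but the trigger: the work, not the calendar, decides when people gather.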
For the soundtrack of The Lion King, Elton John and Tim Rice wrote:
* From the day we arrive on the planet
* And blinking, step into the sun
* There’s more to see than can ever be seen
* More to do than can ever be done.
Let’s not waste that time in unnecessary meetings, scheduled by the arbitrary rhythm of our planet. There is so much to be done.
My Fiefdom, Right or Wrong
If asked, I imagine that most any pharmaceutical executive would say that their company works very well across individual functions. They would say that no one can develop drugs without thorough cooperation throughout the discovery and development process. “After all,” they would say, “we have all these inter-departmental teams, don’t we?”
Yes indeed they do have those teams, but a team doesn’t create respect, trust, efficiency, or productivity by itself. And standing in the way of cross-functional nimbleness are the kings and queens of the pharmaceutical fiefdoms and their loyal subjects. Too often, it’s “my good” over the “greater good”; it’s “my fiefdom, right or wrong”.
While all sponsors eventually bring good therapeutic products to market, the industry remains plagued by chronic inefficiencies. Clinical development has many sources of inefficiency, and one of them is the failure of the diverse professions involved in clinical research to work together productively on a consistent basis. Certainly there are particular projects, or particular leaders, or particular moments in time when the stars and planets align and people do get along well. But these are moments, and they do not represent standard practice.
This Land is My Land
Most interdepartmental activities (project teams, process change projects, technology implementations, acquisition teams) show little tolerance for pressure, unexpected events, disagreements or changes in management direction (in other words, life). We see teams, carefully constructed with the best of intentions, all too frequently dissolve into recrimination, passive-aggressive withholding, and hallway politicking when the going gets tough.
And this isn’t just about teams. It is about the very organizational structure of clinical development itself. Obviously clinical development needs physicians, clinical operations staff, CRAs, data managers, software programmers, statisticians and medical writers. What we don’t need is for each group to place its allegiance first to its department, second to its profession, and last to the company trying to improve human health.
Then you have one of the common methods of process adaptation in vogue today: the creation of “roles”, as distinct from jobs – essentially a superset of responsibilities assigned to higher-performing individuals without any increase in compensation or diminution of their original responsibilities. By proliferating roles without letting go of profession-based structures, you have double the bureaucracy and double the pressure without really making an improvement.
Organizing for Failure
Organization charts are the language of a company. A company speaks through these documents: it tells us what is important to it and what is not, what it values, and how well it understands itself. Too often we see companies organizing for failure.
The first clue comes when you ask to see the organization chart and are told that human resources won’t let you see it. We know of several companies where managers can’t even see the organization chart of their own department! Then you know the company language has lost its tongue.
What are common ways that companies organize for failure? Organizing to protect egos is organizing for failure. Organizing to “prop up” someone’s headcount to justify their title is organizing for failure. If you cannot achieve at least “logical intimacy” among functions, much less true interconnectedness, then you are organizing for failure. If you organize around visionaries instead of managers, you are organizing for failure. But one of the most dangerous trends in pharmaceutical companies today is organizational fragmentation.
This Village is My Village
Warring fiefdoms is one thing. War within the fiefdom is much worse. There is a growing tendency at some pharma companies to respond to process problems by further fragmenting their organization chart into smaller and smaller pieces. The thinking is that professional specialization will somehow inspire higher performance (ironic, at a time when CRA specialization by therapeutic area is nearly universally rejected in favor of regionally-based generalists). Examples of fragmentation in clinical departments are moves like organizing support functions into their own department, breaking up the monitoring function into several specialized jobs, or breaking up clinical data management into micro-constituencies (programmers, database managers, data managers, data analysts). The worst symptom of fragmentation is when the pieces are sprinkled hither and yon in odd ways organizationally.
Why is fragmentation so damaging? When you create a micro-profession you compound the essential problem we are describing: the creation of tribes who define themselves by who they are not. The more tribal we are, the more distrustful and disrespectful we become; that is human nature. Worse yet, each new fragment (say, clinical site document specialists) has to create its own department. And what does that mean? It has to have its own meetings, its own representative on the interdepartmental team, its own career path and job ladder. The micro-profession becomes self-fulfilling and self-perpetuating. A tactic designed to innovate roles ends up instead creating yet another principality, with full regalia. In sum, instead of a source of efficiency, it is a cost-generating machine.
This Fiefdom is Our Land
This is not meant as a polemic against organizational innovation. Rather, I encourage companies to examine their initiatives in terms of their output. Are objective measures of performance improving? Can managers and workers alike say there is less finger-pointing and more respect?
One method of organizational innovation that breaches borders is to lead with focus. Ask yourself, “What is the most important thing (or two) that our company needs the organization (clinical development as a whole) to accomplish in the next two years?” Figure that out and organize around those objectives, not around disciplines or roles. If the job of clinical development is to get this drug (or three drugs) to submission, then that’s what you need to focus on. If clinical development needs to develop this one drug for sure, and needs to change its processes to electronic trials, then those are the two things you should focus on. Nothing else. Organize around these projects and don’t chicken out. Ongoing professional development can always be fed by the ample means of professional education and communication that exist for every discipline. You don’t need to re-create that in your company; organize around what you are trying to do, not who you are.
Only if people are organized around a sense of purpose are you likely to see much greater success, because people must learn to trust and respect each other for the work to get done. Imagine if Major League Baseball were organized by position: all the second basemen in one department, the catchers in another, the third base coaches in another. And each day they were asked to come together as a “team” and win a ballgame. That’s what we try to do every day of the year.
Aretha Franklin was the “Queen of Soul”. The keys to her kingdom were R-E-S-P-E-C-T. If we all follow Aretha’s way, avoiding mistrust and fragmentation, then maybe all of clinical development can be one productive land, and even a nice place to work, combining professional and corporate success.
Process improvement projects start in many ways — top-down, bottom-up and sideways — but they succeed only one way: with proper governance. Someone needs to be in charge: someone who is respected by those participating in the process and empowered by those who must fund it or enforce the results. And it is governance that is too often the downfall of the best improvement plans.
Governance is critical to any process change or improvement, any technology introduction, mergers and acquisitions, and all other organizational change. Governance is a combination of people, empowerment, legitimacy, procedure, politics, communication and financing that ensures change gets done. Because change, by definition, is so disruptive to any group focused on its daily work, proper governance can be the difference between that change being a positive force, and being a never-ending torment.
A team instead of a leader
How is governance of a change management project usually handled? The most common approach is to appoint a “team”. We have previously excoriated teams in this column for too often being formed out of a lack of imagination about how to get work done. Teams have many uses in change management: ensuring all affected constituencies are involved, subdividing tasks and responsibilities, serving as a communications exchange. But they cannot govern.
A team can have a leader, but by itself it cannot lead. Indeed, teams are often appointed because the organization or executive is afraid of choosing a leader. By stepping on no one’s toes, you only step to the right or left, and never forward.
Teams are too readily victims of our overcrowded calendars. The team meeting is one of dozens every month and they all begin to blur. More importantly, teams usually consist of individual contributors without any power in their own department, much less the power to handle an interdepartmental project.
In over his head
Another common way to fail at governance is to appoint a low-level manager as the leader of the change – a junior member of the management team or, worse, a “business analyst”. What message does this send to the rest of the organization? That this project is not important enough for a more visible, empowered leader. Often we see someone purposefully put in over his head. Usually when executives say the person will “grow into it”, what they mean is “not in my lifetime,” and they use the tactic to ensure delay or inaction. Even if this choice is sincere, rarely can someone in fact “grow into it” without active mentoring. This method of governance ultimately lacks the legitimacy, in the political sense of the word, to marshal action, follow-through, and success.
The third party
One of the more dangerous mistakes of governance in change management is choosing a “disinterested third party” as the project manager. Too often these folks are not only disinterested, but uninformed. Typically such managers are pulled from information services (IS) groups, or perhaps an organizational development group (i.e., generic trainers), or a corporate process group. The rationale is often stated as, “we’re the project management experts,” or “we’re the process change experts”, which may be true, but such skills are not universally applicable or necessarily sufficient.
IS departments do implement large scale projects, but so do a lot of other groups in your company. Would you make them your clinical study manager tomorrow? Then why make them your clinical trial process improvement manager? If there are no other strong managers to look to for project governance within clinical research, then maybe generic managers can help, or maybe you should be developing more process management skill within the clinical disciplines.
No one has the time
When confronted with these suboptimal alternatives for project governance, most executives will say that no one who would be really appropriate for the job has the time. If the process improvement, or technology adoption, or acquisition is worth doing, then someone of sufficient skill and power needs to be assigned the time. Do you not have time to breathe? Do you not have time to achieve your goal for First Patient In, or Database Lock? Do you not have time to meet your regulatory filing dates? Of course you do. We make time for what is important to us.
The cost of poor governance
What’s so bad about poor project governance? Maybe you feel you have very strong political, personal, or financial reasons to make one or more of the compromises outlined above. But the cost of poor governance is high. It directly and quickly leads to:
– A lack of focus: a weakly governed project will inevitably drift, as different forces jump in to fill the vacuum of power, even in all sincerity and goodwill;
– A lack of pace: the fatal start-and-stop of a major change process, which undermines staff motivation, stretches the timeline painfully, and is very costly;
– A lack of decisiveness: no governance, no government — critical decisions stall;
– A lack of learning: people will move in and out of the project, without much buy-in, and therefore have little to gain from learning how to “do it right next time”;
– A lack of gravitas: the absence of the credibility of a true leader, the embodiment of the project itself, someone who can look the naysayers, the obstructionists, the skeptics, and the newcomers in the eye and say “I was there; this is what we did.”
Not just a champion
So how do we properly run a process change project? It is not just about picking a strong leader. You will need to decide carefully how important this project is and how much political weight it deserves. The leader must be backed by upper management, and be able to discuss frankly with management the obstacles she encounters on the way to success. The leader must indeed have a team — one made up of people who have the time and knowledge to devote to the project and who are willing to be led. The leader must have the money she needs to get the work done and to see it through, especially from one budget year to the next. And she must have a process of governance to use, with an effective range of communication options, clear decision milestones, contingency plans, and a framework of purpose.
Governance can make or break your initiative. If it’s not clear who’s in charge here, then no one is. Stop and find yourself a leader. Give her the respect and the funding she needs, and follow her to the future.
As the information technology used in clinical research has evolved, matured and somewhat stabilized in recent years, many companies and clinical research professionals have gained great confidence in their understanding of technology options and even technology implementation. This increasing awareness and sophistication among staff from all clinical research functions is gratifying, but as the saying goes, a little knowledge is a dangerous thing.
Knowing
After a decade of product demonstrations, industry presentations and column reading, biopharmaceutical clinical research staff understandably think they’ve seen and heard it all. They know about systems for clinical data management, electronic submissions, adverse event tracking, patient randomization, and a myriad of other computer-based tools. After seeing their twelfth EDC demo, or their fifth document management demo, what else is there to look at? After hearing speakers give very similar presentations year after year, what else is there to hear? After reading dozens of columns by writers nagging them about how to select and implement software for clinical research, what else is there to read?
Many people think they now know most everything there is to know about clinical IT. People think they know what the various application spaces are, and are confident about how the combination of each individual application’s niche makes up the whole solution. People think they know what these applications should do – primarily automate what they do on paper. People think they understand who should be responsible for implementing the technology (themselves, regardless of their function!). People even think they understand all about how buying a new software tool means change, by which they mostly mean that they are ready to open up a laptop computer instead of a spiral notebook to do their work.
Ultimately, people don’t know what they don’t know. A false sense of confidence, even smugness, has settled in at a number of companies whose management firmly believe there is nothing new under the sun, and what was once new they have fully absorbed. This perception can also be well-intentioned, as project teams go off down the path of acquiring software confident that they’ve done their homework, without realizing they haven’t finished the curriculum.
Consequences
This phenomenon is manifest across the spectrum of clinical research. For instance, one of the truisms that everyone “knows” is that great advantages can be achieved when functions are approached, in the IT context, in an integrated fashion – not as separate individual entities each with their own standalone application purchased, implemented and used individually. Instead, wherever possible, and particularly in new opportunities where multiple software replacements are sought, full consideration should be given to how each function’s work can be accomplished through a shared system or collection of applications which draw and feed data from and to each other. Everybody knows this, but in 2004 companies still pursue their software needs vertically by function, each in a vacuum. An example would be a company with pharmacovigilance, medical affairs and product quality functions all pursuing new solutions for tracking events simultaneously, yet independently, while ignoring the enormous potential for efficiency and business advantage in doing so as a single coordinated project. Why is this happening when everyone knows integrated solutions are a good idea?
Another example all too common in our industry is the adoption of EDC while sticking to all the business rules and conventions, standards and policies that were used by the company to do paper-based studies. Everybody knows that EDC changes the workflows and dataflows of trial conduct, but companies still gravitate to the familiar, and when faced with difficult decisions requiring alterations of a policy on data review, or interdepartmental approvals, or monitoring scheduling, or site selection in order to optimize EDC’s power, the response, when push comes to shove, is to shoehorn EDC into the way they work now. This despite the dozens of times that company’s staff will have learned and even repeated the mantra that EDC requires process change.
Another manifestation of not knowing what you don’t know is that the wrong people get involved in the implementation of new technologies. The ones who “know” all about new technology (i.e., they saw the demo, they heard the speech) are not at all necessarily those who should be responsible for implementing it. There are three very distinct roles in leading change: the catalyst, or change agent; the authority, or person with the budget; and the implementer, the one who is truly able to manage the myriad tasks required to get a new technology working properly in your company. When these roles are confused (and who is best suited to each is very different from company to company), the technology project will go astray. For instance, the catalyst is often rewarded for her initiative by being “awarded” the implementation job, when the personality and experiential requirements for each role are very different. This is most frequently seen when the informatics department, whose responsibility it may be to be on the lookout for new enabling technologies (i.e., to play the catalyst role), is assigned the implementation role, perhaps even unwillingly, instead of the business user being responsible for the implementation. The result is that the end user has abdicated responsibility for the success of its own technology to a third party, and the initiative is insufficiently informed by the perspective of the all-important end user.
Each of these folks may know all about the technology in question, but there is a difference between knowing and doing.
Doing
Knowing about something is a good thing. If you were going to build a table out of a wooden log, it would help to know a great deal about woodworking, the hand and power tools you would need to use, and the characteristics of the wood you were about to saw into. But if all you had done was read about these things, or seen a demonstration, you would likely have a painful experience ahead of you, and might well chew up too much of the log making inevitable mistakes, so that by the time you knew how to really make the table, there wouldn’t be enough log left to make it.
Knowing is speedy compared to doing. Doing means understanding consequences, good and bad, and being able to predict them and mitigate them. Doing means planning without creating paralyzing delay (this is where knowing can help). Doing means confronting your own organization with the knowledge you are bringing into a previously stable environment, and overcoming the antibodies that your functional organism will generate prolifically to fight this foreign knowledge. In short, doing has very little to do with knowing, except that it is dependent on it.
Doing takes time and resources, special skills and more money than you want to think about. Most of all it requires an awareness of this dichotomy, a recognition that the path from awareness to execution is measured in miles, not inches. The key for companies seeking to implement enabling technologies in clinical research is to both know and do – to harness knowledge but ward off complacency and overconfidence. Rather than thinking of a little knowledge as a dangerous thing, try to use it as the start of a highly beneficial, well-planned and detailed triumph of doing.