
Knowing, Not Doing

As the information technology used in clinical research has evolved, matured and somewhat stabilized in recent years, many companies and clinical research professionals have gained great confidence in their understanding of technology options and even technology implementation. This increasing awareness and sophistication among staff from all clinical research functions is gratifying, but as the saying goes, a little knowledge is a dangerous thing.

Knowing
After a decade of product demonstrations, industry presentations and column reading, biopharmaceutical clinical research staff understandably think they’ve seen and heard it all. They know about systems for clinical data management, electronic submissions, adverse event tracking, patient randomization, and a myriad of other computer-based tools. After seeing their twelfth EDC demo, or their fifth document management demo, what else is there to look at? After hearing speakers give very similar presentations year after year, what else is there to hear? After reading dozens of columns by writers nagging them about how to select and implement software for clinical research, what else is there to read?

Many people think they now know nearly everything there is to know about clinical IT. People think they know what the various application spaces are, and are confident about how the combination of each individual application’s niche makes up the whole solution. People think they know what these applications should do: primarily, automate what they do on paper. People think they understand who should be responsible for implementing the technology (themselves, regardless of their function!). People even think they understand all about how buying a new software tool means change, by which they mostly mean that they are ready to open up a laptop computer instead of a spiral notebook to do their work.

Ultimately, people don’t know what they don’t know. A false sense of confidence, even smugness, has settled in at a number of companies whose management firmly believe there is nothing new under the sun, and what was once new they have fully absorbed. This perception can also be well-intentioned, as project teams go off down the path of acquiring software confident that they’ve done their homework, without realizing they haven’t finished the curriculum.

Consequences
This phenomenon is manifest across the spectrum of clinical research. For instance, one of the truisms that everyone “knows” is that great advantages can be achieved when functions are approached, in the IT context, in an integrated fashion, not as separate entities each with its own standalone application purchased, implemented and used individually. Instead, wherever possible, and particularly where multiple software replacements are sought at once, full consideration should be given to how each function’s work can be accomplished through a shared system or a collection of applications that draw data from and feed data to each other. Everybody knows this, but in 2004 companies still pursue their software needs vertically by function, each in a vacuum. An example would be a company whose pharmacovigilance, medical affairs and product quality functions are all pursuing new solutions for tracking events simultaneously, yet independently, ignoring the enormous potential for efficiency and business advantage in doing so as a single coordinated project. Why is this happening when everyone knows integrated solutions are a good idea?

Another example, all too common in our industry, is adopting EDC while clinging to all the business rules, conventions, standards and policies the company used to run paper-based studies. Everybody knows that EDC changes the workflows and dataflows of trial conduct, but companies still gravitate to the familiar. When faced with difficult decisions that require altering a policy on data review, interdepartmental approvals, monitoring schedules, or site selection in order to exploit EDC’s power, the response, when push comes to shove, is to shoehorn EDC into the way they work now. This despite the dozens of times that the company’s staff will have learned, and even repeated, the mantra that EDC requires process change.

Another manifestation of not knowing what you don’t know is that the wrong people get involved in implementing new technologies. The ones who “know” all about a new technology (i.e., they saw the demo, they heard the speech) are not necessarily those who should be responsible for implementing it. There are three very distinct roles in leading change: the catalyst, or change agent; the authority, or person with the budget; and the implementer, the one who is truly able to manage the myriad tasks required to get a new technology working properly in your company. When these roles are confused (and who is best suited to each varies greatly from company to company), the technology project will go astray. For instance, the catalyst is often rewarded for her initiative by being “awarded” the implementation job, even though the personality and experiential requirements of each role are very different. This is most frequently seen when the informatics department, whose responsibility it may be to be on the lookout for new enabling technologies (i.e., to play the catalyst role), is assigned the implementation role, perhaps even unwillingly, instead of the business user being responsible for the implementation. The result is that the end user has abdicated responsibility for the success of its own technology to a third party, and the initiative is insufficiently informed by the perspective of the all-important end user.

Each of these folks may know all about the technology in question, but there is a difference between knowing and doing.

Doing
Knowing about something is a good thing. If you were going to build a table out of a wooden log, it would help to know a great deal about woodworking, the hand and power tools you would need, and the characteristics of the wood you were about to saw into. But if all you had done was read about these things, or watch a demonstration, you would likely have a painful experience ahead of you, and might well chew up too much of the log making inevitable mistakes, so that by the time you knew how to really make the table, there wasn’t enough log left to make it.

Knowing is speedy compared to doing. Doing means understanding consequences, good and bad, and being able to predict and mitigate them. Doing means planning without creating paralyzing delay (this is where knowing can help). Doing means confronting your own organization with the knowledge you are bringing into a previously stable environment, and overcoming the antibodies that your functional organism will generate prolifically to fight this foreign knowledge. In short, doing has very little to do with knowing, except that it depends on it.

Doing takes time and resources, special skills and more money than you want to think about. Most of all, it requires an awareness of this dichotomy, a recognition that the path from awareness to execution is measured in miles, not inches. The key for companies seeking to implement enabling technologies in clinical research is to both know and do: to harness knowledge while warding off complacency and overconfidence. Rather than thinking of a little knowledge as a dangerous thing, use it as the start of a well-planned, detailed and highly beneficial triumph of doing.
