
Baby Steps

Technology Today


 


 

There is good news and bad news these days as we assess the state of the industry in implementing clinical research software applications. The good news is that more and more biopharmaceutical companies are recognizing that clinical research software is not “plug and play”: you don’t just unwrap the box, slip in the CD, and start up an eClinical drug development process. The bad news is that this realization is not matched consistently with good process practices. The result is some common, well-meaning, but ineffective process work related to technology implementation.

 

Tripping Over Our Own Feet

 

Let’s look at some examples of poor technology implementation behavior, more or less in sequential order. The first is when we fall in love with the vision without having a path to get there. An executive, or even a middle management team, may be totally committed to an exciting eClinical strategy, but the strategy is delineated in only the thinnest detail, and its implementation is neither properly staffed nor funded. Since the goal is presumably innovative, achieving a new strategy requires finding talent who can see beyond their daily work and yet still remember that it is the daily work which is being revolutionized. Without operational vision to complement the strategic vision, you cannot get from here to there. Without operational vision, you cannot estimate what the strategy may cost, in cash and in people’s time; and without cost estimates you have no budget, or an under-funded one.

 

The tension grows as leaders push the vision while the practicalities of implementation lag behind. Visions are often announced with great fanfare, after which the executive usually feels off the hook and the burden falls on those below. If middle managers are not empowered with the skills and money to implement, they will fail, and they will be blamed for the failure.

 

Another common failing of technology implementation these days is how companies approach the development of business requirements for their clinical research applications. There are two common mistakes here: one is the level at which requirements are developed, and the second is how companies determine requirement priorities.

 

A common error in requirements development, especially by those trained in traditional IT methodologies or by those naïve to software development, is to jump into what is called “the solution space” at a very detailed level. Instead, we need to hang back in “the problem space” and take time to explore the operational challenges and business priorities. It does not help much to talk to an operational staffer and learn that she wants the next-page button in the lower left of the screen instead of the lower right. What is important is to know how your staff spend their time, when they are most unproductive, and how that prevents them from meeting their department’s obligations to the corporation.

 

A second example of naïve requirements analysis is the use of artificial, complex quantitative schemes for prioritizing requirements and then scoring vendor solutions. This is very common and particularly egregious: by following these methods we insult the statistically rigorous environment of clinical research in which we work. By definition, these quantitative schemes have no scientific legitimacy; they cannot be “tested” or “validated” because each requirements development effort is unique. So when requirement A scores 0.6 points higher than requirement B, how do we know whether that difference is at all meaningful? And when vendor A’s total score is 163 points and vendor B’s is 122, how do we know the difference is as significant as it looks? The numbers alone are meaningless, and yet many companies place great stock in such analyses.
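To see why the numbers deserve this skepticism, consider a minimal sketch of such a weighted-sum scoring exercise. Every name, weight, and rating below is invented for illustration, and the sketch is not a recommended method; it simply shows that nudging the arbitrary weights by a single point, a change no one could defensibly call wrong, is often enough to reverse which vendor comes out on top.

import random

# Hypothetical requirement weights (importance on a 1-5 scale), invented for illustration
weights = {"audit_trail": 5, "site_usability": 4, "reporting": 3, "integration": 2}

# Hypothetical vendor ratings against each requirement (1-10 scale), also invented
ratings = {
    "Vendor A": {"audit_trail": 9, "site_usability": 5, "reporting": 8, "integration": 4},
    "Vendor B": {"audit_trail": 6, "site_usability": 9, "reporting": 6, "integration": 8},
}

def total_score(w, vendor_ratings):
    # Classic weighted-sum score: sum of (weight x rating) across all requirements
    return sum(w[req] * vendor_ratings[req] for req in w)

baseline_a = total_score(weights, ratings["Vendor A"])
baseline_b = total_score(weights, ratings["Vendor B"])
print("Baseline scores:", baseline_a, "vs", baseline_b)  # one vendor "wins" by a few points

# Perturb each weight by -1, 0, or +1 (any of these weightings is equally defensible)
# and count how often the winner changes.
random.seed(0)
flips = 0
trials = 1000
for _ in range(trials):
    perturbed = {req: max(1, w + random.choice([-1, 0, 1])) for req, w in weights.items()}
    a = total_score(perturbed, ratings["Vendor A"])
    b = total_score(perturbed, ratings["Vendor B"])
    if (a > b) != (baseline_a > baseline_b):
        flips += 1

print(f"Winner flipped in {flips} of {trials} equally plausible weightings")

The point is not that the arithmetic is wrong, but that nothing in it can tell you whether a 41-point gap reflects a real operational difference or merely the weights someone happened to pick.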

 

Training is another area where implementers fail to achieve their goals. As software projects move along and an application is ready to be used, most companies acknowledge that some kind of training is important. Almost always, the only training provided, whether because of budget constraints or a lack of fundamental understanding, is technical training: how do I move around the page, what does this error message mean, how do I get a network connection to upload my data, and so on. Even if the implementation team has spent time on the “softer side” of technology implementation, defining the myriad business rules intended to dictate how the software will be used operationally, they usually decide that writing concomitant SOPs is sufficient. “Let the users read the SOPs” is the attitude. This is a fatal failing.

 

Implementing eClinical software applications requires the participation and assistance of several “partners”: the vendors you use, your IT department, and whatever service providers (internal or external) will supply Help Desk services, network setup, hardware provisioning, software enhancements, and so on. Too often, biopharmas rely on interdepartmental or vendor-sponsor commitments that are not much more than interpersonal pledges made at some weighty meeting. Without formal agreements, even if they are only between sister departments inside your company, mission-critical support functions will depend on nothing more than goodwill, individual memories, and lots of executive arm-twisting.

 

Last in this list of common process failings is a misplaced enthusiasm for metrics, which leads to dozens of measures being tracked about operational performance and software’s possible effect on it. You certainly want to measure what you do, and to examine in some key quantitative ways whether your eClinical initiatives are improving operations. But most companies that get “metrics religion” swing the pendulum too far. If you generate dozens of metrics, usually at a very low level of operational detail, your staff will rebel: they will feel they are spending more time on measurement than on work, and even the managers the metrics are intended for will not wade through the resulting stacks of reports. Instead, companies should focus on a few meaningful metrics for which reliable data can be collected without burdening staff, and on which they are actually likely to act. Nothing is worse than collecting and analyzing lots of metrics data and then not being able to do anything with the results.

 

Steps Forward

 

These examples all point to some obvious fixes that biopharmas can apply to ensure that the baby steps they are taking toward the eClinical future end up as long, confident strides:

 

— The work of the visionary does not end with expressing the vision. He or she must follow through by identifying people with the skill, and the financial resources, to implement the vision, and must then continue to mentor and encourage those staff throughout the work.

 

— Requirements development must be done within the context of key business imperatives. Techniques should be used to properly weigh requirements in the context of operational necessities, not through artificial number games.

 

— The extensive training efforts required for eClinical implementation must include true process training, and trainers must be well-grounded in the operations they are improving.

 

— Formal agreements must be used with vendors and internal support providers alike, to avoid depending on fragile and fleeting interpersonal commitments.

 

— Implementers should examine their roster of metrics harshly, and repeatedly slash the list to a meaningful, actionable “vital few”.
