When Bad Things Happen to Good Software
Software has such power over our daily work life (it commands our eyeballs and frustrates our souls) that we are eager and joyous when something goes wrong. Finally, an outlet for our frustration! We yell at Help Desks, stick pins in vendor dolls, and send flaming emails to colleagues warning them of impending disaster.
I am entirely sympathetic to this attitude. I suspect that most software applications developed for clinical research are receiving this treatment as I write, and yet I must admit the criticism is not always fair.
These days, there are a fair number of clinical research applications that, while not perfect, are probably pretty good. Good enough to carry significant process improvement on their backs; good enough to deserve wider adoption. Many things hold them back, including industry conservatism and poor vendor management, but the problem is usually not the software itself.
The Avoidable Failures
So-called enterprise software, the kind used for mission-critical applications like data management, data collection, document management and submission, trials management, and knowledge management, is complex stuff. Most “enterprises”, by definition, approach their work somewhat differently, and further complicate matters by asking that these applications be adapted to their specific circumstances. Opportunities abound for failure.
How is this failure manifested? We hear of users not using the software, managers who can’t get the reports they need, and installations that take far too long to finish and cost too much. We hear of hardware failures and network failures. We hear about Help Desks that can’t help, and training programs that don’t teach. We hear about multiple enterprise applications, each underpinning the operation of one particular department, that can’t talk to each other (neither the applications nor the departments!). We hear about applications carefully specified, acquired, and implemented that then lie dormant for years, unused and unbudgeted. We hear about cautious “pilot” implementations that fail spectacularly, sinking all previously held expectations.
These are the failures we gleefully whisper to our colleagues, and in a flash, everyone in our intimate industry knows the score: another piece of software bites the dust.
Is all this failure necessary, deserved, or inevitable? I suspect most of it is not. Most of it is certainly not the fault of the software, and some of it may be the fault of the vendor, but much of the fault lies within us: in how we handled the research into, specification of, planning for, and implementation of that application.
Examples
Let’s look at a few examples.
A company implements a worldwide trials management system. It has lots of marvelous features (for management) that all depend on the timely and accurate input of data from users (not management). Management pounds on users to get that data in on a timely basis! For our reports! But the application does nothing for the users: it doesn’t make their jobs easier; in fact, it makes them more onerous.
A pilot trial is designed to test the promised time-to-market savings of an EDC application. Because the company is afraid to use EDC in a “real” trial, it chooses a Phase I study. The result? No particular time saved: an easily predicted outcome, since EDC’s substantial benefits accrue best in studies stretched across many sites, patients, and visits. Nonetheless, the project team will report that EDC brought no benefit.
A company using a third-party CDM application thinks it’s got the support process figured out: it forms an internal Help Desk to field questions from its users first, with the intent of filtering the harder questions to the vendor’s Help Desk. But the vendor and the company haven’t really mapped out the process flow, and in any case they mistrust each other deeply. The result? Savvy end users do an end run around their less-trained internal Help Desk and call the vendor’s Help Desk directly, which is thinly staffed with senior engineers who aren’t expecting this flood of simple questions. Accusations fill the air like migrating geese.
These are bad things happening to good software. The fix is what we call “organizational preparedness”: the buyer’s responsibility to understand what’s in store when bringing in an enterprise application, including how their processes will change, how their employees’ roles will change, and how to manage the vendor’s resources to ensure success. Often, vendors are ill-prepared to anticipate these needs, but usually they can respond if given a clear idea of what help is needed.
Shakespeare, of course, anticipated software and its problems: “Men at some time are masters of their fates. The fault, dear Brutus, is not in our stars, but in ourselves” (Julius Caesar, Act I, Scene 2). Do not go into the Forum of software implementation without being better prepared than your predecessors.