Herewith a radical proposition: the least useful requirement for a successful software acquisition project is a set of requirements. Pretty radical, isn’t it? How can this be true, when developing requirements is a staple of every standard technology and consulting methodology?
I can support this proposition in two ways:
1) the usual method for gathering, documenting, quantifying and evaluating requirements for clinical research software is deeply flawed and expensive; and
2) the focus on functional requirements distracts from other, equally or more important evaluation criteria, and drains resources away from the implementation efforts needed after software selection.
A Fatal Assumption
Typical software development or selection methods require that internal or external personnel go out and interview staff in the departments projected to use the new tool about what they “need”. From this, using various methodologies, a list of requirements (sometimes even in three tiers of detail) is delineated, often with admirable complexity and at great length. Some method is then used to sift through these requirements, prioritize them and document them elegantly. Almost always, such an effort takes many months, many meetings and many iterations. Lots of money, in other words. And why does it take so long? Because so many opinions are being solicited, and the myriad inputs must be reconciled. And this is just the tip of the iceberg of a traditional software description and evaluation effort.
Worse yet, neither the solutions nor the needs articulated by those working in the functions this effort is meant to support take into account the business strategy of the organization, the context of the work in the larger enterprise, or how conditions will change in the future. Instead, the typical method jumps straight to “I’d like my safety tool to be able to spell-check the case narrative,” or “my status summary in my EDC tool should use checkmarks instead of stoplights.”
The development of software application requirements is not trivial, nor should it be, if one is developing software from scratch (which is where this method came from). But as one hears from every biopharmaceutical company, “we are not in the software business,” by which they mean they want to buy software “off-the-shelf” rather than develop and maintain it internally. And in our narrow, limited marketplace of clinical research IT (as distinct from, say, office automation, bookkeeping systems or iPhone apps), there is little to choose from in any one niche (EDC, ePRO, CTMS, AES, etc.). So here is the chain: we are not a software company = off-the-shelf tools for clinical research = little choice = requirements not required.
Why is this the logical chain? Because choosing off-the-shelf software is an entirely different process from software development. By definition, we have to make do with what is out there and choose among the available options. The task is to figure out how to differentiate among a handful of vendors. But requirements at the level of detail usually generated by traditional methods:
• Are met by all relevant vendors;
• Aren’t met by any vendor; or
• Aren’t likely to be developed by any vendor soon enough to properly influence the off-the-shelf purchase.
In other words, you will not differentiate one vendor from another based on all that work you paid for, so why bother?
Worse still is the usual wrap-up to the standard requirements development process, when artificial quantification methods are applied with an aura of scientific rigor to conclude that Vendor A meets Requirement #64 with a score of 3.76, versus Vendor B’s score of 3.24. Are you really going to make a decision on this basis? What do these values mean? Did you have one of your biostatisticians in the room when you used this method? Why are we not embarrassed, in an industry whose livelihood depends on distinguishing true from false statistical measures of outcomes, when we fail to apply the same rigor to other uses (or misuses) of statistics?
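To make the arithmetic behind those scores concrete, here is a minimal sketch of how a typical vendor scorecard arrives at such numbers; the requirement names, weights and ratings below are invented for illustration, not drawn from any real evaluation. A weighted average of subjective 1-to-5 ratings will always produce a reassuringly precise-looking figure, no matter how soft the inputs were.

    # Hypothetical illustration of a vendor-scoring matrix; the weights and
    # ratings are invented for this sketch.
    weights = {"Req #64": 0.40, "Req #65": 0.35, "Req #66": 0.25}

    # Subjective 1-5 ratings, often assigned after a single vendor demo.
    vendor_a = {"Req #64": 4, "Req #65": 4, "Req #66": 3}
    vendor_b = {"Req #64": 3, "Req #65": 3, "Req #66": 4}

    def weighted_score(ratings):
        # A weighted average of ordinal ratings: the two decimal places it
        # produces imply a precision the underlying judgments never had.
        return sum(weights[req] * rating for req, rating in ratings.items())

    print(f"Vendor A: {weighted_score(vendor_a):.2f}")  # 3.75
    print(f"Vendor B: {weighted_score(vendor_b):.2f}")  # 3.25

The point of the sketch is not that the math is wrong; it is that a difference of a few tenths between two weighted averages of opinions tells you nothing a statistician would accept as a basis for a decision.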
A Waste of Focus
Traditional requirements development is equally deleterious in its impact on what matters more in our clinical research software projects. Requirements development is so expensive and time-consuming that it saps the budget and the enterprise’s energy to the point that we are gasping as we cross the requirements finish line. But that’s the problem: it is by no means the finish line for getting useful software into the hands of the users who need it.
First of all, functional and technical requirements ignore what is at least as important to our software selection, if not more so: is this vendor a good match for us? Do we like/trust working with them? Does their business strategy match our business strategy? What about our respective future directions?
For instance, what is the larger enterprise context for a user’s stated need for real-time access to a monitor’s Site Visit Report? If the research sponsor works in a specialized therapeutic area with a limited pool of poorly performing investigators, this need is critical. If not, is this just a “nice to have”?
Most damaging to the successful use of the software being considered is that an extensive requirements definition effort leaves no patience or budget for the real hard work and the real key to success: a well-designed, thoroughly executed implementation plan, one that runs the gamut from project leadership to process definition to training and organizational psychology.
Our responsibility as clinical researchers is not only to ensure that we have the tools to do our work accurately and competitively, but also to ensure that the way we acquire those tools is properly focused. Traditional requirements gathering is not required; what is required is understanding why you are getting the software and how you will implement it.