My Fiefdom, Right or Wrong
If asked, I imagine that almost any pharmaceutical executive would say that their company works very well across individual functions. They would say that no one can develop drugs without thorough cooperation throughout the discovery and development process. “After all,” they would say, “we have all these inter-departmental teams, don’t we?”
Yes indeed they do have those teams, but a team doesn’t create respect, trust, efficiency, or productivity by itself. And standing in the way of cross-functional nimbleness are the kings and queens of the pharmaceutical fiefdoms and their loyal subjects. Too often, it’s “my good” over the “greater good”; it’s “my fiefdom, right or wrong”.
While all sponsors eventually get good therapeutic products to market, the industry remains plagued by chronic inefficiencies. Clinical development has many sources of inefficiency, and one of them is the failure of the diverse professions involved in clinical research to work together productively on a consistent basis. Certainly there are particular projects, or particular leaders, or particular moments in time when the stars and planets align, where people do get along well. But these are moments, and do not represent standard practice.
This Land is My Land
Most interdepartmental activities (project teams, process change projects, technology implementations, acquisition teams) show little tolerance for pressure, unexpected events, disagreements or changes in management direction (in other words, life). We see teams, carefully constructed with the best of intentions, all too frequently dissolve into recrimination, passive-aggressive withholding, and hallway politicking when the going gets tough.
And this isn’t just about teams. It is about the very organizational structure of clinical development itself. Obviously clinical development needs physicians, clinical operations staff, CRAs, data managers, software programmers, statisticians and medical writers. What we don’t need is for each group to place its allegiance first to their department, second to their profession, and last to the company trying to improve human health.
Then you have one of the common methods of process adaptation in vogue today: the creation of “roles”, as distinct from jobs – essentially a superset of responsibilities assigned to higher-performing individuals without any increase in compensation or diminution of their original responsibilities. By proliferating roles without letting go of profession-based structures, you get double the bureaucracy and double the pressure without really making an improvement.
Organizing for Failure
Organization charts are the language of a company. A company speaks through these documents: it tells us what it considers important and what it does not, what it values, and how well it understands itself. Too often we see companies organizing for failure.
The first clue comes when you ask to see the organization chart and are told that human resources won’t let you see it. We know of several companies where managers can’t even see the organization chart of their own department! Then you know the company language has lost its tongue.
What are common ways that companies organize for failure? Organizing to protect egos is organizing for failure. Organizing to “prop up” someone’s headcount to justify their title is organizing for failure. If you cannot achieve at least “logical intimacy” among functions, much less true interconnectedness, then you are organizing for failure. If you organize around visionaries instead of managers, you are organizing for failure. But one of the most dangerous trends in pharmaceutical companies today is organizational fragmentation.
This Village is My Village
Warring fiefdoms is one thing. War within the fiefdom is much worse. There is a growing tendency at some pharma companies to respond to process problems by further fragmenting their organization chart into smaller and smaller pieces. The thinking is that professional specialization will somehow inspire higher performance (ironic, at a time when CRA specialization by therapeutic area is nearly universally rejected in favor of regionally-based generalists). Examples of fragmentation in clinical departments are moves like organizing support functions into their own department, breaking up the monitoring function into several specialized jobs, or breaking up clinical data management into micro-constituencies (programmers, database managers, data managers, data analysts). The worst symptom of fragmentation is when the pieces are sprinkled hither and yon in odd ways organizationally.
Why is fragmentation so damaging? When you create a micro-profession you compound the essential problem we are describing: the creation of tribes who define themselves by who they are not. The more tribal we are, the more distrustful and disrespectful we become; it is human nature. Worse yet, each new fragment (say, clinical site document specialists) has to create its own department. And what does that mean? It has to have its own meetings, its own representative on the interdepartmental team, its own career path and job ladder. The micro-profession becomes self-fulfilling and self-perpetuating. A tactic designed to innovate roles ends up instead creating yet another principality, with full regalia. In sum, instead of a source of efficiency, it is a cost-generating machine.
This Fiefdom is Our Land
This is not meant as a polemic against organizational innovation. Rather, I encourage companies to examine their initiatives in terms of their output. Are objective measures of performance improving? Can managers and workers alike say there is less finger-pointing and more respect?
One method of organizational innovation that breaches borders is to lead with focus. Ask yourself, “what is the most important thing (or two) which our company needs the organization (clinical development as a whole) to accomplish in the next two years?” Figure that out and organize around those objectives, not around disciplines or roles. If the job of clinical development is to get this drug (or three drugs) to submission, then that’s what you need to focus on. If clinical development needs to develop this one drug for sure, and needs to change its processes to electronic trials, then those are the two things you should focus on. Nothing else. Organize around these projects and don’t chicken out. Ongoing professional development can always be fed by the many ample means of professional education and communication which exist for every discipline. You don’t need to re-create that in your company; organize around what you are trying to do, not who you are.
If people are organized around a sense of purpose, and only then, you are likely to see much greater success, because people need to learn to trust and respect each other in order for the work to get done. Imagine if Major League Baseball were organized by position: all the second basemen in one department, the catchers in another, the third base coaches in another. And each day they were asked to come together as a “team” and win a ballgame. That’s what we try to do every day of the year.
Aretha Franklin was the “Queen of Soul”. The keys to her kingdom were R-E-S-P-E-C-T. If we all follow Aretha’s way, avoiding mistrust and fragmentation, then maybe all of clinical development can be one productive land, and even a nice place to work, combining professional and corporate success.
Process improvement projects start in many ways — top-down, bottom-up and sideways — but they succeed only one way, with proper governance. Someone needs to be in charge: someone to be respected by those participating in the process, and empowered by those who need to fund it or enforce the results. It is governance that is too often the downfall of the best improvement plans.
Governance is critical to any process change or improvement, any technology introduction, mergers and acquisitions, and all other organizational change. Governance is a combination of people, empowerment, legitimacy, procedure, politics, communication and financing that ensures change gets done. Because change, by definition, is so disruptive to any group focused on its daily work, proper governance can be the difference between that change being a positive force, and being a never-ending torment.
A team instead of a leader
How is governance of a change management project usually handled? The most common approach is to appoint a “team”. We have previously excoriated teams in this column, for too often being formed because of a lack of imagination in ways to get work done. Teams have many uses in change management: ensuring all affected constituencies are involved, subdividing tasks and responsibilities, serving as a communications exchange. But they can’t govern.
A team can have a leader, but by itself it cannot lead. Indeed, teams are often appointed because the organization or executive is afraid of choosing a leader. By stepping on no one’s toes, you only step to the right or left, and never forward.
Teams are too readily victims of our overcrowded calendars. The team meeting is one of dozens every month and they all begin to blur. More importantly, teams usually consist of individual contributors without any power in their own department, much less the power to handle an interdepartmental project.
In over his head
Another common way to fail at governance is to appoint a low-level manager as the leader of the change – a junior member of the management team, or worse, a “business analyst”. What message does this send to the rest of the organization? That this project is not important enough for a more visible, empowered leader. Often we see someone purposefully put in over his head. Usually when executives say the person will “grow into it”, what they mean is “not in my lifetime”, and they use the tactic to ensure delay or inaction. Even if this choice is sincere, rarely can someone in fact “grow into it” without active mentoring. This method of governance ultimately lacks the legitimacy, in the political sense of the word, to marshal action, follow-through, and success.
The third party
One of the more dangerous mistakes of governance in change management is choosing a “disinterested third party” as the project manager. Too often these folks are not only disinterested, but uninformed. Typically such managers are pulled from information services (IS) groups, or perhaps an organizational development group (i.e., generic trainers), or corporate process group. The rationale is often stated as, “we’re the project management experts,” or “we’re the process change experts”, which may be true, but such skills are not universally applicable or necessarily sufficient.
IS departments do implement large scale projects, but so do a lot of other groups in your company. Would you make them your clinical study manager tomorrow? Then why make them your clinical trial process improvement manager? If there are no other strong managers to look to for project governance within clinical research, then maybe generic managers can help, or maybe you should be developing more process management skill within the clinical disciplines.
No one has the time
When confronted with these suboptimal alternatives for project governance, most executives will say that no one who would be really appropriate for the job has the time. If the process improvement, or technology adoption, or acquisition is worth doing, then someone of sufficient skill and power needs to be assigned the time. Do you not have time to breathe? Do you not have time to achieve your goal for First Patient In, or Database Lock? Do you not have time to meet your regulatory filing dates? Of course you do. We make time for what is important to us.
The cost of poor governance
What’s so bad about poor project governance? Maybe you feel you have very strong political, personal, or financial reasons to make one or more of the compromises outlined above. The cost of poor governance is high. It directly and quickly leads to:
– A lack of focus: a weakly governed project will inevitably drift, as different forces jump in to fill the vacuum of power, even in all sincerity and goodwill;
– A lack of pace: the fatal start-and-stop of a major change process, which undermines staff motivation, stretches the timeline painfully, and is very costly;
– A lack of decisiveness: no governance, no government — critical decisions stall;
– A lack of learning: people will move in and out of the project, without much buy-in, and therefore have little to gain from learning how to “do it right next time”;
– A lack of gravitas: the absence of the credibility of a true leader — the embodiment of the project itself — someone who can look in the eye of the naysayers, the obstructionists, the skeptics, and the newcomers, and say “I was there; this is what we did.”
Not just a champion
So how do we properly run a process change project? It is not just about picking a strong leader. You will need to decide carefully how important this project is and how much political weight it deserves. The leader must be backed by upper management, and be able to discuss frankly with management the obstacles she is finding to achieve success. The leader must indeed have a team — one made of people who have the time and knowledge to devote to the project and who are willing to be led. The leader must have the money she needs to get the work done, to see it through, especially from one budget year to the next. And she must have a process of governance to use, with an effective range of communication options, clear decision milestones, contingency plans, and a framework of purpose.
Governance can make or break your initiative. If it’s not clear who’s in charge here, then no one is. Stop and find yourself a leader. Give her the respect and the funding she needs, and follow her to the future.
Cross Dys-Functional Teams
Cross-functional teams have been fashionable across many US industries for at least twenty years. The idea seems great: corporations are divided into departmental fiefdoms by definition; such “silos” create poor communication and competing interests that do not serve the greater good; the answer is to develop products through teams of individuals from each silo. The result is supposed to be cooperation instead of warfare, goal-oriented employees instead of politically motivated employees, innovation instead of stagnation. The concept has been famously applied to everything from automobiles to computers, and yes, pharmaceuticals. Cross-functional teams are great, except when they aren’t. Too often we see cross dys-functional teams.
We are concerned whenever a process or organizational fad is adopted in sweeping fashion by those who are not prepared properly for applying the wisdom hidden inside the fashion. Almost every biopharmaceutical company develops its products, particularly in clinical development, using cross-functional teams. But the creation of such a team does not by wish or magic create synergy, cooperation and efficiency.
Do you recognize these signs of dysfunctional teams? Meetings that routinely start late or are repeatedly canceled or postponed. Team members with “hidden agendas” – objectives that do not match the goals of the team they are on. Meetings where some people consistently do not show up. Teams where one or two members dominate the dialogue by their seniority, volume, political connections, or lack of self-consciousness. Teams with members who take the opposite, “passive-aggressive” tack, and just don’t say anything; they even accept assignments from the team, and then simply ignore them. Teams that are doggedly formed with representation from each department as required by the company’s policies, regardless of whether there is a staff member of value available from that department. Teams that simply lack the skill to be a team, whose members have never been trained in team dynamics and effectiveness.
The Cost of a Dysfunctional Team
An ironic and very common indicator that your company is guilty of dysfunctional teams is if you have only a small handful of outstanding team players. What happens? They are, of course, assigned to as many teams as possible! So many, that they have no time for their “real” job, or even sufficient time to make all their teams successful.
And most damaging are teams which perform poorly in a crisis (and every project will have crises). It’s easy for your team to shine when the trials are rolling along, you’re meeting your targets (more or less), and proving your endpoints in sprightly fashion. When things are smooth, you may not ever be aware a dysfunctional team is lurking beneath the surface. It’s when the problems start to hit – like the inevitable shortfall in patient recruitment, the equivocal trial result, the CRO cost overrun, or a shift in executive priorities – that you learn how functional your team really is, or isn’t.
What’s the problem with a less than perfect project team? Isn’t the organizational model so superior it can tolerate wide variances in quality? We would say no. A poorly performing team really is worse than effective and professional individual departments who just happen to communicate poorly. The sheer waste of time that a poorly functioning team creates is like a black hole, sucking in scarce resources and even scarcer hours.
Indeed, a single effective leader, empowered to command resources across departments when (and only when) needed, and equipped with the right process, is more likely to get the best of both models – professional competency reinforced through departmental verticality, and good planning and smooth handoffs from all involved. This is a solution to dysfunctionality that is highly unfashionable, but worth exploring.
Elements of a Functional Team
Let’s assume that ultimately, a well run, well trained, well staffed cross-functional team is indeed a highly desirable model for clinical development. What should you be looking for to ensure your team is indeed functional? The first element is commitment: team members must be sincerely and honestly committed to the achievement of the team’s objective and be willing to be an honest and involved participant. The second element is skill – all team members must know or be taught the dynamics of an effective team, the responsibilities they are taking on, and have the ability (either innate or learned) to work in the unique collaborative manner that teams demand.
The third key element of effective cross-functional teams is a strong, empowered and properly trained leader. Each of these characteristics — strong, empowered and trained — is important and distinct. A team does not lead itself, nor is a leader simply a facilitator. Indeed, it is often recommended that a team designate separate roles among its members for leader, facilitator, and rapporteur (documenter). The leader must welcome the role and be willing to take command. She must be empowered by both the clinical development leadership and her own vertical department so that the team’s decisions will be endorsed, funded and supported. And the leader must be trained in the special skills the leader needs, as distinct from a participant.
The fourth key element is clarity and focus: the team must have clear objectives and be able to focus on them without distractions or unnecessary changes in direction. So, too, each team member must be allowed by his or her department to focus on team participation and not be pulled away constantly to other duties. Ideally, each participant should feel that his or her self-interest matches the interest of the team. This will ensure the team’s performance will serve the corporation’s self-interest.
The fifth key element of effective cross-functional teams is a mechanism for measuring performance. The team should know how to develop relevant and feasible metrics so that it can determine, or even anticipate, performance problems, and increase the predictability of those inevitable crises, leading to a more rapid and effective response.
But the most important element of an effective cross-functional team is that which lies at the heart of effective clinical development — a well-understood, proven and documented process for clinical trial conduct. Even the best trained and most well-intentioned team will founder if your organization has not figured out how to do clinical trials well under the conditions of your company’s special circumstances. This is perhaps the most overlooked piece of the cross-functional fad; these teams will be only as good as the process they are asked to implement (and improve!).
Cross-functional teams have proven themselves in our industry and others as a way to focus the energies of talented multidisciplinary staff on a common goal. But sitting down with an organization chart and picking one person from each department will not be sufficient to realize the value in this approach. Don’t let your teams end up being black holes. With commitment, skill building, a strong leader, clarity of objectives, performance metrics and a good clinical trial process to implement, your cross-functional teams will live up to their expectations.
As the information technology used in clinical research has evolved, matured and somewhat stabilized in recent years, many companies and clinical research professionals have gained great confidence in their understanding of technology options and even technology implementation. This increasing awareness and sophistication among staff from all clinical research functions is gratifying, but as the saying goes, a little knowledge is a dangerous thing.
Knowing
After a decade of product demonstrations, industry presentations and column reading, biopharmaceutical clinical research staff understandably think they’ve seen and heard it all. They know about systems for clinical data management, and electronic submissions, adverse event tracking, patient randomization, and a myriad of other computer-based tools. After seeing their twelfth EDC demo, or their fifth document management demo, what else is there to look at? After hearing speakers give very similar presentations year after year, what else is there to hear? After reading dozens of columns by writers nagging them about how to select and implement software for clinical research, what else is there to read?
Many people think they now know most everything there is to know about clinical IT. People think they know what the various application spaces are, and are confident about how the combination of each individual application’s niche makes up the whole solution. People think they know what these applications should do – primarily automate what they do on paper. People think they understand who should be responsible for implementing the technology (themselves, regardless of their function!). People even think they understand all about how buying a new software tool means change, by which they mostly mean that they are ready to open up a laptop computer instead of a spiral notebook to do their work.
Ultimately, people don’t know what they don’t know. A false sense of confidence, even smugness, has settled in at a number of companies whose management firmly believe there is nothing new under the sun, and what was once new they have fully absorbed. This perception can also be well-intentioned, as project teams go off down the path of acquiring software confident that they’ve done their homework, without realizing they haven’t finished the curriculum.
Consequences
This phenomenon is manifest across the spectrum of clinical research. For instance, one of the truisms that everyone “knows” is that great advantages can be achieved when functions are approached, in the IT context, in an integrated fashion – not as separate individual entities each with their own standalone application purchased, implemented and used individually. Instead, wherever possible, and particularly in new opportunities where multiple software replacements are sought, full consideration should be given to how each function’s work can be accomplished through a shared system or collection of applications which draw and feed data from and to each other. Everybody knows this, but in 2004 companies still pursue their software needs vertically by function, each in a vacuum. An example would be a company with pharmacovigilance, medical affairs and product quality functions all pursuing new solutions for tracking events simultaneously, yet independently, while ignoring the enormous potential for efficiency and business advantage in doing so as a single coordinated project. Why is this happening when everyone knows integrated solutions are a good idea?
Another example all too common in our industry is the adoption of EDC while sticking to all the business rules and conventions, standards and policies that were used by the company to do paper-based studies. Everybody knows that EDC changes the workflows and dataflows of trial conduct, but companies still gravitate to the familiar, and when faced with difficult decisions requiring alterations of a policy on data review, or interdepartmental approvals, or monitoring scheduling, or site selection in order to optimize EDC’s power, the response, when push comes to shove, is to shoehorn EDC into the way they work now. This despite the dozens of times that company’s staff will have learned and even repeated the mantra that EDC requires process change.
Another manifestation of not knowing what you don’t know is that the wrong people get involved in the implementation of new technologies. The ones who “know” all about new technology (i.e., they saw the demo, they heard the speech) are not at all necessarily those who should be responsible for implementing them. There are three very distinct roles in leading change: the catalyst, or change agent; the authority, or the person with the budget; and the implementer, the one who is truly able to manage the myriad tasks required to get a new technology working properly in your company. When these roles are confused (and who is best suited to each is very different from company to company), the technology project will go astray. For instance, the catalyst is often rewarded for her initiative by being “awarded” the implementation job, when the personality and experiential requirements for each role are very different. This is most frequently seen when the informatics department, whose responsibility it may be to be on the lookout for new enabling technologies (i.e., to play the catalyst role), is assigned the implementation role, perhaps even unwillingly, instead of the business user being responsible for the implementation. The result is that the end user has abdicated responsibility for the success of its own technology to a third party, and the initiative is insufficiently informed with the perspective of the all-important end user.
Each of these folks may know all about the technology in question, but there is a difference between knowing and doing.
Doing
Knowing about something is a good thing. If you were going to build a table out of a wooden log, it would help to know a great deal about woodworking, the hand and power tools you will need to use, and the characteristics of the wood you were about to saw into. But if all you had done was read about these things, or seen a demonstration, you are likely to have a painful experience ahead of you, and may well chew up too much of the log making inevitable mistakes so that, by the time you know how to really make the table, there isn’t enough log left to make it.
Knowing is speedy compared to doing. Doing means understanding consequences, good and bad, and being able to predict them and mitigate them. Doing means planning without creating paralyzing delay (this is where knowing can help). Doing means confronting your own organization with the knowledge you are bringing into a previously stable environment, and overcoming the antibodies that your functional organism will generate prolifically to fight this foreign knowledge. In short, doing has very little to do with knowing, except that it is dependent on it.
Doing takes time and resources, special skills and more money than you want to think about. Most of all it requires an awareness of this dichotomy, a recognition that the path from awareness to execution is measured in miles, not inches. The key for companies seeking to implement enabling technologies in clinical research is to both know and do – to harness knowledge but ward off complacency and overconfidence. Rather than thinking of a little knowledge as being dangerous, try to use it as the start of a highly beneficial, well planned and detailed triumph of doing.
XXX
Now that my title has grabbed your attention, let’s talk about something people don’t think much about when purchasing information technology: the cost of using it. When you start to implement the clinical research software package you purchased, you may find the implementation costs to be as obscene as a XXX website. There are good reasons for the high cost of implementation; you should be prepared for it beforehand, and you should factor these potential costs into your strategy for acquiring the enabling technology.
The title of this column is based on a pretty useful rule of thumb: the actual first-year cost of acquiring and implementing a new clinical research software application is about three times (3X) the price of the software license alone. Sometimes it’s less, sometimes it’s more. But if you keep that figure in your head, it’s a pretty good guideline. So if you pay $300,000 for a new CDMS (clinical data management system) or CTMS (clinical trials management system), you are well-served by budgeting $900,000 in total costs.
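For readers who like to see the arithmetic spelled out, here is a minimal sketch of the rule of thumb above. The 3X multiplier and the $300,000 license figure come from this column; any other multiplier you plug in is your own assumption about your project.

```python
# Back-of-the-envelope sketch of the "3X" rule of thumb:
# budget roughly three times the license price for total first-year cost.

def first_year_budget(license_price: float, multiplier: float = 3.0) -> float:
    """Estimate total first-year cost (license plus implementation)."""
    return license_price * multiplier

# The column's own example: a $300,000 CDMS or CTMS license.
license_price = 300_000
total = first_year_budget(license_price)
print(f"License: ${license_price:,} -> budget at least ${total:,.0f}")
# License: $300,000 -> budget at least $900,000
```

The point of the sketch is only that the license fee is the smallest of the numbers involved; the sections below describe where the other two-thirds of the money goes.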
“Wow!” I can hear you (and the vendors) screaming. This is a bit like revealing the secret of the pain of childbirth before a couple gets pregnant. Maybe now none of you will buy any software, not believing it can possibly be worthwhile. This is certainly not my intention; instead I mean to alert you to what is ahead, and help you plan accordingly.
Sources of Cost
Where does this cost multiple come from? Most research professionals can immediately guess two key factors: validation and training. Validation of the software application involves many components, not the least of which in our industry is regulatory compliance. The responsibilities inherent to the sponsor in using a software tool for EDC (electronic data capture) or CDM – that the clinical data as recorded by the investigator has not been altered in any way during the process of capture, storage and analysis – mean that a significant validation effort is required for any such software. And this responsibility is not alleviated by some “validation pack” from the vendor. Sponsors, for whom this software will carry the data upon which their discovery and development investments depend, are well-advised to design and execute the validation plan themselves or find a reputable third party to assist them.
Training is just one of the aspects of change management that come with any enterprise software introduction. Training is always the last item in the operations budget and the first to be cut. Under-fund training at your peril. Nearly everyone has a story of how poorly trained staff led to underutilization, misuse, or simply no use of the software so expensively obtained. Training done right addresses each of the multiple audiences according to what they need to know, when they need to know it, and how they learn best. If you start to add up the number of people who need to be trained, how many sessions it will take, how many different courses it will require, where you will have to go to deliver them, and how often it will have to be repeated, you can see the dollars mounting.
But there’s more to this picture than validation and training. Every new piece of clinical research software changes the way people in your organization work. In our industry, this needs to be thoroughly understood and documented. This in turn means new SOPs, working practices, clearly defined roles and responsibilities, perhaps documented process maps or workflows, and a change control process governing it all.
Then there is legacy migration. Most sponsors and CROs are purchasing a new software application to replace an older one, and some or all of the old data must be moved to the new application, which can be a particularly complex technical task. This is one area where careful thought should be given to what truly needs to be done; while validation, training, and SOPs are unavoidable, a well-crafted migration strategy may save thousands of dollars.
There are numerous other “soft costs” to software implementation, and importantly, they do not necessarily appear as budget line items. The staff time spent in meetings, teleconferences, workshops, exercises, and briefings alone adds up to a considerable sum in salary and “opportunity cost” (the cost of pulling people away from work that could contribute more to revenue or operations). Soft costs can be mitigated with excellent implementation planning and procedures.
There are some “hard costs” too: computer hardware, network infrastructure, security infrastructure, and other technical expenses. You may have more of this in place than you realize, but depending on what you are implementing, it may be an unexpected addition to the bill. This is another point where alternative strategies can possibly reduce your costs.
No Worries
Recently some vendors (and their pharmaceutical customers) have announced publicly that their products have been used with little, or even “no”, process change, implying that the 3X formula described here is not immutable, or perhaps out of date. These claims are disingenuous at best. When the sources of these claims are investigated, you will find that while the vendors and executives may believe their processes didn’t change, those who actually do the work will tell quite a different tale: one of heroics performed to make the software deliver. Or you might discover that the announcement is premature, and that much work lies ahead to ensure compliance and reproducibility of the software’s success.
Mitigation
This 3X phenomenon carries a number of implications for sponsors and CROs. The first strategy is to anticipate it and budget for it, but you can also mitigate the problem in several ways. Include implementation items in the list of requirements you use for your RFP or similar dialogue with vendors, and evaluate the answers to see whether one vendor in fact offers products or services that will burden your internal resources less.
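As a back-of-the-envelope budgeting aid, the 3X rule of thumb can be written out directly. The license figure and the category weights below are hypothetical assumptions chosen only to illustrate the shape of such a budget; your own split will differ.

```python
# Hypothetical "3X" implementation budget sketch. The license fee and
# the category weights are illustrative assumptions, not vendor quotes.

license_cost = 500_000           # assumed software license fee ($)
IMPLEMENTATION_MULTIPLE = 3      # the "3X" rule of thumb

implementation_budget = license_cost * IMPLEMENTATION_MULTIPLE

# Assumed split of the implementation budget across the cost
# categories discussed in this article (weights sum to 1.0).
weights = {
    "validation": 0.30,
    "training and change management": 0.25,
    "SOPs and process redesign": 0.15,
    "legacy data migration": 0.10,
    "soft costs (meetings, opportunity cost)": 0.12,
    "hard costs (hardware, network, security)": 0.08,
}

for category, w in weights.items():
    print(f"{category:45s} ${implementation_budget * w:>10,.0f}")

total_cost_of_ownership = license_cost + implementation_budget
print(f"{'total cost of ownership':45s} ${total_cost_of_ownership:>10,.0f}")
```

The useful part of the exercise is not the particular weights but the discipline: if a line item such as migration or training is missing from your plan, this kind of breakdown makes the omission visible before the invoice does.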
Second, seek help from consultants who are specifically experienced with clinical research software implementations. This will save time and money by providing a jumpstart and a repository of “lessons learned” to apply to your own project. You can also gather such information by leaning on colleagues in other departments, or even at sister pharmaceutical companies, if they are willing to share what they have learned.
In addition, consider alternative means of acquiring and using the software. It may make more economic sense to outsource the application’s function (for instance, using a CRO for your CDM instead of buying your own CDMS). The tradeoff in outsourcing, of course, is the loss of control, and the fact that no outsourced supplier will ever care as much about your data as you do. Another alternative is the ASP (application service provider) model, in which the software vendor or a services company hosts the application for you, alleviating some hardware, network, and maintenance costs, or at least spreading them out over time. Some vendors offer software operation (such as EDC trial design and setup) along with application hosting. Again, this may be efficient for a period of time, but far too costly if you later ramp up use of the software widely across your enterprise.
The implementation costs for clinical research software may seem high, but they are a necessary evil. What’s really obscene is what happens when you don’t budget properly for implementation, and you get caught with your pants down.