
After decades of dragging its feet, the biopharma clinical development world is finally embracing the adoption of information technology in support of clinical development. The industry as a whole has overcome the problems of operating in a regulated environment, fear of change, inadequate budgeting and unclear software governance.

What is ironic now is how far the pendulum has swung to the other end, toward acceptance of and reliance on technology as the solution to almost any clinical development operations problem. Do any of the following sound familiar?

  • Problem: We should be doing more vendor oversight.
  • Solution: Let’s get a new CTMS.
  • Problem: We should be doing Risk Based Monitoring.
  • Solution: Let’s get RBM software.
  • Problem: We should be collecting more real-time personalized patient data.
  • Solution: Let’s implement mHealth.

While the implementation of these tools can be part of a solution to the operational issues we have, they do not represent the entirety of the solution – and that is precisely where the frustration with technology stems from. Users and executives alike have the unrealistic expectation that the tool itself will solve the problem.  But in reality, it is the way you adapt to and use the tool that is the primary factor in achieving a solution. Indeed, implementing a new software application may not be the most important step you need to take.

Most sports provide a telling analogy: in any given competition, the athletes by and large all use the exact same equipment.  In tennis, most players use the exact same racquet.  In baseball, there is very little difference between the bats, gloves, and shoes worn.  The same goes for hockey, basketball, and skiing. Indeed, in basketball everyone has to use the same ball.

The equipment itself has progressed over the years, which gives athletes today an edge compared to previous generations.  For example, you would be hard-pressed to find any professional tennis player using a wooden racquet today.  But why are some athletes more successful than others? It is definitely not the equipment.  I can go out and buy the exact same racquet as Rafael Nadal or Roger Federer, but that will not allow me to play tennis at their level.  Their success is a combination of raw talent, practice, determination, conditioning and, yes, taking advantage of the technology (their racquets) to complement their game, by leveraging that technology to highlight their strengths and minimize their weaknesses.

And you do see even top athletes change equipment when things are going badly for them. Very rarely does this make a difference in their performance. Their performance improves when they have correctly diagnosed the deficiencies in technique, attitude and skill they bring to using those tools.

The same analogy holds for biopharma and its use of information technology.  Many biopharmas use the same piece of technology (or very similar technology) for data collection, a CTMS, safety reporting, an eTMF, etc., and yet some companies (or projects within companies) are more successful in clinical development than others.  Like sports, the spectrum of success and optimization lies in how the organizations use those tools.  Are they taking full advantage of the functionality that supports their goals? Have they adapted the tool to fit within their organization?  Have they defined their processes to limit duplication of effort and optimize workflows? Technology provides the means to help us get there more easily and efficiently, but it will not get us there on its own. It’s the environment around the technology which is the key to solving the operational issues that confront us every day.

Another obstacle biopharma tends to trip over is a reliance on the developer of the information technology to provide guidance on how to use it.  This is not fair to the developer nor is it fair to ourselves.  Technology vendors are, for the most part, technology specialists, by definition.  They are not clinical research and development specialists, and don’t market themselves as such. So, why do we ask them for help in using their tools when the questions we ask are fundamentally operational, and not technological?  Most vendors will, of course, be happy to try and help you, as it is their system you are using and they have a vested interest in ensuring your satisfaction, but this relationship often ends badly and unnecessarily so. Our sports analogy teaches us again: Roger Federer does not enlist someone from Wilson to be his coach nor does Tiger Woods ask someone at Nike to help him work out his drive and short game.  Instead, such professionals enlist the help of specialists who are experts in the sport they are playing so that they can get the advice they need to win.

When confronted with your next clinical development operational issue, think back to the analogies from other industries and by all means buy, enlist, or lease the appropriate information technology that will help you address the problem.  But if you want to be successful, be sure to devote enough time, attention and money to defining how to optimally use that technology within your individual organizations.

For the arrow to hit the target, the archer has to be the one to get it there.

For further discussion or assistance on this topic, reach out to Waife & Associates, Inc. at www.waife.com, or email me directly at shevel@waife.com.

Clinical trial disclosure and data transparency continue to be a challenge for sponsors, regulators and public advocacy groups. The challenge is across the board – in methods, requirements and urgency.  To the extent they understand the requirement, research sponsors certainly want to comply and publish their trials. But they often fall short of full compliance.  Even when they take steps to support and fund the transparency requirements, many organizations do not take the time to address the necessary people and process changes.

The typical “knee-jerk” reaction to data transparency is one of annoyance, followed by what is thought to be a quick fix: “implement a computer system” or “outsource trial disclosure”.  Although these are potentially reasonable links in the compliance chain, technology and resources alone are not adequate to address the gap between traditional operations and new requirements.  The tried and true maxim of “people, process and technology” applies here.  The key to clinical trial disclosure and data transparency compliance is a robust clinical trial disclosure process which is properly designed and enforced by upper management, integrated into all aspects of trial conduct within the organization, supported by adequate resources, and underpinned by a comprehensive technology solution.

Clinical trial sponsors struggle with the trial transparency requirements for a variety of reasons.  Perceived as an additional burden and an industry confidentiality risk, trial transparency has often received less than adequate attention and resources from sponsors.  This continues today even while it attracts the focus of journal publishers and patient advocacy groups, in addition to increasing government scrutiny on sponsor compliance.

In fact, a robust and optimal compliance solution is remarkably complex and resource-intensive: sponsors have to draw people from multiple disciplines (data management, marketing, statistics, regulatory); a process with clear governance has to be designed; changing international requirements have to be understood and kept up with; compliance has to be monitored and maintained; and the resulting mechanism has to be fully funded, permanently.

The typical reaction to meeting an information requirement is to either implement an information technology solution, or outsource it to someone who has one.  Unfortunately, owning a car does not mean you know how to drive.  The slow pace of achieving compliance can be attributed, to varying degrees, to typical “people-focused” reasons:

  • Lack of executive appreciation: Executive management typically have an understanding of the clinical trial transparency and data access requirements.  However, this understanding is not the same as funding or resourcing support. Many managers do not seem to appreciate the intent and objective of the transparency rules, and do not consider the impact and consequences of non-compliance.  This could change in the near future if enforcement of the mandated fines increases, along with public “shaming” of those who fail to publish.  But so far the perceived cost of implementing an effective solution is still winning over the benefit of stricter disclosure compliance.
  • Lack of resources: Some development organizations have sought to address transparency requirements by allocating (very) few FTEs to oversee the posting of required clinical trial information on relevant websites (typically focused on clinicaltrials.gov only). Unfortunately, adequate resources from the required multiple disciplines are usually not assigned – it seems like too much of an investment requiring too much coordination.  Further, the personnel working on disclosure are rarely empowered with the authority or the financial resources to implement procedures effectively or consistently.
  • Process failures: This leads us to the third and typically the most significant obstacle to consistent compliance. Most clinical trial disclosure functions are an “add-on” to the clinical development and operations functions.  Organizations fail to integrate the disclosure requirements into all aspects of clinical trial definition and execution.  Consequently, many trial disclosure activities become an afterthought. Some sponsors consider themselves successful because they incorporate the disclosure requirement into the protocol development lifecycle.  However, they fail to consider the requirement for ongoing updates and for data reporting.  Furthermore, trials conducted by subsidiaries in ex-US/EU markets seem to slip through the net, which can lead to compliance failures.

To discuss these issues further and how we can help, please reach out to Waife & Associates (www.waife.com) by emailing Ramzi Najm (najm@waife.com).

How many times have you heard people in your organization ask third-party providers (vendors, consultants, CROs), “Tell me, what is the industry standard?”  This question is posed about any number of topics, be they procedural activities, technology adoption, data standards, or even corporate structure.  Whenever I get this question from a client I get a bit of a cold shiver down my spine and have to find a way to respond politely in a way that gets them closer to answering the question they should really be asking: “What should we be doing to improve?”

You see, the problem is not the idea of finding out how your competitors at other companies are operating.  It is fine to seek out what others are doing and then determine if that way of doing business will fit within your corporate construct.  The problem lies in modelling yourselves after other companies just because they do something a particular way.  Admittedly there are a number of practices that are common within the industry, but these are more general in nature, as they should be.  For example, if someone asks me how they should be capturing data in the year 2018, I would say you should be using EDC.  This, as with a few other examples, is more of a general practice within the industry and is one of necessity as opposed to preference.   So yes, there are general practices that one can point to in the industry which almost everyone is following, but these are well known to the point of being self-evident.   The typical “industry standard” question is directed at more granular topics which are far more complicated.

The reason the term “industry standard” does not fit well in biopharma is that our very diverse organizations do not, and should not, do things the same way.  Even so, we continue to try to apply homogeneous methods and procedures across our diverse organizations, with mediocre results.  The following examples may seem familiar to you:

  • A new consulting firm comes in and applies an almost identical “templated” approach to your problem that they used with their previous client.
  • A new initiative is implemented that borrows efficiency modeling and practices from manufacturing (or another industry entirely) and attempts to apply them to your (non-manufacturing) organization.
  • A new set of executives enters your company and begins to alter the operational aspects to more closely resemble the company they just left.

All of these examples and others like them usually have limited success, as they fail to take into account two crucial characteristics of the biopharma industry:

  • People are not constants like parts in a manufacturing process, so it is unlikely that they will react in exactly the same way to a given stimulus.
  • Culture, both social and corporate, plays a significant role in the operational facets of a company.

When we boil it down to its simplest parts, yes, we are all doing the same type of work in a regulated environment, but the similarities end at that macro level.  How we go about completing this work can, and should, differ based on the culture of your organization.  A certain methodology widely accepted at one company may not be palatable at another company. Yet many third-party providers (CROs, consultants, vendors) will force a particular process or product on you because it has been successful elsewhere. Worse yet, it may be labeled a “best practice.” And worse still, you may embrace it only because of this industry standard/best practice label.

Why does this happen so often? Usually out of understandable frustration with our under-performing organizations and the resulting mistrust of current staff and opinions, followed by the eager embrace of the attractively branded unknown. Keep in mind that you are often getting this information third-hand and may not be aware of any pitfalls and issues these other companies are experiencing.  As the old saying goes, “If you do what everyone else does, you get what everyone else gets.” It is better to follow the Rolling Stones: don’t get what you want, get what you need.

So, the next time you overhear someone asking “what is industry standard?,” consider exploring the purpose of that question.  If your colleagues are hoping simply to mimic other companies, then try to steer them instead in the direction of analyzing what it is your company needs first. Perhaps you may identify practices from other companies that fit with your culture, or indeed, it may be better to develop your own tailored solution – your own “best practice.”  Instead of asking what other companies do, begin by deciding where you would like to be and identifying what is keeping you from getting there.

Contact shevel@waife.com for more…

It is interesting that the CRO industry has grown to many billions of dollars per year in revenue, but its biopharmaceutical customers have done so little to prepare themselves to manage well the activities done on their behalf. Those for whom these outsourced services are being performed – the clinical development operations managers, the research physicians, the data managers – have usually never been trained in the oversight of outsourced tasks, how to evaluate potential service providers, or how to use them efficiently. This situation invites inefficiency, not only in dollar terms, but scientifically, in terms of access to and retention of proprietary data, real-time knowledge of status and performance, and the ability to assess and reflect on development program progress and strategy.

Oversight of outsourced providers is mandated by regulation. Usually the most that biopharmas have done is to designate the contracting/purchasing department to do the oversight – a department with little or no clinical development professional background. This separation of oversight from those with the greatest internal knowledge, and the greatest vested interest in the performance, is a common but damaging error.

As recent highly publicized reports have emphasized, the use of CROs has not reduced cost or shortened timelines. Indeed, at the executive level, performance is not the concern, but rather only the trading of fixed costs for variable ones. Some would call this a cynical bargain, or at least a cold financial choice. This is not unique to biopharma, but perhaps it is more concerning considering the importance of our work to human health.

 The Contract Will Fix Everything

In theory, the contract between a vendor (CRO or software provider) and a sponsor should be the reference for both sides in case of doubt. In reality, the contract complicates and obstructs vendor oversight management.

The lack of domain knowledge in the contracting department, and the lack of time for input from the in-house experts, increases the risk of suboptimal contract terms, which will not get noticed until execution. For instance, if the in-house experts have not been asked for their advice on payment triggers (or did not look at them), you can frequently find payments for the wrong items (e.g., query resolution, per visit, per data check) and high costs for rather repetitive tasks (for instance, programming of tables, listings, and figures). Overall it seems that too often there is a high tolerance for failed milestones anyway, as eventually the sponsors just “want to get things done”.

As the contracts department gets more powerful, its natural “legalistic” and procedural focus can alienate providers and create extended delays, especially when dealing with large providers that have robust legal departments of their own. Indeed, study startup times are growing as studies in complex therapeutic areas increasingly rely on hospitals, which have their own efficiency issues.

Meanwhile, one of the long outstanding controversies has not been resolved – should or can sponsors write incentives and penalties into their outsourcing contracts?  If so, how should they be written and executed?

 The Mismatched Business Goals

The most overarching issue in vendor management is the operational mismatch between service providers and their customers. On one side there is the biopharma – a complex organization with multi-million dollar projects and the goal of bringing its medical innovations to market as quickly as possible. And on the other side there are the CROs, which are organized more like a “unit of work” factory, and for which (as publicly traded service companies or private-equity-driven enterprises) quarterly cashflow goals are what matter most.

At the base level, sponsors have drugs to develop (at very high cost and risk of failure); service providers on the other hand simply have bodies to keep busy. Indeed, efficiency is not an inherently desirable goal for a service provider whose contracts are time-based.

This is not about company size, since some CROs are bigger than many biopharmas. It is more about the incompatibility of company cultures. Changes in priorities, project successes and project failures are common when working for a sponsor. Biopharmas have worked hard over the last 15 years to tear down intracompany silos and to attract and develop broadly qualified people. Vendors tend to lag behind in this regard. In practice this leads to significant expectation mismatches. Where the sponsor expects flexibility and a solution-focused approach, CROs often take a more formalistic, “one step after the other” approach, and silo thinking is much more pronounced than on the sponsor side. Eventually, this may become an important hurdle for collaboration. So the question should be how to jump the hurdle, rather than whose fault it is.

 Efficient CRO Oversight – Where To Start?

Depending on the status and preparedness of a clinical development organization, it will take time and effort to develop and implement a (new) vendor oversight strategy. So where to start? What to do first?

The answer begins with doing your homework: understand the successes and failures in your past outsourcing experience and the reasons for them, assess your typical contract language, and establish your intended medium- and long-term outsourcing strategies. This will lead to identifying areas for internal process optimization. If, for instance, your study start-up processes are not working well, this may become a roadblock to better vendor management. If a company’s electronic Case Report Form (eCRF) design is not optimal, this may lead to site dissatisfaction and unnecessary costs. There may be misalignment between a sponsor’s standards and processes and the more “advanced” state of the CRO’s.

The metrics that guide you in understanding and judging CRO performance may also need revision – most sponsors use too many metrics, and the data on which they are based is too out-of-date or inaccurate. A properly streamlined approach to defining and collecting those metrics will inform all steps of the process, from original project design to contracting to oversight and learning.
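
As a purely illustrative sketch (the metric, the field names and the example values below are hypothetical, not a recommended standard), even one focused oversight measure can be computed directly from data the sponsor already receives, for example how long a CRO takes to deliver monitoring visit reports:

    from datetime import date
    from statistics import median

    # Hypothetical oversight metric: median days from each monitoring visit to the
    # availability of its visit report, per site. Field names and values are invented.
    visits = [
        {"site": "101", "visit_date": date(2024, 3, 4), "report_date": date(2024, 3, 20)},
        {"site": "101", "visit_date": date(2024, 4, 8), "report_date": date(2024, 4, 15)},
        {"site": "205", "visit_date": date(2024, 3, 11), "report_date": date(2024, 5, 2)},
    ]

    def report_turnaround_by_site(rows):
        """Return the median visit-to-report lag (in days) for each site."""
        lags = {}
        for row in rows:
            days = (row["report_date"] - row["visit_date"]).days
            lags.setdefault(row["site"], []).append(days)
        return {site: median(values) for site, values in lags.items()}

    print(report_turnaround_by_site(visits))  # {'101': 11.5, '205': 52}

The point is not the code but the discipline: a handful of such measures, refreshed from current data, tells you more than a dashboard of dozens of stale ones.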

Ultimately, all aspects of the sponsor-CRO collaboration should be captured in a vendor oversight plan. This plan will be the guidance for everyone, and together with your completed homework, it should give you a good start toward knowing what needs to be done to achieve proper risk-based vendor oversight.

 Successful CRO Oversight

The guiding principle for a healthy sponsor/vendor relationship is that the sponsor (big or small, experienced or naïve) should govern the relationship with its service providers. Although most vendors will insist they are your “partners”, in fact managing sponsor/provider communication and control from a position of accepted authority is key for the biopharma sponsor. Collaboration should never mean abdication: the authority needs to be natural and fact-based.

In our experience, clinical development organizations are typically neither prepared nor staffed to set up a successful vendor oversight strategy. Instead, sponsors jump to another vendor or another outsourcing model. There is never time or money to set up something sustainable from scratch. As a consequence, the learning from suboptimal experiences is never applied or considered relevant. This creates a succession of suboptimal experiences, which clinical development organizations can no longer afford. To discuss these issues further and how we can help, please reach out to Waife & Associates (www.waife.com) by emailing Detlef Nehrdich (nehrdich@waife.com) or Steve Shevel (shevel@waife.com).

If you are not locking your database within 5 days after your “last subject is out” then something is wrong.

It is well recognized in clinical research that succeeding and/or failing quickly is critical to the ability to bring new therapies to market. There are many aspects of clinical research that are beyond our control, such as how the drug will compete against other therapies in efficacy, how it will interact with the physiology of the subjects, and whether this will yield a favorable safety profile. These are the questions we hope to answer by conducting clinical trials, but we have no direct control over them until we get the data.

What we do have control over is how we collect and analyze the data and, more importantly, how quickly we are able to obtain this data to arrive at that critical and expensive decision of whether or not to proceed further. Over many years working in the biopharma research industry, I have noticed how much discussion is spent on getting that first subject into the trial. But many would argue that a heavier focus should be placed on getting the trial completed, so that you have the data you need to make critical development decisions. The database lock timepoint, then, is the key step in arriving at a development go/no-go decision.

I continue to be shocked when I ask conference audiences and clients, “How many days, on average, after last patient out, do you lock your database?” and the responses are overwhelmingly “weeks to months”. There are those few companies that are doing this within 5 days, and to those I say congratulations – you need not read any further. To the rest: if you are using electronic data capture (EDC) and you are not locking your database within 5 days of last subject out, then there is something broken in your process. This applies to both outsourced and insourced models, and is to a large degree independent of the EDC software you are using. To be fair, some software solutions may make this process a little easier with various built-in tools, but most of you can achieve this goal with your current EDC technologies.
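
As a rough sketch only (the study names and dates below are invented; the 5-day benchmark is the one discussed above), the measurement itself is trivial to automate, so there is no excuse for not knowing your own number:

    from datetime import date

    # Hypothetical studies: elapsed days from "last subject out" to database lock,
    # compared against the 5-day benchmark discussed above. All values are invented.
    TARGET_DAYS = 5

    studies = {
        "Study-301": {"last_subject_out": date(2024, 6, 3), "db_lock": date(2024, 6, 7)},
        "Study-302": {"last_subject_out": date(2024, 7, 15), "db_lock": date(2024, 9, 2)},
    }

    for name, s in studies.items():
        days = (s["db_lock"] - s["last_subject_out"]).days
        verdict = "on target" if days <= TARGET_DAYS else f"{days - TARGET_DAYS} days over target"
        print(f"{name}: locked {days} days after last subject out ({verdict})")

If producing this kind of report for your last few Phase II and III studies would be uncomfortable, that is the signal that the process around the EDC tool, not the tool itself, is the constraint.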

So, you should ask yourselves whether your procedures are preventing you from completing your trials in an expedited fashion; this is one of those components of clinical research that you have full control over, and there are many proven strategies and techniques for achieving a faster database lock. Often, as in most process analysis and correction, the solution is very specific to your particular circumstances of talent, policies, politics, compliance and history. If you are interested in updating your processes to take full advantage of the technology you have purchased and implemented, then please reach out to Waife & Associates at www.waife.com, or to me directly at shevel@waife.com. We have a proven track record of more than 25 years specializing in biopharma clinical development and addressing these types of issues. A 5-day database lock is a very real and achievable milestone, and if you are not achieving this performance in your Phase II and Phase III programs, then you are leaving opportunities and money on the table.