Almost everywhere you turn today, and at nearly any conference you attend, you are likely to encounter someone talking about Artificial Intelligence (AI) and the accompanying proclamation that its implementation will disrupt almost every aspect of business. Countless articles are being written, talks given, and startups launched, all leveraging AI as a game changer and sometimes overselling it as a panacea.
The biopharma industry, typically a slow adopter of new technology, has moved rather quickly to grasp the potential of AI in bringing new drugs to market, even if not yet its application, and like other industries it is enthusiastic about the numerous opportunities AI affords to automate what were once manual tasks. There is, of course, the regular chorus of caution against the use of artificial intelligence, with many good and salient arguments about why we should be careful in how we adopt and apply it.
One of the most recent arguments for caution I heard rested on the premise of “benefit” and how vastly different that premise is for humans as opposed to machines. The argument, in short, was that humans do tasks in order to gain some benefit, which can take any number of forms: money, charity, goodwill, benevolence, personal growth, and so on. A machine, on the other hand, will never possess a conscience that supplies diverse reasons for why it is doing a particular task and to what purpose. Instead, the argument claims, a machine’s primary focus will be to advance its own directive without emotion or thought of others. This may very well have some veracity to it, but like it or not, the argument will not stop the advance of AI in the biopharma industry or any other.
So the next question becomes: how do we deal with AI once we begin to advance its application? I believe this is the area that requires a more direct and specific focus. The potential benefits of applying AI to our industry are irrefutable. If machines can predict outcomes more effectively, analyze data more holistically, and pinpoint potential roadblocks to success more accurately, then all of those outcomes add significant benefit to bringing new treatments to market for people who desperately need them.
But are your organizations prepared for the integration of AI into your existing processes and structures? The answer is, probably not. Most of the attention to date has gone to applying AI to a problem and how it can solve that problem, but very little to how to integrate AI into company structure, culture, and process. There is a wonderful TED talk on this very issue by Matt Beane (professor at the University of California at Santa Barbara), in which he points out the devastating impact that a one-dimensional implementation of AI can have on the next generation of human knowledge and capability. I recommend you take nine minutes out of your day and listen to the talk, because it is poignant and thought-provoking. Matt’s conclusions are equally applicable to the rush to implement AI at biopharma companies without taking the time to plan ahead and adjust your organization to accept it. How will your company adapt when AI is leveraged to identify and target specific geographic areas for subject and site recruitment? How will your governance structures change if AI succeeds in predicting and analyzing safety projections, and how will this impact your PV departments and DSMBs? How will your organization’s procedures and support structures adapt to an AI solution that automates a large portion of monitoring or protocol development? As this technology matures, there is little doubt that you will gain efficiencies in a number of areas, but at what cost to the human, cultural, and emotional intelligence portions of your organizations?
There is an analogous example, and a good case study to draw from, in biopharma’s strong shift to outsourcing in clinical research, a shift that continues to accelerate inexorably. It started when executives at biopharma companies, on the advice of consultants, decided that because of the fluidity of clinical trials they should reduce their fixed costs (in-house resources) in favor of variable costs (outsourced resources). The financials made a lot of sense, and so in a relatively short time the resource models were drastically overhauled, and the few people retained at the biopharma companies were shifted almost overnight from the role of contributor to the role of overseer, with little more than a few days of training to help them along. The result of this quick shift in role and organizational expertise was a set of relationships with CROs and other vendors fraught with friction and assignments of blame. It also had the unintended consequence of a perceptible decay of operational knowledge and expertise at the biopharma companies themselves, as those skills were no longer put into practice or fostered. Equally unfortunate, the projected savings, in both costs and efficiencies, have not materialized in a meaningful way, as evidenced by recent studies conducted by the Tufts Center for the Study of Drug Development (CSDD).
So, as we embark upon the exciting and inspirational path of AI, and all that it can offer the clinical research world, it would behoove us all to direct just a portion of our focus away from the technology itself and toward the organizations seeking to benefit from it. Preparing our organizations, and their people, to accept a technology that promises to be far more disruptive than anything we have encountered before may be the difference between rapid, successful adoption and a path strewn with impediments and tribulation.
© Waife & Associates, Inc., 2019