{"id":701,"date":"2019-03-01T14:19:22","date_gmt":"2019-03-01T19:19:22","guid":{"rendered":"http:\/\/waife.com\/home\/?p=701"},"modified":"2019-11-09T13:49:42","modified_gmt":"2019-11-09T18:49:42","slug":"ai-ai-oh-using-ai-in-clinical-development","status":"publish","type":"post","link":"http:\/\/waife.com\/home\/ai-ai-oh-using-ai-in-clinical-development\/","title":{"rendered":"AI, AI, Oh! Using AI in Clinical Development"},"content":{"rendered":"<p>Almost everywhere you turn today, or any conference you attend, you are likely to encounter someone talking about Artificial Intelligence (AI) and the associated proclamation of how AI\u2019s implementation is going to be disruptive to almost every aspect of business.\u00a0 There are countless articles being written, talks being given, and startups emerging, all leveraging AI as a game changer and sometimes overselling it as a panacea.<\/p>\n<p>The biopharma industry, typically a slow adopter of new technology, has moved rather quickly to grasp the potential of AI, albeit not its application yet, in bringing new drugs to market, and like other industries is enthused at the numerous opportunities that AI affords in completely automating what were once manual tasks.\u00a0 There is of course the regular chorus of caution against the use of artificial intelligence, with many good and salient arguments about why we should be careful in how we adopt and apply AI.<\/p>\n<p>One of the most recent arguments I heard on exercising caution presented the premise of \u201c<strong>benefit<\/strong>\u201d and how that premise is vastly different in humans as opposed to machines.\u00a0 The argument, in short, was that humans do tasks in order to gain some benefit which can take any number of forms \u2013 monetary, charity, goodwill, benevolence, personal growth, etc.\u00a0 While on the other hand, a machine will never possess a conscience that will dictate to them a diverse reasoning for why it is doing a particular task and for what 
purpose.\u00a0 Instead, the argument claims, a machine\u2019s primary focus will be to advance its own directive without emotion or thought of others.\u00a0 This may very well have some veracity to it, but like it or not, the argument will not stop the advance of AI in the biopharma industry or others.<\/p>\n<p>So, the next question then becomes, how do we deal with AI once we begin to advance its application? I believe this is the particular area that requires a more direct and specific focus.\u00a0 The potential benefits of applying AI to our industry are irrefutable.\u00a0 If you can get machines to predict outcomes more effectively, analyze data more holistically, and pinpoint potential roadblocks to success more accurately, then all of those outcomes in the end add significant benefit to bringing new treatments to market for people who desperately need them.<\/p>\n<p>But are your organizations prepared for the integration of AI into your existing processes and structures?\u00a0The answer is, probably not.\u00a0 Most of the attention to date has been paid to the application of AI to a problem and how it can solve that problem, but very little to how to integrate AI into the company structure, culture and process.\u00a0 There is a wonderful TED talk on this very issue by Matt Beane (professor at University of California at Santa Barbara), where he points out the devastating impact that a one-dimensional implementation of AI can have on the next generation of human knowledge and capability.\u00a0 I recommend you take 9 minutes out of your day and listen to the talk, because it is poignant and thought-provoking.\u00a0 Matt\u2019s conclusions are equally applicable to the rush to implement AI at biopharma companies without taking some time to plan ahead and adjust your organization to accept it. 
\u00a0How will your company adapt when AI is leveraged to identify and target specific geographic areas for subject and site recruitment?\u00a0 How will your governance structures change if AI is successful in predicting and analyzing safety projections, and how will this impact your PV departments and DSMBs?\u00a0 How will your organization\u2019s procedures and support structure adapt to an AI solution that automates a large portion of monitoring or protocol development?\u00a0 As this technology matures, there is little doubt that you will gain efficiencies in a number of areas, but at what cost to the human, cultural and emotional intelligence portions of your organizations?<\/p>\n<p>There is an analogous example, and a good case study to draw from, in biopharma\u2019s strong shift to outsourcing in clinical research, which continues to accelerate inexorably.\u00a0 The shift started when executives at biopharma companies, with the advice of consultants, decided that because of the fluidity of clinical trials they should look at reducing their fixed costs (in-house resources) in favor of variable costs (outsourced resources).\u00a0 The financials all made a lot of sense, and so in a relatively short time the resource models were drastically overhauled, and the few people who were retained at the biopharma companies were shifted, almost overnight, from the role of contributor to the role of overseer, with little more than a few days of training to help them along.\u00a0 The result of this quick shift in role and organizational expertise culminated in relationships with CROs and other vendors fraught with friction and assignments of blame. 
\u00a0In addition, it had the unintended consequence of a perceptible decay of operational knowledge and expertise at the biopharma companies themselves, as those skills were no longer put into practice or fostered.\u00a0 Equally unfortunate is that the projected savings, in both costs and efficiencies, have not materialized in a meaningful way, as evidenced by some recent studies conducted by the Tufts Center for the Study of Drug Development (CSDD).<\/p>\n<p>So, as we embark upon the exciting and inspirational path of AI, and all that it can offer to the clinical research world, it would behoove us all to direct just a portion of our focus away from the technology itself and towards our organizations seeking to benefit.\u00a0Preparing our organizations, and their people, to accept a technology that promises to be far more disruptive than anything we have encountered before may be the difference between a rapid, successful adoption and a path strewn with impediments and tribulation.<\/p>\n<p>\u00a9 Waife &amp; Associates, Inc., 2019<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Almost everywhere you turn today, or any conference you attend, you are likely to encounter someone talking about Artificial Intelligence (AI) and the associated proclamation of how AI\u2019s implementation is going to be disruptive to almost every aspect of business.\u00a0 There are countless articles being written, talks being given, and startups emerging, all leveraging AI as a game changer and sometimes overselling it as a panacea. 
The biopharma industry, typically a slow adopter of new technology, has moved rather quickly to grasp the potential of AI, albeit not its application yet, in bringing new drugs to market, and like other industries is enthused at the numerous opportunities that AI affords in completely automating what were once manual tasks.\u00a0 There is of course the regular chorus of caution against the use of artificial intelligence, with many good and salient arguments about why we should be careful in how we adopt and apply AI. One of the most recent arguments I heard on exercising caution presented the premise of \u201cbenefit\u201d and how that premise is vastly different in humans as opposed to machines.\u00a0 The argument, in short, was that humans do tasks in order to gain some benefit which can take any [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[50,4,54],"tags":[],"class_list":["post-701","post","type-post","status-publish","format-standard","hentry","category-recent-columns","category-recent-news","category-recent-postings"],"_links":{"self":[{"href":"http:\/\/waife.com\/home\/wp-json\/wp\/v2\/posts\/701","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/waife.com\/home\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/waife.com\/home\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/waife.com\/home\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/waife.com\/home\/wp-json\/wp\/v2\/comments?post=701"}],"version-history":[{"count":1,"href":"http:\/\/waife.com\/home\/wp-json\/wp\/v2\/posts\/701\/revisions"}],"predecessor-version":[{"id":702,"href":"http:\/\/waife.com\/home\/wp-json\/wp\/v2\/posts\/701\/revisions\/702"}],"wp:attachment":[{"href":"http:\/\/waife.com\/home\/wp-json\/wp\/v2\/media?parent=701"}],"wp:term":[{"taxonomy":"category",
"embeddable":true,"href":"http:\/\/waife.com\/home\/wp-json\/wp\/v2\/categories?post=701"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/waife.com\/home\/wp-json\/wp\/v2\/tags?post=701"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}