Li-An Lin, PhD, is the Head of Safety Statistics at Moderna. Before joining Moderna, he was a safety statistician at Merck, following the completion of his PhD at The University of Texas. His research interests encompass adaptive trial design, causal inference, meta-analysis, statistical evidence, survival analysis, and Bayesian analysis. In recent years, Li-An has been actively involved in developing interactive tools (via the R Consortium and internal efforts) for clinical trial data processing, analysis, reporting, and visualization. He is an active member of the ASA Safety Working Group and co-leads the WS3 initiative on integrating RCT and RWE for safety decision-making.
Abstract: Post-marketing safety assessment is a crucial aspect of drug development, aiming to ensure the continued safety and efficacy of pharmaceuticals across diverse populations. The International Council for Harmonization (ICH) has provided essential guidance and frameworks to harmonize drug development and post-marketing pharmacovigilance. The ICH E17 guideline promotes the harmonization of multi-regional clinical trial designs, which facilitates simultaneous drug development and approval globally. The ICH M14 guideline emphasizes the need for robust methodologies and rigorous monitoring in post-marketing safety evaluations. The increasing use of multi-national pharmacovigilance studies is a significant development in this field. The integration of electronic healthcare data, including administrative claims and electronic health records, enables researchers to conduct extensive observational studies. AI is increasingly being utilized to enhance the design and execution of observational studies in clinical research. AI-based tools can rapidly identify patterns embedded in these varied observational data sources. In addition, AI can significantly improve the efficiency and accuracy of such studies by automating various aspects of the research process. In this presentation, we will examine design considerations for post-marketing safety studies and how AI-based tools can facilitate study design. Using an example of a multi-national post-marketing study, we will discuss key pitfalls related to the design and analysis of such studies, as well as strategies to mitigate biases. By leveraging these AI technologies, researchers can enhance their ability to identify and manage risks, thereby improving overall trial outcomes and accelerating the clinical research process.
Andrew Semmes is the Associate Director of Pharmacovigilance Artificial Intelligence and Digital Innovation at Moderna, where he leads AI adoption and digital transformation initiatives within Clinical Safety & Pharmacovigilance (CSPV). His work focuses on leveraging AI to enhance pharmacovigilance processes, automate workflows, and improve operational efficiency while ensuring GxP compliance. He spearheads enterprise-wide initiatives to enable Moderna’s AI infrastructure to scale effectively in highly regulated environments. Prior to joining Moderna, Andrew was a strategy and analytics consultant at Deloitte, where he helped pharmaceutical and biotech companies integrate AI into pharmacovigilance, automate adverse event triage, and optimize regulatory workflows. He led efforts to develop safety and compliance systems, streamline business processes within digital transformations, and drive data strategy initiatives that generated global cost savings. Andrew holds a Master’s in Information Science with a focus on Data Science and a Bachelor’s in Information Science with a concentration in User Experience, both from Cornell University.
Abstract: Many biopharmaceutical companies see the potential of AI, but making it both effective and compliant is a challenge. This session will dive into how digital teams can assess the risk of AI systems, strike a balance between automation and human oversight, and adopt AI while upholding the highest standards of quality and compliance. We’ll explore key principles for AI adoption in GxP workflows (informed by health authority guidance), the role of solid documentation and lifecycle management of AI models, and how structured credibility assessments can build confidence in AI-generated insights and validate the business case for AI systems.
Tarak Thakker is a seasoned expert with over 22 years of experience in Life Sciences and Pharmacovigilance, dedicated to leveraging technology to address critical challenges in the PV industry. He has played a pivotal role in pioneering AI and automation solutions for ICSR intake and processing. A recognized thought leader, Tarak is a frequent speaker at industry conferences and has delivered an educational seminar at DIA on AI/ML applications in Pharmacovigilance. Before joining BeiGene, he led global implementations of automated intake and safety systems at RxLogix, contributed to AI-driven PV solutions with IBM Watson, and collaborated with Deloitte, Oracle/Relsys, and Otsuka. Currently, he serves as the Director, Global Head of Safety Systems and Reporting at BeiGene. Beyond his professional pursuits, Tarak is an avid sports enthusiast and passionate traveler.