Abstract: Adaptive clinical trial designs use interim analyses (IA) to inform trial modifications, often guided by conditional power (CP). For time-to-event outcomes, CP is typically estimated from the log-rank test under the assumption that the interim hazard ratio (HR) persists in future patients, ignoring baseline covariates. We propose a predictive modeling framework that leverages baseline covariates to improve the estimation of CP. We introduce new metrics that quantify the accuracy of estimation and decision-making. Using extensive simulations, we examine the impact of covariate informativeness and covariate type to evaluate when our framework is beneficial. Results show that the proposed framework is particularly effective in futility analyses. Based on these findings, we provide practical guidance for incorporating our framework into adaptive clinical trial designs.
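[Editor's illustration] For concreteness, a minimal R sketch of the conventional approach the abstract contrasts against: conditional power under the assumption that the interim hazard ratio persists, using the Schoenfeld approximation for the log-rank statistic and a Brownian-motion (B-value) argument. The function name, 1:1 allocation, one-sided alpha of 0.025, and the example inputs are illustrative assumptions, not details taken from the work itself.

conditional_power <- function(hr_interim, d_interim, d_total, alpha = 0.025) {
  t1 <- d_interim / d_total                      # information fraction: events observed / events planned
  z1 <- -log(hr_interim) * sqrt(d_interim / 4)   # interim log-rank z, Schoenfeld approximation, 1:1 allocation
  theta <- -log(hr_interim) * sqrt(d_total / 4)  # drift at full information if the interim HR persists
  b1 <- z1 * sqrt(t1)                            # B-value at the interim look
  pnorm((b1 + theta * (1 - t1) - qnorm(1 - alpha)) / sqrt(1 - t1))
}
conditional_power(hr_interim = 0.75, d_interim = 150, d_total = 300)  # roughly 0.77 with these assumed inputs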
Abstract: Digital twins are emerging as useful tools in clinical research. By leveraging mechanistic models and data-driven approaches, digital twins enable simulation, prediction, and monitoring of patient trajectories in ways that can support trial design and execution. This presentation introduces the concept of digital twins, explores key methodologies from the literature, and highlights practical applications in clinical trials. We will discuss how digital twins can support covariate adjustment, stratification, and the creation of in silico patients, potentially reducing sample sizes and improving trial efficiency. Finally, we will examine challenges such as model generalizability, transportability, bias, and interpretability, and outline strategies for mitigating these risks. Attendees will gain a foundational understanding of digital twin technology and its role in advancing evidence generation in drug development.
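[Editor's illustration] As a minimal sketch of the covariate-adjustment use case mentioned above (not the speaker's implementation): a digital-twin prediction of each patient's control outcome can enter the primary analysis as a baseline covariate, typically tightening the treatment-effect estimate. The simulated data, effect size, and column names below are placeholders.

set.seed(1)
n <- 200
twin_pred <- rnorm(n, mean = 50, sd = 10)        # twin-predicted outcome under control (simulated)
treat <- rbinom(n, 1, 0.5)                       # 1:1 randomized assignment (simulated)
y <- twin_pred + 5 * treat + rnorm(n, sd = 5)    # observed outcome (simulated)
unadjusted <- lm(y ~ treat)                      # unadjusted analysis
adjusted <- lm(y ~ treat + twin_pred)            # twin prediction as a baseline covariate
summary(unadjusted)$coef["treat", 2]             # standard error of the treatment effect, unadjusted
summary(adjusted)$coef["treat", 2]               # standard error, adjusted (typically smaller)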
Abstract: Sanofi is transforming into an R&D-driven, AI-powered biopharma company committed to improving people’s lives and fueling sustainable growth. This talk will highlight how Sanofi is building and deploying impactful AI products that reinvent the R&D value chain—from target discovery to clinical development and portfolio decision-making. The speaker will share examples of key AI solutions, outline the scientific and technical innovations behind them, and show how scaled AI platforms are accelerating insights, enhancing decision quality, and shortening development cycles. The session will also discuss lessons learned in building AI for real-world R&D environments, including productization, adoption, and cross-functional collaboration. Attendees will gain a view into how AI is reshaping biopharma R&D and how Sanofi is executing this transformation at scale.
Abstract: Standardized clinical data transformation—CDISC SDTM, ADaM, and the downstream TFL pipeline—has always demanded rigor. It has also demanded time: countless hours translating specifications into code, checking edge cases, aligning controlled terminology, and proving compliance. Today, large language models (LLMs) are changing the economics of that work, not by weakening standards, but by making high-quality automation suddenly practical. In this talk, we use SDTM as a concrete, high-stakes example to explore how LLMs are reshaping statistical programming. We will walk through an approach that turns natural-language SDTM specifications into executable R code using modern LLMs and a Retrieval-Augmented Generation (RAG) framework. The key idea is to treat SDTM knowledge—Implementation Guide rules, variable roles, controlled terminology, domain structures—not as “helpful background,” but as enforceable generation constraints and validation rules. We then push beyond one-shot code generation. Instead, we demonstrate a closed-loop workflow—generate → self-check → repair—where the model executes the code in a sandbox environment, detects problems via static and dynamic checks, and iteratively improves the output until it meets predefined quality gates. Finally, we introduce a structured review mechanism that supports human validation, traceability, and regulatory expectations—positioning AI not as an unchecked autopilot, but as an auditable teammate. The real question is no longer whether LLMs can write code. It is whether we can design systems where speed, correctness, and compliance reinforce each other—and how this shift changes what it means to be a statistical programmer in pharmaceutical R&D.
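[Editor's illustration] The closed-loop workflow described above can be sketched as a simple controller. In this hedged illustration, generate_code() stands in for a hypothetical LLM call supplied with RAG-retrieved SDTM context, and run_checks() for the static and dynamic quality gates; neither is a real API, and both would be provided by the caller.

generate_with_repair <- function(spec, generate_code, run_checks, max_iter = 5) {
  feedback <- NULL
  for (i in seq_len(max_iter)) {
    # 1. Generate (or repair) R code from the natural-language spec, conditioning on prior feedback.
    code <- generate_code(spec, feedback)
    # 2. Execute in an isolated environment and run static/dynamic checks
    #    (e.g., required variables present, controlled terminology conformant).
    result <- tryCatch(
      run_checks(code, env = new.env(parent = baseenv())),
      error = function(e) list(pass = FALSE, issues = conditionMessage(e))
    )
    # 3. Stop once all quality gates pass; otherwise feed the issues back for repair.
    if (isTRUE(result$pass)) return(list(code = code, iterations = i))
    feedback <- result$issues
  }
  stop("Quality gates not met within ", max_iter, " iterations")
}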
Abstract: Clinical study protocol development is often hindered by time-consuming authoring, inconsistent formatting, and quality issues that can lead to costly delays. eProtocol Suite addresses these challenges through an AI-enabled Microsoft Word add-in integrated with a unified Protocol framework. The solution supports study teams in managing a standardized content library, searching historical protocols, extracting study-specific information, verifying regulatory template compliance in real time, and generating contextually appropriate draft content on demand. By automating repetitive authoring tasks and proactively identifying compliance gaps before formal quality control review, eProtocol Suite shortens drafting timelines while improving consistency, accuracy, and overall protocol quality. Serving as an intelligent backbone for the clinical document lifecycle, it helps organizations close the gap between manual processes and the rigorous standards required for regulatory approval.
Abstract: TBA
Abstract: TBA