Abstract: Adaptive clinical trial designs use interim analyses (IA) to inform trial modifications, often guided by conditional power (CP). For time-to-event outcomes, CP is typically estimated from the log-rank test under the assumption that the interim hazard ratio (HR) persists in future patients, ignoring baseline covariates. We propose a predictive modeling framework that leverages baseline covariates to improve the estimation of CP. We introduce new metrics that quantify the accuracy of estimation and decision-making. Using extensive simulations, we examine the impact of covariate informativeness and covariate type to evaluate when our framework is beneficial. Results show that the proposed framework is particularly effective in futility analyses. Based on these findings, we provide practical guidance for incorporating our framework into adaptive clinical trial designs.
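For orientation, the sketch below illustrates the conventional CP calculation that the proposed framework improves upon: the interim log-rank statistic is projected forward assuming the interim HR persists, using the Schoenfeld approximation for the drift under 1:1 allocation. The function and its inputs are illustrative assumptions, not material from the talk.

```python
# Minimal sketch (illustrative only) of conditional power for a time-to-event
# trial under the usual "interim hazard ratio persists" assumption.
import numpy as np
from scipy.stats import norm

def conditional_power(z_interim, d_interim, d_max, hr_assumed, alpha=0.025):
    """One-sided conditional power of a log-rank test at level alpha.

    z_interim : interim standardized log-rank statistic (positive favors treatment)
    d_interim : events observed at the interim analysis
    d_max     : total events planned at the final analysis
    hr_assumed: hazard ratio assumed for future patients (e.g., the interim estimate)
    """
    t = d_interim / d_max                                 # information fraction
    b_t = z_interim * np.sqrt(t)                          # B-value at the interim
    drift = -np.log(hr_assumed) * np.sqrt(d_max) / 2.0    # Schoenfeld approximation, 1:1 allocation
    z_alpha = norm.ppf(1 - alpha)
    return norm.cdf((b_t + drift * (1 - t) - z_alpha) / np.sqrt(1 - t))

# Example: interim Z = 1.5 after 150 of 300 planned events, interim HR = 0.80
print(round(conditional_power(1.5, 150, 300, 0.80), 3))
```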
Abstract: In this talk, I will share our three-year effort to build an AI-era workforce. Although the journey has brought more failures than successes, early glimpses of remarkable productivity gains justify this ongoing endeavor.
Abstract: Digital twins are emerging as useful tools in clinical research. By leveraging mechanistic models and data-driven approaches, digital twins enable simulation, prediction, and monitoring of patient trajectories in ways that can support trial design and execution. This presentation introduces the concept of digital twins, explores key methodologies from the literature, and highlights practical applications in clinical trials. We will discuss how digital twins can support covariate adjustment, stratification, and the creation of in silico patients, potentially reducing sample sizes and improving trial efficiency. Finally, we will examine challenges such as model generalizability, transportability, bias, and interpretability, and outline strategies for mitigating these risks. Attendees will gain a foundational understanding of digital twin technology and its role in advancing evidence generation in drug development.
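As a concrete illustration of one of the uses named above, the sketch below shows covariate adjustment with a digital-twin prediction: a model-based forecast of each patient's control outcome is included as a baseline covariate, which can reduce the standard error of the treatment-effect estimate. The data, model, and effect size are simulated assumptions, not results from the talk.

```python
# Toy illustration (simulated data) of adjusting for a digital-twin prediction
# of the control outcome as a baseline covariate.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200
prognosis = rng.normal(size=n)                            # latent prognostic signal
twin_pred = prognosis + rng.normal(scale=0.5, size=n)     # digital-twin prediction of the control outcome
treat = rng.binomial(1, 0.5, size=n)                      # randomized treatment assignment
outcome = 1.0 * treat + prognosis + rng.normal(size=n)    # true treatment effect = 1.0

unadjusted = sm.OLS(outcome, sm.add_constant(treat)).fit()
adjusted = sm.OLS(outcome, sm.add_constant(np.column_stack([treat, twin_pred]))).fit()
print("unadjusted SE:   ", round(unadjusted.bse[1], 3))
print("twin-adjusted SE:", round(adjusted.bse[1], 3))
```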
Abstract: Sanofi is transforming into an R&D-driven, AI-powered biopharma company committed to improving people’s lives and fueling sustainable growth. This talk will highlight how Sanofi is building and deploying impactful AI products that reinvent the R&D value chain, from target discovery to clinical development and portfolio decision-making. The speaker will share examples of key AI solutions, outline the scientific and technical innovations behind them, and show how scaled AI platforms are accelerating insights, enhancing decision quality, and shortening development cycles. The session will also discuss lessons learned in building AI for real-world R&D environments, including productization, adoption, and cross-functional collaboration. Attendees will gain a view into how AI is reshaping biopharma R&D and how Sanofi is executing this transformation at scale.
Abstract: Standardized clinical data transformation—CDISC SDTM, ADaM, and the downstream TFL pipeline—has always demanded rigor. It has also demanded time: countless hours translating specifications into code, checking edge cases, aligning controlled terminology, and proving compliance. Today, large language models (LLMs) are changing the economics of that work, not by weakening standards, but by making high-quality automation suddenly practical. In this talk, we use SDTM as a concrete, high-stakes example to explore how LLMs are reshaping statistical programming. We will walk through an approach that turns natural-language SDTM specifications into executable R code using modern LLMs and a Retrieval-Augmented Generation (RAG) framework. The key idea is to treat SDTM knowledge—Implementation Guide rules, variable roles, controlled terminology, domain structures—not as “helpful background,” but as enforceable generation constraints and validation rules. We then push beyond one-shot code generation. Instead, we demonstrate a closed-loop workflow—generate → self-check → repair—where the model executes the code in a sandbox environment, detects problems via static and dynamic checks, and iteratively improves the output until it meets predefined quality gates. Finally, we introduce a structured review mechanism that supports human validation, traceability, and regulatory expectations—positioning AI not as an unchecked autopilot, but as an auditable teammate. The real question is no longer whether LLMs can write code. It is whether we can design systems where speed, correctness, and compliance reinforce each other—and how this shift changes what it means to be a statistical programmer in pharmaceutical R&D.
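To make the closed-loop idea concrete, here is a hypothetical skeleton of the generate, self-check, repair cycle. The function names, the quality-gate structure, and the placeholder bodies are illustrative assumptions only; a real system would call an LLM with the retrieved SDTM constraints and execute the generated R code in a sandbox at the marked points.

```python
# Hypothetical skeleton of the generate -> self-check -> repair loop; the LLM
# call and sandbox execution are stubbed out with placeholders.
from dataclasses import dataclass, field

@dataclass
class CheckReport:
    passed: bool
    findings: list = field(default_factory=list)   # static + dynamic check results

def generate_code(spec: str) -> str:
    # Placeholder for an LLM call constrained by retrieved SDTM IG rules,
    # controlled terminology, and domain structures (the RAG step).
    return f"# R code draft for: {spec}"

def run_checks(code: str) -> CheckReport:
    # Placeholder for sandbox execution plus static/dynamic validation
    # against predefined quality gates (conformance, terminology, structure).
    return CheckReport(passed="draft" not in code, findings=["draft output flagged"])

def repair_code(code: str, report: CheckReport) -> str:
    # Placeholder for a second LLM call that receives the findings and
    # returns a corrected version of the code.
    return code.replace("draft", "revised")

def closed_loop(spec: str, max_rounds: int = 3) -> str:
    code = generate_code(spec)
    for _ in range(max_rounds):
        report = run_checks(code)
        if report.passed:                  # all quality gates met
            return code
        code = repair_code(code, report)   # iterate on the flagged problems
    raise RuntimeError("quality gates not met; route to human review")

print(closed_loop("Map raw demographics to the DM domain"))
```

In practice the loop terminates either when the quality gates pass or when the draft is escalated to the structured human review step described above.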
Abstract: Clinical study protocol development is often hindered by time-consuming authoring, inconsistent formatting, and quality issues that can lead to costly delays. eProtocol Suite addresses these challenges through an AI-enabled Microsoft Word add-in integrated with a unified protocol framework. The solution supports study teams in managing a standardized content library, searching historical protocols, extracting study-specific information, verifying regulatory template compliance in real time, and generating contextually appropriate draft content on demand. By automating repetitive authoring tasks and proactively identifying compliance gaps before formal quality control review, eProtocol Suite shortens drafting timelines while improving consistency, accuracy, and overall protocol quality. Serving as an intelligent backbone for the clinical document lifecycle, it helps organizations close the gap between manual processes and the rigorous standards required for regulatory approval.
Abstract: Establishing a Biometrics group within small- to mid-sized biopharma companies offers a unique opportunity to leverage technological and AI advances with flexibility, though it also presents distinct challenges. In this presentation, we share our experience in building an AI-native team, along with the processes and tools required for core biometrics functionalities in clinical research. We advocate for a structural model designed to adapt to the evolving landscape of the AI-native clinical development lifecycle.
Abstract: TBA
TBA
Abstract: External control arms can inform early clinical development of experimental drugs and provide efficacy evidence for regulatory approval. However, accessing sufficient real-world or historical clinical trial data is challenging: regulations that protect patients’ rights by strictly controlling data processing often make it difficult to pool data from multiple sources on a central server. To address these limitations, we develop a method that leverages federated learning to enable inverse probability of treatment weighting for time-to-event outcomes on separate cohorts without needing to pool data. To showcase its potential, we apply it in settings of increasing complexity, culminating in a real-world use case in which our method is used to compare the treatment effect of two approved chemotherapy regimens using data from three separate cohorts of patients with metastatic pancreatic cancer.
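The toy sketch below illustrates the core idea under stated assumptions: a single propensity-score model is fit across separate cohorts by exchanging only aggregated gradients, never patient-level records, after which each site forms its inverse-probability-of-treatment weights locally. The simulated sites, the plain logistic model, and the gradient-descent details are simplifications for illustration, not the authors' algorithm.

```python
# Toy federated IPTW sketch: sites share only gradients of a logistic
# propensity model; weights are computed locally. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)

def make_site(n):
    X = rng.normal(size=(n, 2))                                  # baseline covariates
    p = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1])))
    A = rng.binomial(1, p)                                       # treatment indicator
    return X, A

sites = [make_site(n) for n in (120, 200, 80)]                   # three separate cohorts

def local_gradient(beta, X, A):
    Xd = np.column_stack([np.ones(len(X)), X])                   # add intercept
    p = 1 / (1 + np.exp(-Xd @ beta))
    return Xd.T @ (p - A)                                        # logistic-regression gradient

beta = np.zeros(3)
n_total = sum(len(A) for _, A in sites)
for _ in range(500):                                             # federated gradient descent
    grad = sum(local_gradient(beta, X, A) for X, A in sites)
    beta -= 0.5 * grad / n_total

# Each site now computes its own IPT weights without sharing records
for k, (X, A) in enumerate(sites):
    Xd = np.column_stack([np.ones(len(X)), X])
    ps = 1 / (1 + np.exp(-Xd @ beta))
    w = np.where(A == 1, 1 / ps, 1 / (1 - ps))
    print(f"site {k}: mean IPT weight = {w.mean():.2f}")
```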
Abstract: TorchSurv is a Python package that serves as a companion tool to perform deep survival modeling within the PyTorch environment (Paszke et al., 2019). With its lightweight design, minimal input requirements, full PyTorch backend, and freedom from restrictive parameterizations, TorchSurv facilitates efficient deep survival model implementation and is particularly beneficial for high-dimensional and complex data analyses. At its core, TorchSurv features calculations of log-likelihoods for prominent survival models (the Cox proportional hazards model (Cox, 1972) and the Weibull Accelerated Failure Time (AFT) model (Carroll, 2003)) and offers evaluation metrics, including the time-dependent Area Under the Receiver Operating Characteristic (ROC) curve (AUC), the concordance index (C-index), and the Brier score. TorchSurv has been rigorously tested against established R and Python packages using both open-source and synthetically generated survival data. The package is thoroughly documented and includes illustrative examples. The latest documentation for TorchSurv can be found on our website (https://opensource.nibr.com/torchsurv/).
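For readers new to the package, the minimal sketch below follows the usage pattern shown in the TorchSurv documentation: any PyTorch model producing log-hazards is trained with the Cox negative partial log-likelihood and evaluated with the concordance index. The random tensors stand in for real covariates and outcomes, and the call signatures should be verified against the current TorchSurv API.

```python
# Minimal TorchSurv usage sketch (random data; verify against the current API).
import torch
from torchsurv.loss import cox
from torchsurv.metrics.cindex import ConcordanceIndex

torch.manual_seed(1)
n, p = 64, 10
x = torch.randn(n, p)                         # baseline covariates
event = torch.randint(0, 2, (n,)).bool()      # event indicator (True = event observed)
time = torch.rand(n) * 365                    # follow-up time

model = torch.nn.Linear(p, 1)                 # any PyTorch model that outputs log-hazards
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for _ in range(50):                           # standard PyTorch training loop
    optimizer.zero_grad()
    log_hz = model(x)
    loss = cox.neg_partial_log_likelihood(log_hz, event, time)
    loss.backward()
    optimizer.step()

with torch.no_grad():                         # discrimination on the training data
    cindex = ConcordanceIndex()
    print(float(cindex(model(x), event, time)))
```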