Shubing earned a Bachelor of Science
in Mathematics from the University of Science and Technology
of China and a Master of Arts in Computational Mathematics
from the University of Texas at Austin. He completed his
Ph.D. in Statistics at the University of Wisconsin–Madison,
focusing on Weighted Fourier Representation and Multiple
Testing in MR Brain Image Analysis. Shubing has worked at
Merck since 2006, starting as a summer intern and later
joining the Biometric Research department at BARDS. He
supports preclinical studies and research, applying machine
learning as well as high-dimensional and longitudinal data
analysis, including work on the early-stage development of
Keytruda. His recent research spans deep computer vision,
self-supervised learning, foundation model fine-tuning, and
generative AI, including the development of Retrieval-Augmented
Generation (RAG) applications for AI-driven projects at MRL.
Abstract: Hailed as the GPT-4 of computer vision, the Segment Anything Model (SAM) is a foundation model that exhibits robust zero-shot performance across a variety of computer vision tasks, including segmentation and object detection, and is applicable to diverse imaging platforms. Despite SAM's superior out-of-the-box adaptation compared to previous methods, there remains significant room for improvement, particularly in challenging settings such as cryo-electron microscopy (cryoEM) image analysis. By leveraging the smaller, noisier annotated datasets produced by earlier approaches, fine-tuning SAM for different contexts, such as region of interest (ROI) detection, particle segmentation, and classification in cryoEM image analysis, has significantly enhanced performance on these tasks. However, the large size of fine-tuned SAM models often results in slow training and inference and requires substantial computational resources. To address these challenges, we demonstrate that a fine-tuned SAM can, through knowledge distillation, substantially assist in fine-tuning smaller models, such as YOLO, to improve efficiency. This presentation will show how fine-tuning foundation models can transform cryoEM from a specialized technique into a fundamental platform for the development of vaccines and adjuvants.
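The distillation step mentioned in the abstract can be pictured as a short teacher-student training loop. The sketch below is a minimal, hypothetical illustration rather than the presenter's actual pipeline: the teacher stands in for a frozen fine-tuned SAM, SmallStudentSegmenter is a placeholder for a compact YOLO-style network, and the loss simply blends the teacher's soft mask predictions with the available ground-truth annotations.

```python
# Minimal teacher-student distillation sketch (hypothetical; PyTorch).
# "teacher" stands in for a frozen fine-tuned SAM; "student" for a
# compact segmentation network such as a YOLO-style model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallStudentSegmenter(nn.Module):
    """Placeholder compact model producing per-pixel mask logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def distillation_step(student, teacher, images, gt_masks, optimizer,
                      temperature=2.0, alpha=0.5):
    """One update: match the teacher's soft masks plus ground-truth labels."""
    with torch.no_grad():
        teacher_logits = teacher(images)   # soft targets from the fine-tuned teacher
    student_logits = student(images)

    # Soft-target loss: student mimics the teacher's temperature-scaled masks.
    soft_loss = F.binary_cross_entropy_with_logits(
        student_logits / temperature,
        torch.sigmoid(teacher_logits / temperature),
    )
    # Hard-target loss: standard supervised loss on annotated masks.
    hard_loss = F.binary_cross_entropy_with_logits(student_logits, gt_masks)

    loss = alpha * soft_loss + (1.0 - alpha) * hard_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random tensors standing in for cryoEM micrographs.
if __name__ == "__main__":
    student = SmallStudentSegmenter()
    teacher = SmallStudentSegmenter()   # placeholder for a frozen fine-tuned SAM
    teacher.eval()
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    images = torch.randn(4, 1, 64, 64)
    gt_masks = (torch.rand(4, 1, 64, 64) > 0.5).float()
    print(distillation_step(student, teacher, images, gt_masks, opt))
```

The weighting between the soft and hard losses (alpha) and the temperature are the usual knobs in this kind of distillation; the talk's actual models, data, and loss formulation may differ.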
Dr. Xueyan Mei is an Instructor in the
BioMedical Engineering and Imaging Institute (BMEII), the
Department of Diagnostic, Molecular and Interventional
Radiology, the Windreich Department of Artificial
Intelligence and Human Health, and the Department of
Emergency Medicine at the Icahn School of Medicine at Mount
Sinai. Dr. Mei obtained her Ph.D. in Biological Science and
completed her postdoctoral training at BMEII under the
supervision of Dr. Zahi A. Fayad. Dr. Mei has numerous
publications in leading international journals and
conferences. In June 2024, Dr. Mei received the prestigious
Eric and Wendy Schmidt AI in Human Health Fellow award from
the Eric Schmidt Foundation. She is actively working on
designing innovative methods for medical image analysis,
developing multi-modal AI and ML models for diagnosis, and
creating vision-language models that integrate biomedical
images with medical notes. Her work
also extends to large language models adept at managing
electronic health records and AI bots designed to streamline
physician workflows.
Abstract: The integration of Artificial Intelligence (AI) in medical imaging and patient data analysis represents a transformative development in healthcare diagnostics and prognostics. This presentation explores the role of multi-modal AI in predicting clinical outcomes of interstitial lung disease and multiple myeloma. Additionally, I will discuss the creation of pseudo patients through AI, a novel approach that simulates patient responses to therapies for rare diseases. This session highlights AI's potential to further improve personalized medicine by integrating diverse data sources and sophisticated simulations.
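As a purely illustrative aside, multi-modal outcome prediction of the kind described above is commonly framed as fusing an image representation with tabular clinical features before a prediction head. The sketch below is a hypothetical minimal late-fusion model, not Dr. Mei's architecture; the module names, feature sizes, and number of outcome classes are assumptions.

```python
# Hypothetical late-fusion sketch for multi-modal outcome prediction (PyTorch).
# An image encoder and a clinical-feature encoder are combined before a
# prediction head; this shows the general pattern, not a specific published model.
import torch
import torch.nn as nn

class LateFusionOutcomeModel(nn.Module):
    def __init__(self, image_channels=1, clinical_dim=16, hidden=64, num_outcomes=2):
        super().__init__()
        # Tiny CNN standing in for a medical-image encoder.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(image_channels, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, hidden), nn.ReLU(),
        )
        # Small MLP standing in for an encoder of tabular clinical variables.
        self.clinical_encoder = nn.Sequential(
            nn.Linear(clinical_dim, hidden), nn.ReLU(),
        )
        # Fused representation feeds a clinical-outcome classifier.
        self.head = nn.Linear(2 * hidden, num_outcomes)

    def forward(self, image, clinical):
        fused = torch.cat([self.image_encoder(image),
                           self.clinical_encoder(clinical)], dim=1)
        return self.head(fused)

# Toy usage with random tensors standing in for a scan and a clinical record.
model = LateFusionOutcomeModel()
logits = model(torch.randn(4, 1, 128, 128), torch.randn(4, 16))
print(logits.shape)  # torch.Size([4, 2])
```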
Jameson Ma is a seasoned patent attorney
with over 15 years of experience specializing in
intellectual property strategy, patent prosecution, and
patent counseling. He has worked extensively with
pharmaceutical, medical device, and AI-driven technology
companies, advising on complex patentability issues,
particularly in AI-assisted drug discovery. His expertise
spans industries such as consumer electronics, athletic
apparel, automotive, and digital health, allowing him to
provide strategic IP solutions tailored to innovation-driven
companies.
Abstract: The integration of artificial intelligence (AI) into drug discovery is accelerating the identification of novel therapeutics, optimizing molecular design, and streamlining early-stage research. However, the increasing role of AI in these processes raises significant legal considerations regarding patent eligibility, particularly concerning the requirement that only natural persons can be recognized as inventors. Under current U.S. patent law, AI systems cannot be listed as inventors, and the extent of human contribution is a key factor in determining patentability. Failure to demonstrate sufficient human involvement in AI-assisted discoveries may compromise a company’s ability to secure and enforce valuable pharmaceutical patents. This session will provide an overview of the evolving intellectual property landscape for AI-driven drug discovery, with a focus on legal precedents and best practices for ensuring compliance. Attendees will gain practical insights into (1) defining human inventorship in AI-assisted research by understanding the U.S. Patent and Trademark Office’s criteria for determining whether a human’s contribution is legally sufficient, (2) documenting human contributions to AI-generated discoveries to strengthen patent eligibility, and (3) enhancing collaboration between scientific and legal teams to effectively structure disclosures and support robust patent applications.