Amritpal Singh

Dr. Amritpal Singh is a physician-scientist with formal training across medicine, computer science, and artificial intelligence. He holds an M.B.B.S. from Maulana Azad Medical College and an MS in Computer Science from Georgia Tech, where he specialized in machine learning, deep learning, reinforcement learning, graph methods, and medical robotics. His work sits at the intersection of clinical medicine and data-driven modeling, with experience spanning imaging, genomics, and decision-making systems for healthcare.

He is currently pursuing a PhD in Computer Science and Informatics at Emory University under the mentorship of Dr. Anant Madabhushi. His research focuses on multimodal biomarker discovery using deep learning, integrating radiology, digital pathology, genomics, and ocular imaging to improve risk prediction and outcome modeling in cancer, cardiovascular disease, and other complex conditions. His broader interests include explainable and continual learning for medical AI, early disease detection, and building clinically deployable systems that generalize across populations and institutions.

Email  /  GitHub  /  Google Scholar  /  LinkedIn  /  ORCID  /  Twitter  /  Kaggle

profile photo

News

  • [Dec'25] [Podium presentation] Presented AI-based MRI work on early non-responders and surgical need in spinal TB at NASS Annual Meeting, U.S.
  • [Dec'25] [Talk] Three co-authored abstracts selected for the RSNA Annual Meeting, Chicago, 2025.
  • [Nov'25] [Feature] Featured on Emory University’s Wonderwall for research contributions, Atlanta.
  • [Nov'25] [Paper] Co-authored journal paper on the exposome, covering ocular imaging, multi-omics, and AI for environmental health.
  • [Nov'25] [Paper] AI-based MRI study on spinal TB published in The Spine Journal and presented at NASS 2025, Denver.
  • [Sep'25] [Paper] First first-author paper in the European Journal of Cancer: RetHemo, an AI model predicting 10-year risk of hematological malignancies.
  • [Aug'25] [Talk] Guest lecture at “Introduction to Medical AI” session for AI in Medicine Cohort Series, Maulana Azad Medical College, India.
  • [Apr'25] [Podium presentation] Presented at USCAP 2025 on graph-based AI predicting molecular subtypes in DLBCL from H&E slides.
  • [Dec'24] [Podium presentation] Presented at AHA 2024 on AI-derived retinal vessel features for 3-year MACE risk, Chicago.
  • [May'24] [Milestone] Graduated with MS in Computer Science from Georgia Tech, specializing in AI, computer vision, and medical device innovation.
  • [Dec'23] [Talk] Presented “GraphPrint” at NeurIPS 2023 AI4D3 Workshop, using AlphaFold 3D protein structures for drug target prediction.
  • [Dec'23] [Paper] Presented continual learning in healthcare imaging AI at NeurIPS 2023, adapting across tasks and hospitals without forgetting.
  • [Dec'23] [Paper] Presented multi-modal AI research for Alzheimer’s staging using MRI, PET, EHR, and genomics at IEEE BIBM 2023.

Research

I'm interested in artificial intelligence, machine learning for healthcare, computational pathology, and radiology. Below are some of my recent research publications.

project image

Spatial Arrangement of Neoplastic Lymphocytes Predicts Molecular Subtypes in Diffuse Large B-cell Lymphoma


Amritpal Singh, Tilak Pathak, Germán Corredor, Anant Madabhushi
USCAP (Boston, USA), 2025
website /

Diffuse large B-cell lymphoma (DLBCL) is the most common non-Hodgkin lymphoma and can be fatal if untreated. Accurate cell-of-origin (COO) classification is critical for prognosis and treatment, but standard immunohistochemistry (IHC) for BCL6, MUM1, and CD10 is costly, time-consuming, and not widely accessible. Alternative approaches using routine histopathology could improve scalability and access. We analyzed digitized H&E tissue microarray images from DLBCL patients. Images were tiled, neoplastic lymphocytes segmented with Hover-Net, and nuclear morphology and intensity features extracted. A graph neural network, MNeo, was trained to predict BCL6, MUM1, and CD10 expression, compared to a baseline foundation model. MNeo reliably predicted molecular subtypes from H&E images, outperforming the baseline. Feature maps highlighted regions most informative for each marker, showing that nuclear morphology patterns carry meaningful molecular information. This study demonstrates that graph-based computational pathology can serve as a cost-effective, accessible alternative to IHC, enhancing diagnostic efficiency and enabling broader application of precision medicine in DLBCL.

project image

Artificial intelligence-based virtual staining platform for identifying tumor-associated macrophages from hematoxylin and eosin-stained images


Arpit Aggarwal, Mayukhmala Jana, Amritpal Singh, Tanmoy Dam, Himanshu Maurya, Tilak Pathak, Sandra Orsulic, Kailin Yang, Deborah Chute, Justin A Bishop, Farhoud Faraji, Wade M Thorstad, Shlomo Koyfman, Scott Steward, Qiuying Shi, Vlad Sandulache, Nabil F Saba, James S Lewis Jr, Germán Corredor, Anant Madabhushi
European Journal of Cancer, 2025
website /

Background: Virtual staining is an artificial intelligence-based approach that transforms pathology images between stain types, such as hematoxylin and eosin (H&E) to immunohistochemistry (IHC), providing a tissue-preserving and efficient alternative to traditional IHC staining. However, existing methods for translating H&E to virtual IHC often fail to generate images of sufficient quality for accurately delineating cell nuclei and IHC+ regions. To address these limitations, we introduce VISTA, an artificial intelligence-based virtual staining platform designed to translate H&E into virtual IHC. Methods and Results: We applied VISTA to identify M2-subtype tumor-associated macrophages (M2-TAMs) in H&E images from 968 patients with HPV+ oropharyngeal squamous cell carcinoma across six institutional cohorts. Co-registered H&E and CD163+ IHC tissue microarrays were used to train (D1, N = 102) and test (D2, N = 50) the VISTA platform. M2-TAM density was defined as the ratio of M2-TAMs to total nuclei. High M2-TAM density was associated with worse overall survival in D4 (p = 0.0152, Hazard Ratio = 1.63 [1.1–2.42]). VISTA outperformed existing methods, generating higher-quality virtual CD163+ IHC images in D2, with a Structural Similarity Index of 0.72, a Peak Signal-to-Noise Ratio of 21.5, and a Fréchet Inception Distance of 41.4. Additionally, VISTA demonstrated superior performance in segmenting M2-TAMs in D2 (Dice = 0.74).

project image

Explainable AI Better Predicts 3-Year MACE Risk Compared to Clinical and ASCVD Models in the UK Biobank Cohort


Amritpal Singh, Rohan Dhamdhere, Gourav Modanwal, Sudeshna Sil Kar, Sadeer Al-Kindi, Anant Madabhushi
American Heart Association (AHA), 2024
website /

Cardiovascular risk prediction could be improved by analyzing retinal microvascular architecture, as standard risk calculators may miss early vascular changes. This study aimed to assess whether AI-derived retinal vessel features could predict 3-year MACE and improve upon traditional risk models. We analyzed baseline fundus images from 2,120 UK Biobank participants without prior CVD, extracting vessel features such as angle, tortuosity, curvature, and caliber. Cox models incorporating demographics, clinical factors, and AI-derived retinal features were trained and validated to predict MACE risk. AI-derived retinal features significantly improved prediction of 3-year MACE compared to clinical factors alone and the ASCVD risk calculator. Integrated models highlighted novel vascular patterns captured from fundus images that correlated with event risk. This approach demonstrated the greatest predictive power in the short term, suggesting retinal imaging can provide complementary information to existing risk assessments. The findings support AI-based retinal biomarkers as a non-invasive, accessible tool for enhanced cardiovascular risk stratification, with potential for prospective multi-center validation.

project image

GraphPrint: Combining Traditional Fingerprint with Graph Neural Networks For Drug Target Prediction



NeurIPS, 2023
website /

Accurate drug target affinity prediction can improve drug candidate selection, accelerate the drug discovery process, and reduce drug production costs. Previous work focused on traditional fingerprints or used features extracted from the amino acid sequence of the protein, ignoring its 3D structure, which affects its binding affinity. In this work, we propose GraphPrint: a framework for incorporating 3D protein structure features for drug target affinity prediction. We generate graph representations for protein 3D structures using amino acid residue location coordinates and combine them with drug graph representations and traditional features to jointly learn drug target affinity. Our model achieves a mean square error of 0.1378 and a concordance index of 0.8929 on the KIBA dataset, improving over the use of traditional protein features alone. Our ablation study shows that the 3D protein structure-based features provide information complementary to traditional features.
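The protein graph construction described above (nodes as amino acid residues, edges based on spatial proximity) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the 8-angstrom distance cutoff and the plain edge-list output are assumptions:

```python
import math

def build_residue_graph(coords, cutoff=8.0):
    """Build a protein graph from residue coordinates.

    coords: list of (x, y, z) tuples, one per amino acid residue.
    cutoff: distance threshold (angstroms) below which two residues
            are connected by an edge -- an assumed value, not taken
            from the paper.
    Returns (nodes, edges): node indices and an undirected edge list.
    """
    n = len(coords)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(coords[i], coords[j]) <= cutoff:
                edges.append((i, j))
    return list(range(n)), edges

# Toy example: four residues along a line, 5 angstroms apart.
coords = [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0), (10.0, 0.0, 0.0), (15.0, 0.0, 0.0)]
nodes, edges = build_residue_graph(coords)
# Only residues within the cutoff (adjacent ones here) are connected.
```

In practice each node would also carry residue-level features, and the resulting graph would be consumed by a graph neural network alongside the drug graph and traditional fingerprint features.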

project image

Class-Incremental Continual Learning for General Purpose Healthcare Models



NeurIPS, 2023
website /

Healthcare clinics regularly encounter dynamic data that changes due to variations in patient populations, treatment policies, medical devices, and emerging disease patterns. Deep learning models can suffer from catastrophic forgetting when fine-tuned in such scenarios, causing poor performance on previously learned tasks. Continual learning allows learning on new tasks without performance drop on previous tasks. In this work, we investigate the performance of continual learning models on four different medical imaging scenarios involving ten classification datasets from diverse modalities, clinical specialties, and hospitals. We implement various continual learning approaches and evaluate their performance in these scenarios. Our results demonstrate that a single model can sequentially learn new tasks from different specialties and achieve comparable performance to naive methods. These findings indicate the feasibility of recycling or sharing models across the same or different medical specialties, offering another step towards the development of general-purpose medical imaging AI that can be shared across institutions.
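Rehearsal-based methods, one common family of continual learning approaches of the kind evaluated above, keep a small memory of past-task examples and mix them into training on each new task. Below is a minimal reservoir-sampling replay buffer; this is a generic sketch, not the paper's implementation:

```python
import random

class ReplayBuffer:
    """Fixed-size memory of past examples for rehearsal in continual learning."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        # Reservoir sampling: the buffer stays an approximately uniform
        # sample over every example seen so far, regardless of task order.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        # Draw a rehearsal mini-batch of past examples.
        return self.rng.sample(self.data, min(k, len(self.data)))
```

Because the buffer remains an approximately uniform sample of everything seen, earlier tasks stay represented even after many new tasks stream in, which is what mitigates catastrophic forgetting in rehearsal-based training.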

project image

Multi-Modal Deep Feature Integration for Alzheimer’s Disease Staging



IEEE BIBM conference, 2023
website /

Alzheimer’s disease (AD) is one of the leading causes of dementia and the 7th leading cause of death in the United States. The provisional diagnosis of AD relies on comprehensive examinations, including medical history, neurological and psychiatric examinations, cognitive assessments, and neuroimaging studies. Integrating diverse sets of clinical data, including electronic health records (EHRs), medical imaging, and genomic data, enables a holistic view of AD staging analysis. In this study, we propose an end-to-end deep learning architecture to jointly learn from magnetic resonance imaging (MRI), positron emission tomography (PET), EHRs, and genomics data to classify patients into AD, mild cognitive impairment, and controls. We conduct extensive experiments to explore different feature-level and intermediate-level fusion methods. Our findings suggest intermediate multiplicative fusion achieves the best stage prediction performance on the external validation dataset. Integrative approaches that leverage all four modalities demonstrate superior performance to baselines reliant on only one or two modalities. In an age-wise comparison, all fusion methods exhibited superior performance in the earlier age brackets (50-70 years), with performance diminishing as the age group advanced (70-90 years). The proposed integration framework has the potential to augment our understanding of disease diagnosis and progression by leveraging complementary information from multimodal patient data.

project image

Autonomous Soft Tissue Retraction Using Demonstration-Guided Reinforcement Learning


Amritpal Singh, Wenqi Shi, May D. Wang
MICCAI conference, 2023
arxiv / code / website /

In the context of surgery, robots can provide substantial assistance by performing small, repetitive tasks such as suturing, needle exchange, and tissue retraction, thereby enabling surgeons to concentrate on more complex aspects of the procedure. However, existing surgical task learning mainly pertains to rigid body interactions, whereas the advancement towards more sophisticated surgical robots necessitates the manipulation of soft bodies. Previous work focused on tissue phantoms for soft tissue task learning, which can be expensive and can be an entry barrier to research. Simulation environments present a safe and efficient way to learn surgical tasks before their application to actual tissue. In this study, we create a Robot Operating System (ROS)-compatible physics simulation environment with support for both rigid and soft body interactions within surgical tasks. Furthermore, we investigate the soft tissue interactions facilitated by the patient-side manipulator of the da Vinci surgical robot. Leveraging the PyBullet physics engine, we simulate kinematics and establish anchor points to guide the robotic arm when manipulating soft tissue. Using demonstration-guided reinforcement learning (RL) algorithms, we compare their performance against traditional reinforcement learning algorithms. Our in silico trials demonstrate a proof of concept for autonomous surgical soft tissue retraction. The results corroborate the feasibility of learning soft body manipulation through the application of reinforcement learning agents. This work lays the foundation for future research into the development and refinement of surgical robots capable of managing both rigid and soft tissue interactions.

project image

Multi-Modality Deep Learning Methods to Learn Alzheimer’s Disease Classification



Georgia Institute of Technology, 2023

pdf paper

project image

Roadmap to Autonomous Surgery - A Framework to Surgical Autonomy



arxiv, 2022
arxiv /

Robotic surgery has expanded the range of surgeries that are possible, and several examples of partial surgical automation have appeared in the past decade. We break down the path to automation into the features required and provide a checklist that can help reach higher levels of surgical automation. Finally, we discuss the current challenges and the advances required to make this happen.

project image

Validation of expert system enhanced deep learning algorithm for automated screening for COVID-Pneumonia on chest X-rays



Nature - Scientific Reports, 2021
website /

The SARS-CoV-2 pandemic exposed the limitations of artificial intelligence-based medical imaging systems. Earlier in the pandemic, the absence of sufficient training data prevented effective deep learning (DL) solutions for the diagnosis of COVID-19 from X-ray data. Here, addressing the lacunae in the existing literature and the paucity of initial training data, we describe CovBaseAI, an explainable tool using an ensemble of three DL models and an expert decision system (EDS) for COVID-pneumonia diagnosis, trained entirely on pre-COVID-19 datasets. The performance and explainability of CovBaseAI were validated on two independent datasets. First, 1401 chest X-rays (CxR) randomly selected from an Indian quarantine center were used to assess effectiveness in excluding radiological COVID-pneumonia requiring higher care. Second, a curated dataset of 434 RT-PCR-positive cases and 471 non-COVID/normal historical scans was used to assess performance in advanced medical settings. CovBaseAI had an accuracy of 87% with a negative predictive value of 98% on the quarantine-center data. However, sensitivity ranged from 0.66 to 0.90 depending on whether RT-PCR or radiologist opinion was taken as ground truth. This work provides new insights into the usage of EDS with DL methods and the ability of algorithms to confidently predict COVID-pneumonia, while reinforcing the established finding that benchmarking against RT-PCR may not serve as reliable ground truth in radiological diagnosis. Such tools can pave the path for multi-modal, high-throughput detection of COVID-pneumonia in screening and referral.

project image

Personalized Brain State Targeting via Reinforcement Learning


Abhishek Naik, Amritpal Singh, Koushani Biswas, Harini Sudha, Matthew Schlegel, Kyle E. Mathewson
Neuromatch Academy, 2020
website /

We propose a novel use of reinforcement learning as an active closed-loop assistive system that learns in real time to lead any brain state to a given goal state. Previous open- and closed-loop systems for manipulating brain states are generally passive in the sense that they are trained offline on data collected from a population and are not tailored or adapted to individuals. Offline adaptation per individual, when performed at all, is very slow. Adaptation, and the speed of adaptation, is critical in most applications where manipulation of brain states is performed, because poor initial performance and long training periods are barriers to BCI adoption and success. Reinforcement learning is a sequential decision-making paradigm in which the system learns to map situations to optimal actions via trial-and-error interactions with the world to maximize a reward signal. Crucially, this reward signal is a form of evaluative feedback, for instance proportional to how far the current state is from the goal state. This is in contrast to instructive feedback in the supervised learning paradigm, where the correct action is assumed known. We propose modeling brain state manipulation as a sequential decision-making problem, wherein a system takes real-time EEG data as input and uses audio-visual cues to move from any brain state to a physiologically objective goal state, such as a particular oscillation frequency or a deep-sleep state. We show a proof-of-concept example using a consumer-grade EEG device. We believe such an active closed-loop system would have a large impact in assistive applications, ranging from helping critically ill patients fall asleep to helping everyday stressed-out individuals relax.

project image

In-Silico Repositioning of Drugs for Neurofibromatosis 2 Vestibular Schwannoma using Machine Learning



MIT Hack 4 Rare Disease Hackathon, 2020
website /

Neurofibromatosis type 2 (NF2) is an autosomal-dominant multiple neoplasia syndrome. It is highly debilitating, with a frequency of one in 25,000 live births and nearly 100% penetrance by 60 years of age (1). NF2 represents a difficult management problem, with most patients facing substantial morbidity and reduced life expectancy. The hallmark of NF2 is the appearance of bilateral vestibular schwannomas, benign tumors on both sides of the vestibular nerve. People with NF2 may also develop schwannomas in other parts of the body, or may develop other types of benign brain or spinal tumors (2). At this time, available treatments for NF2-associated tumors include surgery, chemotherapy, and radiation therapy (3). Presently available off-label drugs are not fully effective, and monotherapy has shown lower efficacy, higher resistance, and greater toxicity. The complex, interlinked pathways in the pathogenesis of NF2 suggest that multi-drug therapy may provide an ideal therapeutic effect (4). We therefore aim to develop a machine learning model to identify suitable drug combinations that could lead to better efficacy and a lower chance of resistance.




Other Projects

These include coursework, side projects and unpublished research work.

project image

Optimize Task Allocation via Redundancy in Multi-Agent Systems


projects
2022-12-10

Task allocation is an important problem in multi-agent systems. Several real-life scenarios require efficient task allocation, such as:

  1. On-duty nurses/physicians in patient wards
project image

Path planning to control robotic arm for suturing


projects
2022-12-06

Project for the Medical Robotics course (BMED 6739), MS CS, Georgia Tech, USA. Team members: Amritpal Singh, Oluwatofunmi M. Sodimu (Group 5). Advisor: Prof. Yue Chen.

project image

Efficient blood pumping in Bionic heart via distributed control as Multi-agent system


projects
2022-10-25

Decentralized control of heart muscle dynamics to enable efficient coordination for pumping blood. Programmed a directed-acyclic-graph-based reinforcement learning environment, with vertices representing agents and edges representing blood flow. Two sub-problems to solve:

  1. Calculate blood flow using the mathematics of fluid dynamics, then use the Ford-Fulkerson max-flow algorithm to find the maximum blood flow in the graph at any time.
  2. Use a reinforcement learning algorithm to solve for the optimal coordination policy.
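The max-flow step in sub-problem 1 can be illustrated with a compact Ford-Fulkerson implementation (the Edmonds-Karp variant, which uses BFS to find augmenting paths). This is a generic sketch in Python; the toy capacities below stand in for vessel flow limits and are not the project's actual model:

```python
from collections import defaultdict, deque

def max_flow(capacity, source, sink):
    """Ford-Fulkerson (Edmonds-Karp) maximum flow.

    capacity: dict mapping (u, v) -> edge capacity.
    Returns the maximum flow value from source to sink.
    """
    # Residual capacities (reverse edges implicitly start at 0).
    residual = defaultdict(int)
    graph = defaultdict(list)
    for (u, v), c in capacity.items():
        residual[(u, v)] += c
        graph[u].append(v)
        graph[v].append(u)

    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in graph[u]:
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # Walk back from sink to source, find the bottleneck, and augment.
        path = []
        v = sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[e] for e in path)
        for u, v in path:
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
        flow += bottleneck

# Toy "vessel" network: s feeds a and b, which feed t, with a cross edge a->b.
caps = {("s", "a"): 3, ("s", "b"): 2, ("a", "t"): 2, ("b", "t"): 3, ("a", "b"): 1}
# max_flow(caps, "s", "t") returns 5: 2 via s-a-t, 2 via s-b-t, 1 via s-a-b-t.
```

In the project's framing, the edge capacities would come from the fluid-dynamics calculation, and the resulting flow values would feed into the reinforcement learning reward.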

project image

Surgical Tracking in endoscopic videos


projects
2022-09-15

Visual tracking involves following a bounding box throughout a video sequence. This is a crucial task in Computer-Assisted Interventions (CAI), with a range of applications including soft tissue deformation estimation, lesion tracking, augmented reality, and robotic visual servoing. Medical applications require accurate trackers that are robust to the challenging conditions prevalent in surgery. Hence, prior to being used in real-world practice, tissue trackers need to be evaluated on large and diverse datasets that capture multiple challenging conditions. To address this, we propose the SurgT challenge, a first-of-its-kind collection of tools and datasets for training and benchmarking tissue trackers in surgery.

project image

Multi-agent team coordination in Football using the QMIX reinforcement learning algorithm


projects
2022-04-30
code /

Project for CS-7641, MS CS, Georgia Tech, USA. Challenges of multi-agent reinforcement learning: on top of the exploration-exploitation dilemma, multi-agent RL faces another dilemma called the predictability-exploitation dilemma. Maximizing performance requires collecting rewards, and since agents in a Dec-POMDP cannot explicitly communicate, coordination requires predictability. At times, this predictability can also require ignoring private information. The dilemma is choosing between the benefit of exploiting private observations and the cost of predictability.

project image

RANZCR CLiP - Catheter and Line Position Challenge


projects
2021-03-16
website /

Classify the presence and correct placement of tubes on chest X-rays to save lives. Evaluation metric: a modified version of the Laplace Log Likelihood.

  1. Classify catheters on chest X-rays as endotracheal tube, nasogastric catheter, or CVC.
  2. Determine whether each catheter is normal (functional), abnormal (needs to be replaced), or borderline. A classification problem with 14 classes.
project image

Vitals: Android app to track patient vitals


projects
2021-01-01
website / code /

Demo version of an app to track your vitals: temperature, BP, and SpO2. Track vitals over time and see graph representations. Get it on Google Play. Kindly don’t use for medical purposes or patient care.

project image

EyeAI - Android app


projects
2021-01-01
website / code /

Prototype deep learning-based Android app for eye disease diagnosis on edge devices. Training and deploying deep learning models for ophthalmology diagnosis in an Android app.

project image

DermaAI - Deep learning based Skin lesions Classification


projects
2021-01-01
code /

CNN-based classification of skin lesions. Aim: deploying deep learning models for dermatology diagnosis.