10-Week Program
Curriculum
Three tracks, one cohort. Every track shares the same clinical cases and core modules, but with different depth, tools, and deliverables tailored to your background.
Three Tracks, One Learning System
Not three separate courses — a unified medical AI learning system with three entry points. Track A feeds into B, B feeds into C, and C’s governance constraints become A/B’s design boundaries.
Track A — AI Principles
Code-First
- Who: Pre-med, CS, STEM undergrads, technical learners
- Focus: Hands-on ML/DL with medical datasets using Colab, PyTorch, and Claude Code
- Capstone: Build and evaluate a clinical AI model or multi-agent workflow
Track B — Clinical Applications
Evaluate & Apply
- Who: Medical students, residents, nurses, pharmacists, researchers
- Focus: AI evaluation, paper critique, deployment readiness assessment
- Capstone: Clinical utility memo, paper critique, or deployment recommendation
Track C — Executive & Implementation
Decide & Deploy
- Who: Department heads, innovation teams, CMOs/CIOs, clinical leaders
- Focus: AI governance, procurement, ROI modeling, organizational adoption
- Capstone: Board-ready AI strategy deck with vendor evaluation and governance plan
Weekly Rhythm
Every week follows a consistent structure: case-first, principle-driven, paper-backed, and closed with discussion.
1. Clinical case opening
2. AI principles deep-dive
3. Clinical application & limitations
4. Paper spotlight (latest research)
5. Dual-track breakout / discussion
10-Week Syllabus
Week 1. AI in Medicine: History, Hype & Problem Framing
- Track A (Build): AI pipeline: symbolic → ML → DL → LLM → agent
- Track B (Judge): Clinical use boundaries, hype vs. reality assessment
- Track C (Deploy): Executive AI landscape, success/failure case map
Week 2. Data, Labels & Evaluation Metrics
- Track A (Build): Train/val/test splits, overfitting, threshold analysis in Colab
- Track B (Judge): Paper critique: why accuracy misleads in clinical settings
- Track C (Deploy): SOTA vendor ecosystem, evaluation criteria for procurement
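To make the Week 2 theme concrete, here is a minimal sketch of threshold analysis in plain Python. The scores and labels are synthetic, invented for illustration; the in-class version works on real datasets in Colab.

```python
# Synthetic risk scores: 3 of 10 patients have the condition.
# A model that calls everyone negative scores 70% accuracy here
# while catching zero cases -- which is why accuracy alone misleads.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.2, 0.1, 0.1]
labels = [1,   1,   0,   1,   0,   0,   0,   0,   0,   0]  # 1 = disease

def metrics_at(threshold, scores, labels):
    """Accuracy, sensitivity, and specificity at a decision threshold."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p and y for p, y in zip(preds, labels))
    fn = sum((not p) and y for p, y in zip(preds, labels))
    fp = sum(p and (not y) for p, y in zip(preds, labels))
    tn = sum((not p) and (not y) for p, y in zip(preds, labels))
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    accuracy = (tp + tn) / len(labels)
    return accuracy, sensitivity, specificity

# Moving the threshold trades missed cases against false alarms.
for t in (0.25, 0.5, 0.75):
    acc, sens, spec = metrics_at(t, scores, labels)
    print(f"threshold={t:.2f}  accuracy={acc:.2f}  "
          f"sensitivity={sens:.2f}  specificity={spec:.2f}")
```

Note that accuracy barely moves between thresholds while sensitivity and specificity swing widely: the clinically meaningful trade-off is invisible to the headline number.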
Week 3. Classical ML with Clinical Data
- Track A (Build): Logistic regression, XGBoost, baseline model with Claude Code
- Track B (Judge): Sepsis/deterioration prediction, clinical actionability
- Track C (Deploy): Workflow integration, process redesign for AI insertion
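A sketch of what the Track A baseline does under the hood: logistic regression fit by stochastic gradient descent on invented vitals data. In class this would be a scikit-learn or XGBoost one-liner; the feature names and numbers below are illustrative assumptions, not clinical guidance.

```python
import math

# Toy deterioration data: (heart_rate, lactate) -> deteriorated? (invented)
X = [(72, 1.0), (80, 1.2), (88, 1.8), (110, 3.5), (120, 4.0), (95, 2.9)]
y = [0, 0, 0, 1, 1, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Standardize each feature so gradient descent behaves.
means = [sum(col) / len(col) for col in zip(*X)]
stds = [(sum((v - m) ** 2 for v in col) / len(col)) ** 0.5
        for col, m in zip(zip(*X), means)]
Xs = [[(v - m) / s for v, m, s in zip(row, means, stds)] for row in X]

# Stochastic gradient descent on the log-loss.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(2000):
    for xi, yi in zip(Xs, y):
        p = sigmoid(w[0] * xi[0] + w[1] * xi[1] + b)
        err = p - yi                      # gradient of log-loss wrt logit
        w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
        b -= lr * err

def predict_risk(hr, lactate):
    """Risk for a new patient, standardized with the training statistics."""
    x = [(hr - means[0]) / stds[0], (lactate - means[1]) / stds[1]]
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

print(f"risk(HR=115, lactate=3.8) = {predict_risk(115, 3.8):.2f}")
print(f"risk(HR=70,  lactate=1.1) = {predict_risk(70, 1.1):.2f}")
```

The learned weights are directly inspectable, which is exactly the interpretability argument for starting with classical models before reaching for deep learning.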
Week 4. Deep Learning for Medical Imaging
- Track A (Build): CNNs, transfer learning, confusion matrix analysis
- Track B (Judge): Reader studies, false-positive cost, imaging AI adoption
- Track C (Deploy): Pilot KPIs: accuracy vs. utility, safety and operational metrics
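The "accuracy vs. utility" discussion reduces to arithmetic. A toy pilot calculation (every number below is invented) shows why an imaging model with strong specificity can still flood readers with false alarms when the finding is rare:

```python
# Back-of-envelope for a hypothetical imaging-AI pilot (all numbers invented):
# screening 10,000 chest X-rays where 1% contain the target finding.
n_exams = 10_000
prevalence = 0.01
sensitivity = 0.90   # operating point chosen on the validation set
specificity = 0.95

n_pos = round(n_exams * prevalence)            # 100 true cases
n_neg = n_exams - n_pos                        # 9,900 normals

true_positives  = sensitivity * n_pos          # cases the model catches
false_negatives = n_pos - true_positives       # cases it misses
false_positives = (1 - specificity) * n_neg    # normals it flags anyway

# Low prevalence means most alerts are wrong despite 95% specificity:
ppv = true_positives / (true_positives + false_positives)
print(f"flagged studies: {true_positives + false_positives:.0f}")
print(f"positive predictive value: {ppv:.1%}")  # roughly 15% of alerts are real
```

This is the kind of calculation that turns a reader-study result into a pilot KPI: each false positive has a downstream workup cost, and prevalence, not model accuracy, dominates the alert burden.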
Week 5. NLP, Transformers & Clinical Text
- Track A (Build): Embeddings, attention, clinical note summarization
- Track B (Judge): Safe assistive vs. unsafe autonomous LLM uses
- Track C (Deploy): Regulation (FDA SaMD, HIPAA, EU AI Act), governance checklist
Week 6. LLMs, Prompting & RAG in Healthcare
- Track A (Build): Medical literature QA/RAG prototype with Claude Code
- Track B (Judge): Prototype spec via natural language, evaluation rubrics
- Track C (Deploy): Financial modeling, ROI calculator workshop, hidden costs
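The retrieval half of RAG can be sketched without any LLM at all. This toy version swaps learned embeddings for bag-of-words vectors and uses invented one-line "abstracts"; the course prototype works with Claude Code and real literature.

```python
import math
from collections import Counter

# Toy corpus standing in for retrieved medical abstracts (invented text).
docs = [
    "metformin first line therapy type 2 diabetes glycemic control",
    "chest x-ray deep learning pneumonia detection radiology",
    "sepsis early warning score deterioration prediction icu",
]

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str) -> str:
    """Return the document most similar to the query."""
    q = Counter(query.lower().split())
    return max(docs, key=lambda d: cosine(q, Counter(d.split())))

# The retrieved passage is then stuffed into the LLM prompt as grounding
# context -- that is the whole "R" in RAG.
question = "what is first line treatment for type 2 diabetes"
print(retrieve(question))
```

Production systems replace word counts with dense embeddings and a vector index, but the pipeline shape (embed the query, rank the corpus, prepend the winner to the prompt) is identical.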
Week 7. Agents & Multi-Agent Systems
- Track A (Build): OpenClaw multi-agent demo (data/training/report agents)
- Track B (Judge): AI-assisted peer review, research workflow automation
- Track C (Deploy): LLM vendor evaluation, benchmarks vs. enterprise readiness
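OpenClaw's internals aren't reproduced here, but the data → training → report hand-off it demos follows a pattern you can sketch with plain functions sharing state. All names and values below are invented stubs; real agent frameworks wrap LLM calls behind each step.

```python
# A toy version of the data -> training -> report agent hand-off.
# Each "agent" is a plain function that reads and extends a shared
# state dict, then passes it along the pipeline.

def data_agent(state):
    state["dataset"] = {"n_patients": 500, "features": ["hr", "lactate"]}
    state["log"].append("data_agent: prepared dataset")
    return state

def training_agent(state):
    n = state["dataset"]["n_patients"]
    state["model"] = {"type": "logistic_regression", "auc": 0.81}  # stub result
    state["log"].append(f"training_agent: fit model on {n} patients")
    return state

def report_agent(state):
    m = state["model"]
    state["report"] = f"{m['type']} reached AUC {m['auc']} (review before use)"
    state["log"].append("report_agent: drafted summary")
    return state

state = {"log": []}
for agent in (data_agent, training_agent, report_agent):
    state = agent(state)

print(state["report"])
```

The shared-state pattern also makes the governance hook obvious: the `log` list is exactly where an audit trail or human-approval checkpoint would attach.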
Week 8. Multimodal AI & Precision Medicine
- Track A (Build): Multimodal fusion, omics, drug-repurposing architectures
- Track B (Judge): Regulation/ethics checklist applied to prototype
- Track C (Deploy): Agentic workflow for hospital automation, approval memo
Week 9. Responsible AI & Clinical Translation
- Track A (Build): Model cards, error analysis, deployment risk assessment
- Track B (Judge): One-page implementation proposal, pilot plan
- Track C (Deploy): Change management, 90-day pilot roadmap, monitoring plan
Week 10. Capstone: Demo Day
- Track A (Build): Final build: notebook + evaluation + risk section
- Track B (Judge): Full paper critique or AI tool deployment recommendation
- Track C (Deploy): Board-ready AI strategy presentation + decision memo
Anchor Cases
All three tracks revisit these clinical anchors from different angles — building shared language across disciplines.
CXR / Radiology AI
From CNN architecture to reader studies, workflow integration, and procurement evaluation.
EHR / Clinical Note Summarization
From transformer embeddings to hallucination risk, documentation support, and vendor assessment.
Sepsis / Deterioration Prediction
From risk score modeling to threshold-setting, clinical utility, and deployment monitoring.
Assessment
We don’t test who memorizes AI jargon best. We assess who can define problems, match models to tasks, evaluate evidence, and judge clinical safety.
Ready to Choose Your Track?
Spring 2026 cohort now forming. All three tracks welcome — pick the one that matches your background.
Apply Now