10-Week Program

Curriculum

Three tracks, one cohort. Every track shares the same clinical cases and core modules, with depth, tools, and deliverables tailored to your background.

Three Tracks, One Learning System

Not three separate courses — a unified medical AI learning system with three entry points. Track A feeds into B, B feeds into C, and C’s governance constraints become A/B’s design boundaries.

Track A — AI Principles

Build: Code-First

  • Who: Pre-med, CS, STEM undergrads, technical learners
  • Focus: Hands-on ML/DL with medical datasets using Colab, PyTorch, and Claude Code
  • Capstone: Build and evaluate a clinical AI model or multi-agent workflow

Tools

Google Colab · PyTorch · scikit-learn · Hugging Face · Claude Code · OpenClaw
Track B — Clinical Applications

Judge: Evaluate & Apply

  • Who: Medical students, residents, nurses, pharmacists, researchers
  • Focus: AI evaluation, paper critique, deployment readiness assessment
  • Capstone: Clinical utility memo, paper critique, or deployment recommendation

Tools

No-code templates · Paper critique frameworks · LLM comparison tools · Decision dashboards

Track C — Executive & Implementation

Deploy: Decide & Deploy

  • Who: Department heads, innovation teams, CMOs/CIOs, clinical leaders
  • Focus: AI governance, procurement, ROI modeling, organizational adoption
  • Capstone: Board-ready AI strategy deck with vendor evaluation and governance plan

Tools

ROI calculators · Vendor evaluation matrix · Governance checklists · Pilot roadmap templates

Weekly Rhythm

Every week follows the same arc: open with a clinical case, unpack the AI principles behind it, weigh clinical applications and limitations, spotlight a recent paper, and close with track discussion.

  • 20 min: Clinical case opening
  • 25 min: AI principles deep-dive
  • 25 min: Clinical application & limitations
  • 15 min: Paper spotlight (latest research)
  • 15–30 min: Dual-track breakout / discussion

10-Week Syllabus

Week 01

AI in Medicine: History, Hype & Problem Framing

  • Track A (Build): AI pipeline: symbolic → ML → DL → LLM → agent
  • Track B (Judge): Clinical use boundaries, hype vs reality assessment
  • Track C (Deploy): Executive AI landscape, success/failure case map

Week 02

Data, Labels & Evaluation Metrics

  • Track A (Build): Train/val/test, overfitting, threshold analysis in Colab
  • Track B (Judge): Paper critique: why accuracy misleads in clinical settings
  • Track C (Deploy): SOTA vendor ecosystem, evaluation criteria for procurement
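Week 02's central warning can be shown in a few lines of Python. This is a minimal sketch with invented patient counts, not course material: a "predict healthy for everyone" model on a rare condition looks excellent by accuracy and is clinically useless.

```python
# Why accuracy misleads: a screening model for a condition with 1% prevalence.
def confusion_counts(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

# 1000 patients, 10 with the condition; this model misses every one of them.
y_true = [1] * 10 + [0] * 990
y_pred = [0] * 1000

tp, tn, fp, fn = confusion_counts(y_true, y_pred)
accuracy = (tp + tn) / len(y_true)   # 0.99 — looks excellent
sensitivity = tp / (tp + fn)         # 0.0  — catches no sick patients
print(accuracy, sensitivity)
```

This is why Track B's critique work leans on sensitivity, specificity, and prevalence rather than headline accuracy.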

Week 03

Classical ML with Clinical Data

  • Track A (Build): Logistic regression, XGBoost, baseline model with Claude Code
  • Track B (Judge): Sepsis/deterioration prediction, clinical actionability
  • Track C (Deploy): Workflow integration, process redesign for AI insertion
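A minimal sketch of the kind of baseline Track A builds this week, using scikit-learn on a synthetic stand-in for clinical data. Every feature and label here is generated, not real; the point is the shape of the workflow: split, fit, report AUROC.

```python
# Hypothetical baseline: logistic regression on a synthetic, imbalanced dataset
# standing in for a deterioration-prediction task.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# ~10% positive class, mimicking a rare clinical outcome
X, y = make_classification(n_samples=500, n_features=6,
                           weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
probs = model.predict_proba(X_te)[:, 1]   # predicted risk, not a hard label
print("AUROC:", round(roc_auc_score(y_te, probs), 3))
```

Reporting AUROC on probabilities, rather than accuracy on hard labels, connects directly back to Week 02.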

Week 04

Deep Learning for Medical Imaging

  • Track A (Build): CNN, transfer learning, confusion matrix analysis
  • Track B (Judge): Reader studies, false positive cost, imaging AI adoption
  • Track C (Deploy): Pilot KPIs: accuracy vs utility, safety and operational metrics
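The false-positive-cost discussion ultimately comes down to where you set the decision threshold. A toy sweep (scores and labels invented for illustration) makes the tradeoff concrete: lowering the threshold buys sensitivity at the price of false positives.

```python
# Threshold choice trades sensitivity against false positives.
scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]   # model outputs
labels = [1,   1,   1,   1,   0,   0,   0,   0]      # ground truth

def at_threshold(t):
    preds = [1 if s >= t else 0 for s in scores]
    tp = sum(p and l for p, l in zip(preds, labels))
    fp = sum(p and not l for p, l in zip(preds, labels))
    fn = sum((not p) and l for p, l in zip(preds, labels))
    return tp / (tp + fn), fp   # (sensitivity, false positive count)

for t in (0.25, 0.5, 0.75):
    sens, fp = at_threshold(t)
    print(f"threshold={t}: sensitivity={sens:.2f}, false positives={fp}")
```

In a screening setting you might accept the extra false positives for full sensitivity; in a workflow where every alert interrupts a radiologist, you might not. That is the "accuracy vs utility" question in Track C's pilot KPIs.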

Week 05

NLP, Transformers & Clinical Text

  • Track A (Build): Embeddings, attention, clinical note summarization
  • Track B (Judge): Safe assistive vs unsafe autonomous LLM uses
  • Track C (Deploy): Regulation (FDA SaMD, HIPAA, EU AI Act), governance checklist
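The embedding idea underlying this week reduces to vectors and cosine similarity: texts with similar meaning land close together in vector space. The 3-d vectors below are toy stand-ins for real embedding-model outputs, which have hundreds of dimensions.

```python
# Embeddings in one idea: similar meanings -> nearby vectors.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

note_a = [0.9, 0.1, 0.2]   # "patient febrile, rising lactate"
note_b = [0.8, 0.2, 0.1]   # "fever with elevated lactate"
note_c = [0.1, 0.9, 0.8]   # "routine follow-up, no complaints"

# The paraphrase scores closer to note_a than the unrelated note does.
print(cosine(note_a, note_b) > cosine(note_a, note_c))  # True
```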

Week 06

LLMs, Prompting & RAG in Healthcare

  • Track A (Build): Medical literature QA/RAG prototype with Claude Code
  • Track B (Judge): Prototype spec via natural language, evaluation rubrics
  • Track C (Deploy): Financial modeling, ROI calculator workshop, hidden costs
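The retrieval half of RAG can be sketched with simple word-overlap scoring over an invented three-snippet "corpus" (real pipelines use embedding search, but the retrieve-then-prompt structure is the same):

```python
# Toy retrieval step of a RAG pipeline: rank snippets by word overlap
# with the query, then splice the best match into the prompt.
corpus = [
    "CNN models for chest x-ray pneumonia detection",
    "Logistic regression baseline for sepsis prediction",
    "Prompting large language models for clinical note summarization",
]

def retrieve(query, docs, k=1):
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

context = retrieve("sepsis prediction baseline", corpus)[0]
prompt = f"Answer using only this context:\n{context}\n\nQ: What baseline was used?"
print(prompt)
```

Grounding the model in retrieved context, and instructing it to answer only from that context, is the core hallucination-mitigation move examined across the tracks.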

Week 07

Agents & Multi-Agent Systems

  • Track A (Build): OpenClaw multi-agent demo (data/training/report agents)
  • Track B (Judge): AI-assisted peer review, research workflow automation
  • Track C (Deploy): LLM vendor evaluation, benchmark vs enterprise readiness
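A multi-agent workflow in miniature, assuming nothing about any particular framework's API: each "agent" is just a function that updates shared state, and an orchestrator runs them in sequence, mirroring the data/training/report split named above. The numbers are placeholders.

```python
# Each agent transforms shared state; the orchestrator chains them.
def data_agent(state):
    state["rows"] = 500          # pretend we loaded a dataset
    return state

def training_agent(state):
    state["auroc"] = 0.81        # pretend we trained and evaluated a model
    return state

def report_agent(state):
    state["report"] = f"Trained on {state['rows']} rows; AUROC {state['auroc']}"
    return state

state = {}
for agent in (data_agent, training_agent, report_agent):
    state = agent(state)
print(state["report"])
```

Real agent frameworks add LLM-driven planning, tool calls, and error handling on top, but the pipeline-of-specialists structure is the same.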

Week 08

Multimodal AI & Precision Medicine

  • Track A (Build): Multimodal fusion, omics, drug repurposing architectures
  • Track B (Judge): Regulation/ethics checklist applied to prototype
  • Track C (Deploy): Agentic workflow for hospital automation, approval memo

Week 09

Responsible AI & Clinical Translation

  • Track A (Build): Model cards, error analysis, deployment risk assessment
  • Track B (Judge): One-page implementation proposal, pilot plan
  • Track C (Deploy): Change management, 90-day pilot roadmap, monitoring plan

Week 10

Capstone: Demo Day

  • Track A (Build): Final build: notebook + evaluation + risk section
  • Track B (Judge): Full paper critique or AI tool deployment recommendation
  • Track C (Deploy): Board-ready AI strategy presentation + decision memo

Anchor Cases

All three tracks revisit these clinical anchors from different angles — building shared language across disciplines.

CXR / Radiology AI

From CNN architecture to reader studies, workflow integration, and procurement evaluation.

EHR / Clinical Note Summarization

From transformer embeddings to hallucination risk, documentation support, and vendor assessment.

Sepsis / Deterioration Prediction

From risk score modeling to threshold-setting, clinical utility, and deployment monitoring.

Assessment

We don’t test who memorizes AI jargon best. We assess who can define problems, match models to tasks, evaluate evidence, and judge clinical safety.

  • 20%: Participation & case reflections
  • 20%: Weekly assignments (layered by track)
  • 25%: Paper critique or tool evaluation
  • 35%: Final capstone project

Ready to Choose Your Track?

Spring 2026 cohort now forming. All three tracks welcome — pick the one that matches your background.

Apply Now