Case study

Epilepsy Seizure Detection

Deep learning model for neonatal EEG seizure detection with explainability for clinician trust.

Sep 2023 – Jun 2024
Deep Learning · Healthcare · XAI

Overview

A neonatal EEG seizure detection project that combines deep learning with explainability (XAI) to produce clinically interpretable predictions.

Problem

Seizures in neonates can be subtle and hard to detect, and manual EEG review is time-intensive.

Solution

I trained a deep learning model on EEG features/signals and added an explainability layer to highlight why the model predicted a seizure event.

Architecture

  • EEG preprocessing → feature extraction/segmentation
  • Model training → evaluation with sensitivity/accuracy metrics
  • XAI layer → saliency/attribution visualization for interpretability
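
A minimal sketch of the preprocessing/segmentation step above, assuming multichannel EEG split into fixed-length overlapping windows; the sampling rate, window length, and overlap are illustrative placeholders, not the project's actual parameters:

```python
import numpy as np

def segment_eeg(signal, fs=256, window_s=4.0, overlap=0.5):
    """Split a multichannel EEG recording (channels x samples) into
    fixed-length, overlapping windows suitable as model input.
    fs, window_s, and overlap are example values, not project settings."""
    win = int(window_s * fs)            # samples per window
    step = int(win * (1 - overlap))     # hop between window starts
    n_windows = 1 + (signal.shape[1] - win) // step
    return np.stack(
        [signal[:, i * step : i * step + win] for i in range(n_windows)]
    )

# Example: an 18-channel, 60-second recording at 256 Hz
eeg = np.random.randn(18, 60 * 256)
windows = segment_eeg(eeg)
print(windows.shape)  # (29, 18, 1024)
```

Each window then goes through feature extraction or directly into the network, depending on the model variant.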

Tech stack

Python + deep learning frameworks: TensorFlow
Explainability tooling: SHAP
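
The project used SHAP for attributions; as a self-contained illustration of the same idea, here is a simple occlusion-style channel attribution. The toy model, baseline value, and input are invented for the example, not the project's model:

```python
import numpy as np

def channel_attribution(predict_fn, x, baseline=0.0):
    """Occlusion-style attribution: measure how much the predicted
    seizure probability drops when each EEG channel is replaced by a
    baseline. A simple stand-in for SHAP-style attribution values."""
    base_score = predict_fn(x)
    scores = []
    for ch in range(x.shape[0]):
        x_occ = x.copy()
        x_occ[ch, :] = baseline     # occlude one channel
        scores.append(base_score - predict_fn(x_occ))
    return np.array(scores)

# Toy predictor: "seizure probability" rises with channel 2's energy
def toy_model(x):
    return float(np.tanh(np.mean(x[2] ** 2)))

x = np.zeros((4, 100))
x[2] = 2.0  # the one high-energy channel
attr = channel_attribution(toy_model, x)
print(int(attr.argmax()))  # 2  -> channel 2 dominates the prediction
```

In the real system the attributions are rendered as per-channel saliency overlays so clinicians can see which electrodes drove a seizure call.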

Key engineering decisions

  • Prioritized sensitivity over specificity, since a missed seizure carries greater clinical risk than a false alarm.
  • Added explainability to support stakeholder trust and debugging.
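
One concrete way the sensitivity-first decision plays out is in threshold selection: sweep the decision threshold and keep the highest one that still meets a target sensitivity. A sketch with made-up labels and probabilities (the target value is illustrative):

```python
import numpy as np

def threshold_for_sensitivity(y_true, y_prob, target_sens=0.85):
    """Return the highest decision threshold whose sensitivity
    (recall on seizure windows) still meets the target. Lower
    thresholds catch more seizures at the cost of more false alarms."""
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    n_pos = np.sum(y_true == 1)  # assumed > 0
    best = 0.0
    for t in np.linspace(0.0, 1.0, 101):
        pred = y_prob >= t
        tp = np.sum(pred & (y_true == 1))
        if tp / n_pos >= target_sens:
            best = t
    return best

# Toy validation labels and predicted probabilities
best_t = threshold_for_sensitivity(
    [1, 1, 1, 1, 0, 0],
    [0.9, 0.8, 0.7, 0.35, 0.2, 0.1],
    target_sens=0.75,
)
```

The chosen threshold then becomes part of the deployed decision rule rather than a fixed 0.5 cutoff.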

Results

  • 82% accuracy and 85% sensitivity
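
The two reported metrics can be computed from window-level predictions as follows; the labels below are toy data for illustration, not the project's evaluation set:

```python
def accuracy_sensitivity(y_true, y_pred):
    """Accuracy over all windows; sensitivity (recall) over the
    seizure-positive windows only."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    sens = tp / (tp + fn)
    return acc, sens

# Toy example: 4 seizure windows, 6 non-seizure windows
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]
acc, sens = accuracy_sensitivity(y_true, y_pred)
print(acc, sens)  # 0.7 0.75
```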

What I’d improve next

  • Add external validation on a second dataset to confirm generalization.
  • Calibrate outputs and quantify uncertainty for safer deployment.
  • Explore lightweight models for on-device inference.
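
As a sketch of the calibration idea in the second bullet: temperature scaling divides the model's logit by a constant T (normally fitted on validation data) before the sigmoid, softening overconfident probabilities. T here is chosen by hand purely for illustration:

```python
import math

def temperature_scale(logit, T):
    """Temperature scaling: soften (T > 1) or sharpen (T < 1) a
    binary classifier's logit before the sigmoid to improve
    probability calibration."""
    return 1.0 / (1.0 + math.exp(-logit / T))

# An overconfident raw prediction, softened by T = 2
raw = temperature_scale(3.0, 1.0)   # ~0.953
cal = temperature_scale(3.0, 2.0)   # ~0.818
```

Because only the probabilities change and not the logit ordering, calibration leaves ranking metrics untouched while making the scores safer to act on clinically.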