Nam Hoang – Cell Segmentation & Tracking Showcase

GitHub Repo

Project Overview

This project benchmarks classical pixel-wise classifiers (Naive Bayes, Linear SVM, SVM with an RBF kernel, Logistic Regression, and an MLP) against a U-Net with a MobileNetV2 encoder for cell segmentation on the Fluo-N2DH-GOWT1 microscopy dataset, with frame-to-frame cell tracking planned as future work.

Segmentation Results

The table below summarizes the performance of each model. You can find the implementation code for all models in the src/cell_tracking/models folder. The Full Metrics column provides links to detailed reports containing the Intersection over Union (IoU) score for every frame in the validation sequence.

| Method | Dataset | Track | Mean IoU | Notes | Full Metrics |
| --- | --- | --- | --- | --- | --- |
| Naive Bayes | Fluo-N2DH-GOWT1 | 01 | 0.8545 | Window=5 | Link |
| Naive Bayes | Fluo-N2DH-GOWT1 | 02 | 0.8271 | Window=5 | Link |
| Linear SVM | Fluo-N2DH-GOWT1 | 01 | 0.8958 | Window=5 | Link |
| Linear SVM | Fluo-N2DH-GOWT1 | 02 | 0.8651 | Window=5 | Link |
| SVM (RBF) | Fluo-N2DH-GOWT1 | 01 | 0.8952 | Window=5 | Link |
| SVM (RBF) | Fluo-N2DH-GOWT1 | 02 | 0.8651 | Window=5 | Link |
| Logistic Regression | Fluo-N2DH-GOWT1 | 01 | 0.9017 | Window=5 | Link |
| Logistic Regression | Fluo-N2DH-GOWT1 | 02 | 0.8680 | Window=5 | Link |
| Neural Network (MLP) | Fluo-N2DH-GOWT1 | 01 | 0.9162 | Window=5, 2 Hidden Layers | Link |
| Neural Network (MLP) | Fluo-N2DH-GOWT1 | 02 | 0.8659 | Window=5, 2 Hidden Layers | Link |
| U-Net (MobileNetV2) | Fluo-N2DH-GOWT1 | 01 | 0.9604 | Deep Learning, 512x512 Input | Link |
| U-Net (MobileNetV2) | Fluo-N2DH-GOWT1 | 02 | 0.9269 | Deep Learning, 512x512 Input | Link |
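
Each Mean IoU value is the per-frame Intersection over Union averaged over the validation sequence. A minimal sketch of the per-frame computation, assuming binary masks:

```python
import numpy as np

def frame_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU between a predicted and a ground-truth binary mask."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:   # both masks empty: define IoU as 1
        return 1.0
    return float(np.logical_and(pred, gt).sum() / union)

# Mean IoU for a track = average of frame_iou over all validation frames.
```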

Performance Analysis

The U-Net model significantly outperforms all classical approaches. On Track 01 it achieves a Mean IoU of 0.96, compared to 0.91 for the best classical model (the MLP); on Track 02 it reaches 0.93, compared to 0.87 for the best classical model (Logistic Regression).

Why U-Net is better:

  1. Contextual Understanding: Classical models only see a tiny 5 × 5 patch around each pixel (see the sketch after this list). They lack global context and cannot "see" the shape of the cell. U-Net processes the entire image at multiple scales, allowing it to understand cell morphology and separate cells from complex backgrounds.
  2. Spatial Consistency: Pixel-wise classifiers treat every pixel independently, often resulting in "salt-and-pepper" noise where scattered pixels are misclassified. U-Net predicts a coherent mask, enforcing spatial continuity.
  3. Feature Learning: Instead of relying on raw pixel intensities in a small window, U-Net learns hierarchical features (edges, textures, shapes) that are robust to noise and intensity variations common in microscopy images.
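
To make point 1 concrete, here is a hypothetical sketch (not the repo's actual feature code) of how a pixel-wise classifier with Window=5 builds its inputs: each pixel is described only by the 25 raw intensities around it.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def window_features(image: np.ndarray, window: int = 5) -> np.ndarray:
    """One row of window*window raw intensities per pixel (edge-padded)."""
    pad = window // 2
    padded = np.pad(image, pad, mode="edge")
    patches = sliding_window_view(padded, (window, window))
    return patches.reshape(image.size, window * window)

# A classical model classifies each 25-dimensional row independently;
# nothing outside the 5x5 neighborhood influences the prediction.
```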

Tracking Results (Not Implemented)

| Tracker | Metric | Score | Notes |
| --- | --- | --- | --- |
| ASM + IoU linking | ID switches | 4 | Initialized from previous frame |

TODO: explain how identities are propagated across frames and highlight failure cases.
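
Once tracking is implemented, identity propagation via IoU linking could look like the following hypothetical sketch (not code from this repo): each labeled cell in the current frame inherits the ID of the previous-frame cell whose mask overlaps it most, provided the overlap clears a threshold.

```python
import numpy as np

def link_identities(prev_labels, curr_labels, min_iou=0.3):
    """Map current-frame labels to previous-frame labels by best mask IoU.

    Labels are integer-labeled masks (0 = background). Cells whose best
    overlap falls below min_iou are left unmatched (candidate new IDs).
    """
    mapping = {}
    for c in np.unique(curr_labels):
        if c == 0:
            continue
        curr = curr_labels == c
        best_id, best_iou = None, 0.0
        # Only previous labels that actually overlap this cell can match.
        for p in np.unique(prev_labels[curr]):
            if p == 0:
                continue
            prev = prev_labels == p
            iou = (curr & prev).sum() / (curr | prev).sum()
            if iou > best_iou:
                best_id, best_iou = int(p), iou
        if best_iou >= min_iou:
            mapping[int(c)] = best_id
    return mapping
```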

Demo Videos / GIFs

The following GIFs demonstrate the segmentation results on the validation set over time. For each figure, the left side shows the ground truth (Silver Truth) segmentation in green, while the right side shows the model's predicted segmentation in red.
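
For reference, overlays like these can be produced with a simple color blend; this is a hypothetical sketch of the rendering step, not the repo's actual GIF code:

```python
import numpy as np

def tint_mask(frame: np.ndarray, mask: np.ndarray, color) -> np.ndarray:
    """Blend a solid color over the masked region of a grayscale frame.

    frame: uint8 HxW image; mask: HxW binary mask; color: (R, G, B).
    """
    rgb = np.repeat(frame[..., None], 3, axis=-1).astype(np.float32)
    rgb[mask > 0] = 0.6 * rgb[mask > 0] + 0.4 * np.asarray(color, np.float32)
    return rgb.astype(np.uint8)

# Left panel:  tint_mask(frame, silver_truth, (0, 255, 0))   # green ground truth
# Right panel: tint_mask(frame, prediction, (255, 0, 0))     # red prediction
```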

Track 01

Naive Bayes
Naive Bayes Track 01

Linear SVM
Linear SVM Track 01

SVM (RBF)
SVM RBF Track 01

Logistic Regression
Logistic Regression Track 01

Neural Network (MLP)
Neural Network Track 01

U-Net (MobileNetV2)
U-Net Track 01

Track 02

Naive Bayes
Naive Bayes Track 02

Linear SVM
Linear SVM Track 02

SVM (RBF)
SVM RBF Track 02

Logistic Regression
Logistic Regression Track 02

Neural Network (MLP)
Neural Network Track 02

U-Net (MobileNetV2)
U-Net Track 02

Reproduction Checklist

To reproduce the results presented above, follow these steps:

1. Clone the Repository

git clone https://github.com/jfemiani/CSE488-Project-Cell-Tracking.git
cd CSE488-Project-Cell-Tracking

2. Environment Setup

Create a new Conda environment and install the required packages listed in pyproject.toml.

conda create -n cell-tracking python=3.11
conda activate cell-tracking
pip install -e .

Note: If you encounter build issues with albumentations on Windows, pin a version with prebuilt wheels explicitly:

pip install "albumentations==1.3.1"

3. Data Setup

Download and prepare the Fluo-N2DH-GOWT1 dataset, including the training, test, and validation splits.

python scripts/setup_data.py --dataset Fluo-N2DH-GOWT1 --splits training test val

4. Train Classical Models

Train the baseline models (SVM, Naive Bayes, Logistic Regression, MLP) for both tracks.

# Train models for Track 01
python scripts/train_models.py --dataset Fluo-N2DH-GOWT1 --track 01 --window 5 --samples 500 --pct-fg 0.5

# Train models for Track 02
python scripts/train_models.py --dataset Fluo-N2DH-GOWT1 --track 02 --window 5 --samples 500 --pct-fg 0.5
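
For intuition on the --samples and --pct-fg flags, here is a hypothetical sketch of class-balanced pixel sampling (the script's actual logic may differ): with --samples 500 --pct-fg 0.5, half of the 500 sampled training pixels come from cell regions and half from background.

```python
import numpy as np

def sample_training_pixels(mask, n_samples=500, pct_fg=0.5, seed=0):
    """Pick pixel coordinates with a fixed foreground fraction.

    mask: 2-D label image where nonzero pixels are cells.
    Returns an (n_samples, 2) array of (row, col) coordinates.
    """
    rng = np.random.default_rng(seed)
    fg = np.argwhere(mask > 0)
    bg = np.argwhere(mask == 0)
    n_fg = min(int(n_samples * pct_fg), len(fg))  # guard tiny masks
    picks_fg = fg[rng.choice(len(fg), size=n_fg, replace=False)]
    picks_bg = bg[rng.choice(len(bg), size=n_samples - n_fg, replace=False)]
    return np.vstack([picks_fg, picks_bg])
```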

5. Train U-Net Models

Train the deep learning U-Net models for both tracks.

# Train U-Net for Track 01
python scripts/train_unet.py --dataset Fluo-N2DH-GOWT1 --output-dir artifacts/models --track-id 01 --epochs 20

# Train U-Net for Track 02
python scripts/train_unet.py --dataset Fluo-N2DH-GOWT1 --output-dir artifacts/models --track-id 02 --epochs 20
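
For reference, a U-Net with a MobileNetV2 encoder can be assembled in a few lines with the segmentation_models_pytorch library; this is a hypothetical sketch of such a model, not necessarily how scripts/train_unet.py builds it:

```python
import segmentation_models_pytorch as smp
import torch

# U-Net decoder on top of an ImageNet-pretrained MobileNetV2 encoder.
model = smp.Unet(
    encoder_name="mobilenet_v2",
    encoder_weights="imagenet",
    in_channels=1,   # grayscale microscopy frames
    classes=1,       # binary cell-vs-background logits
)

x = torch.randn(1, 1, 512, 512)   # 512x512 input, as in the results table
with torch.no_grad():
    logits = model(x)              # -> (1, 1, 512, 512)
```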

6. Evaluate Segmentation

Evaluate all trained models (classical and deep learning) found in the artifacts/models directory. The script computes the Mean IoU for each model, saves the metrics to artifacts/metrics, and writes the predicted masks to artifacts/results.

python scripts/eval_seg.py --dataset Fluo-N2DH-GOWT1 --models-dir artifacts/models

Full Artifacts Download

For full access to all trained models, segmentation results, and detailed metrics without running the reproduction scripts, you can download the compressed artifacts folder here:
Download Full Artifacts (Google Drive)

Lessons Learned / Next Steps

TODO: summarize key insights, the next experiments to try, and open research questions.