# wav2sleep

Cardio-respiratory sleep staging (4-class: Wake, Light, Deep, REM)

## Model Description

This is a wav2sleep model for automatic sleep stage classification from cardio-respiratory signals (ECG, PPG, and respiratory effort). wav2sleep is a unified multi-modal deep learning approach that can operate on varying combinations of these physiological signals.

## Model Details

| Property | Value |
|---|---|
| Input Signals | ECG, PPG, ABD, THX |
| Output Classes | 4 (Wake, Light, Deep, REM) |
| Architecture | Non-causal (bidirectional) |

## Signal Specifications

| Signal | Samples per 30 s epoch |
|---|---|
| ECG, PPG | 1,024 |
| ABD, THX | 256 |
| EOG-L, EOG-R | 4,096 |
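
For reference, the samples-per-epoch figures above imply the following per-signal sampling rates. This is a small illustrative calculation only; the variable names are not part of the wav2sleep package.

```python
# Implied sampling rates, derived from the signal specification table above.
EPOCH_SECONDS = 30
SAMPLES_PER_EPOCH = {
    "ECG": 1024,
    "PPG": 1024,
    "ABD": 256,
    "THX": 256,
    "EOG-L": 4096,
    "EOG-R": 4096,
}

for signal, n_samples in SAMPLES_PER_EPOCH.items():
    print(f"{signal}: {n_samples / EPOCH_SECONDS:.2f} Hz")
# ECG/PPG ≈ 34.13 Hz, ABD/THX ≈ 8.53 Hz, EOG-L/EOG-R ≈ 136.53 Hz
```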

## Usage

```python
from wav2sleep import load_model

# Load the model from the Hugging Face Hub
model = load_model("hf://joncarter/wav2sleep")

# Or load from a local checkpoint
model = load_model("/path/to/checkpoint")
```

For inference on new data:

```python
from wav2sleep import load_model, predict_on_folder

model = load_model("hf://joncarter/wav2sleep")
predict_on_folder(
    input_folder="/path/to/edf_files",
    output_folder="/path/to/predictions",
    model=model,
)
```
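
`predict_on_folder` consumes EDF recordings directly, so no manual preprocessing is required. If you do need to prepare signals yourself, the sketch below shows one way to segment and resample a raw ECG channel to the 1,024 samples per 30 s epoch listed above. It assumes `mne` and `scipy` are installed; the file path and channel name (`"ECG"`) are placeholders and may not match your recordings or the package's own preprocessing.

```python
import mne
from scipy.signal import resample

EPOCH_SECONDS = 30
ECG_SAMPLES_PER_EPOCH = 1024  # from the signal specification table above

# Hypothetical path and channel name, for illustration only.
raw = mne.io.read_raw_edf("/path/to/recording.edf", preload=True, verbose="error")
ecg = raw.get_data(picks=["ECG"])[0]   # 1-D array of raw ECG samples
fs = raw.info["sfreq"]                 # original sampling rate (Hz)

# Split into whole 30 s epochs, then resample each epoch to 1,024 samples.
raw_per_epoch = int(round(fs * EPOCH_SECONDS))
n_epochs = len(ecg) // raw_per_epoch
epochs = ecg[: n_epochs * raw_per_epoch].reshape(n_epochs, raw_per_epoch)
epochs = resample(epochs, ECG_SAMPLES_PER_EPOCH, axis=1)  # (n_epochs, 1024)
```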

## Training Data

The model was trained on polysomnography data from multiple publicly available datasets managed by the National Sleep Research Resource (NSRR).

## Citation

```bibtex
@misc{carter2024wav2sleep,
    title={wav2sleep: A Unified Multi-Modal Approach to Sleep Stage Classification from Physiological Signals},
    author={Jonathan F. Carter and Lionel Tarassenko},
    year={2024},
    eprint={2411.04644},
    archivePrefix={arXiv},
    primaryClass={cs.LG},
}
```

## License

MIT
