# EPFL-Smart-Kitchen: Action recognition benchmark

## Introduction
Given a video or temporal clip, action recognition requires the model to predict a single action label for that clip (or for a specified window within an untrimmed video). Building on the same EPFL-Smart-Kitchen-30 data and modalities as our segmentation benchmark, we provide a recognition-oriented setup to compare inputs from 3D body pose, hand pose, and eye gaze, as well as deep visual features (e.g., VideoMAE).
Action recognition with kinematic inputs is lightweight and fast, while still capturing rich information about the task. We therefore offer recognition-ready annotations and splits that pair these modalities with clip-level labels. This makes it easy to evaluate the effect of different inputs (body vs hand vs gaze vs video) on standard classification metrics.
## Content
This repository contains materials for performing action recognition on EPFL-Smart-Kitchen-30. You can find the original dataset on Zenodo.
Current contents in this folder:
```
ESK_action_recognition
├── benchmark_data.z01
├── benchmark_data.z02
├── benchmark_data.z03
├── benchmark_data.z04
├── benchmark_data.z05
├── benchmark_data.z06
├── benchmark_data.zip   # main part (open this to extract)
├── checkpoints.z01
├── checkpoints.z02
├── checkpoints.z03
├── checkpoints.z04
├── checkpoints.z05
├── checkpoints.zip      # main part (open this to extract)
└── README.md
```
How to unzip (Linux):

- Ensure all parts are in the same directory (as above).
- Use either `unzip` or 7-Zip, starting from the `.zip` file (not the `.z01`).
- With `unzip` (preinstalled on many systems):

```shell
unzip benchmark_data.zip
unzip checkpoints.zip
```

- With 7-Zip (if you prefer):

```shell
# install if needed (Debian/Ubuntu)
sudo apt-get update && sudo apt-get install -y p7zip-full
7z x benchmark_data.zip
7z x checkpoints.zip
```
Notes:

- Don't try to extract the `.z01` files directly; always open the corresponding `.zip` file.
- If extraction fails, verify that all parts are fully downloaded and present.
After extracting the files, you will get the following folders:
```
ESK_action_recognition
├── Benchmark_data
│   └── [PARTICIPANT_ID]/[SESSION_ID]
├── Annotations
│   └── [SPLIT].csv
├── Hand_videos
│   └── [SPLIT]
├── pose_data
│   └── [PARTICIPANT_ID]/[SESSION_ID]
├── checkpoints
│   ├── [INPUT_TYPE]_experiment
│   └── [INPUT_TYPE]_nopretrain_experiment
└── README.md
```
## Data format
Clip-level labels are provided in CSV form for straightforward use with classification training code. A minimal `activity_labels.csv` contains the following columns:

- `clip_id`: unique identifier for the clip
- `participant_id`: e.g., YH20XX
- `session_id`: e.g., YYYY_MM_DD_hh_mm_ss
- `start_frame`, `end_frame`: temporal window within the session (omitted if clips are pre-trimmed files)
- `label_id`: integer-coded action class
- `label_name`: human-readable action name
Splits are text/CSV files listing `clip_id` values or clip paths for train/val/test.
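The label CSV described above can be read with the standard library alone. The snippet below is a minimal sketch: the two sample rows and their values are hypothetical stand-ins, but the column names match the schema listed above.

```python
import csv
import io

# Hypothetical sample rows matching the columns described above;
# the real activity_labels.csv ships with the benchmark data.
SAMPLE = """clip_id,participant_id,session_id,start_frame,end_frame,label_id,label_name
clip_0001,YH2001,2022_05_12_10_30_00,120,240,3,cutting
clip_0002,YH2001,2022_05_12_10_30_00,241,400,7,stirring
"""


def load_clip_labels(fh):
    """Read clip-level labels into a list of dicts with integer-typed fields."""
    rows = []
    for row in csv.DictReader(fh):
        row["start_frame"] = int(row["start_frame"])
        row["end_frame"] = int(row["end_frame"])
        row["label_id"] = int(row["label_id"])
        rows.append(row)
    return rows


clips = load_clip_labels(io.StringIO(SAMPLE))
```

For real data, replace the `io.StringIO` wrapper with `open("Annotations/<SPLIT>.csv")` or the path to `activity_labels.csv`.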
## Usage
Training and evaluation code is available in the GitHub repository. Please see the Action Recognition section for:
- dataset splits and loading utilities for clip-level labels
- using 3D kinematic features (body/hand/eyes) and/or VideoMAE features
- training scripts for classification models (e.g., MLP/Transformer over features)
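To illustrate the "MLP over features" setup, here is a minimal forward-pass sketch. It is not the repository's training code: the feature dimension (64), hidden width (128), and class count (30) are hypothetical placeholders, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 64-dim pooled clip features, 30 action classes.
N_FEATURES, HIDDEN, N_CLASSES = 64, 128, 30

# One-hidden-layer MLP with random (untrained) weights.
W1 = rng.normal(0.0, 0.1, (N_FEATURES, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, N_CLASSES))
b2 = np.zeros(N_CLASSES)


def predict_proba(x):
    """Class probabilities for a batch of clip-level feature vectors."""
    h = np.maximum(x @ W1 + b1, 0.0)             # ReLU hidden layer
    logits = h @ W2 + b2
    logits -= logits.max(axis=1, keepdims=True)  # numerically stable softmax
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)


batch = rng.normal(size=(4, N_FEATURES))         # 4 stand-in clips
probs = predict_proba(batch)                     # shape (4, N_CLASSES)
```

In practice the input vectors would be pooled 3D kinematic features (body/hand/gaze) or VideoMAE features per clip, and the weights would come from the provided checkpoints or from training.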
## Citations
Please cite our work!
```bibtex
@misc{bonnetto2025epflsmartkitchen,
  title={EPFL-Smart-Kitchen-30: Densely annotated cooking dataset with 3D kinematics to challenge video and language models},
  author={Andy Bonnetto and Haozhe Qi and Franklin Leong and Matea Tashkovska and Mahdi Rad and Solaiman Shokur and Friedhelm Hummel and Silvestro Micera and Marc Pollefeys and Alexander Mathis},
  year={2025},
  eprint={2506.01608},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2506.01608},
}
```
## Acknowledgments

Our work was funded by EPFL, the Swiss SNF grant (320030-227871), the Microsoft Swiss Joint Research Center, and a Boehringer Ingelheim Fonds PhD stipend (H.Q.). We are grateful to the Brain Mind Institute for providing funds for hardware and to the Neuro-X Institute for providing funds for services.