# EgoMM: Tri-Modal Egocentric Dataset (Video + Audio + IMU)

EgoMM is a large-scale tri-modal egocentric dataset that combines video, audio, and IMU data recorded with head-mounted Meta Aria glasses. It is built from the EgoLife and Ego-Exo4D source datasets.
## Dataset Summary
| Split | EgoLife | EgoExo4D | Total | Narration files |
|---|---|---|---|---|
| narrated (val/test) | 4,689 | 17,367 | 22,056 | 19,933 |
| raw (train) | 27,119 | 6,357 | 33,476 | — |
| Total | 31,808 | 23,724 | 55,532 | 19,933 |
- Clip duration: 30 seconds (fixed)
- Total hours: 463h
- Modalities: Video (MP4) + Audio (MP3) + IMU left/right (NPZ)
- Video resolution: EgoLife 768×768, EgoExo4D 448×448
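The total-hours figure follows directly from the clip count and the fixed clip duration; a quick arithmetic check:

```python
total_clips = 55_532           # total from the summary table above
clip_seconds = 30              # fixed clip duration
total_hours = total_clips * clip_seconds / 3600
print(f"{total_hours:.1f} h")  # → 462.8 h, i.e. ~463 h
```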
## Structure

```
EgoMM/
├── narrated/                         (val/test: clips with human annotations)
│   ├── egolife/{participant}/DAY1/{clip_id}/
│   │   ├── video.mp4       (768×768, 20 fps, video-only)
│   │   ├── audio.mp3       (separate audio track)
│   │   ├── imu_left.npz    (800 Hz, 6-axis: accel_xyz + gyro_xyz)
│   │   ├── imu_right.npz   (1000 Hz, 6-axis)
│   │   └── narration.json  (clip-relative timestamps)
│   └── egoexo4d/{activity}/{take_name}/{clip_id}/
│       ├── video.mp4       (448×448, video-only)
│       ├── audio.mp3
│       ├── imu_left.npz
│       ├── imu_right.npz
│       └── narration.json
├── raw/                              (train: clips without annotations)
│   ├── egolife/{participant}/{DAY2-7}/{clip_id}/
│   │   ├── video.mp4
│   │   ├── audio.mp3
│   │   ├── imu_left.npz
│   │   └── imu_right.npz
│   └── egoexo4d/{activity}/{take_name}/{clip_id}/
│       └── ...
└── metadata/
    ├── egolife/   (DenseCaption SRTs, transcripts, caption/QA JSONs)
    └── egoexo4d/  (atomic descriptions, expert commentary, splits)
```
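Given the layout above, a single clip directory can be gathered like this. This is a minimal sketch (the `load_clip` helper is hypothetical, not part of the dataset); video/audio are returned as paths for your decoder of choice, while IMU arrays and narrations are loaded directly. `narration.json` exists only under `narrated/`.

```python
import json
from pathlib import Path

import numpy as np

def load_clip(clip_dir: str) -> dict:
    """Collect the per-clip modality paths and parse the lightweight parts."""
    d = Path(clip_dir)
    clip = {
        "video": d / "video.mp4",                         # decode with e.g. torchvision/decord
        "audio": d / "audio.mp3",                         # decode with e.g. torchaudio
        "imu_left": dict(np.load(d / "imu_left.npz")),    # 800 Hz, 6-axis
        "imu_right": dict(np.load(d / "imu_right.npz")),  # 1000 Hz, 6-axis
    }
    narration = d / "narration.json"  # present only in the narrated/ split
    if narration.exists():
        clip["narration"] = json.loads(narration.read_text())
    return clip
```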
## Download

```python
from huggingface_hub import snapshot_download

# Download narrated set only (val/test, ~120 GB)
snapshot_download(
    repo_id="ldkong/EgoMM",
    repo_type="dataset",
    local_dir="./EgoMM",
    allow_patterns=["narrated/**"],
)

# Download raw set only (train, ~370 GB)
snapshot_download(
    repo_id="ldkong/EgoMM",
    repo_type="dataset",
    local_dir="./EgoMM",
    allow_patterns=["raw/**"],
)

# Download metadata only
snapshot_download(
    repo_id="ldkong/EgoMM",
    repo_type="dataset",
    local_dir="./EgoMM",
    allow_patterns=["metadata/**"],
)

# Download a specific activity
snapshot_download(
    repo_id="ldkong/EgoMM",
    repo_type="dataset",
    local_dir="./EgoMM",
    allow_patterns=["narrated/egoexo4d/cooking/**"],
)
```
## Narration Format

Each `narration.json` contains clip-relative annotations:

```json
{
  "clip_id": "upenn_0714_Cooking_7_2_0090000",
  "narrations": [
    {"timestamp": 2.4, "text": "C scrolls through the mobile phone screen", "type": "atomic"},
    {"timestamp": 5.1, "text": "C places the phone down", "type": "atomic"}
  ],
  "expert_commentary": [
    {"timestamp": 0.0, "text": "He needs to give himself more room...", "type": "expert"}
  ],
  "sequence_info": {
    "take_name": "upenn_0714_Cooking_7_2",
    "clip_index": 3,
    "total_clips_in_take": 12,
    "start_sec_in_take": 90.0,
    "end_sec_in_take": 120.0
  }
}
```
## Reconstructing Long Sequences

Clips can be combined into longer sequences using `sequence_info`:

```python
import json
from pathlib import Path

# Load all clips from a take
take_name = "upenn_0714_Cooking_7_2"
clips = sorted(Path("narrated/egoexo4d/cooking").glob(f"{take_name}/*/narration.json"))

# Reconstruct the full timeline
for clip_path in clips:
    clip = json.loads(clip_path.read_text())
    offset = clip["sequence_info"]["start_sec_in_take"]
    for n in clip["narrations"]:
        abs_time = offset + n["timestamp"]
        print(f"{abs_time:.1f}s: {n['text']}")
```
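If clip directory names are not guaranteed to sort lexically in take order, the same reconstruction can be driven by `sequence_info["clip_index"]` instead. A sketch (the `merged_narrations` helper is hypothetical):

```python
import json
from pathlib import Path

def merged_narrations(take_dir: str) -> list[tuple[float, str]]:
    """Merge all clip narrations in a take into one (absolute_time, text) list."""
    clips = [json.loads(p.read_text()) for p in Path(take_dir).glob("*/narration.json")]
    clips.sort(key=lambda c: c["sequence_info"]["clip_index"])  # robust clip ordering
    timeline = []
    for clip in clips:
        offset = clip["sequence_info"]["start_sec_in_take"]
        for n in clip["narrations"]:
            timeline.append((offset + n["timestamp"], n["text"]))
    return timeline
```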
## IMU Format

Each NPZ file contains 7 arrays at 800 Hz (left) or 1000 Hz (right):

- `timestamp`: int64, nanoseconds
- `accel_x`, `accel_y`, `accel_z`: float64, m/s²
- `gyro_x`, `gyro_y`, `gyro_z`: float64, rad/s

```python
import numpy as np

data = np.load("imu_left.npz")
# Duration: (data['timestamp'][-1] - data['timestamp'][0]) / 1e9 ≈ 30.0 s
# Gravity magnitude: ~9.8 m/s²
```
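The sanity checks hinted at in those comments can be made explicit. This sketch (the `imu_stats` helper is hypothetical, and synthetic data stands in for a real `imu_left.npz`) computes the effective sampling rate and the mean accelerometer magnitude, which for a mostly-static sensor should sit near gravity:

```python
import numpy as np

def imu_stats(data: dict) -> tuple[float, float]:
    """Return (sampling_rate_hz, mean_accel_magnitude_m_s2)."""
    ts = data["timestamp"].astype(np.float64)  # int64 nanoseconds
    duration_s = (ts[-1] - ts[0]) / 1e9
    rate_hz = (len(ts) - 1) / duration_s
    accel = np.stack([data["accel_x"], data["accel_y"], data["accel_z"]], axis=1)
    mag = np.linalg.norm(accel, axis=1).mean()
    return rate_hz, mag

# Synthetic stand-in for np.load("imu_left.npz"): 30 s at 800 Hz, gravity on z.
n = 30 * 800
fake = {
    "timestamp": (np.arange(n) * (1e9 / 800)).astype(np.int64),
    "accel_x": np.zeros(n),
    "accel_y": np.zeros(n),
    "accel_z": np.full(n, 9.81),
}
rate, g = imu_stats(fake)
print(f"{rate:.0f} Hz, {g:.2f} m/s^2")  # → 800 Hz, 9.81 m/s^2
```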
## Activities (EgoExo4D)
| Activity | Narrated clips | Raw clips |
|---|---|---|
| cooking | ~7,800 | ~3,600 |
| music | ~1,850 | ~1,030 |
| health | ~1,950 | ~560 |
| bike_repair | ~1,470 | ~260 |
| dance | ~1,200 | ~380 |
| rock_climbing | ~1,040 | ~295 |
| basketball | ~935 | ~316 |
| soccer | ~740 | ~244 |
## Participants (EgoLife)
| Participant | DAY1 (narrated) | DAY2-7 (raw) |
|---|---|---|
| A1_JAKE | ~780 clips | ~4,500 clips |
| A2_ALICE | ~780 clips | ~4,200 clips |
| A3_TASHA | ~780 clips | ~4,800 clips |
| A4_LUCIA | ~780 clips | ~4,600 clips |
| A5_KATRINA | ~780 clips | ~4,100 clips |
| A6_SHURE | ~780 clips | ~4,900 clips |
## Hardware

All data is recorded with Meta Aria glasses:

- Video: 1408×1408 fisheye RGB (downscaled to 768×768 for EgoLife, 448×448 for EgoExo4D)
- Audio: built-in microphone
- IMU: dual 6-axis sensors (left 800 Hz, right 1000 Hz)
- The same hardware is used across EgoLife and EgoExo4D, so models can transfer between the two sources
## Sources

- EgoLife: 6 participants × 7 days of continuous daily recording
- Ego-Exo4D: 4,168 takes of skilled activities (8 categories)
## License

Please refer to the original dataset licenses:

- EgoLife: S-Lab License 1.0
- Ego-Exo4D: Ego-Exo4D Dataset License