# MicroAGI00: MicroAGI Egocentric Dataset (2025)
> **License:** MicroAGI00 Open Use, No-Resale v1.0 (see `LICENSE`).
> **No resale:** You may not sell or paywall this dataset or derivative data. Trained models/outputs may be released under any terms.
## What this is
MicroAGI00 is a large-scale **egocentric RGB+D** dataset of **human household manipulation**, aligned with the task style of the Stanford BEHAVIOR benchmark: [https://behavior.stanford.edu/challenge/index.html](https://behavior.stanford.edu/challenge/index.html)
It’s designed to be “robotics-ready” at the *signal level*: synchronized streams, clean packaging, strong QC, and consistent structure—so you can spend time modeling, not cleaning data.
## Quick facts
* **Modalities:** synchronized RGB + 16-bit depth + IMU
* **Resolution & rate (RGB):** 1920×1080 @ 30 FPS (in MCAP)
* **Depth:** 16-bit, losslessly compressed inside MCAP
* **Scale:** ≈1,000,000 synchronized RGB–depth frame pairs
* **Container:** `.mcap` (all signals)
* **Previews:** for a subset of sequences, `.mp4` previews (annotated overlays / visualized depth for quick review)
> Note: MP4 previews may be lower quality than MCAP due to compression and post-processing. Research use should read from MCAP.
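Depth frames are 16-bit and losslessly compressed, but this card does not pin down the exact message encoding. A minimal decoding sketch, *assuming* each depth payload is PNG-compressed image bytes (verify the real schema with `mcap info` before relying on this):

```python
# Hedged sketch: decode one depth message, ASSUMING its payload is a
# losslessly PNG-compressed 16-bit image. Check the actual schema first.
import numpy as np
import cv2  # pip install opencv-python

def decode_depth(data: bytes) -> np.ndarray:
    buf = np.frombuffer(data, dtype=np.uint8)
    depth = cv2.imdecode(buf, cv2.IMREAD_UNCHANGED)  # keeps 16-bit values
    assert depth is not None and depth.dtype == np.uint16, "unexpected encoding"
    return depth  # uint16 array, H×W
```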
## What’s included per sequence
* One large **MCAP** file containing:
* RGB frames (1080p/30 fps)
* 16-bit depth stream (lossless compression)
* IMU data (as available)
* **MP4 preview videos** (subset of sequences):
* RGB preview (for quick visual QA)
* Visualized depth preview (for quick visual QA)
## Labels / annotations
Demo video with overlays: [https://www.youtube.com/watch?v=4-RVKBj2bcw](https://www.youtube.com/watch?v=4-RVKBj2bcw)
The base MicroAGI00 release consists **primarily of raw synchronized signals** (RGB-D-IMU) and **does not ship with full-coverage labels**.
If you’ve seen demo videos with overlays (such as the one linked above): those demonstrate **what MicroAGI can produce** as an add-on (see below), not what ships with every sequence in the base release.
## Data quality and QC philosophy
MicroAGI00 is built around *trustworthy signal integrity*:
* Tight **RGB↔Depth synchronization** checks
* Automated detection and scoring of:
* frame drops / time discontinuities
* motion blur / exposure failures
* depth sanity (range/invalid ratios), compression integrity
* IMU continuity where available
* Consistent trimming and packaging, with **sequence-level quality ratings** to support filtering (e.g., “clean only” training vs. “wild” robustness training)
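As one illustration of these checks, here is a hedged sketch of an RGB↔Depth sync audit built on the `mcap` Python reader. The topic names (`/rgb`, `/depth`) and index-based pairing are assumptions for illustration, not the release's actual layout:

```python
# Hypothetical RGB<->Depth sync audit: pair frames by index and report the
# largest timestamp gap. Assumes equal frame counts and in-order streams;
# topic names are placeholders (list the real ones with `mcap info`).
from mcap.reader import make_reader

def max_sync_gap_ns(path: str, rgb: str = "/rgb", depth: str = "/depth") -> int:
    times = {rgb: [], depth: []}
    with open(path, "rb") as f:
        for _, channel, message in make_reader(f).iter_messages(topics=[rgb, depth]):
            times[channel.topic].append(message.log_time)  # nanoseconds
    return max(abs(a - b) for a, b in zip(times[rgb], times[depth]))
```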
## Diversity and covariate-shift robustness
MicroAGI data is captured across **Europe and Asia**, intentionally spanning environments that create real-world distribution shift:
* different homes, layouts, lighting regimes, materials
* different hands and skin tones, tool choices, cultural cooking/object priors
* varied camera motions and operator styles
The intent is data that stays useful under **covariate shift**, for models that need to generalize beyond a single capture domain.
## Optional derived signals (available on request)
If you want more than raw RGB-D-IMU, MicroAGI can deliver *derived outputs* on top of the same sequences (or on newly captured data), such as:
* **Ego-motion / trajectories** (VIO-style)
* **SLAM reconstructions** (maps, trajectories, keyframes)
* **Accurate body pose estimation**
* **State-of-the-art 3D hand landmarks** (true 3D, not just 2D reprojections)
* Additional QA layers and consistency checks tailored to your training setup
These are provided as a service deliverable (and can be scoped to subsets / key frames / full coverage), depending on your needs.
## Data access and structure
* Each top-level sample folder typically contains:
* an MCAP “raw dump” folder
* an MCAP “processed/curated” folder (when applicable)
* an `mp4/` previews folder (when available)
All authoritative signals are inside the **MCAP**. Use MP4s for fast browsing only.
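An illustrative helper for locating those authoritative MCAP files; `rglob` avoids hard-coding the raw/processed folder names, which may vary, and `"sample_0001"` is a hypothetical folder name:

```python
# Find every .mcap file beneath a top-level sample folder, whatever the
# exact raw/processed subfolder naming turns out to be.
from pathlib import Path

def find_mcaps(sample_dir: str) -> list[Path]:
    return sorted(Path(sample_dir).rglob("*.mcap"))

for mcap_path in find_mcaps("sample_0001"):  # hypothetical sample folder
    print(mcap_path)
```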
## Getting started
* Inspect an MCAP: `mcap info your_sequence.mcap`
* Extract messages: `mcap cat --topics <topic> your_sequence.mcap > out.bin`
* Python readers: `pip install mcap` (see the MCAP Python docs) or any MCAP-compatible tooling.
Typical topics include RGB, depth, IMU, and any additional channels you may have requested.
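For programmatic access, a minimal reading sketch with the official `mcap` Python package; the `/depth` topic name is a placeholder, so list the real topics with `mcap info` first:

```python
# Minimal MCAP reading sketch (pip install mcap). Topic names below are
# placeholders; inspect the file summary for the real ones.
from mcap.reader import make_reader

with open("your_sequence.mcap", "rb") as f:
    reader = make_reader(f)

    # Overview: topics and encodings from the file summary (if indexed).
    summary = reader.get_summary()
    if summary is not None:
        for channel in summary.channels.values():
            print(channel.topic, channel.message_encoding)

    # Stream raw messages from a placeholder depth topic.
    for schema, channel, message in reader.iter_messages(topics=["/depth"]):
        print(channel.topic, message.log_time, len(message.data))
```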
## Intended uses
* Policy and skill learning (robotics / VLA)
* Action detection and segmentation
* Hand/pose estimation and grasp analysis (raw or with add-ons)
* Depth-based reconstruction, SLAM, scene understanding
* World-model pre/post training
* Robustness testing under real distribution shift
## Data rights, consent, and licensing options
All capture is performed with **documented legal consent**, and **data rights documentation** is attached. Depending on the engagement, rights can be structured as:
* **non-exclusive** usage rights (typical dataset access), or
* **exclusive** rights for specific task scopes / environments / cohorts (custom programs)
## Services and custom data
MicroAGI provides on-demand:
* New data capture via our operator network (Europe + Asia)
* ML-enhanced derived signals (ego-motion, pose, hands, SLAM)
* Real-to-Sim pipelines and robotics-ready packaging
* Custom QC gates to match your training/eval stack
Typical lead times: under two weeks (up to four weeks for large jobs).
## How to order more
Email `data@micro-agi.com` with:
* Task description
* Desired hours or frame counts
* Target environment constraints (if any)
* Rights preference (exclusive / non-exclusive)
* Proposed price
We reply within one business day with lead time and final pricing.
Questions: `info@micro-agi.com`
## License
This dataset is released under the **MicroAGI00 Open Use, No-Resale License v1.0** (custom). See [`LICENSE`](./LICENSE). Redistribution must be free-of-charge under the same license.
Required credit: **"This work uses the MicroAGI00 dataset (MicroAGI, 2025)."**
## Attribution reminder
Public uses of the Dataset or Derivative Data must include the credit line above in a reasonable location for the medium (papers, repos, product docs, dataset pages, demo descriptions). Attribution is appreciated but not required for Trained Models or Outputs.