# HLTV CS2 POV Rendered Dataset

Rendered Counter-Strike 2 POV training clips derived from `blanchon/cs2_dataset_demo`.

Each row is one player-POV chunk of at most one minute. The default config,
`previews`, is a small path-only overlay+audio video view for browsing. The heavier
`chunks` config contains full-resolution video/audio as loose files referenced by
path, plus embedded input and world-state streams.

The raw HLTV `.dem` files stay in the source dataset; this repo stores only the
rendered training dataset.
## Configs

- `previews` (default): one low-resolution `preview_video` row per chunk with the input overlay baked in. Preview MP4s are stored as loose files and the Parquet stores only their relative paths, so filtering stays cheap. The `preview_path` column is relative to the preview Parquet directory (`previews/chunk_000123/preview.mp4`), while `preview_video.path` is an `hf://datasets/<repo>@main/...` URI so Hugging Face's dataset viewer can resolve it without inlining bytes.
- `matches`: one row per rendered `(match_id, map_name)` with team/event/date metadata.
- `rounds`: one row per rendered `(match_id, map_name, round)` with tick boundaries.
- `chunks`: full training rows with path-only `video`/`audio`, embedded `inputs`/`worlds`, and typed metadata.
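The `preview_path`-to-URI relationship above can be sketched with a plain string join; the base URI in this snippet is illustrative, not a guaranteed layout:

```python
from posixpath import join

def resolve_preview_uri(parquet_dir_uri: str, preview_path: str) -> str:
    """Join a preview Parquet directory URI with a row's relative preview_path."""
    return join(parquet_dir_uri, preview_path)

# Hypothetical shard directory URI; substitute the directory of the Parquet you read.
base = "hf://datasets/blanchon/cs2_dataset_render@main/data/match_id=2393343/map_name=de_ancient/player=0"
print(resolve_preview_uri(base, "previews/chunk_000123/preview.mp4"))
```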
## Filesystem Layout

```
data/
  match_id=<match_id>/map_name=<map_name>/player=<player>/
    chunks-preview-<machine>-<uuid>.parquet
    chunks-full-<machine>-<uuid>.parquet
    chunks/
      chunk_<ordinal>/
        video.mp4
        audio.wav
    previews/
      chunk_<ordinal>/preview.mp4
index/
  manifest-<machine>-<uuid>.parquet
  rounds-<machine>-<uuid>.parquet
state/
  processed/<input_metadata_stem>/<match_id>.json
  failed/...
  skipped/...
```
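As a rough helper, the loose-media location for a chunk can be derived from its partition keys. This mirrors the layout sketched above; the function name and the six-digit ordinal padding (seen in `previews/chunk_000123/preview.mp4`) are assumptions:

```python
def chunk_media_path(match_id: int, map_name: str, player: int,
                     ordinal: int, filename: str = "video.mp4") -> str:
    """Repo-relative path of a chunk's loose media file, per the layout above."""
    return (
        f"data/match_id={match_id}/map_name={map_name}/player={player}"
        f"/chunks/chunk_{ordinal:06d}/{filename}"
    )

print(chunk_media_path(2393343, "de_ancient", 0, 123))
```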
Every machine writes a unique `<machine>-<uuid>` shard, so parallel uploads do
not touch the same files. `state/processed` is written only after the shard
upload succeeds. Data files use Hive-style `key=value` directories for pruning.
The same keys are also stored inside the Parquets so the Hugging Face viewer and
`datasets.load_dataset("parquet", ...)` expose them even without applying Hive
partitioning. Parquets use best-effort bloom filters on hot filter columns.
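Because the partition keys live in the directory names, a client can prune shards before opening any Parquet. A minimal stdlib sketch (the helper itself is hypothetical, not part of this repo):

```python
def hive_keys(path: str) -> dict:
    """Extract Hive-style key=value partition segments from a shard path."""
    keys = {}
    for segment in path.split("/"):
        if "=" in segment:
            key, _, value = segment.partition("=")
            keys[key] = value
    return keys

shard = "data/match_id=2393343/map_name=de_ancient/player=0/chunks-full-host-abc.parquet"
print(hive_keys(shard))  # {'match_id': '2393343', 'map_name': 'de_ancient', 'player': '0'}
```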
## Stream With `datasets`

```python
from datasets import load_dataset

repo = "blanchon/cs2_dataset_render"

# Cheap default browsing view.
previews = load_dataset(repo, split="train", streaming=True)
for row in previews.take(3):
    print(row["match_id"], row["round"], row["player"], row["preview_video"])

# Full training rows. Use columns/filters to avoid pulling unused bytes.
chunks = load_dataset(
    repo,
    "chunks",
    split="train",
    streaming=True,
    columns=["video", "audio", "inputs", "worlds", "match_id", "round", "player"],
    filters=[("player", "==", 0)],
)
```
## Query With DuckDB

```sql
-- Match-level index only; no media bytes.
SELECT match_id, map_name, team1, team2, event, match_date
FROM 'hf://datasets/blanchon/cs2_dataset_render/index/manifest-*.parquet'
LIMIT 20;

-- Round timing index only.
SELECT match_id, map_name, round, round_duration_ticks
FROM 'hf://datasets/blanchon/cs2_dataset_render/index/rounds-*.parquet'
WHERE round_duration_ticks > 3000;

-- Preview rows for fast visual review.
SELECT match_id, map_name, round, player, chunk_index, primary_weapon
FROM 'hf://datasets/blanchon/cs2_dataset_render/data/**/chunks-preview-*.parquet'
WHERE player = 0
LIMIT 20;
```
## Partial Download

```shell
hf download blanchon/cs2_dataset_render --repo-type dataset \
  --include "index/*.parquet"
hf download blanchon/cs2_dataset_render --repo-type dataset \
  --include "data/match_id=2393343/**/chunks-preview-*.parquet"
hf download blanchon/cs2_dataset_render --repo-type dataset \
  --include "data/match_id=2393343/map_name=de_ancient/player=0/chunks-full-*.parquet"
```
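`huggingface_hub` matches `--include` patterns with `fnmatch`-style globs, where `*` also crosses `/` separators, so a pattern can be sanity-checked locally before kicking off a download. This check is an approximation of the CLI's behavior, not a guarantee:

```python
from fnmatch import fnmatch

shard = "data/match_id=2393343/map_name=de_ancient/player=0/chunks-preview-host-abc.parquet"
pattern = "data/match_id=2393343/**/chunks-preview-*.parquet"
print(fnmatch(shard, pattern))  # True
```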
## Row Semantics

- `player` is the stable canonical 0-9 player index for a match; `spec_slot` is the transient CS2 spectator slot used only to record the POV.
- Recording starts at the playable round start (`freeze_end_tick`) and stops at the player's death tick, or at round end for survivors.
- The production worker validates exactly 10 canonical players per round, unique spec-slot resolution, no recording past death, and valid video/audio/inputs/world sidecars before upload.
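Tick counts such as `round_duration_ticks` and the recording boundaries above convert to wall-clock time by dividing by the demo tick rate; the 64-tick rate below is an assumption about CS2 demos, not something this dataset guarantees:

```python
TICKRATE = 64  # assumed CS2 demo tick rate; adjust if the demos differ

def recording_seconds(freeze_end_tick: int, stop_tick: int) -> float:
    """Recording length from playable round start to death/round-end tick."""
    return (stop_tick - freeze_end_tick) / TICKRATE

print(recording_seconds(1000, 4840))  # 60.0
```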