# ORBench Test Data

Test data for ORBench, a benchmark for evaluating LLMs on CPU-to-CUDA code acceleration.

This repo hosts only the input data and expected outputs. The benchmark harness (CPU references, task_io adapters, evaluation scripts) lives at github.com/YOURNAME/ORBench.
## Contents

| File | Size | Description |
|---|---|---|
| `small.tar.gz` | ~100 MB | Small inputs for smoke tests (43 tasks) |
| `medium.tar.gz` | ~3.5 GB | Medium inputs, used for the main leaderboard |
| `large.tar.gz` | ~8 GB | Large inputs for stress testing |
| `manifest.json` | — | SHA-256 checksums + task metadata |
Each tarball extracts to `<task>/data/<size>/` containing:

- `input.bin`: binary input tensors (ORBench v2 format)
- `expected_output.txt`: reference output from the CPU baseline
- `cpu_time_ms.txt`: CPU baseline wall time
- `requests.txt`: per-call queries (if applicable)
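The extracted layout can be read with a short helper. The sketch below is an illustration, not part of the official harness: it assumes `cpu_time_ms.txt` holds a single number and `expected_output.txt` is plain text; the ORBench v2 binary format of `input.bin` is not documented here, so it is left as opaque bytes.

```python
# Hedged sketch: load one extracted task directory (<task>/data/<size>/).
# input.bin is kept as raw bytes because its v2 format is not specified here.
from pathlib import Path


def load_task(task_dir: str) -> dict:
    d = Path(task_dir)
    task = {
        "input_bytes": (d / "input.bin").read_bytes(),  # opaque binary payload
        "expected_output": (d / "expected_output.txt").read_text().strip(),
        "cpu_time_ms": float((d / "cpu_time_ms.txt").read_text().strip()),
    }
    req = d / "requests.txt"  # optional per-call queries
    if req.exists():
        task["requests"] = req.read_text().splitlines()
    return task
```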
## Usage

```bash
# Clone the harness repo
git clone https://github.com/YOURNAME/ORBench.git
cd ORBench

# Install HF client
pip install huggingface_hub

# Download data
python3 scripts/download_data.py small   # smoke
python3 scripts/download_data.py medium  # main leaderboard
python3 scripts/download_data.py large   # stress test
python3 scripts/download_data.py all     # everything

# Run the benchmark
python3 -m framework.run_all_tasks \
  --models gemini-3.1-pro-preview-openrouter \
  --levels 3 --sizes medium --yes
```
## Verification

```bash
python3 scripts/download_data.py medium --verify
```
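Archives can also be checked by hand against the checksums in `manifest.json`. The sketch below is an assumption-laden illustration: it assumes the manifest maps filenames to objects with a `"sha256"` key, which may not match the real schema.

```python
# Hedged sketch: recompute SHA-256 checksums and compare with manifest.json.
# Assumed manifest schema: {"small.tar.gz": {"sha256": "..."}, ...}
import hashlib
import json
from pathlib import Path


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large tarballs don't load into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify(manifest_path: str, data_dir: str) -> bool:
    manifest = json.loads(Path(manifest_path).read_text())
    ok = True
    for name, meta in manifest.items():
        if sha256_of(str(Path(data_dir) / name)) != meta["sha256"]:
            print(f"MISMATCH: {name}")
            ok = False
    return ok
```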
## Citation

If you use ORBench, please cite:

```bibtex
@misc{orbench2026,
  title={ORBench: Evaluating LLMs on CPU-to-CUDA Code Acceleration},
  author={...},
  year={2026},
  url={https://github.com/YOURNAME/ORBench}
}
```