
LOOPerSet: A Large-Scale Dataset for Data-Driven Polyhedral Optimization


Dataset at a Glance

LOOPerSet is a corpus of 28 million labeled compilation traces designed for machine learning research in compilers and systems. It maps synthetically generated loop nests and complex optimization sequences to ground-truth execution times measured on physical hardware. Transformation sequences were generated using a polyhedral compilation framework to ensure they were legal and semantics-preserving.

LOOPerSet was originally created to train the cost model for the LOOPer autoscheduler (PACT '25). For a full description of the generation process and a diversity analysis, please see our companion paper on arXiv.

What is inside?

Each data point represents a (Program, Schedule) → Performance tuple containing:

  • Source Code & IR: Raw Tiramisu (C++) generator code, lowered Halide IR, and ISL ASTs.
  • Structured Features: JSON-based representation of the program structure (loop hierarchy, memory access patterns, arithmetic expressions) for feature engineering.
  • Optimization Schedules: Sequences of code transformations (tiling, skewing, fusion, interchange, unrolling, parallelization, etc.) and the specific API commands used to apply them.
  • Ground Truth: Execution time (ms) measured over many runs on physical hardware.

Key Research Tasks

By exposing both low-level source code and high-level structural features, the dataset can be used for several research applications in machine learning and compilers:

  • Performance Prediction: The dataset's primary use case. Train a model to map a program's features and a candidate optimization schedule to a predicted performance value (e.g., execution time or speedup). This forms the core of a learned cost model for guiding compiler optimization.
  • Schedule Ranking: A learning-to-rank task where a model learns to order a set of candidate schedules for a given program based on their relative performance.
  • Compiler Heuristic Discovery: A data analysis task to discover new optimization heuristics by finding correlations between program features and the effectiveness of transformation sequences.
  • Program Representation Learning: Develop and evaluate novel methods for featurizing programs, computer code, and transformation schedules, such as learning dense vector embeddings.
  • Transfer Learning: A general-purpose cost model can be pre-trained on LOOPerSet and then fine-tuned on a much smaller, target-specific dataset, significantly reducing the data collection cost for new architectures.
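
As a concrete illustration of the schedule-ranking task above, the following minimal sketch orders one program's candidate schedules by their median measured time. The helper name and the choice of the median as the ranking score are illustrative, not part of the dataset; program stands for one parsed record (see How to Use below).

import numpy as np

def rank_schedules(program):
    """Return schedule indices ordered from fastest to slowest median time
    (a minimal listwise ranking target; `program` is one parsed record)."""
    scored = []
    for idx, schedule in enumerate(program.get('schedules_list', [])):
        times = schedule.get('execution_times')
        if not times:
            continue  # skip schedules without valid measurements
        scored.append((float(np.median(times)), idx))
    return [idx for _, idx in sorted(scored)]

# Usage:
# ranking = rank_schedules(program)
# print("Fastest schedule index:", ranking[0])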

Dataset Configurations

The dataset is provided in two structural variants (Standard and Compact) across two split configurations (Full and PACT '25), plus a supplementary source code archive.

Variants

  • Standard: Contains complete program information including raw C++ code, lowered IRs, and compile commands. Ideal for source code analysis and NLP tasks.
  • Compact: Optimized for speed and low memory usage. It retains all features needed for training performance models but excludes raw code strings and intermediate representations. Recommended for training cost models and performance prediction.

Splits

  • full: The complete ~28-million-point dataset (composed of ~220k programs), available as a single train split.
  • pact25: A 10-million-point subset of the full dataset, used to train the LOOPer cost model and pre-split into train (90%) and validation (10%) sets for reproducibility.

Generators Archive

A compressed tar.gz containing the raw ~220k Tiramisu C++ generator files (.cpp). These match the program_name keys in the dataset and are useful for static analysis or if you wish to re-compile/re-execute the programs yourself.

File Sizes

Configuration         | File Path                                                  | Compressed Size | Decompressed Size
----------------------|------------------------------------------------------------|-----------------|------------------
Full (Standard)       | data/full/looperset_v2_full.jsonl.gz                       | 6.0 GB          | 84 GB
Full (Compact)        | data/full/looperset_v2_full_compact.jsonl.gz               | 3.1 GB          | 21 GB
PACT Train (Standard) | data/pact25/looperset_v2_pact_train.jsonl.gz               | 2.0 GB          | 28 GB
PACT Train (Compact)  | data/pact25/looperset_v2_pact_train_compact.jsonl.gz       | 1.1 GB          | 6.8 GB
PACT Val (Standard)   | data/pact25/looperset_v2_pact_validation.jsonl.gz          | 236 MB          | 3.3 GB
PACT Val (Compact)    | data/pact25/looperset_v2_pact_validation_compact.jsonl.gz  | 121 MB          | 818 MB
Generators Source     | data/source/looperset_v2_generators.tar.gz                 | 34 MB           | 339 MB

How to Use

The dataset files are stored in .jsonl.gz format (gzipped JSON Lines), where each line is a complete JSON object representing one program.

Below we provide a simple method to download the files and stream the data in Python.

Installation

You will need the huggingface-hub library to download the files from the repository.

pip install huggingface-hub

Step 1: Download the Data Files

The available configurations and their file paths are listed in the File Sizes table above.

First, use the hf_hub_download function to fetch the dataset files you need.

from huggingface_hub import hf_hub_download
import os

REPO_ID = "Mascinissa/LOOPerSet"

# --- Option 1: Download the Full Compact (Recommended for Speed) ---
full_compact_path = hf_hub_download(
    repo_id=REPO_ID,
    filename="data/full/looperset_v2_full_compact.jsonl.gz",
    repo_type="dataset",
)
print(f"Full Compact dataset downloaded to: {full_compact_path}")


# --- Option 2: Download the Standard PACT '25 splits ---
pact25_train_path = hf_hub_download(
    repo_id=REPO_ID,
    filename="data/pact25/looperset_v2_pact_train.jsonl.gz",
    repo_type="dataset",
)
pact25_validation_path = hf_hub_download(
    repo_id=REPO_ID,
    filename="data/pact25/looperset_v2_pact_validation.jsonl.gz",
    repo_type="dataset",
)
print(f"PACT'25 train split downloaded to: {pact25_train_path}")
print(f"PACT'25 validation split downloaded to: {pact25_validation_path}")

Step 2: Stream and Parse the Data

Due to the large size of the dataset, we recommend streaming the data using a generator function.

The following function reads a .jsonl.gz file line-by-line.

import gzip
import json

def stream_jsonl_gz(file_path):
    """
    Generator function to stream and parse a .jsonl.gz file.
    Yields one JSON object (as a Python dict) at a time.
    """
    with gzip.open(file_path, 'rt', encoding='utf-8') as f:
        for line in f:
            yield json.loads(line)

# --- Example: Iterate through the pact25 training set ---
# (Assuming you have run the download code from Step 1)
data_stream = stream_jsonl_gz(pact25_train_path)

print("First 3 programs from the stream:")
for i, program in enumerate(data_stream):
    if i >= 3:
        break
    print(f"\n--- Program {i+1}: {program['program_name']} ---")
    print(f"  Initial time: {program['initial_execution_time']:.4f} ms")
    print(f"  Number of schedules: {len(program['schedules_list'])}")

Example 1: Generating Training Examples

Each record in LOOPerSet represents a single program. This program contains a list of all schedules (optimization sequences) that were evaluated for it. To create training examples, one must iterate through each program and then through its schedules_list.

Here is how you can use the streamer to create (program, schedule, performance) tuples.

import numpy as np

# (pact25_train_path is defined in the download step)
data_stream = stream_jsonl_gz(pact25_train_path)

training_examples = []

for processed_count, program in enumerate(data_stream):
    # Iterate over the first 100 programs only
    if processed_count >= 100:
        break

    program_features = program['program_annotation']
    initial_time = program['initial_execution_time']
    if initial_time is None:
        continue  # skip programs whose baseline run failed (see Example 2)

    for schedule in program['schedules_list']:
        schedule_features = schedule # Or a subset of its fields
        
        # The label is the median of the 30 execution times
        # Here we compute speedup over the un-optimized version
        median_time = np.median(schedule['execution_times'])
                 
        speedup = initial_time / median_time
        
        training_examples.append({
            "program_features": program_features,
            "schedule_features": schedule_features,
            "speedup": speedup
        })

print(f"Created {len(training_examples)} tuples from {processed_count} programs.")

Example 2: Finding the Best Schedule per Program

The following example shows how to find the best speedup achieved for each program:

import numpy as np

# (pact25_train_path is defined in the download step)
data_stream = stream_jsonl_gz(pact25_train_path)

# Iterate through a few programs and find the best schedule for each
num_programs_to_process = 5

for processed_count, program in enumerate(data_stream):
    if processed_count >= num_programs_to_process:
        break

    program_name = program['program_name']
    initial_time = program['initial_execution_time']

    # Handle cases where the initial run might have failed
    if initial_time is None:
        print(f"\nProgram: {program_name} has no initial time. Skipping.")
        continue

    best_schedule_info = None
    min_time = initial_time

    for schedule in program['schedules_list']:
        # Ensure execution times are valid before calculating median
        if not schedule.get('execution_times'):
            continue
            
        current_time = np.median(schedule['execution_times'])
        
        if current_time < min_time:
            min_time = current_time
            best_schedule_info = schedule['schedule_str']

    speedup = initial_time / min_time if min_time > 0 else float('inf')

    print(f"\nProgram: {program_name}")
    print(f"  - Initial Time: {initial_time:.4f} ms")
    if best_schedule_info:
        print(f"  - Best Found Time: {min_time:.4f} ms (Speedup: {speedup:.2f}x)")
        print(f"  - Best Schedule: {best_schedule_info}")
    else:
        print("  - No better schedule found in the dataset.")
    

Dataset Structure

Each row in the dataset represents a single synthetic program and contains all optimization schedules explored for it.

Click to see a sample JSONL entry
{
  "program_name": "function12345",
  "program_annotation": {
    "memory_size": 4.19,
    "iterators": { "...": "..." },
    "computations": { "...": "..." },
    "buffers": { "...": "..." }
  },
  "Tiramisu_cpp": "// raw tiramisu generator source code ...",
  "initial_execution_time": 1393.751,
  "schedules_list": [
    {
      "transformations_list": [
        {"type": "interchange", "loop_levels": [0,1], "parameters": [], "computations": ["comp00"]},
        {"type": "tiling", "loop_levels": [1,2], "parameters": [32,32], "computations": ["comp01","comp02"]}
      ],
      "schedule_str": "I(L0,L1,comps=[comp00])|T2(L1,L2,32,32,comps=[comp02,comp02])",
      "legacy_schedule_str": "I({C0},L0,L1)T2({C1,C2},L2,L3,32,32)...",
      "ISL_AST": "..." ,
      "Halide_IR": "// lowered Halide IR ...",
      "Tiramisu_transform_commands": "comp01.tile(...); comp00.interchange(...); ...",
      "execution_times": [451.234, 465.112, 458.543, ...]
    },
    { /* ... another schedule object ... */ }
  ]
}

Top-Level Fields

  • program_name (string): A unique identifier for the synthetic program (e.g., "function684979"). This name is also used to locate the corresponding generator file in the source archive: <program_name>_generator.cpp.
  • program_annotation (dict): A detailed, structured representation of the original, untransformed program. This serves as the primary source for program feature engineering.
  • Tiramisu_cpp (string): Raw Tiramisu generator C++ source code of the program before any schedule transformations. (Excluded in Compact version)
  • initial_execution_time (float): The median execution time (in ms) of the program before any optimizations.
  • schedules_list (list of dicts): A list of all optimization sequences explored for this program. Each dictionary in the list details a unique schedule and its performance.
  • exploration_trace (dict): Internal search logs. (Excluded in Compact version).
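
Because the Compact variant omits several of the fields listed above, loaders should treat those fields as optional. The sketch below is a minimal illustration (the helper name is hypothetical): it summarizes one parsed record and degrades gracefully when the large text fields are missing or empty.

def summarize_record(program):
    """Print a short summary of one parsed record, tolerating Compact
    records that lack the large text fields (a minimal sketch)."""
    print(f"Program: {program['program_name']}")
    print(f"  Initial time (ms): {program['initial_execution_time']}")
    print(f"  Schedules explored: {len(program['schedules_list'])}")
    # Fields excluded from the Compact variant may be absent or empty
    if program.get('Tiramisu_cpp'):
        print(f"  Generator source: {len(program['Tiramisu_cpp'])} characters")
    else:
        print("  Generator source: not included (Compact variant)")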

The program_annotation Dictionary

This object contains all the static information about the source program.

  • memory_size (float): The total memory footprint of all buffers in megabytes.
  • iterators (dict): Contains the full loop nest hierarchy of the program. Each key is an iterator name (e.g., i0), and the value contains its lower_bound, upper_bound, parent_iterator, and child_iterators.
  • computations (dict): Contains all computational statements. Each key is a computation name (e.g., comp00), and the value contains its properties, including:
    • iterators: The list of loops this computation is nested in.
    • write_access_relation: A string representing the write access pattern.
    • accesses: A list of all read memory accesses.
    • expression_representation: A tree-based representation of the arithmetic expression.
  • buffers (dict): Contains metadata for all data arrays (buffers) used in the program, including their dimensions, data types, and whether they are inputs or outputs.
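
To make this structure concrete, here is a minimal sketch that derives a few coarse scalar features from program_annotation. The field names follow the description above; the particular aggregations (e.g., taking the deepest computation's nesting as the loop depth) are illustrative choices, not the features used by LOOPer.

def simple_program_features(program):
    """Extract a handful of coarse features from program_annotation (a sketch)."""
    annot = program['program_annotation']
    computations = annot['computations']

    # Deepest loop nesting, measured as the longest iterator list
    # attached to any computation
    max_depth = max((len(c['iterators']) for c in computations.values()), default=0)

    return {
        'memory_size_mb': annot['memory_size'],
        'num_loops': len(annot['iterators']),
        'num_computations': len(computations),
        'max_loop_depth': max_depth,
    }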

The schedules_list Entries

Each element in this list represents one complete optimization schedule applied to the program.

  • execution_times (list of float): A list of 30 raw execution time measurements (in ms) for this specific schedule. The ground-truth label for ML models is typically derived from this list (e.g., by taking the median).
  • transformations_list (list of dicts): A structured list where each element describes a specific transformation step (see format below).
  • schedule_str (string): A human-readable summary string of the transformations applied in this schedule (see format below).
  • legacy_schedule_str (string): Legacy schedule string found in older versions of the dataset. (Excluded in Compact version).
  • ISL_AST (string): An ISL (Integer Set Library) abstract syntax tree representing the loop nest after the transformation is applied. (Excluded in Compact version).
  • Halide_IR (string): Generated/lowered Halide IR after applying the transformations. (Excluded in Compact version).
  • Tiramisu_transform_commands (string): The actual Tiramisu C++ API commands used to apply this schedule. (Excluded in Compact version).

Transformation Object Format (transformations_list)

Each item in the transformations_list is a dictionary describing a single step:

{
  "type": "String",              // e.g., "skewing", "interchange", "tiling", etc.
  "loop_levels": [Integers],      // List of loop levels involved (e.g., [0,1])
  "parameters": [Integers],       // Numeric parameters (tiling factors, skewing coefficients)
  "computations": ["String"]     // List of computation IDs affected
}
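
This structured format makes it straightforward to analyze schedules without any string parsing. As a small illustration (the helper name is hypothetical), the sketch below tallies how often each transformation type appears across one parsed record's schedules.

from collections import Counter

def count_transformation_types(program):
    """Count transformation types across all schedules of one record."""
    counts = Counter()
    for schedule in program['schedules_list']:
        for transform in schedule.get('transformations_list', []):
            counts[transform['type']] += 1
    return counts

# Usage, assuming program is one parsed record:
# print(count_transformation_types(program))
# e.g., Counter({'tiling': 12, 'interchange': 7, ...})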

Schedule String Format (schedule_str)

A schedule is represented as a pipe-separated list of transformations:

<T1>|<T2>|<T3>|...

Supported transformations:

  • S(LX,LY,v1,v2,comps=[...]): skewing loop levels LX and LY with factors v1, v2
  • I(LX,LY,comps=[...]): interchange loop levels LX and LY
  • R(LX,comps=[...]): reversal of loop level LX
  • P(LX,comps=[...]): parallelization of loop level LX
  • T2(LX,LY,v1,v2,comps=[...]): 2D tiling of loop levels LX,LY with factors v1,v2
  • T3(LX,LY,LZ,v1,v2,v3,comps=[...]): 3D tiling of loop levels LX,LY,LZ with factors v1,v2,v3
  • U(LX,v,comps=[...]): unrolling of loop level LX with factor v
  • F(LX,comps=[...]): fusion at loop level LX for the computations listed in comps
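
As a minimal parsing sketch for this format (the function name is illustrative), the snippet below splits a schedule_str into its individual transformation tokens and reads off each token's opcode (e.g., I, T2, U).

def split_schedule_str(schedule_str):
    """Split a schedule_str into (opcode, token) pairs (a minimal sketch)."""
    if not schedule_str:
        return []
    parsed = []
    for token in schedule_str.split('|'):
        opcode = token.split('(', 1)[0]  # e.g., 'I', 'T2', 'U'
        parsed.append((opcode, token))
    return parsed

# Example, using the schedule string from the sample entry above:
# split_schedule_str("I(L0,L1,comps=[comp00])|T2(L1,L2,32,32,comps=[comp01,comp02])")
# -> [('I', 'I(L0,L1,comps=[comp00])'),
#     ('T2', 'T2(L1,L2,32,32,comps=[comp01,comp02])')]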

C++ Source Code Archive

This repository also includes a compressed archive containing the raw Tiramisu generator sources for all programs (data/source/looperset_v2_generators.tar.gz). These are provided to enable researchers to perform static program analysis, reproduce results by re-executing schedules on different hardware architectures, or extend the dataset by collecting completely new schedules.

  • Content: Contains ~220,000 .cpp files.
  • Filename Format: <program_name>_generator.cpp (e.g., function12345_generator.cpp).
  • Relation to JSON: The content of these files is identical to the string found in the Tiramisu_cpp field within the JSON dataset. The archive is provided purely for convenience.

How to extract specific files in Python:

import tarfile
from huggingface_hub import hf_hub_download

# Download the generators archive (path from the File Sizes table above)
source_code_path = hf_hub_download(
    repo_id="Mascinissa/LOOPerSet",
    filename="data/source/looperset_v2_generators.tar.gz",
    repo_type="dataset",
)

# Extract a specific generator file from the archive
with tarfile.open(source_code_path, "r:gz") as tar:
    # Example: extract function793216_generator.cpp
    member = tar.getmember("looperset_generators/function793216_generator.cpp")
    f = tar.extractfile(member)
    content = f.read().decode('utf-8')
    print(content)

Dataset Creation

Generation Pipeline

The data was generated using a three-stage pipeline:

  1. Synthetic Program Generation: A randomized generator created a diverse corpus of polyhedral programs with varied loop structures, memory access patterns, and computational complexities.
  2. Transformation Space Sampling: We used the beam search algorithm from the LOOPer autoscheduler to explore and sample meaningful optimization sequences for each program. This "relevance-guided" strategy ensures the dataset focuses on transformations a real-world compiler would consider.
  3. Performance Label Generation: Each (program, schedule) pair was compiled with Tiramisu and executed on a dual-socket Intel Xeon E5-2695 v2 system. Each version was run up to 30 times to collect a stable distribution of execution times.

Diversity Analysis

A quantitative diversity analysis was performed to validate the dataset's quality. Using normalized Tree Edit Distance (nTED) to measure structural similarity between programs, the analysis showed that:

  1. LOOPerSet does not contain any accidental replications of PolyBench benchmarks.
  2. The dataset covers a broader and more varied structural space than existing benchmark suites.

Full details are available in our companion paper.

Citation Information

If you use this dataset, please cite the following paper:

@misc{looperset,
      title={LOOPerSet: A Large-Scale Dataset for Data-Driven Polyhedral Compiler Optimization}, 
      author={Massinissa Merouani and Afif Boudaoud and Riyadh Baghdadi},
      year={2025},
      eprint={2510.10209},
      archivePrefix={arXiv},
      primaryClass={cs.PL},
      url={https://arxiv.org/abs/2510.10209}, 
}

If you are building upon or comparing against the LOOPer cost model, please cite our PACT '25 paper:

@INPROCEEDINGS{looper,
  author={Merouani, Massinissa and Boudaoud, Afif and Aouadj, Iheb Nassim and Tchoulak, Nassim and Bernou, Islem Kara and Benyamina, Hamza and Tayeb, Fatima Benbouzid-Si and Benatchba, Karima and Leather, Hugh and Baghdadi, Riyadh},
  booktitle={2025 34th International Conference on Parallel Architectures and Compilation Techniques (PACT)}, 
  title={LOOPer: A Learned Automatic Code Optimizer For Polyhedral Compilers}, 
  year={2025},
  pages={201-215},
  keywords={Deep learning;Costs;Codes;Program processors;Predictive models;Space exploration;Parallel architectures;Optimization;Pluto;Faces;Compilers;Optimization;Program transformation;Machine learning;Modeling techniques},
  doi={10.1109/PACT65351.2025.00028}}

License

This dataset is licensed under the Creative Commons Attribution 4.0 International (CC-BY 4.0) License.

Versioning / Changelog

v2.0 Schema Update & Compact Splits

  • Source Code Availability: Added raw Tiramisu (C++) generator code to both the JSON dataset (Tiramisu_cpp field) and as a standalone downloadable archive.
  • Compact Mode: Introduced "Compact" dataset configurations that strip out large text fields (Source, IR, AST) to optimize for speed when training standard performance models.
  • Schema Update: Completely restructured the schedules_list. Replaced ad-hoc lists with a structured transformations_list dictionary format and standardized the schedule string representation.
  • PACT '25 Update: Updated citation information for the published LOOPer paper.

v1.0 (Initial Release)

  • Original dataset release containing ~220k programs and 28M schedules.
  • Includes full and pact25 split configurations.