# AAA-HiTSR Dataset

A comprehensive multimodal time series understanding and reasoning dataset with multiple complexity levels.

## Overview

This dataset contains time series data paired with visual representations and natural-language instructions for time series analysis tasks. It is organized into three levels of complexity with corresponding train/test splits.
## Dataset Statistics

**Level 1 (Basic):** Single-series analysis (min/max detection, trend analysis)
- Training samples: ~27,000
- Test samples: multiple variants (minmax, multiseries, startend, subseries)

**Level 2 (Intermediate):** Multi-series analysis and relationships
- Training samples: ~40,000+
- Test categories: global, local, numerical

**Level 3 (Advanced):** Complex reasoning and annotations
- Training samples: ~1,000+
- Test samples: a single final test set
## Data Structure

Each sample contains:

```json
{
  "id": 1,
  "timeseries": [[float_values]],
  "prompt": "Multi-modal prompt with image tags",
  "answer": "Model answer to the question",
  "2img_prompt": "Detailed instructions for image interpretation",
  "prompt_1": "Variant 1 of the question",
  "prompt_2": "Variant 2 of the question",
  "answer_1": "Answer in format 1",
  "answer_2": "Answer in format 2",
  "images": ["image_url_1", "image_url_2"]
}
```
### Key Fields

- `id`: Unique identifier for each sample
- `timeseries`: Array of time series values (floats)
- `prompt`: Main question/instruction with image references (e.g., `<image>`)
- `answer`: Expected model response
- `2img_prompt`: Detailed instructions for interpreting high-density numeric grids
- `prompt_1`/`prompt_2`: Alternative question formats
- `answer_1`/`answer_2`: Alternative answer formats
- `images`: URLs to the corresponding plot and numeric grid images
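As a quick orientation, the fields above can be read with nothing more than the standard `json` module. The record below is a hypothetical example (all values are invented for illustration); only the schema follows the Data Structure section.

```python
import json

# Hypothetical sample record; field values are invented, the schema mirrors
# the "Data Structure" section above.
sample_json = """
{
  "id": 1,
  "timeseries": [[0.1, 0.5, 0.3]],
  "prompt": "What is the maximum value of the series? <image>",
  "answer": "0.5",
  "images": ["https://example.com/plot_1.png", "https://example.com/num_1.png"]
}
"""

sample = json.loads(sample_json)
series = sample["timeseries"][0]  # timeseries is a list of series
print(sample["id"], len(series), max(series))  # 1 3 0.5
```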
## Image Generation

The `images` field in each sample contains URLs to visual representations of the time series data. To generate these images and populate the `images` field:

1. Clone the data conversion repository:

```bash
git clone https://github.com/RainingNovember/LLaTiSA.git
cd LLaTiSA/data_convert
```

2. Install the required dependencies:

```bash
pip install -r requirements.txt
```

3. Run the appropriate conversion script for the dataset level.

For Level 1 datasets:

```bash
python data_convert_l1.py \
    --input /path/to/level1_train.json \
    --output /path/to/level1_train_with_images.json \
    --plot_dir ./images/plots \
    --num_dir ./images/numeric \
    --plot_prefix "https://your-hosting.com/images/plots" \
    --num_prefix "https://your-hosting.com/images/numeric" \
    --sample_ratio 1.0
```

For Level 2 datasets:

```bash
python data_convert_l2.py \
    --input /path/to/level2_train.json \
    --output /path/to/level2_train_with_images.json \
    --plot_dir ./images/plots \
    --num_dir ./images/numeric \
    --plot_prefix "https://your-hosting.com/images/plots" \
    --num_prefix "https://your-hosting.com/images/numeric" \
    --sample_ratio 1.0
```

For Level 3 datasets:

```bash
python data_convert_l3.py \
    --input /path/to/level3_train.json \
    --output /path/to/level3_train_with_images.json \
    --plot_dir ./images/plots \
    --num_dir ./images/numeric \
    --plot_prefix "https://your-hosting.com/images/plots" \
    --num_prefix "https://your-hosting.com/images/numeric" \
    --sample_ratio 1.0
```

4. Upload the generated images to a hosting service (e.g., GitHub, Imgur, or cloud storage) and update the URL prefixes in the commands above.

The output JSON files will have the `images` field populated with the correct URLs:

```json
{
  "images": [
    "https://your-hosting.com/images/plots/plot_1.png",
    "https://your-hosting.com/images/numeric/num_1.png"
  ]
}
```
Notes:
- Each script generates two types of images per sample: trend plots (`plot_*.png`) and high-density numeric grids (`num_*.png`)
- Use `--sample_ratio 1.0` to process all samples (the default is 0.5)
- The scripts automatically update the `prompt` and `answer` fields based on the dataset level requirements
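To make the role of the URL-prefix flags concrete, here is a hedged sketch of how a sample id maps to the two hosted URLs. The function name is an assumption; the `plot_<id>.png` / `num_<id>.png` naming scheme follows the convention shown above.

```python
# Sketch (names assumed): map a sample id plus the --plot_prefix and
# --num_prefix values to the two URLs written into the "images" field.
def image_urls(sample_id, plot_prefix, num_prefix):
    return [
        f"{plot_prefix}/plot_{sample_id}.png",
        f"{num_prefix}/num_{sample_id}.png",
    ]

urls = image_urls(
    1,
    "https://your-hosting.com/images/plots",
    "https://your-hosting.com/images/numeric",
)
print(urls[0])  # https://your-hosting.com/images/plots/plot_1.png
```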
## File Organization

```
AAA-HiTSR/
├── Train/
│   ├── l1_train_llatisa.json   (Level 1: 27,000 samples)
│   ├── l2_train_llatisa.json   (Level 2: 40,000+ samples)
│   └── l3_train_llatisa.json   (Level 3: 1,000+ samples)
└── Test/
    ├── l1_test_startend.json
    ├── l1_test_minmax.json
    ├── l1_test_subseries.json
    ├── l1_test_multiseries.json
    ├── l2_test_global.json
    ├── l2_test_local.json
    ├── l2_test_numerical.json
    └── l3_test.json
```
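The test-split filenames follow a regular `l<level>_test_<variant>.json` pattern (with Level 3 as the single exception), so they can be enumerated programmatically. A minimal sketch, assuming the layout above:

```python
# Sketch: derive the expected test-split filenames from the layout above.
# Nothing here reads from disk; the variant lists mirror the file tree.
LEVEL_VARIANTS = {
    1: ["startend", "minmax", "subseries", "multiseries"],
    2: ["global", "local", "numerical"],
}

def test_split_files():
    names = [
        f"l{level}_test_{variant}.json"
        for level, variants in LEVEL_VARIANTS.items()
        for variant in variants
    ]
    names.append("l3_test.json")  # Level 3 ships a single final test set
    return names

print(len(test_split_files()))  # 8
```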
## Task Types

### Level 1 - Basic Time Series Analysis
- Finding maximum/minimum values and their indices
- Trend detection (start/end values)
- Subsequence identification
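For intuition, the core of the "minmax" variant reduces to finding extreme values and their indices in a raw series. A toy illustration (the series values are invented):

```python
# Toy illustration of the Level 1 "minmax" task: report the extreme values
# and their first-occurrence indices. Series values are invented.
series = [3.2, 7.1, 1.4, 7.1, 0.9]

max_val, min_val = max(series), min(series)
max_idx = series.index(max_val)  # 1 (first occurrence of 7.1)
min_idx = series.index(min_val)  # 4
print(max_val, max_idx, min_val, min_idx)  # 7.1 1 0.9 4
```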
### Level 2 - Multi-Series Analysis
- Global patterns and relationships
- Local anomalies and features
- Numerical reasoning over multiple series
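A toy illustration of the global/local distinction over two series: a global comparison (means) versus a local one (index of the widest pointwise gap). The data is invented; real samples carry their series in the `timeseries` field.

```python
# Toy illustration of Level 2 style reasoning over two invented series:
# a global comparison (means) and a local one (widest pointwise gap).
a = [1.0, 2.0, 4.0, 3.0]
b = [1.5, 1.0, 5.0, 2.5]

mean_a = sum(a) / len(a)                   # 2.5
mean_b = sum(b) / len(b)                   # 2.5
gaps = [abs(x - y) for x, y in zip(a, b)]  # [0.5, 1.0, 1.0, 0.5]
widest = gaps.index(max(gaps))             # 1 (ties break to the first index)
print(mean_a, mean_b, widest)  # 2.5 2.5 1
```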
### Level 3 - Advanced Reasoning
- Complex queries requiring multi-step reasoning
- Reinforcement fine-tuning (RFT) and GRPO annotations
## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{aaa_hitsr_2024,
  title={AAA-HiTSR: A Comprehensive Multimodal Time Series Understanding Dataset},
  year={2024}
}
```
## Licensing

This dataset is released for research purposes.

## Dataset Creator

Created as part of the LLaTiSA project.

## Contact

For issues or questions regarding the dataset, please open an issue on the Hugging Face Hub.