# CapMIT1003
---
dataset_info:
  features:
    - name: obs_uid
      dtype: string
    - name: usr_uid
      dtype: string
    - name: caption
      dtype: string
    - name: image
      dtype: image
    - name: clicks_path
      sequence:
        sequence: int32
        length: 2
    - name: clicks_time
      sequence: timestamp[s]
  splits:
    - name: train
      num_bytes: 1611467
      num_examples: 3848
  download_size: 241443505
  dataset_size: 1611467
---

## Dataset Description

CapMIT1003 is a dataset of captions and click-contingent image explorations collected during captioning tasks. It is built on the same stimuli as the well-known MIT1003 benchmark, for which eye-tracking data under free-viewing conditions is already available. This pairing offers a promising opportunity to study human attention under both tasks concurrently.

## Usage

You can load CapMIT1003 as follows:

```python
from datasets import load_dataset

capmit1003_dataset = load_dataset("azugarini/CapMIT1003", trust_remote_code=True)
print(capmit1003_dataset["train"][0])  # print the first example
```
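Each record follows the schema listed in the metadata above: `clicks_path` holds a sequence of (x, y) click coordinates and `clicks_time` their timestamps. As a rough sketch of how such a record might be post-processed, the snippet below computes the total Euclidean length of a click path. The record shown is fabricated for illustration (it is not real CapMIT1003 data), and the `image` and `clicks_time` fields are omitted for brevity:

```python
import math

# Fabricated record shaped like the dataset schema above; all values are
# illustrative, not taken from CapMIT1003.
example = {
    "obs_uid": "obs_0001",  # hypothetical identifier
    "usr_uid": "usr_0001",  # hypothetical identifier
    "caption": "a person walking a dog",
    "clicks_path": [[120, 80], [200, 150], [310, 140]],  # (x, y) click coordinates
}

def path_length(clicks):
    """Total Euclidean distance covered by consecutive clicks."""
    return sum(math.dist(a, b) for a, b in zip(clicks, clicks[1:]))

print(round(path_length(example["clicks_path"]), 1))  # → 216.8
```

The same function can be mapped over `capmit1003_dataset["train"]` to compare click-path statistics against scanpath metrics from the MIT1003 eye-tracking data.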

## Citation Information

If you use this dataset in your research or work, please cite the following paper:

```bibtex
@article{zanca2023contrastive,
  title={Contrastive Language-Image Pretrained Models are Zero-Shot Human Scanpath Predictors},
  author={Zanca, Dario and Zugarini, Andrea and Dietz, Simon and Altstidl, Thomas R and Ndjeuha, Mark A Turban and Schwinn, Leo and Eskofier, Bjoern},
  journal={arXiv preprint arXiv:2305.12380},
  year={2023}
}
```