ObjectRelator: Enabling Cross-View Object Relation Understanding Across Ego-Centric and Exo-Centric Perspectives (ICCV 2025 Highlight)

Yuqian Fu, Runze Wang, Bin Ren, Guolei Sun, Biao Gong, Yanwei Fu, Danda Pani Paudel, Xuanjing Huang, Luc Van Gool

arXiv Paper

Features

  • Ego-Exo Object Correspondence Task: We conduct an early exploration of this challenging task, analyzing its unique difficulties, constructing several baselines, and proposing a new method.

  • ObjectRelator Framework: We introduce ObjectRelator, a cross-view object segmentation method combining MCFuse and XObjAlign. MCFuse introduces the text modality to this task for the first time and improves localization by fusing multimodal cues for the same object(s), while XObjAlign boosts robustness to cross-view appearance variations through an object-level consistency constraint.

  • New Testbed & SOTA Results: Alongside Ego-Exo4D, we present HANDAL-X as an additional benchmark. Our proposed ObjectRelator achieves state-of-the-art (SOTA) results on both datasets.

Updates

  • Release evaluation code
  • Release training code
  • Release data
  • Release model

Installation

Follow the PSALM installation instructions.
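
For convenience, here is a minimal sketch of a typical setup, assuming PSALM follows the common conda-plus-editable-install pattern (the PSALM repo remains the authoritative reference; the environment name below is arbitrary):

    # create and activate a fresh environment (name is arbitrary)
    conda create -n objectrelator python=3.10 -y
    conda activate objectrelator
    # editable install from the repo root, assuming a standard setup.py/pyproject
    pip install -e .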

Getting Started

  • Prepare Datasets

    Ego-Exo4D


    Follow SegSwap's instructions to download Ego-Exo4D videos and pre-process the data into images. After processing, you will obtain image folders structured as follows:

    data_root
    ├── take_id_01/
    │   ├── ego_cam/
    │   │   ├── 0.jpg
    │   │   ├── ...
    │   │   └── n.jpg
    │   ├── exo_cam/
    │   │   ├── 0.jpg
    │   │   ├── ...
    │   │   └── n.jpg
    │   └── annotation.json
    ├── ...
    ├── take_id_n/
    └── split.json
    

    Next, we use the images and annotations to generate a JSON file for training and evaluating ObjectRelator (w/o text prompt):

    python datasets/build_egoexo.py --root_path /path/to/ego-exo4d/data_root --save_path /path/to/save/ego2exo_train_visual.json --split_path /path/to/ego-exo4d/data_root/split.json --split train --task ego2exo
    
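    The same script can also produce the validation-side JSON used by the Evaluation section below; a sketch, assuming --split val selects the validation takes listed in split.json:

    python datasets/build_egoexo.py --root_path /path/to/ego-exo4d/data_root --save_path /path/to/save/ego2exo_val_visual.json --split_path /path/to/ego-exo4d/data_root/split.json --split val --task ego2exo
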

    This gives us a JSON file without text prompts. We then use LLaVA to generate textual descriptions for the objects in the images:

    cd LLaVA
    conda activate llava
    python gen_text.py --image_path /path/to/ego-exo4d/data_root --json_path /path/to/save/ego2exo_train_visual.json --save_path /path/to/save/ego2exo_train_visual_text_tmp.json
    

    In the final step, we process the LLaVA-generated text to extract object names and convert them into tokenized form, producing a complete JSON file that includes both visual and textual prompts.

    python datasets/build_text.py --text_path /path/to/save/ego2exo_train_visual_text_tmp.json --save_path /path/to/save/ego2exo_train_visual_text.json
    
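    As a quick sanity check on the resulting file, you can confirm it parses and count its records (this assumes nothing about the record schema beyond valid JSON):

    python -c "import json; d = json.load(open('/path/to/save/ego2exo_train_visual_text.json')); print(type(d).__name__, len(d))"
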

    HANDAL


    Download all ZIP files from HANDAL. You can use gdown in the command line as follows:

    gdown "https://drive.google.com/file/d/1bYP3qevtmjiG3clRiP93mwVBTxyiDQFq/view?usp=share_link" --fuzzy
    
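    After downloading, something like the following unpacks all archives into the layout shown below (a sketch; adjust the paths to wherever the ZIP files were saved):

    mkdir -p /path/to/handal/data_root
    for z in /path/to/zips/*.zip; do unzip -q "$z" -d /path/to/handal/data_root; done
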

    Once unzipped, the dataset will be organized into image folders as shown below:

    data_root
    ├── handal_dataset_{obj_name}/
    │   ├── dynamic/
    │   ├── models/
    │   ├── models_parts/
    │   ├── test/
    │   └── train/
    ├── ...
    └── handal_dataset_{obj_name}/
    

    Next, we use the images and masks to generate a JSON file for training and evaluating ObjectRelator (w/o text prompt):

    python datasets/build_handal.py --root_path /path/to/handal/data_root --save_path /path/to/save/handal_train_visual.json --split train
    
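    As with Ego-Exo4D, an evaluation-side JSON is needed later; a hedged example, assuming the script also accepts --split val to match the handal_val_visual_text.json name used in the Evaluation section:

    python datasets/build_handal.py --root_path /path/to/handal/data_root --save_path /path/to/save/handal_val_visual.json --split val
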

    The subsequent text-prompt generation steps are the same as those for Ego-Exo4D; refer to the instructions above.

  • Pre-trained Checkpoint

    PSALM components:

    Download Swin-B Mask2Former from here.

    Download Phi-1.5 (Hugging Face) from here.

    The LLaVA pretrained projector can be downloaded here.

    Pre-trained PSALM:

    Download PSALM here.

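    As one concrete example, the Phi-1.5 weights can be fetched with huggingface-cli, assuming the standard microsoft/phi-1_5 checkpoint is the intended one:

    huggingface-cli download microsoft/phi-1_5 --local-dir /path/to/phi-1_5
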
  • Train

    Training on Ego-Exo4D

    # 1. Change model paths and dataset paths to the exact Ego-Exo4D-related paths in train_ObjectRelator.sh.
    # 2. Adjust training behavior via the configuration parameters in objectrelator/mask_config/data_args.py:
    #    - data_args.condition controls the number of prompt modalities used
    #    - training_args.joint_training determines whether joint training is enabled
    #    - training_args.first_stage determines whether to use the first stage of training
    
    # stage-1 training: set training_args.first_stage=True
    bash scripts/train_ObjectRelator.sh 
    
    # stage-2 training: set training_args.first_stage=False, training_args.pretrained_model_path=/path/to/stage-1
    bash scripts/train_ObjectRelator.sh 
    
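    Before launching either stage, it can help to confirm which flags are currently set (the file and option names come from the notes above):

    grep -nE "condition|joint_training|first_stage" objectrelator/mask_config/data_args.py
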

    Training on HANDAL

    # change model paths and dataset paths to the exact handal related paths in train_ObjectRelator.sh
    # set training_args.is_handal=True in data_args.py
    # The remaining training procedure is identical to that of Ego-Exo4D.
    
    bash scripts/train_ObjectRelator.sh 
    
  • Evaluation

    Eval on Ego-Exo4D

    # set data_args.condition in objectrelator/mask_config/data_args.py to control the number of prompt modalities used
    
    python objectrelator/eval/eval_egoexo.py --image_folder /path/to/ego-exo4d/data_root --model_path /path/to/pretrained_model --json_path /path/to/save/ego2exo_val_visual_text.json --split_path /path/to/ego-exo4d/data_root/split.json --split val
    
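    The opposite (exo-to-ego) direction can be evaluated the same way, assuming the build script's --task flag also accepts exo2ego (only ego2exo is shown above) and the same text-prompt steps are applied to the val split:

    python objectrelator/eval/eval_egoexo.py --image_folder /path/to/ego-exo4d/data_root --model_path /path/to/pretrained_model --json_path /path/to/save/exo2ego_val_visual_text.json --split_path /path/to/ego-exo4d/data_root/split.json --split val
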

    Eval on HANDAL

    # set data_args.condition in objectrelator/mask_config/data_args.py to control the number of prompt modalities used
    
    python objectrelator/eval/eval_handal.py --image_folder /path/to/handal/data_root --model_path /path/to/pretrained_model --json_path /path/to/save/handal_val_visual_text.json
    

Model Zoo

  • Download ObjectRelator here.
  • Download the prepared JSON file here.

Citation

If you find this work useful for your research, please cite it using the following BibTeX entry.

@misc{fu2024objectrelatorenablingcrossviewobject,
      title={ObjectRelator: Enabling Cross-View Object Relation Understanding in Ego-Centric and Exo-Centric Videos}, 
      author={Yuqian Fu and Runze Wang and Yanwei Fu and Danda Pani Paudel and Xuanjing Huang and Luc Van Gool},
      year={2024},
      eprint={2411.19083},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2411.19083}, 
}

Acknowledgement

Thanks to the awesome works PSALM, LLaVA, and Ego-Exo4D, on which our code is based.
