How to use delmaksym/aacl22.scale_post with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("feature-extraction", model="delmaksym/aacl22.scale_post")

# Load model directly
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("delmaksym/aacl22.scale_post")
model = AutoModel.from_pretrained("delmaksym/aacl22.scale_post")
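Continuing from the tokenizer and model loaded above, the extracted representations can be compared across languages. This is only an illustrative sketch: the mean pooling and cosine similarity used here are assumptions for demonstration, not necessarily the similarity measure used in the paper.

# Illustrative only: mean-pool the last hidden states of two parallel sentences
# and compare them with cosine similarity.
import torch

sentences = ["The cat sits on the mat.", "Die Katze sitzt auf der Matte."]
inputs = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool over non-padding tokens (an assumed pooling choice).
mask = inputs["attention_mask"].unsqueeze(-1)
embeddings = (outputs.last_hidden_state * mask).sum(1) / mask.sum(1)

similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(f"Cosine similarity between the two sentences: {similarity.item():.3f}")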
This is a multilingual language model (XLM-RoBERTa architecture) trained from scratch, released with our AACL 2022 paper "Cross-lingual Similarity of Multilingual Representations Revisited".
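As a minimal sketch, the architecture can be checked from the hosted configuration; the printed values come from the repository's config file, and nothing here asserts specific layer or hidden sizes:

# Inspect the configuration to confirm the XLM-RoBERTa architecture.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("delmaksym/aacl22.scale_post")
print(config.model_type)                          # expected: "xlm-roberta"
print(config.num_hidden_layers, config.hidden_size)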
Paper (model and training description): https://aclanthology.org/2022.aacl-main.15/
GitHub repo: https://github.com/delmaksym/xsim#cross-lingual-similarity-of-multilingual-representations-revisited