be-tiny-bart

A model for lemmatisation of Belarusian, trained on the Belarusian-HSE dataset.

Model Details

Model Description

  • Developed by: Ilia Afanasev
  • Model type: BART
  • Language(s) (NLP): Belarusian
  • License: mpl-2.0
  • Finetuned from model: sshleifer/bart-tiny-random

Model Sources

  • Paper: TBP

Uses

Sequence-to-sequence transformation.

Direct Use

The system was fine-tuned for lemmatisation of Modern Standard Belarusian: given a token's surface form together with its POS tag and morphological features, it generates the lemma.

Out-of-Scope Use

Downstream use and further fine-tuning (for instance, for text-to-SQL transformation) are unlikely to be fruitful: the model has been fine-tuned for a very specific task that does not transfer to other types of sequence-to-sequence transformation.

Bias, Risks, and Limitations

The model is fine-tuned only for Modern Standard Belarusian, on the rather small Belarusian-HSE dataset. Use its output only after manual verification.

Recommendations

Use this model only for lemmatisation of Modern Standard Belarusian if you aim for reliable silver-standard annotation. Any kind of regional, territorial or social variation in the input is likely to degrade the results severely.

How to Get Started with the Model

Use the code below to get started with the model. You will need your data in CoNLL-U format.
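
The load_conllu_dataset helper below uses only the FORM, LEMMA, UPOS and FEATS columns of each token line: "FORM UPOS FEATS" becomes the model input and LEMMA the expected output. Roughly (the Belarusian token and its annotation here are only illustrative):

# CoNLL-U token line (columns: ID FORM LEMMA UPOS XPOS FEATS HEAD DEPREL DEPS MISC)
#   3   кнігі   кніга   NOUN   _   Case=Nom|Gender=Fem|Number=Plur   ...
# becomes
#   input_text  = "кнігі NOUN Case=Nom|Gender=Fem|Number=Plur"
#   target_text = "кніга"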

!pip install simpletransformers

import logging
from itertools import cycle

import pandas as pd
import torch
from simpletransformers.seq2seq import Seq2SeqModel

def load_conllu_dataset(datafile):
    """Read a CoNLL-U file into (input_text, target_text) pairs:
    "FORM UPOS FEATS" as the input, LEMMA as the target."""
    arr = []
    with open(datafile, encoding='utf-8') as inp:
        strings = inp.readlines()
    for s in strings:
        if s[0] != "#" and s.strip():
            split_string = s.split('\t')
            arr.append([split_string[1] + " " + split_string[3] + " " + split_string[5],
                        split_string[2]])
    return pd.DataFrame(arr, columns=["input_text", "target_text"])


MODEL_NAME = "djulian13/be-tiny-bart"

logging.basicConfig(level=logging.INFO)
transformers_logger = logging.getLogger("transformers")
transformers_logger.setLevel(logging.WARNING)


# Load the fine-tuned lemmatiser from the Hugging Face Hub.
model = Seq2SeqModel(
    encoder_decoder_type="bart",
    encoder_decoder_name=MODEL_NAME,
    use_cuda=torch.cuda.is_available(),
)

DATA_PRED_NAME = "test.conllu"

# Collect the "FORM UPOS FEATS" inputs and generate a lemma for each token.
pred_data = load_conllu_dataset(DATA_PRED_NAME)["input_text"].tolist()
predictions = cycle(model.predict(pred_data))

# Write the input file back out with the LEMMA column replaced by the predictions.
with open(DATA_PRED_NAME, encoding='utf-8') as inp:
    strings = inp.readlines()
predicted = []
for s in strings:
    if s[0] != "#" and s.strip():
        split_string = s.split('\t')
        split_string[2] = next(predictions)
        predicted.append('\t'.join(split_string))
    else:
        predicted.append(s)
with open("result.conllu", 'w', encoding='utf-8') as out:
    out.write(''.join(predicted))
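
The script writes result.conllu, a copy of the input file with the LEMMA column replaced by the model's predictions. For a quick one-off check you can also call the model directly (the token and its expected lemma are only illustrative):

print(model.predict(["кнігі NOUN Case=Nom|Gender=Fem|Number=Plur"]))  # e.g. ['кніга']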

Training Details

Training Data

Belarusian-HSE

Training Procedure

Virtual environment:

  • Python 3.10.12
  • Transformers 4.34.0
  • sentence-splitter==1.4
  • simpletransformers==0.64.3
  • stanza==1.8.1
  • torch==2.1.0
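
The pinned Python dependencies can be installed, for instance, with:

pip install transformers==4.34.0 sentence-splitter==1.4 simpletransformers==0.64.3 stanza==1.8.1 torch==2.1.0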

The script:

import logging
import pandas as pd
from simpletransformers.seq2seq import Seq2SeqModel
import argparse
import torch
import random


def load_conllu_dataset(datafile):
    """Read a CoNLL-U file into (input_text, target_text) pairs:
    "FORM UPOS FEATS" as the input, LEMMA as the target."""
    arr = []
    with open(datafile, encoding='utf-8') as inp:
        strings = inp.readlines()
    for s in strings:
        if s[0] != "#" and s.strip():
            split_string = s.split('\t')
            arr.append([split_string[1] + " " + split_string[3] + " " + split_string[5],
                        split_string[2]])
    return pd.DataFrame(arr, columns=["input_text", "target_text"])

def count_matches(labels, preds):
    # Exact-match count between gold and predicted lemmas.
    return sum(label == pred for label, pred in zip(labels, preds))

def main(args):
    train_df = load_conllu_dataset(args.train_data)
    args.fraction = float(args.fraction)
    print(f'Loading training dataset of {train_df.shape[0]} tokens')
    eval_df = load_conllu_dataset(args.dev_data)
    random.seed(int(args.seed))
    print(f'Setting seed to {args.seed}')
    if args.fraction > 0.0 and args.fraction < 1.0:
        remainder = int(args.fraction * len(train_df))
        train_df = train_df.sample(remainder)
        print(f'Subsampling training dataset to {train_df.shape[0]} tokens')
    model_args = {
        "reprocess_input_data": True,
        "overwrite_output_dir": True,
        "max_seq_length": max([len(token) for token in train_df["target_text"].tolist()]),
        "train_batch_size": int(args.batch),
        "num_train_epochs": int(args.epochs),
        "save_eval_checkpoints": False,
        "save_model_every_epoch": False,
        # "silent": True,
        "evaluate_generated_text": False,
        "evaluate_during_training": False,
        "evaluate_during_training_verbose": False,
        "use_multiprocessing": False,
        "use_multiprocessing_for_evaluation": False,
        "save_best_model": False,
        "max_length": max([len(token) for token in train_df["input_text"].tolist()]),
        "save_steps": -1,
    }
    model = Seq2SeqModel(
        encoder_decoder_type=args.model_type,
        encoder_decoder_name=args.model,
        args=model_args,
        use_cuda=torch.cuda.is_available(),
    )
    model.train_model(train_df, eval_data=eval_df, matches=count_matches)
    
if __name__ == '__main__':    
    parser = argparse.ArgumentParser()
    parser.add_argument('--train_data')
    parser.add_argument('--dev_data')
    parser.add_argument('--model_type', default="bart")
    parser.add_argument('--model', default="tiny-bart")
    parser.add_argument('--epochs', default="2")
    parser.add_argument('--batch', default="4")
    parser.add_argument('--fraction', help="Fraction of data", default=1.0)
    parser.add_argument('--seed', help="random seed", default=1590)
    args = parser.parse_args()
    main(args)

Training Hyperparameters

  • Training regime: fp32
  • Epochs: 2
  • Batch size: 7
  • Seed: 1590
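
For instance, assuming the training script above is saved as train.py and the Belarusian-HSE CoNLL-U splits are available locally (file names here are placeholders), a run with these hyperparameters would look like:

python train.py --train_data be_hse-ud-train.conllu --dev_data be_hse-ud-dev.conllu \
    --model_type bart --model sshleifer/bart-tiny-random --epochs 2 --batch 7 --seed 1590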

Speeds, Sizes, Times

Training took around 2.5 hours on a 4 GB GPU (NVIDIA GeForce RTX 3050).

Evaluation

No evaluation was performed during training.

Testing Data, Factors & Metrics

Testing Data

YABC, a freely downloadable corpus of ≈7.5M words of Belarusian newspaper articles and fiction. For a more detailed description of the dataset, see its page on Zenodo.

Factors

Genre differences: newspaper articles vs. fiction.

Metrics

Evaluation used the accuracy score, complemented by a qualitative analysis of individual examples.
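
For reference, lemmatisation accuracy over a gold CoNLL-U file and the result.conllu produced by the inference script above can be computed roughly as follows (a minimal sketch; file names are placeholders):

def lemma_accuracy(gold_path, pred_path):
    # Extract the LEMMA column (index 2) from every token line of a CoNLL-U file.
    def lemmas(path):
        with open(path, encoding='utf-8') as f:
            return [line.split('\t')[2] for line in f
                    if line.strip() and not line.startswith('#')]
    gold, pred = lemmas(gold_path), lemmas(pred_path)
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

print(lemma_accuracy("gold.conllu", "result.conllu"))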

Results

When tested out-of-domain, the model often struggles to generate the correct lemma.

Summary

Generally, this model can be used for preliminary tagging of Belarusian. However, if better options are available (for instance, LLM-based disambiguation between multiple candidate analyses), it is better to go with them.

Environmental Impact

  • Hardware Type: Personal laptop (Xiaomi Mi Notebook Pro X 15)
  • Hours used: 4
  • Carbon emitted: approx. 0.1 kg.

Technical Specifications

Model Architecture and Objective

  • Architecture: BART
  • Objective: sequence-to-sequence transformation
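
The exact configuration can be inspected with the Transformers API (a minimal sketch):

from transformers import AutoConfig

config = AutoConfig.from_pretrained("djulian13/be-tiny-bart")
print(config.model_type, config.d_model, config.encoder_layers, config.decoder_layers)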

Compute Infrastructure

Personal laptop

Hardware

  • Xiaomi Mi Notebook Pro X 15

Software

  • VS Code

Citation

BibTeX:

TBP

APA:

TBP

Model Card Authors

Ilia Afanasev

Model Card Contact

ilia.afanasev.1997@gmail.com
