Model Description

This model is a fine-tuned version of Qwen/Qwen3-8B-Base, adapted for improved performance on Arabic-language tasks. The fine-tuning focused on enhancing instruction-following and conversational capabilities in an Arabic context.

Uses

This model is intended for use as a general-purpose chatbot, for Arabic question answering, and for various text generation tasks. It works best when prompted in a conversational (chat) format.

Use the code below to get started with the model.

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Ocelotr/Qwen-8B-SFT"

# Load the tokenizer and the model weights in bfloat16,
# letting accelerate place layers across the available devices.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Build a chat-formatted prompt; the tokenizer applies the model's
# chat template for us.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "ุงุดุฑุญ ู„ูŠ ู…ูู‡ูˆู… ุงู„ุฐูƒุงุก ุงู„ุงุตุทู†ุงุนูŠ"}  # "Explain the concept of artificial intelligence to me"
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate up to 256 new tokens and decode the full sequence
# (prompt + completion) back to text.
outputs = model.generate(**inputs, max_new_tokens=256)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(response)
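
If you only want the model's reply without the echoed prompt, you can slice off the input tokens before decoding. The sketch below also enables sampling; the temperature and top_p values are illustrative assumptions, not settings recommended by the model authors.

# Sketch: sample a completion and decode only the newly generated tokens.
# temperature/top_p here are illustrative defaults, not tuned values.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))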