# Qwen3-8B-GAE
A fine-tuned version of Qwen3-8B optimized for Arabic language tasks.
## Model Details
- Model type: Causal Language Model
- Language(s): Arabic, English
- Base model: Qwen/Qwen3-8B
## Uses

### Direct Use
General Arabic language generation and understanding tasks.
### Out-of-Scope Use

The model is not intended for generating harmful, misleading, or factually incorrect content.
## How to Get Started
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Ocelotr/Qwen3-8B-GAE"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

# Arabic prompt: "Hello, how are you?"
messages = [{"role": "user", "content": "مرحبا، كيف حالك؟"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the echoed prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
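For interactive use, you can stream tokens to stdout as they are generated using transformers' standard `TextStreamer` utility (a generic transformers feature, not something specific to this model):

```python
from transformers import TextStreamer

# Prints decoded text incrementally as generate() produces tokens
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(inputs, max_new_tokens=512, streamer=streamer)
```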
## Training Details

### Training Procedure
- Training regime: bf16 mixed precision
- Fine-tuning method: PPO with GAE (Generalized Advantage Estimation); see the sketch after this list for how GAE computes advantages
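GAE estimates each step's advantage by exponentially weighting the TD errors δ_t = r_t + γ·V(s_{t+1}) − V(s_t) with a decay factor λ, trading bias against variance. The sketch below is a generic illustration of that recursion, not this model's actual training code; the reward and value arrays and the γ, λ values are placeholders.

```python
from typing import List

def compute_gae(
    rewards: List[float],
    values: List[float],   # V(s_0) ... V(s_T): one extra bootstrap value at the end
    gamma: float = 0.99,   # discount factor (placeholder value)
    lam: float = 0.95,     # GAE decay factor (placeholder value)
) -> List[float]:
    """A_t = sum_k (gamma*lam)^k * delta_{t+k},
    where delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)."""
    advantages = [0.0] * len(rewards)
    gae = 0.0
    # Walk backwards so each step reuses the already-accumulated tail
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    return advantages

# Toy 3-step episode
print(compute_gae([1.0, 0.0, 1.0], [0.5, 0.4, 0.6, 0.0]))
```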