Base model paper: Qwen2.5-Coder Technical Report (arXiv:2409.12186)
This model is a fine-tuned version of Qwen2.5-Coder-7B, specifically optimized for function writing tasks. The base model Qwen2.5-Coder-7B is part of the Qwen2.5-Coder family, which was trained on 5.5 trillion tokens including source code, text-code grounding, and synthetic data.
The model was fine-tuned using LoRA (Low-Rank Adaptation); a representative configuration sketch is shown below.
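The exact LoRA hyperparameters used for this fine-tune are not recorded in this card, so the values below (rank, alpha, dropout, target modules) are placeholders drawn from common practice, not the actual training configuration. The sketch uses the PEFT library against the Qwen/Qwen2.5-Coder-7B base checkpoint:

import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-7B",
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Placeholder LoRA hyperparameters -- illustrative only, not the values used for this model
lora_config = LoraConfig(
    r=16,                      # rank of the low-rank update matrices
    lora_alpha=32,             # scaling factor applied to the LoRA update
    lora_dropout=0.05,         # dropout on the LoRA layers during training
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    bias="none",
    task_type="CAUSAL_LM"
)

# Wrap the base model with trainable LoRA adapters; only the adapter weights are updated
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()

Targeting only the attention projections keeps the number of trainable parameters small relative to the 7B base model; print_trainable_parameters() reports that fraction.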
The model can be loaded with the Hugging Face Transformers library and prompted directly for function writing. Here's a basic example:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "path_to_your_model",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(
    "path_to_your_model",
    trust_remote_code=True
)

# Generate a function from a natural-language prompt
input_text = "Write a function that..."
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=500)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
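If the fine-tune kept an instruct-style chat template (an assumption; the card does not state which prompt format was used during training), prompts can also be built with the tokenizer's chat template, continuing from the model and tokenizer loaded above:

# Assumes the tokenizer ships a chat template; check tokenizer_config.json to confirm
messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a function that reverses a linked list in Python."}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=500)
# Slice off the prompt tokens so only the newly generated completion is decoded
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)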
This model inherits the Apache 2.0 license from its base model Qwen2.5-Coder-7B.
If you use this model, please cite the original Qwen2.5-Coder paper and acknowledge the fine-tuning work:
@article{hui2024qwen2,
title={Qwen2.5-Coder Technical Report},
author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
journal={arXiv preprint arXiv:2409.12186},
year={2024}
}