toncode-v1: Minecraft Plugin Coder

This model is a fine-tuned LoRA adapter for Qwen2.5-Coder-7B-Instruct, specialized in generating high-quality Java code for Minecraft server plugins (Spigot/Paper API).

Model Details

  • Developed by: Akahsizrr
  • Model type: LoRA Adapter (PEFT)
  • Base Model: Qwen/Qwen2.5-Coder-7B-Instruct
  • Language(s): English, Java (Minecraft Spigot/Paper API)
  • License: Apache-2.0
  • Finetuned from model: unsloth/qwen2.5-coder-7b-instruct-bnb-4bit

Training Details

The model was trained using Unsloth on a Minecraft-specific dataset containing optimized plugin logic and event handling.

  • Training Steps: 100
  • Optimizer: AdamW 8-bit
  • Learning Rate: 2e-4
  • Hardware: 2x NVIDIA T4 (Kaggle)
  • Batch Size: 1 (with Gradient Accumulation Steps: 8)
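The batch settings above combine into the effective batch size per optimizer step. A quick sanity check (single-GPU gradient accumulation is assumed here, since it is not stated whether the two T4s ran data-parallel):

```python
# Effective batch size per optimizer update, from the training details above.
# Whether both T4s contributed data-parallel is an assumption left out here;
# the per-GPU figure is what the optimizer sees in the single-device case.
per_device_batch = 1
grad_accum_steps = 8

effective_batch = per_device_batch * grad_accum_steps
print(effective_batch)  # 8
```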

How to Get Started

To use this model, load it as an adapter on top of the base Qwen2.5-Coder model with the peft or unsloth library.

from unsloth import FastLanguageModel
import torch

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/qwen2.5-coder-7b-instruct-bnb-4bit",
    max_seq_length = 2048,
    load_in_4bit = True,
)

# Load the fine-tuned adapter, then switch to inference mode
model.load_adapter("Akahsizrr/toncode-v1")
model = FastLanguageModel.for_inference(model)

# Test prompt
instruction = "Create a listener that gives a player a Diamond Sword when they first join the server."
messages = [{"role": "user", "content": instruction}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to("cuda")

outputs = model.generate(input_ids=inputs, max_new_tokens=512)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
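If you prefer plain peft over Unsloth, the adapter can also be loaded with AutoPeftModelForCausalLM, which reads the adapter's config to locate and load the base model automatically. This is a sketch, not a tested recipe: it assumes bitsandbytes is installed for 4-bit loading, and that the tokenizer files were pushed alongside the adapter.

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# AutoPeftModelForCausalLM resolves the base model from the adapter config,
# loads it, and attaches the LoRA weights on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    "Akahsizrr/toncode-v1",
    torch_dtype=torch.float16,
    load_in_4bit=True,   # assumes bitsandbytes is available
    device_map="auto",
)

# If the tokenizer was not pushed with the adapter, load it from the
# base model ("Qwen/Qwen2.5-Coder-7B-Instruct") instead.
tokenizer = AutoTokenizer.from_pretrained("Akahsizrr/toncode-v1")
```

Generation then works exactly as in the Unsloth example above (apply_chat_template, generate, batch_decode).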