kth8/gemma-3-270m-it-JSON-Fixer is a full fine-tune of unsloth/gemma-3-270m-it on the kth8/json-fix-20000x dataset, trained to reformat JSON input into consistently indented, valid JSON.

Usage example

System prompt

You are a JSON formatting specialist. Convert the provided JSON data into valid JSON format with 2 line indent and no additional commentary.

User prompt

{
"investment_opportunities": [
{
"source_type": "Solar Panels",
"estimated_return": 5.1,
"risk_level": "Medium",
"description": "Invest in solar panel installation for residential use"
},
{
"source_type": "Wind Turbines",
"estimated_return": 4.8,
"risk_level": "High",
"description": "Participate in wind turbine project development for commercial use"
}
]
}
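
For reference, the expected assistant output for this example is simply the input re-serialized with 2-space indentation and no commentary. A minimal sketch reproducing that target format with Python's standard library (the `raw` string below is the user prompt above, flattened onto one line):

```python
# Reproduces the formatting the model is trained to emit: the input JSON
# re-serialized with 2-space indentation and no extra commentary.
import json

raw = (
    '{"investment_opportunities": [{"source_type": "Solar Panels", '
    '"estimated_return": 5.1, "risk_level": "Medium", '
    '"description": "Invest in solar panel installation for residential use"}, '
    '{"source_type": "Wind Turbines", "estimated_return": 4.8, '
    '"risk_level": "High", "description": "Participate in wind turbine '
    'project development for commercial use"}]}'
)

formatted = json.dumps(json.loads(raw), indent=2)
print(formatted)
```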

Model Details

  • Base Model: unsloth/gemma-3-270m-it
  • Parameter Count: 268,098,176 (~0.27B)
  • Training Method: Full Fine-Tune (FFT); all parameters updated
  • Precision: torch.bfloat16
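
The card does not ship inference code. A hedged sketch using the Hugging Face transformers chat API is below; the model id is taken from this card, while the generation settings are assumptions:

```python
# Hedged sketch (not from the model card): querying the model with the
# transformers chat-template API. The system prompt is the one from the
# usage example; max_new_tokens is an arbitrary choice.
MODEL_ID = "kth8/gemma-3-270m-it-JSON-Fixer"

SYSTEM_PROMPT = (
    "You are a JSON formatting specialist. Convert the provided JSON data "
    "into valid JSON format with 2 line indent and no additional commentary."
)

def build_messages(user_json: str) -> list[dict]:
    """Assemble the chat turns exactly as in the usage example above."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_json},
    ]

if __name__ == "__main__":
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # bfloat16 matches the precision listed above.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    messages = build_messages('{"source_type": "Solar Panels", "estimated_return": 5.1}')
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    out = model.generate(inputs, max_new_tokens=256)
    print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```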

Hardware

  • GPU: NVIDIA L4

Training stats

  • Global step: 2339
  • Training runtime: 6,323.39 seconds (~1.76 hours)
  • Average training loss: 0.00363
  • Final validation loss: 0.00165
  • Epochs: 1

Framework versions

  • Unsloth: 2026.3.4
  • TRL: 0.22.2
  • Transformers: 4.56.2
  • PyTorch: 2.10.0+cu128
  • Datasets: 4.3.0
  • Tokenizers: 0.22.2

License

This model is released under the Gemma license. See the Gemma Terms of Use for details.
