Instructions to use Metin/gemma-2b-tr with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Metin/gemma-2b-tr with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Metin/gemma-2b-tr")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Metin/gemma-2b-tr")
model = AutoModelForCausalLM.from_pretrained("Metin/gemma-2b-tr")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Metin/gemma-2b-tr with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Metin/gemma-2b-tr"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Metin/gemma-2b-tr",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:

```shell
docker model run hf.co/Metin/gemma-2b-tr
```
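Both servers above expose the OpenAI-compatible `/v1/completions` endpoint, so the curl call maps one-to-one onto a JSON body. A minimal stdlib sketch of building that payload (the helper name is an assumption, not part of either project's API):

```python
import json


def build_completion_request(model, prompt, max_tokens=512, temperature=0.5):
    """Build the JSON body for an OpenAI-compatible /v1/completions call.

    Mirrors the fields used in the curl example above; the function name
    is illustrative, not part of vLLM or SGLang.
    """
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }


# The same payload the curl example POSTs to http://localhost:8000/v1/completions
body = json.dumps(build_completion_request("Metin/gemma-2b-tr", "Once upon a time,"))
print(body)
```

Any HTTP client can then POST this body with a `Content-Type: application/json` header to the running server.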
- SGLang
How to use Metin/gemma-2b-tr with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Metin/gemma-2b-tr" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Metin/gemma-2b-tr",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Metin/gemma-2b-tr" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Metin/gemma-2b-tr",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use Metin/gemma-2b-tr with Docker Model Runner:
```shell
docker model run hf.co/Metin/gemma-2b-tr
```
Model Card for Metin/gemma-2b-tr
gemma-2b fine-tuned for Turkish text generation.
Model Details
Model Description
- Language(s) (NLP): Turkish, English
- License: Creative Commons Attribution-NonCommercial 4.0 (chosen due to the use of restricted/gated datasets)
- Finetuned from model: gemma-2b (https://huggingface.co/google/gemma-2b)
Uses
The model is specifically designed for Turkish text generation. It is not suitable for instruction-following or question-answering tasks.
Restrictions
Gemma is provided under and subject to the Gemma Terms of Use found at ai.google.dev/gemma/terms. Please review the Gemma use restrictions before using the model: https://ai.google.dev/gemma/terms#3.2-use
How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Metin/gemma-2b-tr")
model = AutoModelForCausalLM.from_pretrained("Metin/gemma-2b-tr")

prompt = "Bugün sinemaya gidemedim çünkü"  # "I couldn't go to the cinema today because"
inputs = tokenizer(prompt, return_tensors="pt")  # BatchEncoding with input_ids and attention_mask

outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Training Details
Training Data
- Dataset size: ~190 million tokens (~100K documents)
- Dataset content: Web crawl data
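A back-of-the-envelope check of the two (approximate) figures above: they imply an average document length of roughly 1,900 tokens, which is consistent with long-form web crawl data.

```python
# Approximate figures from the training-data section above.
tokens = 190_000_000
documents = 100_000

avg_tokens_per_doc = tokens // documents
print(avg_tokens_per_doc)  # 1900
```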
Training Procedure
Training Hyperparameters
- Adapter: QLoRA
- Epochs: 1
- Context length: 1024
- LoRA Rank: 32
- LoRA Alpha: 32
- LoRA Dropout: 0.05
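The adapter hyperparameters above can be expressed as a `LoraConfig` from the Hugging Face `peft` library; a minimal sketch under the assumption that `peft` was used for the QLoRA run (the card does not list which modules were adapted, so `target_modules` is deliberately left out and `task_type` is an assumption):

```python
from peft import LoraConfig

# LoRA settings matching the card's hyperparameters.
# Note: with lora_alpha == r, the LoRA scaling factor alpha/r is 1.0.
lora_config = LoraConfig(
    r=32,              # LoRA Rank
    lora_alpha=32,     # LoRA Alpha
    lora_dropout=0.05, # LoRA Dropout
    task_type="CAUSAL_LM",  # assumption: causal LM fine-tuning
)
```

For QLoRA specifically, this config would typically be paired with a 4-bit quantized base model load (e.g. via `bitsandbytes`), which the card implies but does not detail.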