ronantakizawa/olmo2-32b-instruct-awq
Tags: Text Generation · Transformers · Safetensors · English · olmo2 · awq · quantized · 4-bit precision · conversational · llm-compressor · compressed-tensors
Paper: arXiv:2501.00656
License: apache-2.0
Repository: olmo2-32b-instruct-awq — 18.2 GB, 1 contributor, 2 commits.
Latest commit: bc874fb (verified), about 2 months ago, by ronantakizawa — "Upload AWQ 4-bit quantized OLMo-2-32B-Instruct (~16.9GB, 73.6% reduction)".
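The repository is an AWQ 4-bit checkpoint in the compressed-tensors format produced by llm-compressor. A minimal loading sketch, assuming the `compressed-tensors` package is installed alongside `transformers` (the function name and defaults here are illustrative, not from the repository):

```python
def load_olmo2_awq(model_id: str = "ronantakizawa/olmo2-32b-instruct-awq"):
    """Load the AWQ-quantized checkpoint via the standard transformers API.

    A sketch, not verified against this exact checkpoint: llm-compressor
    checkpoints in compressed-tensors format generally load through
    AutoModelForCausalLM when the `compressed-tensors` package is present.
    """
    # Heavy imports kept inside the function so the sketch can be read
    # (and imported) without pulling in transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    return model, tokenizer
```

Serving engines that understand compressed-tensors quantization, such as vLLM, can also consume the checkpoint directly by model id.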
Files (all uploaded about 2 months ago; every file except .gitattributes was added in the commit "Upload AWQ 4-bit quantized OLMo-2-32B-Instruct (~16.9GB, 73.6% reduction)", while .gitattributes dates from the initial commit):

.gitattributes                       1.52 kB
README.md                            4.72 kB
chat_template.jinja                493 Bytes
config.json                          1.52 kB
generation_config.json             147 Bytes
merges.txt                            917 kB
model-00001-of-00004.safetensors     4.98 GB
model-00002-of-00004.safetensors     4.96 GB
model-00003-of-00004.safetensors     4.96 GB
model-00004-of-00004.safetensors     3.26 GB
model.safetensors.index.json          140 kB
recipe.yaml                        571 Bytes
special_tokens_map.json            581 Bytes
tokenizer.json                       7.14 MB
tokenizer_config.json                4.35 kB
vocab.json                           1.61 MB
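The four safetensors shards dominate the repository size. A quick sanity check that the shard sizes listed above account for the 18.2 GB repository total (a sketch using only the numbers from the listing):

```python
# Sum the four model shard sizes from the file listing and compare
# against the 18.2 GB repository total shown on the page.
shard_sizes_gb = [4.98, 4.96, 4.96, 3.26]  # model-0000{1..4}-of-00004.safetensors
total_gb = sum(shard_sizes_gb)
print(f"shards total: {total_gb:.2f} GB")  # close to the listed 18.2 GB
```

The small remainder is the tokenizer, config, and index files, none larger than a few megabytes.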