AutoGLM-Phone-9B-GGUF

This model was converted from zai-org/AutoGLM-Phone-9B to GGUF format using llama.cpp's convert_hf_to_gguf.py script.

To use it:

llama-server -hf ggml-org/AutoGLM-Phone-9B-GGUF
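
Beyond the default invocation above, llama.cpp can pull a specific quantization by appending a tag to the `-hf` argument, or run a one-off prompt with llama-cli. A minimal sketch follows; the `Q4_K_M` tag is an assumed example, so check the repository's file list for the quant names it actually ships:

```shell
# Serve the default quant over an OpenAI-compatible HTTP API (port 8080 by default)
llama-server -hf ggml-org/AutoGLM-Phone-9B-GGUF

# Select a specific quantization by tag -- Q4_K_M is an assumed tag name;
# the available tags depend on which quant files the repo publishes
llama-server -hf ggml-org/AutoGLM-Phone-9B-GGUF:Q4_K_M

# Run a single prompt from the command line instead of serving
llama-cli -hf ggml-org/AutoGLM-Phone-9B-GGUF -p "Hello"
```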
Format: GGUF
Model size: 9B params
Architecture: glm4

