Tags: Feature Extraction · sentence-transformers · Safetensors · English · multilingual · qwen3 · finance · legal · healthcare · code · STEM · medical · text-embeddings-inference
Instructions for using zeroentropy/zembed-1-embedding with libraries, inference providers, notebooks, and local apps.
How to use zeroentropy/zembed-1-embedding with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("zeroentropy/zembed-1-embedding")

sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]
embeddings = model.encode(sentences)

similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [3, 3]
```
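The `similarity` call above returns a matrix of pairwise similarity scores between the embeddings. As a rough illustration of what that computation looks like, here is a minimal cosine-similarity sketch over hypothetical vectors (independent of the model, which may use a different similarity function):

```python
import torch


def cosine_similarity_matrix(embeddings: torch.Tensor) -> torch.Tensor:
    # Normalize each row to unit length, then take pairwise dot products;
    # the result is an (n, n) matrix of cosine similarities.
    normalized = torch.nn.functional.normalize(embeddings, p=2, dim=1)
    return normalized @ normalized.T


# Hypothetical 2-dimensional embeddings for three sentences.
vectors = torch.tensor([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
sims = cosine_similarity_matrix(vectors)
print(sims.shape)  # torch.Size([3, 3])
```

The diagonal is 1.0 (each vector compared with itself), and off-diagonal entries rank how close pairs of sentences are in embedding space.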
```python
# pyright: basic
import torch
from sentence_transformers.models import Transformer


class ZembedTransformer(Transformer):
    def tokenize(
        self,
        texts: list[str] | list[dict] | list[tuple[str, str]],
        padding: str | bool = True,
    ) -> dict[str, torch.Tensor]:
        # Append the Qwen-style end-of-turn marker to every input text
        # before tokenizing.
        texts = [text + "<|im_end|>\n" for text in texts]  # pyright: ignore[reportOperatorIssue]
        return self.tokenizer(
            texts,
            padding=padding,
            truncation="longest_first",
            return_tensors="pt",
            max_length=self.max_seq_length,
        )
```
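The only behavioral change in this override is the string preprocessing step: each input text gets the `<|im_end|>\n` end-of-turn marker appended before being handed to the underlying tokenizer. That transformation can be sketched in isolation, without loading any model (the helper name `append_end_marker` is illustrative, not part of the library):

```python
def append_end_marker(texts: list[str]) -> list[str]:
    # Mirror the preprocessing in ZembedTransformer.tokenize:
    # append the Qwen-style end-of-turn marker to each text.
    return [text + "<|im_end|>\n" for text in texts]


prepared = append_end_marker(["The weather is lovely today."])
print(prepared[0])  # The weather is lovely today.<|im_end|>\n
```

A subclass like `ZembedTransformer` could be wired into a `SentenceTransformer` via its `modules=` argument, though the exact setup depends on the model's module configuration.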