Tags: Text Classification · Transformers · ONNX · Safetensors · Transformers.js · English · bert · Intel · Eval Results (legacy) · text-embeddings-inference
Instructions for using Intel/polite-guard with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Transformers
How to use Intel/polite-guard with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Intel/polite-guard")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Intel/polite-guard")
model = AutoModelForSequenceClassification.from_pretrained("Intel/polite-guard")
```
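For a quick sanity check, the pipeline above can be called directly on a string. This is an illustrative sketch: the input sentence is made up, and the exact label names depend on the model's config, so consult the model card for the actual politeness categories.

```python
# Illustrative sketch: the sentence is made up, and the label names depend on
# the model's config (politeness categories such as "polite" are assumptions).
result = pipe("Thank you so much for your help!")
print(result)
# e.g. [{'label': 'polite', 'score': 0.98}]
```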
  - Transformers.js

How to use Intel/polite-guard with Transformers.js:
```js
// npm i @huggingface/transformers
import { pipeline } from '@huggingface/transformers';

// Allocate pipeline
const pipe = await pipeline('text-classification', 'Intel/polite-guard');
```
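As with the Python pipeline, the Transformers.js pipeline is directly callable. The sentence and label below are illustrative assumptions, not output from the model:

```js
// Illustrative sketch: classify one sentence; the label names depend on the
// model's config (a politeness category such as 'polite' is assumed here).
const result = await pipe('Thanks so much for your patience!');
console.log(result);
// e.g. [{ label: 'polite', score: 0.97 }]
```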
- Inference
- Notebooks
  - Google Colab
  - Kaggle
Add ONNX file of this model (#2)
by ek-id · opened
Beep boop, I am the ONNX export bot 🤖🏎️. On behalf of ek-id, I would like to add the model converted to ONNX to this repository.
What is ONNX? It stands for "Open Neural Network Exchange", and is the most commonly used open standard for machine learning interoperability. You can find out more at onnx.ai!
The exported ONNX model can then be consumed by various backends such as TensorRT or TVM, or simply be used in a few lines with 🤗 Optimum through ONNX Runtime; check out how here!
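As a minimal sketch of that Optimum path, assuming the ONNX weights added by this PR are present in the repository, loading the model through ONNX Runtime looks roughly like this:

```python
# Minimal sketch, assuming the ONNX export from this PR is in the repo.
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model = ORTModelForSequenceClassification.from_pretrained("Intel/polite-guard")
tokenizer = AutoTokenizer.from_pretrained("Intel/polite-guard")

# The ONNX Runtime model drops into the usual Transformers pipeline API.
onnx_pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(onnx_pipe("Could you please review my pull request?"))
```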
ek-id changed pull request title from Adding ONNX file of this model to Add ONNX file of this model
ek-id changed pull request status to merged