## Introduction
This repository hosts the OpenAI/privacy-filter model for the React Native ExecuTorch library. It includes a quantized version in the `.pte` format, ready for use in the ExecuTorch runtime.
If you'd like to run these models in your own ExecuTorch runtime, refer to the official documentation for setup instructions.
## Compatibility
If you intend to use this model outside of React Native ExecuTorch, make sure your runtime is compatible with the ExecuTorch version used to export the `.pte` files. For more details, see the compatibility note in the ExecuTorch GitHub repository. If you work with React Native ExecuTorch, the constants exported by the library guarantee compatibility with the runtime used under the hood.
## Repository Structure
The repository is organized as follows:
- The `.pte` file should be passed to the `modelSource` parameter.
- The tokenizer files are available in the repo root: `tokenizer.json` and `tokenizer_config.json` should be passed to `tokenizerSource` and `tokenizerConfigSource`, respectively.
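As a rough sketch of how these files map onto the three parameters above, the snippet below builds the source URLs from this repository. Only the parameter names (`modelSource`, `tokenizerSource`, `tokenizerConfigSource`) and the tokenizer file names come from this README; the `.pte` file name and the URL-building helper are placeholders, not part of the library's API — refer to the React Native ExecuTorch documentation for actual usage.

```typescript
// Hypothetical helper mapping this repo's files to the parameters named above.
interface ModelSources {
  modelSource: string;          // the quantized .pte model
  tokenizerSource: string;      // tokenizer.json from the repo root
  tokenizerConfigSource: string; // tokenizer_config.json from the repo root
}

const HF_BASE =
  'https://huggingface.co/software-mansion/react-native-executorch-privacy-filter-openai/resolve/main';

function buildSources(pteFileName: string): ModelSources {
  return {
    modelSource: `${HF_BASE}/${pteFileName}`,
    tokenizerSource: `${HF_BASE}/tokenizer.json`,
    tokenizerConfigSource: `${HF_BASE}/tokenizer_config.json`,
  };
}

// Example with a placeholder file name; use the actual .pte file from this repo.
const sources = buildSources('model.pte');
```

These URLs (or bundled local assets) would then be handed to the library's loading API, which resolves them against the bundled ExecuTorch runtime.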
## Model tree for software-mansion/react-native-executorch-privacy-filter-openai

Base model: openai/privacy-filter