Instructions for using isikz/phosphorylation_binaryclassification_esm1b with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Transformers
How to use isikz/phosphorylation_binaryclassification_esm1b with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="isikz/phosphorylation_binaryclassification_esm1b")
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("isikz/phosphorylation_binaryclassification_esm1b")
model = AutoModelForSequenceClassification.from_pretrained("isikz/phosphorylation_binaryclassification_esm1b")
```

- Notebooks
  - Google Colab
  - Kaggle
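Once the pipeline is loaded, inference amounts to passing a protein sequence string to it. Phosphorylation-site classifiers are commonly applied to a fixed-length window centered on a candidate serine, threonine, or tyrosine residue; the 21-residue window, the `X` padding character, and the toy sequence below are illustrative assumptions, not documented properties of this model, so check the model card for the exact input format it expects.

```python
# Sketch: extract a fixed-length window around a candidate phosphosite.
# The 21-residue window and "X" terminal padding are assumptions for
# illustration; verify the true input format against the model card.

def site_window(sequence: str, pos: int, flank: int = 10, pad: str = "X") -> str:
    """Return a (2*flank + 1)-residue window centered on sequence[pos]."""
    left = sequence[max(0, pos - flank):pos]
    right = sequence[pos + 1:pos + 1 + flank]
    # Pad at the termini so every window has the same length.
    return pad * (flank - len(left)) + left + sequence[pos] + right + pad * (flank - len(right))

seq = "MKKLSSTEYRVAPLDNSTRKQ"  # toy sequence, not from the model card
window = site_window(seq, seq.index("S"))
print(window, len(window))  # every window is 21 residues long

# Feeding such windows to the loaded pipeline would then look like:
# pipe = pipeline("text-classification", model="isikz/phosphorylation_binaryclassification_esm1b")
# pipe(window)
```

The pipeline call itself is left as a comment because it triggers a multi-gigabyte model download; the windowing helper runs standalone.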
- Xet hash: 0cf7e7c1121757e793deaf7db61fed811ab4dec26017fb23d0008cd6e09b6739
- Size of remote file: 2.61 GB
- SHA256: 244625ca4e28adb228da5601a580760598181f2544e87fbd52a69948f47b0a91
Xet stores large files efficiently inside Git by splitting them into unique chunks, which accelerates uploads and downloads.