How to use cemde/Domain-Certification-MedQA-Guide-Base with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="cemde/Domain-Certification-MedQA-Guide-Base")
out = pipe("Once upon a time,", max_new_tokens=64)
print(out[0]["generated_text"])
```

```python
# Or load the tokenizer and model directly
# (AutoModelForCausalLM, rather than AutoModel, so the LM head needed for generation is loaded)
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("cemde/Domain-Certification-MedQA-Guide-Base")
model = AutoModelForCausalLM.from_pretrained("cemde/Domain-Certification-MedQA-Guide-Base")
```

How to use cemde/Domain-Certification-MedQA-Guide-Base with vLLM:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "cemde/Domain-Certification-MedQA-Guide-Base"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "cemde/Domain-Certification-MedQA-Guide-Base",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
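The same OpenAI-compatible endpoint can be called from Python using only the standard library. This is a minimal sketch mirroring the curl call above; the helper names (`build_completion_request`, `extract_text`, `query_server`) are our own, not part of any library.

```python
import json
import urllib.request


def build_completion_request(model: str, prompt: str,
                             max_tokens: int = 512,
                             temperature: float = 0.5) -> dict:
    """Build the JSON body for a POST to /v1/completions."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }


def extract_text(response: dict) -> str:
    """Pull the generated text out of an OpenAI-style completion response."""
    return response["choices"][0]["text"]


def query_server(url: str, body: dict) -> str:
    """Send the request to a running server and return the completion text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_text(json.load(resp))


# With a vLLM server running locally, e.g.:
#   body = build_completion_request(
#       "cemde/Domain-Certification-MedQA-Guide-Base", "Once upon a time,")
#   print(query_server("http://localhost:8000/v1/completions", body))
```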
How to use cemde/Domain-Certification-MedQA-Guide-Base with SGLang:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "cemde/Domain-Certification-MedQA-Guide-Base" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "cemde/Domain-Certification-MedQA-Guide-Base",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Alternatively, run the SGLang server in Docker and call it with the same curl command as above:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "cemde/Domain-Certification-MedQA-Guide-Base" \
  --host 0.0.0.0 \
  --port 30000
```

How to use cemde/Domain-Certification-MedQA-Guide-Base with Docker Model Runner:
```shell
docker model run hf.co/cemde/Domain-Certification-MedQA-Guide-Base
```
Collection: https://huggingface.co/collections/cemde/domain-certification-67ba4fb663f8d1348c3c2263
Certify your Large Language Model (LLM)!
With the code in this repository, you can reproduce the workflows from our ICLR 2025 paper and achieve Domain Certification using our VALID algorithm.
We provide the guide models for our Medical Question Answering experiments.
| Model | Description |
|---|---|
| cemde/Domain-Certification-MedQA-Guide-Base | This is the base model trained on the ground-truth responses. |
| cemde/Domain-Certification-MedQA-Guide-Finetuned | This is the model trained on responses from Llama-3-8B. |
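As a toy illustration of how a guide model can gate a target model's outputs, the sketch below implements likelihood-ratio rejection sampling over a small discrete outcome set. It is in the spirit of VALID but is not the paper's exact procedure: the threshold `k`, the retry budget `T`, and the toy distributions are illustrative assumptions; see the paper for the actual algorithm and its certified bound.

```python
import math
import random


def valid_sample(p_target: dict, p_guide: dict, k: float, T: int,
                 rng: random.Random):
    """Toy likelihood-ratio rejection (illustrative, not the paper's algorithm).

    Sample from the target distribution; accept a sample only if the guide
    model does not find it more than k log-units less likely than the target
    does. Abstain (return None) after T rejections.
    """
    outcomes = list(p_target)
    weights = list(p_target.values())
    for _ in range(T):
        y = rng.choices(outcomes, weights=weights)[0]
        # Large ratio => the guide considers y out-of-domain => reject.
        log_ratio = math.log(p_target[y]) - math.log(p_guide.get(y, 1e-300))
        if log_ratio <= k:
            return y  # accepted: the guide agrees y is plausible in-domain
    return None  # abstain rather than risk an out-of-domain response
```

When target and guide agree, samples pass; when the guide assigns negligible probability to an outcome the target favors, that outcome is rejected and the sampler eventually abstains.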
```bibtex
@inproceedings{emde2025shh,
  title     = {Shh, don't say that! Domain Certification in {LLM}s},
  author    = {Cornelius Emde and Alasdair Paren and Preetham Arvind and Maxime Guillaume Kayser and Tom Rainforth and Bernard Ghanem and Thomas Lukasiewicz and Philip Torr and Adel Bibi},
  booktitle = {The Thirteenth International Conference on Learning Representations},
  year      = {2025},
  url       = {https://arxiv.org/abs/2502.19320}
}
```