Introduction
This model evaluates the quality of a candidate patent claim by comparing it against the gold (reference) claim.
Example Usage
from transformers import AutoModel, AutoTokenizer
import torch

# Use a GPU if one is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# The model ships with custom scoring code, so trust_remote_code must be enabled
tokenizer = AutoTokenizer.from_pretrained("lj408/PatClaimEval-Quality", trust_remote_code=True)
model = AutoModel.from_pretrained("lj408/PatClaimEval-Quality", trust_remote_code=True).to(device)

gold_claim = "1. A computer-implemented method comprising: identifying a primary code segment; ..."
candidate_claim = "1. A computer-implemented method for managing logger source code segments in a source code development platform, ..."

# Score the candidate claim against the gold claim
res = model.score_pair(gold_claim, candidate_claim, tokenizer, device)
print(res)
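When several candidate claims are generated for the same gold claim, the pairwise scores can be collected and ranked to pick the best candidate. A minimal sketch is shown below; the numeric scores are placeholders standing in for values that would come from `model.score_pair(...)`, since the exact return format depends on the model's remote code.

```python
# Rank candidate claims by their quality score against one gold claim.
# Scores here are hypothetical stand-ins for model.score_pair(...) outputs
# (assumed: a single number per pair, higher = better).

def rank_candidates(scores):
    """Return candidate IDs sorted from highest to lowest score."""
    return sorted(scores, key=scores.get, reverse=True)

candidate_scores = {"candidate_a": 0.82, "candidate_b": 0.67, "candidate_c": 0.91}
ranking = rank_candidates(candidate_scores)
print(ranking)  # → ['candidate_c', 'candidate_a', 'candidate_b']
```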
Note that this evaluation method can serve as a reference, but it may not be accurate in every case; for more precise results, users should rely on evaluations by patent professionals.
Citation
If you use this model or code, please cite our paper:
@article{jiang2025towards,
  title={Towards Better Evaluation for Generated Patent Claims},
  author={Jiang, Lekang and Scherz, Pascal A and Goetz, Stephan},
  journal={arXiv preprint arXiv:2505.11095},
  year={2025}
}