nielsr (HF Staff) committed · Commit a71ab71 · verified · 1 parent: 7fbaa71

Add `library_name: transformers` to metadata


This PR adds the `library_name: transformers` metadata tag to the model card.

The `Quickstart` section clearly demonstrates compatibility with the `transformers` library: it imports `AutoTokenizer` and `AutoModelForCausalLM` from `transformers`, and `config.json` defines `auto_map` entries for `transformers`-compatible classes such as `Qwen3ForCausalLM`.
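For reference, the load pattern the `Quickstart` relies on looks roughly like the sketch below. The repo id is a placeholder (substitute the actual ReFusion repository), and `trust_remote_code=True` is required so `transformers` can resolve the `auto_map` entries in `config.json` to the repo's custom modeling code:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "your-org/ReFusion"  # placeholder repo id, not the real one

# trust_remote_code=True lets transformers follow the auto_map entries in
# config.json to the repo's custom Qwen3ForCausalLM-compatible classes.
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path, trust_remote_code=True, torch_dtype=torch.bfloat16
).eval()
```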

Adding this tag enables the automated "how to use" widget on the Hugging Face Hub, providing users with a quick and convenient way to interact with the model.
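Once merged, the tag's effect is visible not only in the widget but also via the `huggingface_hub` client; a minimal check, again with a placeholder repo id:

```python
from huggingface_hub import model_info

info = model_info("your-org/ReFusion")  # placeholder repo id
print(info.library_name)  # expected: "transformers" after this PR
print(info.pipeline_tag)  # expected: "text-generation"
```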

Files changed (1):

1. README.md (+12, -3)
README.md:

```diff
@@ -1,11 +1,12 @@
 ---
 license: apache-2.0
+pipeline_tag: text-generation
 tags:
 - dllm
 - diffusion
 - llm
 - text_generation
-pipeline_tag: text-generation
+library_name: transformers
 ---
 
 # ReFusion
@@ -91,7 +92,7 @@ def generate_refusion(model, tokenizer, prompt, gen_length=128, temperature=0.,
 cur_gen_pos_ids = gen_pos_ids[:, serial_num_block*block_length:(serial_num_block+1)*block_length] # (batch_size, block_length)
 
 cur_gen_blocks_x = cur_gen_x.reshape(batch_size, -1, cur_slot_size)
-cur_gen_blocks_pos_ids = cur_gen_pos_ids.reshape(batch_size, -1, cur_slot_size)
+cur_gen_blocks_pos_ids = cur_gen_blocks_pos_ids.reshape(batch_size, -1, cur_slot_size)
 
 # slot level generation
 while cur_gen_blocks_x.numel() > 0:
@@ -416,7 +417,15 @@ def main():
 model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, torch_dtype=torch.bfloat16).to(device).eval()
 tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
 
-prompt = "You are an expert Python programmer. Your task is to write a single Python function to solve the problem described below, and here is your task: Write a function to sum all amicable numbers from 1 to a specified number.\n\nDirectly after the '[BEGIN]' marker, you must write only the Python code for the function. Do not provide any explanations, comments, or introductory text. The function must include the 'def' line, its arguments, the function body, and a 'return' statement. Your code should pass these tests:\n\nassert amicable_numbers_sum(999)==504\nassert amicable_numbers_sum(9999)==31626\nassert amicable_numbers_sum(99)==0\n[BEGIN]\n"
+prompt = "You are an expert Python programmer. Your task is to write a single Python function to solve the problem described below, and here is your task: Write a function to sum all amicable numbers from 1 to a specified number.
+
+Directly after the '[BEGIN]' marker, you must write only the Python code for the function. Do not provide any explanations, comments, or introductory text. The function must include the 'def' line, its arguments, the function body, and a 'return' statement. Your code should pass these tests:
+
+assert amicable_numbers_sum(999)==504
+assert amicable_numbers_sum(9999)==31626
+assert amicable_numbers_sum(99)==0
+[BEGIN]
+"
 
 m = [{"role": "user", "content": prompt}, ]
 prompt = tokenizer.apply_chat_template(m, add_generation_prompt=True, tokenize=False, enable_thinking=True)
```
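For orientation, the last hunk belongs to a quickstart flow that, stitched together from the hunk contents alone, reads roughly as below. How `generate_refusion`'s return value should be handled is not shown in this diff, so that part is an assumption:

```python
# From the hunks: wrap the prompt as a chat message and render the template.
m = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    m, add_generation_prompt=True, tokenize=False, enable_thinking=True
)

# Hand off to the README's slot-level generation loop, using the signature
# from the hunk header (defaults gen_length=128, temperature=0. assumed).
# Whether it returns token ids or decoded text is not shown in this diff.
output = generate_refusion(model, tokenizer, prompt)
```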