ranarag commited on
Commit
b29e8c3
·
verified ·
1 Parent(s): 322e932

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -11,7 +11,7 @@ tags:
 **Model Summary:**
 
 
-Granite-3.3-8B-Base is a decoder-only language model with a 128K token context window. It improves upon Granite-3.1-8B-Base by adding support for Fill-in-the-Middle (FIM) using specialized tokens, enabling the model to generate content conditioned on both prefix and suffix. This makes it well-suited for tasks like code completion tasks.
+Granite-3.3-8B-Base is a decoder-only language model with a 128K token context window. It improves upon Granite-3.1-8B-Base by adding support for Fill-in-the-Middle (FIM) using specialized tokens, enabling the model to generate content conditioned on both prefix and suffix. This makes it well-suited for code completion tasks.
 
 
 
@@ -42,7 +42,7 @@ Then, copy the code snippet below to run the example.
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 device = "auto"
-model_path = "ibm-granite/granite-3.3-8B-base"
+model_path = "ibm-granite/granite-3.3-8b-base"
 tokenizer = AutoTokenizer.from_pretrained(model_path)
 # drop device_map if running on CPU
 model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
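As background on the Fill-in-the-Middle usage the README describes (not part of this commit): a FIM prompt interleaves the known prefix and suffix with sentinel tokens so the model generates the missing middle. The sketch below only assembles such a prompt string; the sentinel token names are hypothetical placeholders, not confirmed Granite vocabulary, so check the model's tokenizer special tokens before using them.

```python
# Sketch: assembling a FIM prompt from a prefix and suffix.
# The sentinel strings below are HYPOTHETICAL placeholders, not confirmed
# Granite-3.3 special tokens; look up the real ones via the tokenizer.
FIM_PREFIX = "<fim_prefix>"  # assumption
FIM_SUFFIX = "<fim_suffix>"  # assumption
FIM_MIDDLE = "<fim_middle>"  # assumption

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Condition generation on both sides; the model fills in the middle
    after the final sentinel."""
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}"

# Example: ask the model to complete a function body given its call site.
prompt = build_fim_prompt(
    "def add(a, b):\n    return ",
    "\n\nprint(add(2, 3))",
)
print(prompt)
```

The assembled string would then be tokenized and passed to the loaded model's `generate` method like any other prompt.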