
Commit 9302421

Update README.md
1 parent: 026cab4

File tree

1 file changed: +4 −8 lines


README.md (+4 −8)
````diff
@@ -217,14 +217,10 @@ Here is the example of how to use IPEX optimized model to generate texts.
 model_id = "gpt2"
-model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
+model = IPEXModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, export=True)
 
-tokenizer = AutoTokenizer.from_pretrained("gpt2")
-input_sentence = ["Answer the following yes/no question by reasoning step-by-step please. Can you write a whole Haiku in a single tweet?"]
-model_inputs = tokenizer(input_sentence, return_tensors="pt")
-generation_kwargs = dict(max_new_tokens=32, do_sample=False, num_beams=4, num_beam_groups=1, no_repeat_ngram_size=2, use_cache=True)
-generated_ids = model.generate(**model_inputs, **generation_kwargs)
-output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
-print(output)
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
+results = pipe("He's a dreadful magician and")
 
 ```
 
 For more details, please refer to the [documentation](https://intel.github.io/intel-extension-for-pytorch/#introduction).
````
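For reference, the updated example reads end to end roughly as below. This is a minimal sketch: the import lines are not shown in the hunk and are assumptions here (`IPEXModelForCausalLM` is expected to come from `optimum.intel`, with `AutoTokenizer` and `pipeline` from `transformers`).

```python
# Minimal sketch of the post-change README example.
# Assumption: the diff hunk does not show imports; these paths are inferred.
import torch
from transformers import AutoTokenizer, pipeline
from optimum.intel import IPEXModelForCausalLM  # assumed import path

model_id = "gpt2"
# export=True (from the diff) requests the IPEX-optimized export at load time
model = IPEXModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# A standard transformers pipeline accepts the optimized model object directly
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
results = pipe("He's a dreadful magician and")
print(results[0]["generated_text"])
```

The net effect of the change is that the pipeline handles tokenization, generation, and decoding internally, which is what lets the example drop the explicit `generate`/`batch_decode` bookkeeping.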
