Commit

Fix code example in quicktour.md (#1181)

merveenoyan authored Nov 27, 2023
1 parent b4faffe · commit e35d46d
Showing 1 changed file with 7 additions and 6 deletions.

docs/source/quicktour.md
@@ -89,23 +89,24 @@ This only saves the incremental 🤗 PEFT weights that were trained, meaning it
Easily load your model for inference using the [`~transformers.PreTrainedModel.from_pretrained`] function:

```diff
- from transformers import AutoModelForSeq2SeqLM
+ from transformers import AutoModelForCausalLM, AutoTokenizer
  from peft import PeftModel, PeftConfig

- peft_model_id = "smangrul/twitter_complaints_bigscience_T0_3B_LORA_SEQ_2_SEQ_LM"
+ peft_model_id = "merve/Mistral-7B-Instruct-v0.2"
  config = PeftConfig.from_pretrained(peft_model_id)
- model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
+ model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
  model = PeftModel.from_pretrained(model, peft_model_id)
  tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

  model = model.to(device)
  model.eval()
- inputs = tokenizer("Tweet text : @HondaCustSvc Your customer service has been horrible during the recall process. I will never purchase a Honda again. Label :", return_tensors="pt")
+ inputs = tokenizer("Tell me the recipe for chocolate chip cookie", return_tensors="pt")

  with torch.no_grad():
      outputs = model.generate(input_ids=inputs["input_ids"].to("cuda"), max_new_tokens=10)
  print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0])
- 'complaint'
+ 'Tell me the recipe for chocolate chip cookie dough.
+ 1. Preheat oven'
```

## Easy loading with Auto classes
Expand Down Expand Up @@ -146,4 +147,4 @@ peft_model_id = "smangrul/openai-whisper-large-v2-LORA-colab"

Now that you've seen how to train a model with one of the 🤗 PEFT methods, we encourage you to try out some of the other methods like prompt tuning. The steps are very similar to the ones shown in this quickstart: prepare a [`PeftConfig`] for a 🤗 PEFT method, and use the `get_peft_model` function to create a [`PeftModel`] from the configuration and base model. Then you can train it however you like!

Feel free to also take a look at the task guides if you're interested in training a model with a 🤗 PEFT method for a specific task such as semantic segmentation, multilingual automatic speech recognition, DreamBooth, and token classification.
