Removing instances of llama2 7b model in examples
fbnav committed Apr 16, 2024
1 parent dab0ae8 commit 9eaf99d
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions README.md
````diff
@@ -67,7 +67,7 @@ All models support sequence length up to 8192 tokens, but we pre-allocate the ca
 
 These models are not finetuned for chat or Q&A. They should be prompted so that the expected answer is the natural continuation of the prompt.
 
-See `example_text_completion.py` for some examples. To illustrate, see the command below to run it with the llama-2-7b model (`nproc_per_node` needs to be set to the `MP` value):
+See `example_text_completion.py` for some examples. To illustrate, see the command below to run it with the llama-3-8b model (`nproc_per_node` needs to be set to the `MP` value):
 
 ```
 torchrun --nproc_per_node 1 example_text_completion.py \
````
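
The diff viewer truncates the rest of this `torchrun` invocation. For orientation, a complete command likely resembles the sketch below; the checkpoint directory, tokenizer path, and flag values are assumptions, not contents of this commit. Per the README text above, `--nproc_per_node` must match the model's `MP` value (1 for the 8B model).

```
# Sketch only: ckpt_dir, tokenizer_path, and flag values are assumed, not from this commit
torchrun --nproc_per_node 1 example_text_completion.py \
    --ckpt_dir llama-3-8b/ \
    --tokenizer_path tokenizer.model \
    --max_seq_len 128 --max_batch_size 4
```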
````diff
@@ -83,7 +83,7 @@ needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` t
 You can also deploy additional classifiers for filtering out inputs and outputs that are deemed unsafe. See the llama-recipes repo for [an example](https://github.com/facebookresearch/llama-recipes/blob/main/examples/inference.py) of how to add a safety checker to the inputs and outputs of your inference code.
-Examples using llama-2-7b-chat:
+Examples using llama-3-8b-chat:
 ```
 torchrun --nproc_per_node 1 example_chat_completion.py \
````
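
The chat command is likewise cut off by the diff viewer. A plausible complete invocation, again with an assumed checkpoint directory, tokenizer path, and flag values:

```
# Sketch only: ckpt_dir, tokenizer_path, and flag values are assumed, not from this commit
torchrun --nproc_per_node 1 example_chat_completion.py \
    --ckpt_dir llama-3-8b-chat/ \
    --tokenizer_path tokenizer.model \
    --max_seq_len 512 --max_batch_size 6
```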
