LLaMA is a Large Language Model developed by Meta AI.
It was trained on more tokens than previous models. As a result, the smallest version, with 7 billion parameters, performs comparably to the 175-billion-parameter GPT-3.
This guide covers usage through the official transformers implementation. For 4-bit mode, head over to GPTQ models (4 bit mode).
⚠️ The tokenizers for the Torrent source above and also for many LLaMA fine-tunes available on Hugging Face may be outdated, so I recommend downloading the following universal LLaMA tokenizer:
```
python download-model.py oobabooga/llama-tokenizer
```
Once downloaded, it will be automatically applied to every LlamaForCausalLM model that you try to load.
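If you want to inspect the result, the same tokenizer can be loaded directly with transformers. This is a minimal sketch of loading it by its Hugging Face repo id, not the web UI's internal code path; it assumes transformers (with sentencepiece) is already installed:

```python
from transformers import AutoTokenizer

# Downloads the tokenizer files on first use, then loads them from cache.
tokenizer = AutoTokenizer.from_pretrained("oobabooga/llama-tokenizer")
print(tokenizer.tokenize("Hello, LLaMA!"))
```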
To convert the original weights yourself instead, first install the protobuf library:

```
pip install protobuf==3.20.1
```
Then use the conversion script on the model in .pth format that you, a fellow academic, downloaded using Meta's official link:

```
python convert_llama_weights_to_hf.py --input_dir /path/to/LLaMA --model_size 7B --output_dir /tmp/outputs/llama-7b
```
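To check that the conversion succeeded, the output folder can be opened with transformers. A minimal sketch, assuming a transformers version with LLaMA support and the --output_dir path from the command above:

```python
from transformers import AutoConfig, LlamaTokenizer

# Path reused from --output_dir in the conversion command above.
config = AutoConfig.from_pretrained("/tmp/outputs/llama-7b")
print(config.model_type)     # prints "llama" for a successful conversion
tokenizer = LlamaTokenizer.from_pretrained("/tmp/outputs/llama-7b")
print(tokenizer.vocab_size)  # 32000 for the original LLaMA vocabulary
```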
Finally, move the llama-7b folder inside your text-generation-webui/models folder and start the web UI:

```
python server.py --model llama-7b
```
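If you want to confirm that the weights generate text outside the web UI as well, they can be loaded with plain transformers. A minimal sketch, assuming a recent transformers, accelerate installed for device_map="auto", and enough memory for the 7B model:

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

path = "models/llama-7b"  # the folder moved into models/ in the previous step
tokenizer = LlamaTokenizer.from_pretrained(path)
model = LlamaForCausalLM.from_pretrained(path, device_map="auto")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```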