This is a ROS 1 wrapper for llama.cpp, ported from the original llama_ros package. This package supports only Llama 2. If you want to use other LLMs, please add launch files for them.
- ROS1 Noetic
Run on GPU:

```bash
roslaunch llama_bringup llama2.launch
```
Run on CPU:

```bash
roslaunch llama_bringup llama2.launch n_gpu_layers:=0
```
Send a prompt:

```bash
roslaunch llama_ros llama_client_node.launch prompt:=<YOUR PROMPT>
```
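If you prefer to call the generation action from your own node instead of the client launch file, a minimal actionlib client might look like the sketch below. The action name `/llama/generate_response` and the `GenerateResponseAction`/`GenerateResponseGoal` types are assumptions based on the upstream llama_ros naming; check the message definitions in this repository for the actual names.

```python
#!/usr/bin/env python3
import rospy
import actionlib

# NOTE: these message types are assumptions; see the llama_msgs
# package in this repository for the real action definition.
from llama_msgs.msg import GenerateResponseAction, GenerateResponseGoal

rospy.init_node("llama_client_example")

# The action server name is also an assumption; check the running
# llama node (e.g. with `rostopic list`) for the actual namespace.
client = actionlib.SimpleActionClient("/llama/generate_response",
                                      GenerateResponseAction)
client.wait_for_server()

goal = GenerateResponseGoal()
goal.prompt = "What is ROS?"
client.send_goal(goal)
client.wait_for_result()
print(client.get_result())
```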
First, you have to set up an environment for Llama 2. Then, please follow the installation procedure below. Please modify the workspace path to match your environment.
```bash
mkdir -p workspace/src
cd workspace/src
git clone --recursive [email protected]:Shunmo17/llama_ros.git
catkin build
```
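After the build finishes, source the workspace so the packages are on your ROS package path (this assumes the default catkin_tools devel-space layout, with the command run from `workspace/src`):

```bash
source ../devel/setup.bash
```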
Llama 2 models fine-tuned for chat are available here: 7B, 13B, 70B. After downloading a model, please set the correct model path in llama2.launch.
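If llama2.launch exposes the model path as a launch argument (the argument name `model` and the file name below are assumptions for illustration; check the launch file for the actual argument), you can also override it from the command line instead of editing the file:

```bash
roslaunch llama_bringup llama2.launch model:=/path/to/llama-2-7b-chat.Q4_K_M.gguf
```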
- `use_default_sampling_config`: If this is true, the sampling config in the requested goal is ignored. This parameter exists to avoid using uninitialized parameters, because default values cannot be set for action message fields in ROS 1.
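For example, to use the sampling config from the goal instead of the defaults, you could disable the parameter at launch time (assuming llama2.launch exposes it as a launch argument, which is not confirmed here):

```bash
roslaunch llama_bringup llama2.launch use_default_sampling_config:=false
```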