Important
RAI is in its beta phase now, so expect some friction. Early contributors are most welcome!
RAI is developing fast towards a glorious release in time for ROSCon 2024.
The RAI framework aims to:
- Supply a general multi-agent system, bringing Gen AI features to your robots.
- Add human interactivity, flexibility in problem-solving, and out-of-the-box AI features to existing robot stacks.
- Provide first-class support for multi-modalities, enabling interaction with various data types.
- Incorporate an advanced database for persistent agent memory.
- Include ROS 2-oriented tooling for agents.
- Support a comprehensive task/mission orchestrator.
- Voice interaction (both ways).
- Customizable robot identity, including constitution (ethical code) and documentation (understanding own capabilities).
- Access to camera sensors ("What do you see?"), utilizing VLMs.
- Reasoning about its own state through ROS logs.
- ROS 2 action calling and other interfaces. The Agent can dynamically list interfaces, check their message types, and publish (see the sketch after this list).
- Integration with LangChain to abstract vendors and access convenient AI tools.
- Translation of natural-language tasks into nav2 goals.
- NoMaD integration.
- OpenVLA integration.
- Improved Human-Robot Interaction with voice and text.
- SDK for RAI developers.
- Support for at least 3 different AI vendors.
- Additional tooling such as GroundingDino.
- UI for configuration to select features and tools relevant for your deployment.
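As an illustration of the interface-discovery item above, the capability boils down to standard ROS 2 graph introspection. The snippet below is a minimal sketch, not RAI's actual tooling; it assumes a sourced ROS 2 environment with rclpy available, and the node name is purely illustrative.

```python
# Minimal sketch (not RAI's implementation) of runtime interface discovery,
# assuming rclpy is importable in a sourced ROS 2 environment.
import rclpy
from rclpy.node import Node


def list_interfaces() -> list[tuple[str, list[str]]]:
    """Return (topic_name, [message_types]) pairs discovered on the ROS graph."""
    rclpy.init()
    node = Node("interface_lister")  # illustrative node name
    try:
        # Discovery may take a moment on a freshly started network.
        return node.get_topic_names_and_types()
    finally:
        node.destroy_node()
        rclpy.shutdown()


if __name__ == "__main__":
    for name, types in list_interfaces():
        print(f"{name}: {', '.join(types)}")
```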
Currently, RAI supports Ubuntu 24.04 with ROS 2 Jazzy and Python 3.12. The release will also support Ubuntu 22.04 with ROS 2 Humble (it should work or be close to working now).
Install Poetry (1.8+) by following the official docs, or with the following one-liner:

```bash
curl -sSL https://install.python-poetry.org | python3 -
```

Then clone the repository and build the workspace:
```bash
git clone https://github.com/RobotecAI/rai.git
cd rai
poetry install                                       # Python dependencies (Poetry virtualenv)
rosdep install --from-paths src --ignore-src -r -y   # ROS 2 package dependencies
colcon build --symlink-install                       # build the ROS 2 workspace
source ./setup_shell.sh                              # set up the shell environment for RAI
```
RAI is fully vendor-agnostic; however, the beta development work currently utilizes OpenAI models. Setting the OPENAI_API_KEY environment variable will yield the best results.
If you do not have a key, see how to generate one here.
```bash
export OPENAI_API_KEY=""
```
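As an optional sanity check that the key is picked up, you can run a one-off query through LangChain from the sourced environment. This is a minimal sketch; the langchain-openai package and the model name used here are assumptions for the example, not RAI requirements.

```python
# Optional sanity check that OPENAI_API_KEY is visible to LangChain.
# Assumes langchain-openai is importable; the model name is illustrative.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
reply = llm.invoke("Reply with 'ok' if you can read this.")
print(reply.content)
```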
Congratulations, your installation is now complete! Head to Running examples.
RAI is a sophisticated framework targeting near-general use cases. As of now, we provide the following examples:
- Engage with your ROS 2 network through an intuitive Streamlit chat interface.
- Explore the O3DE Husarion ROSbot XL demo and assign tasks via natural language.
If you are more ambitious:
- Create your own robot description package and unleash it with the rai_whoami node.
- Run Streamlit powered by your custom robot's description package and effortlessly access your robot's documentation, identity, and constitution.
- Implement additional tools via LangChain's @tool decorator and use them in your chat (see the sketch below).
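To illustrate the last point, a LangChain tool is just a decorated Python function with a docstring. The sketch below is illustrative only; the tool name and the hard-coded return value are not part of RAI's API.

```python
# A minimal sketch of a custom LangChain tool; the name and the hard-coded
# return value are illustrative, not part of RAI's API.
from langchain_core.tools import tool


@tool
def get_battery_status() -> str:
    """Return the robot's current battery status."""
    # A real tool would query the robot, e.g. a ROS 2 topic or service.
    return "Battery at 87%, discharging."
```

Once such a tool is registered with the agent, the model can decide to call it whenever the conversation requires that information.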
Chat seamlessly with your setup, retrieve images from cameras, adjust parameters on the fly, and get comprehensive information about your topics.
```bash
streamlit run src/rai_hmi/rai_hmi/streamlit_hmi_node.py
```
Remember to run this command in a sourced shell.
This demo provides a practical way to interact with and control a virtual Husarion ROSbot XL within a simulated environment. Using natural language commands, you can assign tasks to the robot, allowing it to perform a variety of actions.
Given that this is a beta release, consider this demo as an opportunity to explore the framework's capabilities, provide feedback, and contribute. Try different commands, see how the robot responds, and use this experience to understand the potential and limitations of the system.
Follow this guide: husarion-rosbot-xl-demo
| Application | Robot | Description | Link |
|---|---|---|---|
| Mission and obstacle reasoning in orchards | Autonomous tractor | In a beautiful scene of a virtual orchard, RAI goes beyond obstacle detection to analyze the best course of action for a given unexpected situation. | 🌾 demo |
| Manipulation tasks with natural language | Robot arm (Franka Panda) | Complete flexible manipulation tasks thanks to RAI and OpenVLA. | 🦾 demo |
| Quadruped inspection demo | A robot dog (ANYbotics ANYmal) | Perform inspection in a warehouse environment, find and report anomalies. | link TBD |
Please take a look at Q&A.
See our Developer Guide.
You are welcome to contribute to RAI! Please see our Contribution Guide.
RAI will be released on October 15th, right before ROSCon 2024. If you are going to the conference, come join us at the RAI talk on October 23rd.