Note: Due to the Raspberry Pi 4B's limited 2 GB of memory, we could not run SLAM and navigation on the device itself. You can still build a mobile robot that can be controlled from your phone through the web control GUI.
This project integrates multiple sensors and actuators on a mobile robot platform to achieve autonomous navigation. The system runs on a Raspberry Pi 4B (2GB) with Ubuntu 22.04 and ROS 2 Humble. It combines motor control, encoder feedback, LIDAR scanning, SLAM, navigation, camera vision, and line tracking.
Status Overview:
- Completed: Dual Motor Control Node, Encoder Node, Web GUI for remote control
- In Progress: RP‑Lidar Scan Node, SLAM integration, Navigation Stack, Camera & Object Detection, Line Tracking
Table of Contents:
- Hardware Components
- Circuit Connection & Wiring Details
- Software Overview and Nodes
- Creating New Nodes
- To-Do List
- Installation and Launch Instructions
- Dockerization
- License
Hardware Components:
- Raspberry Pi 4B (2GB)
  - Runs Ubuntu 22.04 and ROS 2 Humble.
- TB6612FNG Dual Motor Driver
  - Controls two DC motors via PWM and direction signals.
- 2 × JGA25-370 DC 6V geared motors with encoders and wheels
  - Provide propulsion and motion feedback.
- 7.2V 4500mAh Li-ion battery
  - Powers the motors (via the motor driver's VM and GND terminals).
- 5V 5000mAh power bank
  - Powers the Raspberry Pi via its Type-C connector.
- RP‑Lidar A1
  - Provides 2D LaserScan data on the `/scan` topic for SLAM and obstacle detection.
- Raspberry Pi Camera
  - For vision-based tasks such as object detection and line tracking.
- Extra balance wheel
  - Helps maintain stability.
- XL6015 boost converter
  - Used if voltage adaptation is needed (e.g., when using a 12V battery).
Circuit Connection & Wiring Details:

Motor Driver (TB6612FNG):
- PWMA (left motor PWM): GPIO18 → TB6612FNG PWMA
- AI1 (left motor direction): GPIO23 → TB6612FNG AI1
- AI2 (left motor direction): GPIO24 → TB6612FNG AI2
- PWMB (right motor PWM): GPIO13 → TB6612FNG PWMB
- BI1 (right motor direction): GPIO19 → TB6612FNG BI1
- BI2 (right motor direction): GPIO26 → TB6612FNG BI2
- STBY & VCC: 5V → TB6612FNG (enables the driver and provides logic power)
- GND: common ground
Motors with Encoders (JGA25-370):
- Motor power:
  - Motor power +: connected to TB6612FNG outputs (A01/B01)
  - Motor power -: connected to TB6612FNG outputs (A02/B02)
- Encoder power:
  - Encoder power +: 3V3 from the Raspberry Pi
  - Encoder power -: GND
- Encoder signals:
  - Channel 1 / A (C1): GPIO5 for one encoder, GPIO17 for the other
  - Channel 2 / B (C2): GPIO6 for one encoder, GPIO27 for the other
Power:
- Raspberry Pi: powered via the 5V 5000mAh power bank (Type-C)
- Motors: powered by the 7.2V 4500mAh Li-ion battery connected to TB6612FNG VM and GND

TB6612FNG Pin Summary:
- Inputs (from Raspberry Pi):
  - Left motor: PWMA (GPIO18), AI1 (GPIO23), AI2 (GPIO24)
  - Right motor: PWMB (GPIO13), BI1 (GPIO19), BI2 (GPIO26)
- Power & enable:
  - STBY & VCC: 5V (from Raspberry Pi)
  - VM: 7.2V Li-ion battery
  - GND: common ground
- Outputs (to motors):
  - A01, B01: motor power +
  - A02, B02: motor power -
RP‑Lidar A1:
- Connect via USB (e.g., `/dev/ttyUSB0`) and set the port in its launch file.

Raspberry Pi Camera:
- Connect to the dedicated camera (CSI) interface.

Balance Wheel:
- Mount mechanically to help stabilize the robot.
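For reference, the GPIO assignments above can be collected in one place before any node code is written. A minimal sketch using RPi.GPIO with BCM numbering; the `MOTOR_PINS` / `ENCODER_PINS` names are illustrative, but the pin numbers match the wiring tables:

```python
# Pin map for the wiring above (BCM numbering). Names are illustrative.
import RPi.GPIO as GPIO

MOTOR_PINS = {
    "PWMA": 18, "AI1": 23, "AI2": 24,   # left motor
    "PWMB": 13, "BI1": 19, "BI2": 26,   # right motor
}
ENCODER_PINS = [5, 17, 6, 27]           # C1/C2 channels of both encoders

GPIO.setmode(GPIO.BCM)
for pin in MOTOR_PINS.values():
    GPIO.setup(pin, GPIO.OUT)
for pin in ENCODER_PINS:
    GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)
```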
Software Overview and Nodes:

Dual Motor Control Node
- Status: Complete
- Location: `src/dual_motor/dual_motor/dual_motor_controller.py`
- Description: Subscribes to `/cmd_vel` (Twist messages) and drives the TB6612FNG using PWM signals to control motor speed and direction.
- Key Topics:
  - Subscribed: `/cmd_vel`
- Creation Steps:
  - Create the package: `ros2 pkg create --build-type ament_python dual_motor --dependencies rclpy geometry_msgs std_msgs`
  - Implement the node using RPi.GPIO.
  - Update `package.xml` and `setup.py`.
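As a reference for the implementation step, here is a minimal sketch of such a node. The pin numbers follow the wiring table; the differential mixing, speed scaling, and 1 kHz PWM frequency are illustrative assumptions, not necessarily what `dual_motor_controller.py` does:

```python
#!/usr/bin/env python3
# Minimal /cmd_vel -> TB6612FNG sketch. Pin numbers come from the wiring
# table; the drive mixing and 1 kHz PWM are illustrative assumptions.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist
import RPi.GPIO as GPIO

PWMA, AI1, AI2 = 18, 23, 24   # left motor
PWMB, BI1, BI2 = 13, 19, 26   # right motor

class DualMotorController(Node):
    def __init__(self):
        super().__init__('dual_motor_controller')
        GPIO.setmode(GPIO.BCM)
        for pin in (PWMA, AI1, AI2, PWMB, BI1, BI2):
            GPIO.setup(pin, GPIO.OUT)
        self.left = GPIO.PWM(PWMA, 1000)    # 1 kHz PWM
        self.right = GPIO.PWM(PWMB, 1000)
        self.left.start(0)
        self.right.start(0)
        self.create_subscription(Twist, 'cmd_vel', self.on_cmd_vel, 10)

    def on_cmd_vel(self, msg):
        # Simple differential mix: linear.x +/- angular.z
        self.drive(msg.linear.x - msg.angular.z, AI1, AI2, self.left)
        self.drive(msg.linear.x + msg.angular.z, BI1, BI2, self.right)

    def drive(self, speed, in1, in2, pwm):
        GPIO.output(in1, speed >= 0)        # direction pins
        GPIO.output(in2, speed < 0)
        pwm.ChangeDutyCycle(min(abs(speed) * 100, 100.0))

def main():
    rclpy.init()
    node = DualMotorController()
    try:
        rclpy.spin(node)
    finally:
        GPIO.cleanup()
        rclpy.shutdown()

if __name__ == '__main__':
    main()
```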
Encoder Node
- Status: Complete
- Location: `src/motor_encoder/motor_encoder/dual_encoder_node.py`
- Description: Reads encoder signals via GPIO interrupts and publishes RPM data on `/left_motor_rpm` and `/right_motor_rpm`.
- Key Topics:
  - Published: `/left_motor_rpm`, `/right_motor_rpm`
- Creation Steps:
  - Create the package: `ros2 pkg create --build-type ament_python motor_encoder --dependencies rclpy sensor_msgs std_msgs`
  - Implement the node using RPi.GPIO.
  - Update `package.xml` and `setup.py`.
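A minimal sketch of the pulse-counting approach. `PULSES_PER_REV`, the publish period, and the channel-A pin-to-wheel mapping are assumptions to adjust against the real encoder and gearbox:

```python
#!/usr/bin/env python3
# Minimal pulse-counting encoder sketch. PULSES_PER_REV and the pin-to-wheel
# mapping are illustrative assumptions.
import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32
import RPi.GPIO as GPIO

LEFT_A, RIGHT_A = 5, 17        # encoder channel-A pins (see wiring table)
PULSES_PER_REV = 330           # assumption: depends on encoder + gear ratio
PERIOD = 0.5                   # publish interval in seconds

class DualEncoderNode(Node):
    def __init__(self):
        super().__init__('dual_encoder_node')
        self.counts = {LEFT_A: 0, RIGHT_A: 0}
        GPIO.setmode(GPIO.BCM)
        for pin in self.counts:
            GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)
            GPIO.add_event_detect(pin, GPIO.RISING, callback=self.on_pulse)
        self.left_pub = self.create_publisher(Float32, 'left_motor_rpm', 10)
        self.right_pub = self.create_publisher(Float32, 'right_motor_rpm', 10)
        self.create_timer(PERIOD, self.publish_rpm)

    def on_pulse(self, pin):
        self.counts[pin] += 1

    def publish_rpm(self):
        for pin, pub in ((LEFT_A, self.left_pub), (RIGHT_A, self.right_pub)):
            rpm = (self.counts[pin] / PULSES_PER_REV) * (60.0 / PERIOD)
            self.counts[pin] = 0
            pub.publish(Float32(data=rpm))

def main():
    rclpy.init()
    node = DualEncoderNode()
    try:
        rclpy.spin(node)
    finally:
        GPIO.cleanup()
        rclpy.shutdown()

if __name__ == '__main__':
    main()
```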
Web GUI
- Status: Complete
- Location: `src/my_robot_launch/launch/web_gui/index.html`
- Description: An HTML/JavaScript interface (using roslib.js) for remote control and monitoring. It publishes motor commands to `/cmd_vel` and subscribes to the encoder topics.
- Key Topics:
  - Published: `/cmd_vel`
  - Subscribed: `/left_motor_rpm`, `/right_motor_rpm`
- Creation Steps:
  - Place your `index.html` and `roslib.min.js` files in `launch/web_gui/`.
  - Serve the directory with a simple HTTP server (see Unified Launch File below).
Unified Launch File
- Status: Complete
- Autonomous car nodes: `src/my_robot_launch/launch/autonomous_car_launch.py`
- Top-level launch file: `src/my_robot_launch/launch/launchpr.py`
- Description: Launches all nodes at once, along with the rosbridge server and the web GUI's HTTP server (see the sketch below).
- Creation Steps:
  - Place `web_gui` in the launch folder.
  - Update `package.xml` and `setup.py`.
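A sketch of what such a unified launch file can look like. The executable names and the assumption that `web_gui/` is installed into the package share directory are illustrative; adapt them to the actual `setup.py` entry points:

```python
# Sketch of a unified launch file. Executable names and the web_gui path
# are assumptions based on the package layout described above.
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import ExecuteProcess
from launch_ros.actions import Node

def generate_launch_description():
    web_gui_dir = os.path.join(
        get_package_share_directory('my_robot_launch'), 'launch', 'web_gui')
    return LaunchDescription([
        Node(package='dual_motor', executable='dual_motor_controller'),
        Node(package='motor_encoder', executable='dual_encoder_node'),
        # rosbridge for the web GUI (default port 9090)
        Node(package='rosbridge_server', executable='rosbridge_websocket'),
        # Serve the web GUI on port 8000
        ExecuteProcess(cmd=['python3', '-m', 'http.server', '8000'],
                       cwd=web_gui_dir),
    ])
```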
RP‑Lidar Scan Node
- Status: In Progress
- Location: Typically provided by the `rplidar_ros` package.
- Description: Reads LIDAR data from the RP‑Lidar A1 and publishes LaserScan messages on `/scan`.
- Key Topics:
  - Published: `/scan`
- Creation Steps: Install via apt (`sudo apt install ros-humble-rplidar-ros`) and include it in your autonomous car launch file, as sketched below.
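A sketch of including the lidar driver in a Python launch file. The executable name differs between `rplidar_ros` releases (`rplidar_composition` vs. `rplidar_node`); check with `ros2 pkg executables rplidar_ros`:

```python
# Sketch launch file for the RP-Lidar A1. The executable name is an
# assumption; verify it with `ros2 pkg executables rplidar_ros`.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='rplidar_ros',
            executable='rplidar_composition',  # may be 'rplidar_node'
            parameters=[{'serial_port': '/dev/ttyUSB0',
                         'frame_id': 'laser'}],
        ),
    ])
```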
SLAM Integration
- Status: In Progress
- Location: `src/my_robot_slam/` (using slam_toolbox)
- Description: Uses `async_slam_toolbox_node` (or the sync version) to build a map from LIDAR data and publishes `/map` along with TF transforms.
- Key Topics:
  - Subscribed: `/scan`
  - Published: `/map`
- Creation Steps: Create a configuration file in `config/` and a launch file to start the SLAM node (sketched below). Install via `sudo apt install ros-humble-slam-toolbox`.
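A sketch of a launch file for slam_toolbox. The `slam_params.yaml` file name is hypothetical; it stands for whatever configuration file you create in `config/`:

```python
# Sketch launch file for slam_toolbox. The config file name is hypothetical.
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    params = os.path.join(
        get_package_share_directory('my_robot_slam'),
        'config', 'slam_params.yaml')   # hypothetical file name
    return LaunchDescription([
        Node(
            package='slam_toolbox',
            executable='async_slam_toolbox_node',
            parameters=[params],
        ),
    ])
```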
Navigation Stack
- Status: In Progress
- Location: In a package like `my_robot_nav`, or via the Navigation2 stack.
- Description: Provides autonomous navigation using planners and costmaps.
- Key Topics:
  - Subscribed: `/map`, `/scan`, `/odom`
  - Published: `/cmd_vel`
- Creation Steps: Create navigation parameters (e.g., `nav2_params.yaml`) and a launch file for Nav2, as sketched below.
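A sketch of a launch file that includes the standard Nav2 bringup with a custom parameter file. The `my_robot_nav/config/nav2_params.yaml` location is an assumption:

```python
# Sketch: include Nav2 bringup with a custom parameter file. The params
# path is a hypothetical location inside my_robot_nav.
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource

def generate_launch_description():
    nav2_launch = os.path.join(
        get_package_share_directory('nav2_bringup'),
        'launch', 'navigation_launch.py')
    params = os.path.join(
        get_package_share_directory('my_robot_nav'),
        'config', 'nav2_params.yaml')   # hypothetical location
    return LaunchDescription([
        IncludeLaunchDescription(
            PythonLaunchDescriptionSource(nav2_launch),
            launch_arguments={'params_file': params}.items(),
        ),
    ])
```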
Camera / Object Detection Node
- Status: In Progress
- Location: `src/camera_node/` or `src/object_detection/`
- Description: Captures images from the Raspberry Pi camera (or a USB camera) using OpenCV and publishes them on `/camera/image_raw`. Optionally runs object detection.
- Key Topics:
  - Published: `/camera/image_raw`
- Creation Steps: Create the package (`ros2 pkg create --build-type ament_python camera_node --dependencies rclpy sensor_msgs cv_bridge`) and implement the node with OpenCV and cv_bridge (sketched below).
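A minimal sketch of the capture-and-publish loop. The device index (0) and the 10 Hz timer are illustrative assumptions:

```python
#!/usr/bin/env python3
# Minimal camera publisher sketch. Device index and rate are assumptions.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import cv2

class CameraNode(Node):
    def __init__(self):
        super().__init__('camera_node')
        self.pub = self.create_publisher(Image, 'camera/image_raw', 10)
        self.bridge = CvBridge()
        self.cap = cv2.VideoCapture(0)       # first video device
        self.create_timer(0.1, self.capture)  # ~10 Hz

    def capture(self):
        ok, frame = self.cap.read()
        if ok:
            self.pub.publish(self.bridge.cv2_to_imgmsg(frame, encoding='bgr8'))

def main():
    rclpy.init()
    rclpy.spin(CameraNode())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```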
Line Tracking Node
- Status: In Progress
- Location: `src/line_tracking/line_tracking/line_tracking_node.py`
- Description: Processes camera images to detect and follow a line, publishing velocity commands to `/cmd_vel`.
- Key Topics:
  - Subscribed: `/camera/image_raw`
  - Published: `/cmd_vel`
- Creation Steps: Create the package (`ros2 pkg create --build-type ament_python line_tracking --dependencies rclpy sensor_msgs cv_bridge geometry_msgs`) and implement the node using OpenCV (sketched below).
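A sketch of one common approach: threshold the bottom strip of the image (assuming a dark line on a light floor), find the line's centroid, and steer proportionally to its offset. The threshold, strip height, speed, and gain are all illustrative assumptions:

```python
#!/usr/bin/env python3
# Centroid-based line follower sketch. Threshold, crop region, speed, and
# the proportional gain are illustrative assumptions.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from geometry_msgs.msg import Twist
from cv_bridge import CvBridge
import cv2

class LineTrackingNode(Node):
    def __init__(self):
        super().__init__('line_tracking_node')
        self.bridge = CvBridge()
        self.pub = self.create_publisher(Twist, 'cmd_vel', 10)
        self.create_subscription(Image, 'camera/image_raw', self.on_image, 10)

    def on_image(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        strip = gray[-60:, :]   # bottom strip, where the line is closest
        _, mask = cv2.threshold(strip, 60, 255, cv2.THRESH_BINARY_INV)
        m = cv2.moments(mask)
        cmd = Twist()
        if m['m00'] > 0:
            cx = m['m10'] / m['m00']            # line centroid (pixels)
            error = cx - strip.shape[1] / 2     # offset from image center
            cmd.linear.x = 0.1                  # slow constant forward speed
            cmd.angular.z = -0.005 * error      # proportional steering
        self.pub.publish(cmd)

def main():
    rclpy.init()
    rclpy.spin(LineTrackingNode())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```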
Creating New Nodes:
1. Create a new package:
   `ros2 pkg create --build-type ament_python <package_name> --dependencies rclpy <other_dependencies>`
2. Write the node script:
   Place your Python script (with a proper shebang, e.g., `#!/usr/bin/env python3`) in `src/<package_name>/<package_name>/` and make it executable:
   `chmod +x src/<package_name>/<package_name>/<node_script>.py`
   A minimal template follows below.
3. Update `package.xml` and `setup.py`:
   Ensure all dependencies are declared and the node's entry point is registered.
4. Build the package:
   `colcon build --packages-select <package_name>`
   `source install/setup.bash`
5. (Optional) Create a launch file:
   Use Python launch files to start your node(s).
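A minimal node template for step 2 (the node name is a placeholder):

```python
#!/usr/bin/env python3
# Minimal rclpy node template; 'my_node' is a placeholder name.
import rclpy
from rclpy.node import Node

class MyNode(Node):
    def __init__(self):
        super().__init__('my_node')
        self.get_logger().info('my_node started')

def main():
    rclpy.init()
    rclpy.spin(MyNode())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```

Then register the script in `setup.py` under `entry_points`, e.g. `'console_scripts': ['my_node = <package_name>.my_node:main']`, so that `ros2 run <package_name> my_node` can find it.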
To-Do List:
- Dual Motor Control Node (complete)
- Encoder Node (complete)
- Web GUI (complete)
- RP‑Lidar Scan Node (in progress)
- SLAM Integration (using slam_toolbox) (in progress)
- Navigation Stack (Nav2) (in progress)
- Camera / Object Detection Node (in progress)
- Line Tracking Node (in progress)
Installation and Launch Instructions:

1. Build the workspace. From the workspace root (`autonomous_ROS/`):
   ```bash
   colcon build
   source install/setup.bash
   ```
2. Launch all nodes using the unified launch file in the `my_robot_launch` package:
   ```bash
   ros2 launch my_robot_launch all_nodes_launch.py
   ```
   This will:
   - Launch the autonomous car nodes (motor control, encoder, etc.)
   - Start the rosbridge server (on port 9090)
   - Start the HTTP server (serving the web GUI on port 8000)
3. Access the web GUI. Open a web browser on a device on the same network and navigate to `http://<raspberry_pi_ip>:8000`. For example, if the Pi's IP is `192.168.219.100`, use `http://192.168.219.100:8000`.
4. Remote visualization (optional). Run RViz on another machine (with ROS 2 Humble) to view topics such as `/map` and `/scan`.
Dockerization:

To simplify deployment and ensure a consistent environment, you can containerize the project using Docker. Place the Dockerfile in the root directory (`autonomous_ROS/Dockerfile`).

1. Build the image:
   ```bash
   docker build -t autonomous_ros_project .
   ```
2. Run the container (using host networking for ROS 2 DDS discovery):
   ```bash
   docker run -d \
     --privileged \
     --restart=always \
     --name auto_ros \
     --network=host \
     autonomous_ros_project
   ```
3. Access the web GUI: open a browser and navigate to `http://<raspberry_pi_ip>:8000`.
License:

This project is licensed under the MIT License. Feel free to modify, distribute, or use this code for personal, educational, or commercial purposes, provided proper attribution is given.
For issues or contributions, please contact [email protected]