The primary aim of the project is to build an autonomous robot capable of navigating the race course using predefined waypoints. The project starts with an entry script that ties all components together and consolidates the essential parameters for easy tuning.
Upon establishing a connection with the robot, we use RViz on the local machine to visualize the remote robot. This offloads computation from the car and also lets us verify that the particle filter is working. Instead of running PlannerNode directly, we precompute all necessary plans to further reduce the computational load.
Subsequently, we tune the Line Follower parameters or the plans generated by PlannerNode based on how the robot navigates each trajectory. Once we were ready to race, we launched the ParticleFilter and started the race through our final_race.py script. For each attempt we used a different copy of final_race.py to account for the different speed trials.
To launch the path planner, run the planner.py script. This script precomputes all necessary plans for the robot's navigation.
Execute the final_race.py script to start autonomous path following. This script starts the race and uses the Line Follower parameters to follow the plans generated by the PlannerNode along each trajectory.
The project consists of three main modules: LineFollower, ParticleFilter, and PlannerNode.
The line follower plays a critical role in our system, as it directly determines the trajectory. Using PID control, it keeps the robot on a specified path. Throughout the robot's run, the line follower continuously updates its controls using the precomputed plans and the poses obtained from the particle filter.
In essence, the line follower subscribes to the particle filter to obtain the robot's pose and computes the error between the ideal and actual paths. It then uses PID control to derive a new steering angle from this error, steering the robot onto the ideal path or close to it. Finally, the line follower applies the computed steering angle to drive the robot.
Parameters:
- ki = 1
- kp = 0.05
- kd = 0.3
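The sketch below illustrates the kind of PID steering update described above, using the gains as listed (ki = 1, kp = 0.05, kd = 0.3). The error signal, control period, and steering limits are illustrative assumptions, not the exact interface of our LineFollower.

```python
import numpy as np

# Hedged sketch of a PID steering update for the line follower.
# Gains are taken from the parameter list above; the error input,
# control period dt, and steering clamp are illustrative assumptions.
KI, KP, KD = 1.0, 0.05, 0.3

class PIDSteering:
    def __init__(self, kp=KP, ki=KI, kd=KD):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, cross_track_error, dt):
        """Return a steering angle from the error between the planned
        path and the pose inferred by the particle filter."""
        self.integral += cross_track_error * dt
        derivative = 0.0
        if self.prev_error is not None:
            derivative = (cross_track_error - self.prev_error) / dt
        self.prev_error = cross_track_error
        steer = (self.kp * cross_track_error
                 + self.ki * self.integral
                 + self.kd * derivative)
        # Clamp to a plausible physical steering limit (assumed value).
        return float(np.clip(steer, -0.34, 0.34))
```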
The particle filter consists of four modules: Motion Model, Sensor Model, Resampler, and Particle Filter. It localizes the robot on a given map, estimating the global pose X_t = (x_t, y_t, θ_t), where (x_t, y_t) is the robot's position and θ_t is its heading angle. By combining these four modules, the particle filter computes X_t.
The kinematic model, associated with the motion state and servo state topics, models the noise in our position and controls by sampling from a Gaussian (normal) distribution. The particle filter uses this model to propagate the state of each particle.
Parameters:
- KM_V_NOISE = 0.02
- KM_DELTA_NOISE = 0.1
- KM_X_FIX_NOISE = 0.02
- KM_Y_FIX_NOISE = 0.02
- KM_THETA_FIX_NOISE = 0.01
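A minimal sketch of how the kinematic model can inject this Gaussian noise into the controls and the propagated particle poses, using the noise parameters above. The bicycle-model update and the wheelbase value are assumptions for illustration, not necessarily the exact equations in our code.

```python
import numpy as np

# Hedged sketch of a noisy kinematic (bicycle) motion model update.
# Noise magnitudes come from the parameter list above; variable names
# and the wheelbase value are illustrative assumptions.
KM_V_NOISE = 0.02          # std dev on commanded speed
KM_DELTA_NOISE = 0.1       # std dev on steering angle
KM_X_FIX_NOISE = 0.02      # additive std dev on x
KM_Y_FIX_NOISE = 0.02      # additive std dev on y
KM_THETA_FIX_NOISE = 0.01  # additive std dev on heading

def apply_motion_model(particles, v, delta, dt, wheelbase=0.33):
    """Propagate an N x 3 particle array (x, y, theta) with noisy controls."""
    n = particles.shape[0]
    v_noisy = v + np.random.normal(0.0, KM_V_NOISE, n)
    delta_noisy = delta + np.random.normal(0.0, KM_DELTA_NOISE, n)

    theta = particles[:, 2]
    particles[:, 0] += v_noisy * np.cos(theta) * dt
    particles[:, 1] += v_noisy * np.sin(theta) * dt
    particles[:, 2] += (v_noisy / wheelbase) * np.tan(delta_noisy) * dt

    # Fixed additive pose noise keeps the particle cloud from collapsing.
    particles[:, 0] += np.random.normal(0.0, KM_X_FIX_NOISE, n)
    particles[:, 1] += np.random.normal(0.0, KM_Y_FIX_NOISE, n)
    particles[:, 2] += np.random.normal(0.0, KM_THETA_FIX_NOISE, n)
    return particles
```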
The sensor model uses ray casting from range_libc to compute the expected laser ranges for a given pose. Comparing these expected ranges with the observed laser scan yields the likelihood of that pose, and the particle weights are then updated according to this probability.
Parameters:
- Z_SHORT = 0.05
- Z_MAX = 0.05
- Z_RAND = 0.05
- SIGMA_HIT = 2.0
- Z_HIT = 0.85
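The following is a hedged sketch of a beam-mixture likelihood built from the Z_* parameters above. In our system the expected ranges come from range_libc ray casting; here they are passed in directly, and the short-reading term is deliberately simplified, so this is not our exact sensor model code.

```python
import numpy as np

# Hedged sketch of a beam-model mixture for weighting particles,
# using the Z_* parameters listed above. Expected ranges would normally
# come from range_libc; here they are inputs. The short-reading term is
# a simplified stand-in for the usual truncated exponential.
Z_HIT, Z_SHORT, Z_MAX, Z_RAND = 0.85, 0.05, 0.05, 0.05
SIGMA_HIT = 2.0
MAX_RANGE = 11.0  # max_range_meters

def beam_likelihood(observed, expected):
    """Per-beam probability of an observed range given the expected range."""
    # Gaussian around the expected (ray-cast) range.
    p_hit = (np.exp(-0.5 * ((observed - expected) / SIGMA_HIT) ** 2)
             / (SIGMA_HIT * np.sqrt(2.0 * np.pi)))
    # Mass for unexpectedly short readings (simplified linear falloff).
    safe_exp = np.maximum(expected, 1e-6)
    p_short = np.where(observed < expected,
                       2.0 / safe_exp * (1.0 - observed / safe_exp), 0.0)
    # Spike at the sensor's maximum range and a uniform noise floor.
    p_max = (observed >= MAX_RANGE).astype(float)
    p_rand = 1.0 / MAX_RANGE
    return Z_HIT * p_hit + Z_SHORT * p_short + Z_MAX * p_max + Z_RAND * p_rand

def weight_particle(observed_scan, expected_scan):
    """Particle weight as a product of per-beam likelihoods (in log space)."""
    logs = np.log(beam_likelihood(observed_scan, expected_scan) + 1e-12)
    return float(np.exp(np.sum(logs)))
```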
In the previous assignment, we implemented two types of resamplers: naive and low variance. In this project, we use the low-variance version. With the low-variance sampling algorithm, the Resampler resamples the particles according to their weights.
Parameters: trials = 10
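A minimal sketch of low-variance (systematic) resampling: a single random offset and evenly spaced pointers walk the cumulative weights, which reduces the sampling variance compared with naive independent draws. Weights are assumed to be normalized.

```python
import numpy as np

# Minimal low-variance (systematic) resampling sketch.
def low_variance_resample(particles, weights):
    n = len(weights)
    # One random offset, then n evenly spaced pointers in [0, 1).
    positions = np.random.uniform(0.0, 1.0 / n) + np.arange(n) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0  # guard against floating-point round-off
    indices = np.searchsorted(cumulative, positions)
    new_particles = particles[indices].copy()
    new_weights = np.full(n, 1.0 / n)  # weights reset to uniform
    return new_particles, new_weights
```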
The particle filter integrates all the modules described above. It computes an estimated pose, which is then published to the /pf/viz/inferred_pose topic.
Parameters:
- n_particles = 500
- n_viz_particles = 20
- laser_ray_step = 30
- max_range_meters = 11.0
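A sketch of how the inferred pose can be computed as a weighted mean of the particles before being published; the circular mean of the headings avoids wrap-around artifacts. This stands in for, rather than reproduces, our exact publishing code.

```python
import numpy as np

# Hedged sketch of the pose estimate that would be published on
# /pf/viz/inferred_pose: a weighted mean over the particles, with the
# heading averaged on the unit circle. Weights are assumed normalized.
def expected_pose(particles, weights):
    x = np.sum(weights * particles[:, 0])
    y = np.sum(weights * particles[:, 1])
    theta = np.arctan2(np.sum(weights * np.sin(particles[:, 2])),
                       np.sum(weights * np.cos(particles[:, 2])))
    return np.array([x, y, theta])
```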
This module creates a feasible route for the robot between a specified source and target position on a given map. In the final race, waypoints come in two varieties: red and blue. The robot must pass through the blue waypoints while avoiding the red ones. Since the locations of both types of waypoints are known in advance, precomputation is viable. We therefore developed a standalone script dedicated to generating the paths for the final race, saving the resulting poses in a text file. This design eliminates the need for the high-level entry script to interact with a running PlannerNode, as it can simply read the poses directly from the text file (a sketch of this follows the parameter list below).
Parameters:
- halton_points = 500
- disc_radius = 3
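An illustrative sketch of the precompute-and-load design described above: the standalone planning script writes each planned pose sequence to a text file, and the race script reads it back at startup. The file name and the one-pose-per-row (x, y, theta) format are assumptions, not our exact file layout.

```python
import numpy as np

# Hedged sketch of saving precomputed plans to a text file and loading
# them back in the race script. File name and row format are assumed.
def save_plan(poses, path="plan_0.txt"):
    """Write an N x 3 array of (x, y, theta) poses, one pose per row."""
    np.savetxt(path, np.asarray(poses), fmt="%.4f")

def load_plan(path="plan_0.txt"):
    """Read the poses back as an N x 3 array."""
    return np.loadtxt(path).reshape(-1, 3)
```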
In the final race, some paths from one blue waypoint to another can be quite challenging, potentially leading to instability in the robot's navigation. To ensure each path is as smooth as possible, we have added numerous intermediate waypoints based on the provided waypoints.
Because our PlannerNode can only generate a path between a single source and target position, it cannot produce one overall path that passes through all the blue waypoints. Hence, we split the course into several sub-paths.
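A hedged sketch of the two ideas above: densify each leg with interpolated intermediate waypoints, plan each leg separately, and chain the resulting sub-paths into one course. Here plan_between is a placeholder for a call to our planner, and the spacing value is illustrative.

```python
import numpy as np

# Hedged sketch: add intermediate waypoints between consecutive blue
# waypoints, then plan each short leg separately and chain the sub-paths.
# plan_between(start, goal) is a hypothetical stand-in for the planner call.
def densify(p0, p1, spacing=0.5):
    """Insert evenly spaced intermediate waypoints between two 2-D points."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    n = max(int(np.linalg.norm(p1 - p0) / spacing), 1)
    return [tuple(p0 + (p1 - p0) * t) for t in np.linspace(0.0, 1.0, n + 1)]

def plan_full_course(blue_waypoints, plan_between):
    """Chain per-leg plans between successive (densified) waypoints."""
    course = []
    for start, goal in zip(blue_waypoints[:-1], blue_waypoints[1:]):
        pts = densify(start, goal)
        for a, b in zip(pts[:-1], pts[1:]):
            course.extend(plan_between(a, b))
    return course
```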
Multi-agent System for non-Holonomic Racing (MuSHR)
Srinivasa, S., Lancaster, P., Michalove, J., Schmittle, M., Summers, C., Rockett, M., Smith, J., Choudhury, S., Mavrogiannis, C., & Sadeghi, F. (2019). MuSHR: A Low-Cost, Open-Source Robotic Racecar for Education and Research. CoRR, abs/1908.08031.