In this project our team developed an integrated system using ROS, composed of the main software components of Carla, the Udacity self-driving car.
Name | Udacity Account Email Address |
---|---|
Jefferson Nascimento | [email protected] |
Pratima Nagare | [email protected] |
Carla's software architecture is based on ROS, a flexible framework for writing robot software. ROS is a collection of tools, libraries, and conventions that aim to simplify the task of creating complex and robust robot behavior across a wide variety of robotic platforms. Following the state-of-the-art architecture for self-driving cars, Carla's architecture can be divided into three main domains:
- Perception: Perception in a self-driving car combines high-tech sensors and cameras with state-of-the-art software to process and comprehend the environment around the vehicle in real time. In Carla, the Perception domain contains an Obstacle Detection component and a Traffic Light Detection component.
- Planning: This is where decisions are made; it implements the brain of the autonomous vehicle. The Waypoint Loader and Waypoint Updater software components are implemented here.
- Control: The Control step consists of following the generated trajectory as faithfully as possible. A path is a sequence of waypoints, each containing a position (x, y), an angle (yaw), and a speed (v). The purpose of a controller is to generate instructions for the vehicle, such as steering wheel angle or acceleration level, taking into account the actual constraints and the generated trajectory. In Carla, the Control domain is represented by a Drive-by-Wire software component and a Waypoint Follower.
The detector tries to find the closest traffic light to the car, classify its state (RED, YELLOW, GREEN) and, in the case of a RED light, publish its waypoint index to `/traffic_waypoint`.
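The detector node's ROS plumbing can be sketched roughly as below. This is a simplified illustration rather than the project's exact code: the camera topic name, the "-1 means no red light" convention, and the two placeholder helpers are assumptions based on the standard Udacity starter layout.

```python
#!/usr/bin/env python
# Simplified sketch of the traffic light detector node (assumed structure).
import rospy
from std_msgs.msg import Int32
from sensor_msgs.msg import Image
from geometry_msgs.msg import PoseStamped


def classify_image(img_msg):
    """Placeholder for the SSD-based classifier; returns 'RED', 'YELLOW' or 'GREEN'."""
    return 'RED'


def nearest_light_waypoint(pose):
    """Placeholder for the closest-traffic-light waypoint lookup."""
    return 0


class TLDetector(object):
    def __init__(self):
        rospy.init_node('tl_detector')
        self.pose = None
        rospy.Subscriber('/current_pose', PoseStamped, self.pose_cb)
        rospy.Subscriber('/image_color', Image, self.image_cb)
        # Publishes the waypoint index of the stop line for a RED light, -1 otherwise.
        self.traffic_pub = rospy.Publisher('/traffic_waypoint', Int32, queue_size=1)
        rospy.spin()

    def pose_cb(self, msg):
        self.pose = msg

    def image_cb(self, msg):
        if self.pose is None:
            return
        state = classify_image(msg)
        stop_wp = nearest_light_waypoint(self.pose)
        self.traffic_pub.publish(Int32(stop_wp if state == 'RED' else -1))


if __name__ == '__main__':
    TLDetector()
```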
The original data is raw, without labels or annotations. We found a version of the dataset in which the images were annotated and converted to TFRecord, so we used it to train the object detector. The data can be found here. We trained only on simulator data, but the same can be done for real-world data.
Model used: ssd_inception_v2_coco_2018_01_28
TensorFlow version: 1.15 (GPU)
Steps to train: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1.md
Pipeline Config (Tensorflow Object Detection API) used: https://github.com/jnsagai/Traffic_Light_Detector/blob/main/traffic_light_classification.config
Label Map Used: https://github.com/jnsagai/Traffic_Light_Detector/blob/main/label_map.pbtxt.txt
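For reference, inference with a frozen graph exported by the TF1 Object Detection API looks roughly like the sketch below. The graph path, the 0.5 confidence threshold, and the `classify` helper name are illustrative assumptions, not the exact project code.

```python
# Rough sketch of running the exported SSD Inception v2 frozen graph with
# TensorFlow 1.x; the model path and threshold below are assumptions.
import numpy as np
import tensorflow as tf

GRAPH_PATH = 'frozen_inference_graph.pb'  # assumed location of the exported model

detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(GRAPH_PATH, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

sess = tf.Session(graph=detection_graph)
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
det_scores = detection_graph.get_tensor_by_name('detection_scores:0')
det_classes = detection_graph.get_tensor_by_name('detection_classes:0')


def classify(image_rgb):
    """Return the class id of the best detection (mapped to a color via the
    label map above), or None if nothing scores above the threshold."""
    scores, classes = sess.run(
        [det_scores, det_classes],
        feed_dict={image_tensor: np.expand_dims(image_rgb, axis=0)})
    if scores[0][0] < 0.5:  # assumed confidence threshold
        return None
    return int(classes[0][0])
```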
Some test image results:
This package loads the static waypoint data and publishes it to `/base_waypoints`.
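A rough idea of how such a loader can work is sketched below. The CSV column layout, the velocity parameter, and the use of a latched publisher are assumptions based on the `styx_msgs` types from the starter repository, not the project's exact code.

```python
#!/usr/bin/env python
# Sketch of a static waypoint loader (assumed CSV layout: x, y, z, yaw per row).
import csv
import rospy
import tf
from geometry_msgs.msg import Quaternion
from styx_msgs.msg import Lane, Waypoint


def load_waypoints(path, velocity):
    waypoints = []
    with open(path) as f:
        for x, y, z, yaw in csv.reader(f):
            wp = Waypoint()
            wp.pose.pose.position.x = float(x)
            wp.pose.pose.position.y = float(y)
            wp.pose.pose.position.z = float(z)
            q = tf.transformations.quaternion_from_euler(0.0, 0.0, float(yaw))
            wp.pose.pose.orientation = Quaternion(*q)
            wp.twist.twist.linear.x = velocity  # target speed in m/s
            waypoints.append(wp)
    return waypoints


if __name__ == '__main__':
    rospy.init_node('waypoint_loader')
    # Latched publisher: the static track is published once and cached for
    # late subscribers such as the waypoint updater.
    pub = rospy.Publisher('/base_waypoints', Lane, queue_size=1, latch=True)
    lane = Lane()
    lane.waypoints = load_waypoints(rospy.get_param('~path', 'waypoints.csv'),
                                    rospy.get_param('~velocity', 11.1))
    pub.publish(lane)
    rospy.spin()
```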
This package contains the waypoint updater node: waypoint_updater.py. The purpose of this node is to update the target velocity of each waypoint based on traffic light and obstacle detection data. The node subscribes to the `/base_waypoints`, `/current_pose`, `/obstacle_waypoint`, and `/traffic_waypoint` topics, and publishes a list of waypoints ahead of the car, with target velocities, to the `/final_waypoints` topic.
This node publishes waypoints from the car's current position up to some distance x ahead.
The first step was to implement a version that does not take traffic lights or obstacles into account. Afterwards, the node was updated to also use the traffic light status, so that it reacts and stops whenever a red light is detected.
Every 100 milliseconds (10 Hz) the following tasks are performed (a simplified sketch follows the list):
- Find the nearest `n` waypoints ahead of the vehicle, where `n` is a fixed number of waypoints.
- Determine whether a red traffic light falls within the range of waypoints ahead.
- Calculate target velocities for each waypoint.
- Publish the target waypoints with velocities to the `final_waypoints` topic.
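The loop can be sketched as below. It uses the `styx_msgs/Lane` message from the starter repository; `LOOKAHEAD_WPS`, the brute-force nearest-waypoint search, and the omission of the deceleration logic are simplifications and assumptions rather than the full project implementation.

```python
#!/usr/bin/env python
# Simplified sketch of the waypoint updater loop (not the full project code).
import math
import rospy
from std_msgs.msg import Int32
from geometry_msgs.msg import PoseStamped
from styx_msgs.msg import Lane

LOOKAHEAD_WPS = 200  # number of waypoints published ahead of the car (assumed)


class WaypointUpdater(object):
    def __init__(self):
        rospy.init_node('waypoint_updater')
        self.pose = None
        self.base_waypoints = None
        self.stop_wp = -1  # waypoint index of a red light, -1 if none
        rospy.Subscriber('/current_pose', PoseStamped, self.pose_cb)
        rospy.Subscriber('/base_waypoints', Lane, self.waypoints_cb)
        rospy.Subscriber('/traffic_waypoint', Int32, self.traffic_cb)
        self.final_pub = rospy.Publisher('/final_waypoints', Lane, queue_size=1)
        self.loop()

    def pose_cb(self, msg):
        self.pose = msg

    def waypoints_cb(self, msg):
        self.base_waypoints = msg.waypoints

    def traffic_cb(self, msg):
        self.stop_wp = msg.data

    def closest_waypoint(self):
        """Brute-force nearest waypoint search (a KD-tree is the usual optimization)."""
        px, py = self.pose.pose.position.x, self.pose.pose.position.y
        dists = [math.hypot(px - wp.pose.pose.position.x,
                            py - wp.pose.pose.position.y)
                 for wp in self.base_waypoints]
        return dists.index(min(dists))

    def loop(self):
        rate = rospy.Rate(10)  # 10 Hz -> every 100 ms
        while not rospy.is_shutdown():
            if self.pose is not None and self.base_waypoints:
                idx = self.closest_waypoint()
                lane = Lane()
                lane.waypoints = self.base_waypoints[idx:idx + LOOKAHEAD_WPS]
                # If self.stop_wp falls within this slice, the target velocities
                # would be lowered here so the car stops at the red light.
                self.final_pub.publish(lane)
            rate.sleep()


if __name__ == '__main__':
    WaypointUpdater()
```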
Once messages are being published to `/final_waypoints`, the vehicle's waypoint follower will publish twist commands to the `/twist_cmd` topic. The goal of this package is to implement the drive-by-wire node, which subscribes to `/twist_cmd` and uses various controllers to provide appropriate throttle, brake, and steering commands. These commands are then published to the following topics (a publishing sketch follows the list):

- `/vehicle/throttle_cmd`
- `/vehicle/brake_cmd`
- `/vehicle/steering_cmd`
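The publishing side of that node can be sketched as below. The `dbw_mkz_msgs` field names follow the Dataspeed DBW messages used by the starter repository, and the constant controller outputs merely stand in for the real throttle, brake, and steering controllers.

```python
#!/usr/bin/env python
# Simplified sketch of the drive-by-wire node's command publishing.
import rospy
from geometry_msgs.msg import TwistStamped
from dbw_mkz_msgs.msg import ThrottleCmd, BrakeCmd, SteeringCmd


class DBWNode(object):
    def __init__(self):
        rospy.init_node('dbw_node')
        self.throttle_pub = rospy.Publisher('/vehicle/throttle_cmd', ThrottleCmd, queue_size=1)
        self.brake_pub = rospy.Publisher('/vehicle/brake_cmd', BrakeCmd, queue_size=1)
        self.steer_pub = rospy.Publisher('/vehicle/steering_cmd', SteeringCmd, queue_size=1)
        rospy.Subscriber('/twist_cmd', TwistStamped, self.twist_cb)
        rospy.spin()

    def twist_cb(self, msg):
        # In the real node, PID / yaw-rate controllers turn the commanded linear
        # and angular velocities into throttle, brake, and steering values;
        # these placeholder values are assumptions for illustration only.
        throttle, brake, steer = 0.2, 0.0, msg.twist.angular.z

        tcmd = ThrottleCmd()
        tcmd.enable = True
        tcmd.pedal_cmd_type = ThrottleCmd.CMD_PERCENT
        tcmd.pedal_cmd = throttle
        self.throttle_pub.publish(tcmd)

        bcmd = BrakeCmd()
        bcmd.enable = True
        bcmd.pedal_cmd_type = BrakeCmd.CMD_TORQUE
        bcmd.pedal_cmd = brake
        self.brake_pub.publish(bcmd)

        scmd = SteeringCmd()
        scmd.enable = True
        scmd.steering_wheel_angle_cmd = steer
        self.steer_pub.publish(scmd)


if __name__ == '__main__':
    DBWNode()
```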
This package contains code from Autoware that subscribes to `/final_waypoints` and publishes target vehicle linear and angular velocities, in the form of twist commands, to the `/twist_cmd` topic.
You can see a video of the car driving here.
- Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. Ubuntu downloads can be found here.
- If using a Virtual Machine to install Ubuntu, use the following configuration as a minimum:
- 2 CPU
- 2 GB system memory
- 25 GB of free hard drive space
The Udacity provided virtual machine has ROS and Dataspeed DBW already installed, so you can skip the next two steps if you are using this.
- Follow these instructions to install ROS:
- ROS Kinetic if you have Ubuntu 16.04.
- ROS Indigo if you have Ubuntu 14.04.
- Download the Udacity Simulator.
Build the docker container
```bash
docker build . -t capstone
```
Run the docker container
```bash
docker run -p 4567:4567 -v $PWD:/capstone -v /tmp/log:/root/.ros/ --rm -it capstone
```
To set up port forwarding, please refer to the "uWebSocketIO Starter Guide" found in the classroom (see Extended Kalman Filter Project lesson).
- Clone the project repository
```bash
git clone https://github.com/udacity/CarND-Capstone.git
```
- Install python dependencies
```bash
cd CarND-Capstone
pip install -r requirements.txt
```
- Make and run styx
```bash
cd ros
catkin_make
source devel/setup.sh
roslaunch launch/styx.launch
```
- Run the simulator
- Download the training bag that was recorded on the Udacity self-driving car.
- Unzip the file
```bash
unzip traffic_light_bag_file.zip
```
- Play the bag file
```bash
rosbag play -l traffic_light_bag_file/traffic_light_training.bag
```
- Launch your project in site mode
```bash
cd CarND-Capstone/ros
roslaunch launch/site.launch
```
- Confirm that traffic light detection works on real life images
Outside of `requirements.txt`, here is information on other driver/library versions used in the simulator and on Carla:
Specific to these libraries, the simulator grader and Carla use the following:
| | Simulator | Carla |
|---|---|---|
| Nvidia driver | 384.130 | 384.130 |
| CUDA | 8.0.61 | 8.0.61 |
| cuDNN | 6.0.21 | 6.0.21 |
| TensorRT | N/A | N/A |
| OpenCV | 3.2.0-dev | 2.4.8 |
| OpenMP | N/A | N/A |
We are working on a fix to line up the OpenCV versions between the two.