
Commit

modify readme
yanguohang committed Dec 10, 2021
1 parent 1dcf473 commit b5d6bd6
Showing 4 changed files with 44 additions and 8 deletions.
3 changes: 1 addition & 2 deletions README.md
@@ -1,3 +1,2 @@
## Introduction

To solve the problem of sensor calibration for autonomous vehicles, we provide a sensor calibration toolbox. The calibration toolbox can be used to calibrate sensors such as IMU, GPS, Lidar, Camera, and Radar.
Sensor calibration is the foundation of any autonomous system and its constituent sensors; it must be performed correctly before sensor fusion can be implemented. Precise calibration is vital for further processing steps, such as sensor fusion and the implementation of algorithms for obstacle detection, localization and mapping, and control. Furthermore, sensor fusion is one of the essential tasks in AD applications: it fuses information obtained from multiple sensors to reduce uncertainty compared with using each sensor individually. To solve the problem of sensor calibration for autonomous vehicles, we provide a sensor calibration toolbox. The calibration toolbox can be used to calibrate sensors such as IMU, Lidar, Camera, and Radar.
37 changes: 37 additions & 0 deletions camera_intrinsic/README.md
@@ -0,0 +1,37 @@
## Introduction

This is a project for intrinsic calibration and evaluation.

It mainly includes two parts: intrinsic calibration and distortion measurement.

## Dependencies

- opencv 2.4
- eigen 3

## Compile
Compile each part in its respective folder:

```shell
# mkdir build
mkdir -p build && cd build
# build
cmake .. && make
```

## Input data
- `<calibration_image_dir>`: contains only the selected chessboard calibration images
- `<distortion_image_path>`: path to the distortion harp image

## How to run
Run the following commands:
```shell
# run intrinsic calibration
./bin/run_intrinsic_calibration <calibration_image_dir>
# run distortion measurement
./bin/run_distortion_measure <distortion_image_path>
```

- **Intrinsic Calibration:** The program automatically runs intrinsic calibration. Corner-detection results are displayed, and all input calibration images are undistorted and saved to the `<calibration_image_dir>/undistort/` directory. A minimal sketch of this pipeline is given below.

- **Distortion Evaluation:** Sampled points from the original and undistorted images are displayed. The undistorted distortion image is saved to `<output_dir>`.
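
The calibrator's source is not part of this diff; as an illustration only, the standard OpenCV chessboard pipeline it corresponds to looks roughly like the sketch below. The board size (9x6 inner corners) and the image glob are assumptions, not values taken from the tool.

```python
# Illustrative sketch of chessboard intrinsic calibration with OpenCV.
# NOT the tool's implementation: the 9x6 board size and the file glob
# are assumptions.
import glob

import cv2
import numpy as np

BOARD = (9, 6)  # inner corners per row / column (assumed)
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_points, img_points, size = [], [], None
for path in glob.glob("calibration_images/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)
    size = gray.shape[::-1]  # (width, height)

# Solve for the camera matrix K and the distortion coefficients.
rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, size, None, None)
print("RMS reprojection error:", rms)

# Undistort one image with the recovered parameters.
img = cv2.imread(glob.glob("calibration_images/*.png")[0])
cv2.imwrite("undistort_example.png", cv2.undistort(img, K, dist))
```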
6 changes: 3 additions & 3 deletions lidar2camera/auto_calib/README.md
@@ -1,4 +1,4 @@
## Introduction
# Introduction

This automatic and user-friendly calibration tool calibrates the extrinsic parameters between a LiDAR and a camera in road scenes. Line features from static straight-line-shaped objects such as road lanes, lights, and telegraph poles are extracted from both the image and the LiDAR point cloud; calibration is then achieved by aligning those two kinds of features.
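
The optimizer itself is not shown in this diff; purely as an illustration, the alignment objective can be sketched as projecting LiDAR line-feature points through a candidate extrinsic and scoring how many land on the image's line-feature mask. All names below (`T_cl`, `K`, `mask`) are assumptions, not the tool's API.

```python
# Hedged sketch of a feature-alignment score, not the tool's implementation.
import numpy as np

def alignment_score(points_lidar, T_cl, K, mask):
    """Fraction of LiDAR line-feature points that project onto the image
    line-feature mask. points_lidar: (N,3); T_cl: 4x4 LiDAR->camera
    extrinsic; K: 3x3 intrinsics; mask: (H,W) binary image."""
    pts = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    cam = (T_cl @ pts.T)[:3]
    cam = cam[:, cam[2] > 0]                 # keep points in front of camera
    uv = K @ cam                             # pinhole projection
    uv = np.round(uv[:2] / uv[2]).astype(int).T
    h, w = mask.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    uv = uv[ok]
    return float(mask[uv[:, 1], uv[:, 0]].mean()) if len(uv) else 0.0
```

Under this view, calibration amounts to searching small perturbations of the extrinsic and keeping the best-scoring candidate.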

@@ -8,7 +8,7 @@ This automatic and user-friendly calibration tool is for calibrating the extrinsic

<img src="./data/mask.jpg" width="100%" height="100%" alt="Line Feature for Image" div align=center /><br>

## Prerequisites
# Prerequisites

**C++ libraries for the main calibration project**

@@ -24,7 +24,7 @@ This automatic and user-friendly calibration tool is for calibrating the extrinsic
* tensorflow 1.15 (with CUDA 10.0, cudnn 7.6) `pip3 install tensorflow-gpu==1.15`
* [pydensecrf](https://github.com/lucasb-eyer/pydensecrf) `pip3 install git+https://github.com/lucasb-eyer/pydensecrf.git`

## Usage
# Usage

1. Five input files:

6 changes: 3 additions & 3 deletions lidar2camera/auto_calib/tool/README.md
@@ -1,7 +1,7 @@
## Introduction
# Introduction
This project extracts features of lane lines and pillars from the image.

## Requirements
# Requirements
It is recommended to use the conda environment created by the following command:
```
conda env create -f lidar2camera.yaml
@@ -10,7 +10,7 @@ Pull the model by the following command: (pretrained network for semantic segmentation)
```
wget http://download.tensorflow.org/models/deeplabv3_cityscapes_train_2018_02_06.tar.gz
```
## Usage
# Usage
```
python merge_mask.py calib.png mask.png
```
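
The body of `merge_mask.py` is not shown in this diff; a hedged sketch of what such a merge step might look like, overlaying the nonzero mask pixels on the calibration image. Only the file names come from the usage line above; everything else is assumed.

```python
# Hedged sketch of a mask-merge step, not merge_mask.py's actual behavior:
# highlight the feature-mask pixels on the calibration image.
import sys

import cv2

img = cv2.imread(sys.argv[1])                         # e.g. calib.png
mask = cv2.imread(sys.argv[2], cv2.IMREAD_GRAYSCALE)  # e.g. mask.png

overlay = img.copy()
overlay[mask > 0] = (0, 255, 0)  # paint lane/pillar pixels green (assumed)
merged = cv2.addWeighted(img, 0.5, overlay, 0.5, 0.0)
cv2.imwrite("merged.png", merged)
```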
