/**
@page gs-calibration Sensor Calibration
@section gs-visual-inertial-sensor Visual-Inertial Sensors
One may ask: why use a visual-inertial sensor?
The main reason is the complementary nature of the two sensing modalities.
The camera provides high-density external measurements of the environment,
while the IMU measures the internal ego-motion of the sensor platform.
The IMU is crucial in providing robustness to the estimator, and it also provides system scale in the case of a monocular camera.
However, there are some challenges when leveraging an IMU in estimation.
An IMU requires estimating additional bias terms and needs highly accurate calibration between the camera and IMU.
Additionally, small errors in the relative timestamps between the sensors can degrade performance very quickly on dynamic trajectories.
Within the *OpenVINS* project we address these challenges by advocating *online* estimation of the extrinsic and time offset parameters between the cameras and IMU.
@image html rig_hydra_old.jpg width=60%
@section gs-calib-video Video Walkthrough Guide to Calibration
This first video walks through the process of performing visual-inertial sensor calibration and is a complete overview of the text below.
The video was recorded in a single session from start to finish, so please use the chapters to skip to the sections of interest.
The sensor used is the Intel Realsense D455 color camera and its internal IMU.
The key software used is [Kalibr](https://github.com/ethz-asl/kalibr) and [allan_variance_ros](https://github.com/ori-drs/allan_variance_ros).
@m_div{m-text-center}
<a href="http://www.youtube.com/watch?v=BtzmsuJemgI">
@image html https://raw.githubusercontent.com/rpng/open_vins/master/docs/youtube/BtzmsuJemgI-MQ.jpg width=60%
</a>
@m_enddiv
The second video takes you from having a sensor, through collecting data and performing calibration, to finally processing that data live with OpenVINS to recover a 6dof pose estimate.
The latter part is similar to the @subpage gs-tutorial but is for a live sensor.
First we create a launch file for the [Intel Realsense T265](https://www.intelrealsense.com/tracking-camera-t265/) sensor, after which we perform calibration.
Finally we use the calibration to process data with OpenVINS and demo the recovered trajectory.
@m_div{m-text-center}
<a href="http://www.youtube.com/watch?v=rBT5O5TEOV4">
@image html https://raw.githubusercontent.com/rpng/open_vins/master/docs/youtube/rBT5O5TEOV4-MQ.jpg width=60%
</a>
@m_enddiv
@section gs-calib-cam-static Camera Intrinsic Calibration (Offline)
The first task is to calibrate the camera intrinsic values such as the focal length, camera center, and distortion coefficients.
Our group often uses the [Kalibr](https://github.com/ethz-asl/kalibr/) @cite Furgale2013IROS calibration toolbox to perform both intrinsic and extrinsic offline calibrations, proceeding with the following steps:
1. Clone and build the [Kalibr](https://github.com/ethz-asl/kalibr/) toolbox
2. Print out a calibration board to use (we normally use the [Aprilgrid 6x6 0.8x0.8 m (A0 page)](https://drive.google.com/file/d/1TCZJ1KPJrsj3ffCNnj001ege54jffc19/view))
3. Ensure that your sensor driver is publishing onto ROS with correct timestamps.
4. Sensor preparations
    - Limit motion blur by decreasing the exposure time (a high framerate can help achieve this)
    - Ensure that your calibration board can be viewed in all areas of the image
    - Ensure that your sensor is in focus (you can use their `kalibr_camera_focus` script or focus it manually)
5. Record a ROS bag and ensure that the calibration board can be seen from different orientations, distances, and in each part of the image plane. You can either move the calibration board and keep the camera still or move the camera and keep the calibration board stationary.
6. Finally run the calibration (a minimal command sketch is given at the end of this section)
    - Use `kalibr_calibrate_cameras` with your specified topic(s)
    - We recommend sub-sampling the bag with their `--bag-freq 10.0` option to reduce the processing time
    - Depending on the amount of distortion, use the *pinhole-equi* model for fisheye lenses, or *pinhole-radtan* if the distortion is low
    - Depending on how many frames are in your dataset, this can take upwards of a few hours
7. Inspect the final result and pay close attention to the final reprojection error graphs; a good calibration typically has reprojection errors below 0.2-0.5 pixels. If you are planning on performing online calibration of the camera, then larger values might be acceptable (e.g. 1 pixel), but a more accurate offline calibration is always preferred.
@image html kalibr_reprojection.png width=60%
An example script [calibrate_camera_static.sh](https://github.com/rpng/ar_table_dataset/blob/9d556a789e2d01387e5ba2aeb2453269bc2c4001/calibrate_camera_static.sh), dataset, and configuration can be found in our group's [ar\_table\_dataset](https://github.com/rpng/ar_table_dataset/) repository.
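For quick reference, a minimal command sketch of this step is shown below. The bag name, camera topic, and Aprilgrid target values are placeholders that you will need to adapt to your own board and sensor, and the exact flags may differ slightly between Kalibr versions:
@code{.sh}
# describe the printed Aprilgrid target (placeholder values, match your printed board)
cat > target_aprilgrid.yaml <<EOF
target_type: 'aprilgrid'
tagCols: 6
tagRows: 6
tagSize: 0.088     # side length of a single tag [m]
tagSpacing: 0.3    # ratio of the space between tags to tagSize
EOF

# run the offline camera intrinsic calibration on the recorded bag
rosrun kalibr kalibr_calibrate_cameras \
    --bag camera_calibration.bag --bag-freq 10.0 \
    --topics /cam0/image_raw \
    --models pinhole-radtan \
    --target target_aprilgrid.yaml
@endcode
The camchain YAML produced by Kalibr is what you will later pass to the dynamic IMU-camera calibration below.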
@section gs-calib-imu-static IMU Noise Calibration (Offline)
The other important intrinsic calibration is to compute the inertial sensor's intrinsic noise characteristics,
which are needed both for the batch optimization that calibrates the camera-to-IMU transform and in any VINS estimator, so that the images and inertial readings can be fused probabilistically.
Specifically we are estimating the following noise parameters:
Parameter | YAML element | Symbol | Units
--- | --- | --- | ---
Gyroscope "white noise" | `gyroscope_noise_density` | \f$\sigma_g\f$ | \f$\frac{rad}{s}\frac{1}{\sqrt{Hz}}\f$
Accelerometer "white noise" | `accelerometer_noise_density` | \f$\sigma_a\f$ | \f$\frac{m}{s^2}\frac{1}{\sqrt{Hz}}\f$
Gyroscope "random walk" | `gyroscope_random_walk` | \f$\sigma_{b_g}\f$ | \f$\frac{rad}{s^2}\frac{1}{\sqrt{Hz}}\f$
Accelerometer "random walk" | `accelerometer_random_walk` | \f$\sigma_{b_a}\f$ | \f$\frac{m}{s^3}\frac{1}{\sqrt{Hz}}\f$
The standard way, as explained in
[[IEEE Standard Specification Format Guide and Test Procedure for Single-Axis Interferometric Fiber Optic Gyros](https://ieeexplore.ieee.org/document/660628/) (page 71, section C)],
is to compute an [Allan variance](https://en.wikipedia.org/wiki/Allan_variance) plot of the sensor readings over different observation times (see below).
@image html allanvar_gyro.png width=75%
As shown in the above figure, if we compute the Allan variance, we can fit a line with a -1/2 slope to the left side of the plot and evaluate it at \f$\tau=1\f$ to get the white noise of the sensor.
Similarly, a line with a +1/2 slope fitted to the right side can be evaluated at \f$\tau=3\f$ to get the random walk noise.
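Concretely, following the standard IEEE model, the two fitted lines (for the gyroscope, with analogous expressions for the accelerometer) correspond to
\f[
\sigma_{\textrm{white}}(\tau) = \frac{\sigma_g}{\sqrt{\tau}} , \qquad
\sigma_{\textrm{walk}}(\tau) = \sigma_{b_g}\sqrt{\frac{\tau}{3}} ,
\f]
which is why the -1/2 slope line equals the white noise density \f$\sigma_g\f$ exactly at \f$\tau=1\f$, and the +1/2 slope line equals the random walk density \f$\sigma_{b_g}\f$ exactly at \f$\tau=3\f$.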
We recommend using the [allan_variance_ros](https://github.com/ori-drs/allan_variance_ros) project to recover these parameters experimentally.
1. Clone and build [allan_variance_ros](https://github.com/ori-drs/allan_variance_ros)
2. Ensure that your sensor driver is publishing onto ROS with correct timestamps.
3. Collect a *stationary* dataset upwards of 20 hours in length
4. Finally run the calibration (a minimal command sketch is given below)
    - Construct a [config file](https://github.com/ori-drs/allan_variance_ros/blob/2430a4498e8487a0b311296f033869560663ef7a/config/realsense_t265.yaml) with the IMU topic, sensor rate, and dataset length
    - Generate the Allan variance data files with the `allan_variance` command
    - Finally process the generated data with their `analysis.py` script
5. Typically these noise values should then be inflated by 10-20x to take into account unmodelled errors (one can test how different inflations perform)
An example script [calibrate_imu.sh](https://github.com/rpng/ar_table_dataset/blob/9d556a789e2d01387e5ba2aeb2453269bc2c4001/calibrate_imu.sh), dataset, and configuration can be found in our group's [ar\_table\_dataset](https://github.com/rpng/ar_table_dataset/) repository.
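For quick reference, a minimal command sketch of this step is shown below. The topic name, rates, and recording length are placeholders, and the exact config fields and invocation may differ slightly between versions of allan_variance_ros, so treat this as a guide rather than an exact recipe:
@code{.sh}
# config describing the stationary IMU recording (placeholder values)
cat > imu_config.yaml <<EOF
imu_topic: "/imu0"     # IMU topic recorded in the bag
imu_rate: 200          # rate the IMU publishes at [hz]
measure_rate: 200      # rate to (sub)sample the measurements at [hz]
sequence_time: 72000   # length of the stationary recording [s] (~20 hours)
EOF

# compute the Allan variance from the folder containing the stationary bag,
# then fit the noise parameters and generate the plots / report
rosrun allan_variance_ros allan_variance /path/to/bag_folder/ imu_config.yaml
rosrun allan_variance_ros analysis.py --data /path/to/bag_folder/allan_variance.csv
@endcode
Remember to inflate the recovered values (e.g. by the 10-20x mentioned above) before using them in your estimator configuration or in the `imu.yaml` file passed to Kalibr.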
@section gs-calib-cam-dynamic Dynamic IMU-Camera Calibration (Offline)
After obtaining the intrinsic calibration of both the camera and IMU, we can now perform dynamic calibration of the transform between the two sensors.
For this we again leverage the [Kalibr](https://github.com/ethz-asl/kalibr/) calibration toolbox.
When collecting these datasets, it is important to minimize motion blur in the camera while also ensuring that you excite all axes of the IMU.
One needs to have at least one translational motion along with two degrees of orientation change for these calibration parameters to be observable (please see our recent paper on why: [[Degenerate Motion Analysis for Aided INS With Online Spatial and Temporal Sensor Calibration](https://ieeexplore.ieee.org/abstract/document/8616792)]).
We recommend having as much change in orientation as possible in order to ensure convergence.
1. Clone and build the [Kalibr](https://github.com/ethz-asl/kalibr/) toolbox
2. Print out a calibration board to use (we normally use the [Aprilgrid 6x6 0.8x0.8 m (A0 page)](https://drive.google.com/file/d/1TCZJ1KPJrsj3ffCNnj001ege54jffc19/view))
3. Ensure that your sensor driver is publishing onto ROS with correct timestamps (inspect the IMU time dt plot carefully in the PDF report!)
4. Sensor preparations
    - Limit motion blur by decreasing the exposure time
    - Publish the camera at a reasonably high framerate (20-30 Hz)
    - Publish your inertial readings at a reasonable rate (200-500 Hz)
5. Record a ROS bag and ensure that the calibration board can be seen from different orientations, distances, and mostly in the center of the image. You should move in *smooth* non-jerky motions with a trajectory that excites as many orientation and translational directions as possible at the same time. A 30-60 second dataset is normally enough to allow for calibration.
6. Finally run the calibration (a minimal command sketch is given at the end of this section)
    - Use `kalibr_calibrate_imu_camera`
    - Input your static calibration file, which will have the camera topics in it
    - You will need to make an [imu.yaml](https://drive.google.com/file/d/1F9e9I1Xdm14nJw_E1j06HyinlBxRp3yu/view) file with your noise parameters
    - Depending on how many frames are in your dataset, this can take upwards of half a day
7. Inspect the final result. Make sure that the spline fitted to the inertial readings is reasonable, and that your accelerometer and gyroscope errors are within their 3-sigma bounds (if not, your IMU noise values or the dataset are incorrect). Ensure that the estimated biases do not leave their 3-sigma bounds; if they do, your trajectory was too dynamic or your noise values are not good. Finally, sanity check your estimated rotation and translation against hand-measured values.
@image html kalibr_accel_err.png width=60%
An example script [calibrate_camera_dynamic.sh](https://github.com/rpng/ar_table_dataset/blob/9d556a789e2d01387e5ba2aeb2453269bc2c4001/calibrate_camera_dynamic.sh), dataset, and configuration can be found in our group's [ar\_table\_dataset](https://github.com/rpng/ar_table_dataset/) repository.
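For quick reference, a minimal command sketch of this step is shown below. The noise values in the `imu.yaml` are placeholders that should come from your (inflated) Allan variance results, the camchain file is the output of the static camera calibration above, and the exact flags may differ slightly between Kalibr versions:
@code{.sh}
# IMU noise model for Kalibr (placeholder values, use your own inflated results)
cat > imu.yaml <<EOF
rostopic: /imu0
update_rate: 200.0
accelerometer_noise_density: 2.0e-03   # [m/s^2 * 1/sqrt(Hz)]
accelerometer_random_walk: 3.0e-03     # [m/s^3 * 1/sqrt(Hz)]
gyroscope_noise_density: 1.7e-04       # [rad/s * 1/sqrt(Hz)]
gyroscope_random_walk: 2.0e-05         # [rad/s^2 * 1/sqrt(Hz)]
EOF

# run the dynamic IMU-camera calibration on the recorded bag
rosrun kalibr kalibr_calibrate_imu_camera \
    --bag imu_camera_calibration.bag \
    --cam camchain-camera_calibration.yaml \
    --imu imu.yaml \
    --target target_aprilgrid.yaml
@endcode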
*/