Using RT-SLAM
Configuring the demo
There are two configuration files:
--config-setup=data/setup.cfg contains everything related to the hardware setup (camera, IMU, calibrations, hardware noise specs, ...). If none is specified, the default is data/setup.cfg in live mode, and <data-path>/setup.cfg in replay mode.
--config-estimation=data/estimation.cfg contains everything related to estimation and SLAM (size of map, number of landmarks, strategies, ...). If none is specified, the default is data/estimation.cfg.
You can use the '@' character to refer to <data-path> when specifying the location of the config file to use: --config-estimation=@/estimation.cfg
The parameters are documented in the demo_slam.cpp file, together with the definitions of the data structures ConfigSetup and ConfigEstimation.
Offline processing
If you already have some data and you want to process them with RT-SLAM:
convert the images to a sequence named with the format "image_%07d.png", each with a corresponding timestamp file "image_%07d.time" that contains its timestamp in seconds as a floating-point number.
The images MUST be acquired with a global shutter camera, not a rolling shutter camera such as most webcams, because with rolling shutter cameras movement creates a distortion that cannot be calibrated with standard models.
convert the IMU data to a file named MTI.log, stored alongside the image files, with the format "[10](<date>, <acc_x>, <acc_y>, <acc_z>, <gyr_x>, <gyr_y>, <gyr_z>, <mag_x>, <mag_y>, <mag_z>)". The date has the same format as for the images, and must be synchronized with the image timestamps (you can shift it with the option IMU_TIMESTAMP_CORRECTION in setup.cfg). The gyrometer values are the rotation velocities around the specified axes, in rad/s. The magnetometer values are ignored for now.
don't forget to update the setup.cfg file with the camera calibration, IMU noise specs, camera<->IMU transformation, etc.
for the camera<->IMU transformation, check the datasheets of the devices to find exactly where their origin is (for an IMU it is the physical location of the accelerometers; for a camera it is the focal point, located one focal length in front of the image plane).
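The dataset layout described above can be sketched with a small script. This is only an illustration of the file formats, not a tool shipped with RT-SLAM: the frame timestamps and IMU samples below are dummy values, and the images themselves are assumed to be written separately as image_%07d.png by your own converter.

```python
# Sketch of the RT-SLAM offline dataset layout (dummy data, for illustration).
import os

out_dir = "dataset"
os.makedirs(out_dir, exist_ok=True)

# One "image_%07d.time" file per frame, holding its timestamp in seconds
# as a floating-point number.
frame_times = [0.000000, 0.033333, 0.066666]  # dummy ~30 Hz timestamps
for i, t in enumerate(frame_times):
    with open(os.path.join(out_dir, "image_%07d.time" % i), "w") as f:
        f.write("%.6f\n" % t)

# MTI.log: one line per IMU sample, in the format
# [10](<date>, <acc_x>, <acc_y>, <acc_z>, <gyr_x>, <gyr_y>, <gyr_z>, <mag_x>, <mag_y>, <mag_z>)
imu_samples = [
    # date,    acc_x, acc_y, acc_z, gyr_x, gyr_y, gyr_z, mag_x, mag_y, mag_z
    (0.000000, 0.0,   0.0,   9.81,  0.0,   0.0,   0.0,   0.0,   0.0,   0.0),
    (0.010000, 0.0,   0.0,   9.81,  0.0,   0.0,   0.0,   0.0,   0.0,   0.0),
]
with open(os.path.join(out_dir, "MTI.log"), "w") as f:
    for sample in imu_samples:
        f.write("[10](" + ", ".join("%.6f" % v for v in sample) + ")\n")
```

The IMU dates here share the time base of the frame timestamps, as required; any remaining constant offset would be compensated with IMU_TIMESTAMP_CORRECTION in setup.cfg.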
Sample demo
You can download here a sample image and IMU sequence to quickly test RT-SLAM offline. Pass the path where the archive has been extracted with the --data-path option.
Note that this sequence was meant to illustrate the highly dynamic movements that the system can withstand thanks to IMU-vision fusion, so it is recommended to enable the use of IMU data with the --robot=1 option. The estimate will still diverge at the very end, because the motion exceeds the IMU limits (300 deg/s).
As an example you can use a command similar to this one to run the demo:
demo_suite/x86_64-linux-gnu/demo_slam --disp-2d=1 --disp-3d=1 --robot=1 --camera=1 --map=1 --render-all=0 --replay=1 --rand-seed=1 --pause=1 --data-path=~/2011-02-15_inertial-high-dyn/
Online processing
The same requirements on cameras (global shutter) and calibration apply as for offline processing.
For now only FireWire and uEye USB cameras are supported by RT-SLAM, along with Xsens MTi IMUs. If you want to use other hardware you have to write an adapter (called a hardwareSensor in RT-SLAM).
Running the demo
Control options:
--disp-2d=0/1 use 2D display
--disp-3d=0/1 use 3D display
--render-all=0/1 (needs --replay=1)
--replay=0/1/2 (off/on/off with no slam) (needs --data-path, pointing to a previously dumped online run)
--dump=0/1 dump the images (with --replay=0) or the rendered views (with --replay=1) (needs --data-path). If --replay=0, --data-path should be on a RAM disk.
--export=0/1/2. 0=Off, 1=socket, 2=poster
--log=0/1/filename log result in text file
--rand-seed=0/1/n. 0=generate new one, 1=in replay use the saved one, n=use seed n
--pause=0/n. 0=don't pause, n=pause at frames > n (needs --replay=1)
--data-path=/mnt/ram/rtslam
--config-setup=data/setup.cfg
--config-estimation=data/estimation.cfg
--verbose=0/1/2/3/4/5. Off/Trace/Warning/Debug/VerboseDebug/VeryVerboseDebug (only if compiled in debug)
--help
--usage
Slam options:
--robot=0/1. 0=constant velocity, 1=inertial
--map=0/1/2. 0=odometry, 1=global, 2=local/multimap
--gps=0/1/2/3 (off/pos/pos+vel/pos+ori) use GPS
--heading=<double,rad>. initial absolute heading
Raw data options:
--simu=0/<environment id>*10+<trajectory id>
--trigger=0/1/2. 0=internal, 1=external with MTI control, 2=external without control
--freq=<double,Hz>. camera frequency (only with trigger==0/1)
--shutter=<double,seconds>. shutter time (only with trigger==1)
Controlling the demo
When doing a replay, you can control how SLAM runs by interacting with the 2D viewer:
Press "[space]" to switch between play and pause mode.
Press "N" to process one image when in pause mode.
Press "F" to switch between render-all mode and fast mode.
Press "Q" to quit.
Click on an observation to get textual and visual information about the observation and associated landmark.
Click on an empty part of the image to get information about the sensor and the associated robot.
Interpreting the demo
Color code:
magenta: InitialParametrization landmarks that were observed this frame
dark red: InitialParametrization landmarks that were not observed this frame
cyan: ConvergedParametrization landmarks that were observed this frame
dark blue: ConvergedParametrization landmarks that were not observed this frame
Another way to remember the color code:
bright colors represent observed landmarks
dark colors represent non observed landmarks
reddish hues represent InitialParametrization landmarks (like inverse depth for points)
bluish hues represent ConvergedParametrization landmarks (like euclidean for points)
2D landmark representation:
a colored + cross: the predicted position of the landmark
a colored ellipse: the 3-sigma prediction uncertainty
an orange x cross: the measured position of the landmark
a colored label: the id of the landmark, followed by the matching score (/100)
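The 3-sigma ellipse drawn around each prediction is the standard confidence ellipse of a 2D Gaussian. As an illustration of what is displayed (not RT-SLAM's own code), the ellipse half-axes and orientation can be derived from the 2x2 covariance of the predicted pixel position:

```python
# Illustration (not RT-SLAM code): axes of the 3-sigma uncertainty ellipse
# from a 2x2 covariance matrix of a predicted 2D position.
import math

def three_sigma_ellipse(cov):
    """cov = [[sxx, sxy], [sxy, syy]] -> (half_axis_a, half_axis_b, angle_rad)."""
    sxx, sxy, syy = cov[0][0], cov[0][1], cov[1][1]
    # Eigenvalues of a symmetric 2x2 matrix, in closed form.
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc
    # Orientation of the major axis.
    angle = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return 3.0 * math.sqrt(l1), 3.0 * math.sqrt(l2), angle

# Example: isotropic 4 px^2 variance -> a circle of radius 3 * 2 = 6 pixels.
a, b, ang = three_sigma_ellipse([[4.0, 0.0], [0.0, 4.0]])
```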
Reasons for a landmark not being observed:
- outside of the sensor frame (you can still see them in the 3D view)
- matching failed (you can see the matching score that is below the threshold)
- the observation was inconsistent (the measure is out of the uncertainty ellipse)
- too many landmarks are in the sensor frame and some of them were not selected for observation (no matching score is shown). In that case the landmark is nevertheless drawn with the "observed" color, because the lack of observation was not due to a problem with the landmark itself.
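The consistency test mentioned above is, in standard EKF terms, an innovation gating test: the squared Mahalanobis distance between measurement and prediction is compared against a threshold, which is what "out of the uncertainty ellipse" means geometrically. A minimal sketch (illustrative only, not RT-SLAM's actual implementation; the gate value is an assumption matching the 3-sigma ellipse):

```python
# Illustrative sketch of innovation gating for a 2D measurement.
# gate=9.0 corresponds to the 3-sigma ellipse (3^2 = 9). Not RT-SLAM code.

def is_consistent(predicted, measured, cov, gate=9.0):
    """Return True if the measurement lies inside the gated ellipse.

    predicted, measured: (x, y) pixel positions.
    cov: 2x2 innovation covariance [[sxx, sxy], [sxy, syy]].
    """
    ix, iy = measured[0] - predicted[0], measured[1] - predicted[1]
    sxx, sxy, syy = cov[0][0], cov[0][1], cov[1][1]
    det = sxx * syy - sxy * sxy
    # Squared Mahalanobis distance: innovation^T * cov^-1 * innovation,
    # with the 2x2 inverse expanded in closed form.
    d2 = (syy * ix * ix - 2.0 * sxy * ix * iy + sxx * iy * iy) / det
    return d2 <= gate

# A measurement 2 px away with 4 px^2 variance is consistent (d2 = 1)...
ok = is_consistent((100.0, 100.0), (102.0, 100.0), [[4.0, 0.0], [0.0, 4.0]])
# ...while one 10 px away is rejected as inconsistent (d2 = 25 > 9).
bad = is_consistent((100.0, 100.0), (110.0, 100.0), [[4.0, 0.0], [0.0, 4.0]])
```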