Open-Source SLAM Systems: A LIO-SAM Source Code Walkthrough

LIO-SAM is the IROS 2020 paper "LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping" by Tixiao Shan, the author of the original LEGO-LOAM. It is a tightly coupled lidar-IMU SLAM system, claimed to run about ten times faster, but it is demanding on sensors: a 9-axis IMU at 100 Hz or above, and a lidar that provides ring and time channels (Velodyne and Ouster). The code was recently open-sourced and is still being actively updated.

(Figure: demo)

I ran the datasets the author provides. Even with the lidar handheld and rotated through all sorts of poses, it maintains stable accuracy; the results are quite striking, as shown in the figure. After a rough read of the code and the paper, its main improvements over the LEGO-LOAM framework are the following:

  • In the odometry part, scan-to-scan matching is replaced by scan-to-localmap matching. In feature extraction, the clustering/segmentation of the original LEGO-LOAM (which extracted only the more salient edge and planar points) is dropped in favor of the edge and planar points from LOAM. (Hardly a novelty.)
  • Two factor graphs are maintained. The preintegration factor graph jointly optimizes the lidar odometry and the IMU, estimating IMU biases for real-time odometry. The global factor graph jointly optimizes the lidar odometry with GPS factors and loop-closure factors. (Already present in LIO-Mapping.)
  • The EKF node from robot_localization fuses GPS and 9-axis IMU data into a gps odom that supplies the initial pose. (Engineering integration.)

Project: https://github.com/TixiaoShan/LIO-SAM

My annotated fork: https://github.com/JokerJohn/opensource_slam_noted


Overview

The overall system architecture is shown below:

(Figure: system overview)

The system's factor graph structure is as follows:

(Figure: factor graph structure)

There are four kinds of factors: IMU preintegration factors (orange), lidar odometry factors (green), GPS factors (yellow), and loop-closure factors (black). The lidar odometry part follows LOAM's feature extraction, extracting edge and planar points; the features of the current frame are matched against a local feature map built from the previous n+1 frames, forming point-to-line and point-to-plane constraints that are jointly optimized to estimate the pose. GPS factors are added, with their timestamps linearly interpolated, whenever the pose-estimate covariance grows large.
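For reference, the two lidar residuals used there have the standard LOAM form (sketched in my own notation, not the paper's):

$$d_{\epsilon}=\frac{\lVert(\mathbf{p}-\mathbf{p}_a)\times(\mathbf{p}-\mathbf{p}_b)\rVert}{\lVert\mathbf{p}_a-\mathbf{p}_b\rVert},\qquad d_{\pi}=\frac{\lvert\mathbf{n}^{\top}\mathbf{p}+d\rvert}{\lVert\mathbf{n}\rVert}$$

Here $\mathbf{p}$ is a feature point transformed by the current pose estimate, $\mathbf{p}_a$ and $\mathbf{p}_b$ are two points on the matched map line, and $(\mathbf{n}, d)$ are the matched plane's parameters. Both distances reappear verbatim in the cornerOptimization and surfOptimization code later in this post.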

Below is the system's node graph:

(Figure: ROS node graph)

Here you can clearly see that the odometry part carries three streams: /odometry/gps and /odometry/navsat come from fusing GPS with the robot_localization package, while /odometry/imu is a real-time predicted odometry obtained by IMU preintegration from the optimized lidar odom and the raw imu_raw data, which also updates the IMU biases.

Next, the tf tree. Since robot_localization is used, the system follows REP-105, the ROS coordinate-frame conventions. REP-105 specifies four principal frames: base_link, odom, map, and earth (usually called world in ROS systems). base_link is rigidly attached to the robot. map and odom are fixed world frames whose origins usually coincide with the robot's starting position.

(Figure: tf tree)

Judging from the code organization, it differs little from LEGO-LOAM:

  • imageProjection.cpp: projects the cloud into a range image, classifies the points, and deskews each point using the ring and time channels.
  • featureExtraction.cpp: extracts edge and planar features.
  • mapOptmization.cpp: builds point-to-line and point-to-plane constraints plus loop-closure and GPS factors on a factor graph, then incrementally smooths all poses.
  • imuPreintegration.cpp: jointly optimizes the lidar odom and IMU data, estimates IMU biases, and produces real-time odometry.

Fusing GPS and IMU with an EKF

robot_localization's state-estimation nodes produce a state estimate whose pose is given in the map or odom frame and whose velocity is given in the base_link frame; all incoming data is transformed into one of these frames before being fused. For IMU data there is some ambiguity. Most IMUs report orientation in a fixed world frame whose X and Z axes are defined by vectors pointing at magnetic north and the center of the earth, with Y pointing east (offset 90 degrees from the magnetic-north vector); this frame is commonly called NED (north, east, down). robot_localization assumes all IMU data uses the ENU convention and does not work with NED data. Also, when processing accelerations one must account for the fact that gravity is always present and subtract it along the appropriate axes, and that depends on how the IMU is physically mounted.
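If an IMU reports in NED, its data has to be converted to ENU before it is handed to robot_localization. A minimal sketch of the vector remapping (my own helper, not part of LIO-SAM; it assumes the measurements are expressed along NED world axes, where the usual convention swaps x/y and negates z):

#include <sensor_msgs/Imu.h>

// NED -> ENU for vector quantities: x_enu = y_ned, y_enu = x_ned, z_enu = -z_ned.
sensor_msgs::Imu nedToEnu(const sensor_msgs::Imu &ned) {
    sensor_msgs::Imu enu = ned;
    enu.angular_velocity.x = ned.angular_velocity.y;
    enu.angular_velocity.y = ned.angular_velocity.x;
    enu.angular_velocity.z = -ned.angular_velocity.z;
    enu.linear_acceleration.x = ned.linear_acceleration.y;
    enu.linear_acceleration.y = ned.linear_acceleration.x;
    enu.linear_acceleration.z = -ned.linear_acceleration.z;
    // The orientation quaternion additionally needs a fixed frame rotation;
    // the details depend on the IMU driver, so it is omitted here.
    return enu;
}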

This system uses the robot_localization package directly to fuse GPS and IMU information; it needs the IMU to supply a global orientation, hence the 9-axis requirement. The main work is configuring the navsat_transform_node and ekf_localization_node nodes; as shown below, just wire up the relevant topics in the launch file.

<!-- EKF GPS-->
<node pkg="robot_localization" type="ekf_localization_node" name="ekf_gps" respawn="true">
    <remap from="odometry/filtered" to="odometry/navsat" />
</node>

<!-- Navsat -->
<node pkg="robot_localization" type="navsat_transform_node" name="navsat" respawn="true">
    <remap from="imu/data" to="imu_correct" />
    <remap from="gps/fix" to="gps/fix" />
    <remap from="odometry/filtered" to="odometry/navsat" />
</node>

Then configure the sensor parameters, process noise, and so on in params; for a detailed walkthrough of the parameters see this post:

https://blog.csdn.net/u012899885/article/details/103326979

# Navsat (convert GPS coordinates to Cartesian)
navsat:
  frequency: 50
  wait_for_datum: false
  delay: 0.0
  magnetic_declination_radians: 0
  yaw_offset: 0
  zero_altitude: true
  broadcast_utm_transform: false
  broadcast_utm_transform_as_parent_frame: false
  publish_filtered_gps: false

# EKF for Navsat
ekf_gps:
  publish_tf: false
  map_frame: map
  odom_frame: odom
  base_link_frame: base_link
  world_frame: odom

  frequency: 50
  two_d_mode: false
  sensor_timeout: 0.01
  # -------------------------------------
  # External IMU:
  # -------------------------------------
  imu0: imu_correct
  # make sure the input is aligned with ROS REP105. "imu_correct" is manually transformed by myself. EKF can also transform the data using tf between your imu and base_link
  imu0_config: [false, false, false,
                true,  true,  true,
                false, false, false,
                false, false, true,
                true,  true,  true]
  imu0_differential: false
  imu0_queue_size: 50
  imu0_remove_gravitational_acceleration: true
  # -------------------------------------
  # Odometry (From Navsat):
  # -------------------------------------
  odom0: odometry/gps
  odom0_config: [true,  true,  true,
                 false, false, false,
                 false, false, false,
                 false, false, false,
                 false, false, false]
  odom0_differential: false
  odom0_queue_size: 10

  # x y z r p y x_dot y_dot z_dot r_dot p_dot y_dot x_ddot y_ddot z_ddot
  process_noise_covariance: [1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                             0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                             0, 0, 10.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                             0, 0, 0, 0.03, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                             0, 0, 0, 0, 0.03, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                             0, 0, 0, 0, 0, 0.1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                             0, 0, 0, 0, 0, 0, 0.25, 0, 0, 0, 0, 0, 0, 0, 0,
                             0, 0, 0, 0, 0, 0, 0, 0.25, 0, 0, 0, 0, 0, 0, 0,
                             0, 0, 0, 0, 0, 0, 0, 0, 0.04, 0, 0, 0, 0, 0, 0,
                             0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0,
                             0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0,
                             0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.5, 0, 0, 0,
                             0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0,
                             0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0,
                             0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.015]

navsat_transform_node exists precisely so that GPS data can be fused.

(Figure from Zhihu @天马微云: navsat_transform_node inputs and outputs)

As shown in the figure above, navsat_transform_node has three inputs:

  • nav_msgs/Odometry (the EKF output; the node needs the robot's current position): odometry/navsat
  • sensor_msgs/Imu (must carry an absolute orientation, needed to determine the global heading): imu_correct
  • sensor_msgs/NavSatFix (the output of the GNSS device): gps/fix; from these the node produces odometry/gps (the input to the GPS EKF)

Its processing consists of the following steps:

  1. Convert the GPS data to UTM coordinates.
  2. Use the initial UTM coordinate, the EKF/UKF output, and the IMU to generate a static transform T from the UTM grid to the robot's world frame.
  3. Transform all GPS measurements with T.
  4. Feed the data to the EKF/UKF.

For detailed usage see the following documents:

https://zhuanlan.zhihu.com/p/121622661

http://docs.ros.org/melodic/api/robot_localization/html/index.html

Point Cloud Projection

In this module the author drops the clustering and segmentation of LEGO-LOAM and keeps LOAM's feature extraction, but the cloud is still organized as a projected range image; in addition, high-frequency IMU data is used to remove motion distortion.

// This used to be std::thread; it is now a ros::MultiThreadedSpinner.
// It blocks like ros::spin(). You can pass the thread count to its constructor;
// left unspecified or set to 0, it creates one thread per CPU core.
ros::MultiThreadedSpinner spinner(3);
spinner.spin();

First, the subscriptions: the node takes imu_raw, the gps odom, and the raw cloud frames, and ultimately publishes the deskewed cloud. The business logic is written directly inside the point cloud callback, which, well, (⊙o⊙)…

// Subscribers: imu, odometry, and the raw cloud. The odom here is the GPS+IMU
// fusion produced by the robot_localization package.
// The imu/odom callbacks merely buffer incoming data;
// all point-cloud processing logic lives in cloudHandler.
subImu = nh.subscribe<sensor_msgs::Imu>(imuTopic, 2000, &ImageProjection::imuHandler, this, ros::TransportHints().tcpNoDelay()); // pass transport hints down to roscpp
subOdom = nh.subscribe<nav_msgs::Odometry>(odomTopic, 2000, &ImageProjection::odometryHandler, this, ros::TransportHints().tcpNoDelay());
subLaserCloud = nh.subscribe<sensor_msgs::PointCloud2>(pointCloudTopic, 5, &ImageProjection::cloudHandler, this, ros::TransportHints().tcpNoDelay());

// Publishers: the custom cloud_info message and the cloud used for odometry
pubExtractedCloud = nh.advertise<sensor_msgs::PointCloud2>("lio_sam/deskew/cloud_deskewed", 1);
pubLaserCloudInfo = nh.advertise<lio_sam::cloud_info>("lio_sam/deskew/cloud_info", 1);

The published cloud is organized in a custom message; the full structure of cloud_info is below. You can clearly see that it carries each point's column index in the range image and its range, the per-ring start/end indices, and the scan's initial pose (computed later).

# Cloud Info
Header header

int32[] startRingIndex
int32[] endRingIndex

int32[] pointColInd # point column index in range image
float32[] pointRange # point range

int64 imuAvailable
int64 odomAvailable

# Attitude for LOAM initialization
float32 imuRollInit
float32 imuPitchInit
float32 imuYawInit

# Initial guess from imu pre-integration
float32 initialGuessX
float32 initialGuessY
float32 initialGuessZ
float32 initialGuessRoll
float32 initialGuessPitch
float32 initialGuessYaw

# Preintegration reset ID
int64 imuPreintegrationResetId

# Point cloud messages
sensor_msgs/PointCloud2 cloud_deskewed # original cloud deskewed
sensor_msgs/PointCloud2 cloud_corner # extracted corner feature
sensor_msgs/PointCloud2 cloud_surface # extracted surface feature

In imuHandler, the IMU data is transformed into the lidar frame and then pushed into a double-ended queue of IMU frames. The transformation requires the imu->lidar extrinsics, set in advance in params:

# Extrinsics (lidar -> IMU)
extrinsicTrans: [0.0, 0.0, 0.0]
extrinsicRot: [-1,  0,  0,
                0,  1,  0,
                0,  0, -1]
extrinsicRPY: [ 0,  1,  0,
               -1,  0,  0,
                0,  0,  1]

The conversion proceeds as follows; essentially the angular velocity and acceleration are rotated into the lidar frame.

sensor_msgs::Imu imuConverter(const sensor_msgs::Imu &imu_in) {
    sensor_msgs::Imu imu_out = imu_in;
    // rotate acceleration into the lidar frame
    Eigen::Vector3d acc(imu_in.linear_acceleration.x, imu_in.linear_acceleration.y, imu_in.linear_acceleration.z);
    acc = extRot * acc; // extRot is the rotation part of the lidar-to-imu extrinsics
    imu_out.linear_acceleration.x = acc.x();
    imu_out.linear_acceleration.y = acc.y();
    imu_out.linear_acceleration.z = acc.z();
    // rotate angular velocity into the lidar frame
    Eigen::Vector3d gyr(imu_in.angular_velocity.x, imu_in.angular_velocity.y, imu_in.angular_velocity.z);
    gyr = extRot * gyr;
    imu_out.angular_velocity.x = gyr.x();
    imu_out.angular_velocity.y = gyr.y();
    imu_out.angular_velocity.z = gyr.z();
    // rotate roll pitch yaw
    Eigen::Quaterniond q_from(imu_in.orientation.w, imu_in.orientation.x, imu_in.orientation.y,
                              imu_in.orientation.z);
    Eigen::Quaterniond q_final = q_from * extQRPY; // carry the rpy over to the lidar frame
    imu_out.orientation.x = q_final.x();
    imu_out.orientation.y = q_final.y();
    imu_out.orientation.z = q_final.z();
    imu_out.orientation.w = q_final.w();

    return imu_out;
}

odometryHandler likewise just buffers frames into a double-ended queue; the odom data is already global, so no transformation is needed.
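For completeness, that handler is essentially just a lock and a push (a sketch; the member names follow the surrounding snippets):

void odometryHandler(const nav_msgs::Odometry::ConstPtr &odometryMsg) {
    std::lock_guard<std::mutex> lock(odoLock); // same mutex taken in deskewInfo()
    odomQueue.push_back(*odometryMsg);         // buffer only; consumed by odomDeskewInfo()
}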

cloudHandler holds all the business logic. Its main steps are configuring the deskew parameters, projecting the cloud, and removing the distortion.

void cloudHandler(const sensor_msgs::PointCloud2ConstPtr &laserCloudMsg) {
    // clear the temporary cloud and check that this frame has ring and time channels
    if (!cachePointCloud(laserCloudMsg))
        return;

    // Set up the imu and odom parameters used for deskewing:
    // find the GPS odom and imu data bracketing the cloud's timestamp and compute
    // the pose and rotation increments over that interval,
    // to be used for deskewing below
    if (!deskewInfo())
        return;

    // every incoming cloud frame goes through this pipeline

    // project the cloud into a range image -> deskew
    projectPointCloud();

    // extract the cloud from the range image for the lidar odometry
    cloudExtraction();

    // publish the clouds
    publishClouds();

    // reset parameters
    resetParameters();
}

cachePointCloud mainly checks whether the current cloud has ring and time channels. deskewInfo then walks the imu and odom queues to find the data corresponding to this cloud frame, including integrating the angle turned during the scan.
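cachePointCloud itself is not printed in this post; its channel check boils down to scanning the PointCloud2 field list (a sketch with a hypothetical helper name, not the verbatim source):

#include <sensor_msgs/PointCloud2.h>
#include <string>

// A PointCloud2 message lists its per-point channels in msg.fields;
// "ring" and "time" must both be present for deskewing to work.
bool hasField(const sensor_msgs::PointCloud2 &msg, const std::string &name) {
    for (const auto &field : msg.fields)
        if (field.name == name)
            return true;
    return false;
}

With the channels verified, deskewInfo prepares the compensation data: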

bool deskewInfo() {
    std::lock_guard<std::mutex> lock1(imuLock);
    std::lock_guard<std::mutex> lock2(odoLock);

    // require imu data, and require the cloud's time span to lie
    // inside the timestamps of the buffered imu data
    if (imuQueue.empty() || imuQueue.front().header.stamp.toSec() > timeScanCur ||
        imuQueue.back().header.stamp.toSec() < timeScanNext) {
        ROS_DEBUG("Waiting for IMU data ...");
        return false;
    }

    // walk the imu queue and compute the imu data for this cloud frame,
    // integrating the rotation accumulated over the scan,
    // used for deskewing below
    imuDeskewInfo();

    // configure the odom deskew parameters
    odomDeskewInfo();

    return true;
}

First, imuDeskewInfo integrates, frame by frame, the angle each imu sample has rotated relative to the previous one. It requires the cloud's time span to lie inside the imu queue; once done, the imuAvailable flag is set to true, meaning imu data both before and after this cloud frame is available.

void imuDeskewInfo() {
    // consumed by the map-optimization node: starts false,
    // set true once the steps below succeed
    cloudInfo.imuAvailable = false;

    // imu deskew parameters
    // timeScanCur is the timestamp of the current cloud frame
    while (!imuQueue.empty()) {
        // drop imu data that precedes the scan start by more than 0.01 s
        if (imuQueue.front().header.stamp.toSec() < timeScanCur - 0.01)
            imuQueue.pop_front();
        else
            break;
    }

    if (imuQueue.empty())
        return;

    imuPointerCur = 0;

    for (int i = 0; i < imuQueue.size(); ++i) {
        sensor_msgs::Imu thisImuMsg = imuQueue[i];
        double currentImuTime = thisImuMsg.header.stamp.toSec();

        // get roll, pitch, and yaw estimation for this scan
        if (currentImuTime <= timeScanCur) // imu message precedes the scan
            // take the imu euler angles as the scan's attitude estimate and write them
            // straight into cloudInfo for use by the map-optimization node
            imuRPY2rosRPY(&thisImuMsg, &cloudInfo.imuRollInit, &cloudInfo.imuPitchInit, &cloudInfo.imuYawInit);
        // stop once the imu time passes the next scan by more than 0.01 s;
        // this only helps when the imu rate exceeds 100 Hz
        if (currentImuTime > timeScanNext + 0.01)
            break;

        // on the first iteration everything is initialized to zero
        if (imuPointerCur == 0) {
            imuRotX[0] = 0;
            imuRotY[0] = 0;
            imuRotZ[0] = 0;
            imuTime[0] = currentImuTime;
            ++imuPointerCur;
            continue;
        }

        // get angular velocity from the imu message
        double angular_x, angular_y, angular_z;
        imuAngular2rosAngular(&thisImuMsg, &angular_x, &angular_y, &angular_z);

        // integrate rotation:
        // accumulate the angular velocity over this interval into rotation angles,
        // used later for deskewing
        double timeDiff = currentImuTime - imuTime[imuPointerCur - 1];
        imuRotX[imuPointerCur] = imuRotX[imuPointerCur - 1] + angular_x * timeDiff;
        imuRotY[imuPointerCur] = imuRotY[imuPointerCur - 1] + angular_y * timeDiff;
        imuRotZ[imuPointerCur] = imuRotZ[imuPointerCur - 1] + angular_z * timeDiff;
        imuTime[imuPointerCur] = currentImuTime;
        ++imuPointerCur;
    }

    --imuPointerCur;

    if (imuPointerCur <= 0)
        return;

    // imu data for this cloud frame is usable
    cloudInfo.imuAvailable = true;
}

odomDeskewInfo does something similar: it finds the odom data bracketing the current cloud frame and computes the pose increment over that interval for later deskewing; each cloud frame likewise carries an odomAvailable flag.

void odomDeskewInfo() {
    // like the imu flag: marks whether the odom info for this cloud frame is valid
    cloudInfo.odomAvailable = false;

    while (!odomQueue.empty()) {
        // keep the cloud's timestamp inside the odom queue
        if (odomQueue.front().header.stamp.toSec() < timeScanCur - 0.01)
            odomQueue.pop_front();
        else
            break;
    }

    if (odomQueue.empty())
        return;

    // the cloud precedes all buffered odom data
    if (odomQueue.front().header.stamp.toSec() > timeScanCur)
        return;

    // get start odometry at the beginning of the scan:
    // walk the odom queue and use its pose as the cloud's predicted pose
    nav_msgs::Odometry startOdomMsg;

    for (int i = 0; i < odomQueue.size(); ++i) {
        startOdomMsg = odomQueue[i];

        // data more than 0.01 s older than timeScanCur was popped above,
        // so startOdomMsg ends up as the odometry message at the start of the scan
        if (ROS_TIME(&startOdomMsg) < timeScanCur)
            continue;
        else
            break;
    }

    tf::Quaternion orientation;
    tf::quaternionMsgToTF(startOdomMsg.pose.pose.orientation, orientation);

    double roll, pitch, yaw;
    tf::Matrix3x3(orientation).getRPY(roll, pitch, yaw);

    // Initial guess used in mapOptimization:
    // the pose at the start of the odom queue becomes the cloud's initial pose
    cloudInfo.initialGuessX = startOdomMsg.pose.pose.position.x;
    cloudInfo.initialGuessY = startOdomMsg.pose.pose.position.y;
    cloudInfo.initialGuessZ = startOdomMsg.pose.pose.position.z;
    cloudInfo.initialGuessRoll = roll;
    cloudInfo.initialGuessPitch = pitch;
    cloudInfo.initialGuessYaw = yaw;
    cloudInfo.imuPreintegrationResetId = round(startOdomMsg.pose.covariance[0]);

    cloudInfo.odomAvailable = true;

    // get end odometry at the end of the scan;
    // unrelated to the initial pose, used for deskewing / motion compensation
    odomDeskewFlag = false;

    // compute the odometry message at the end of the scan
    if (odomQueue.back().header.stamp.toSec() < timeScanNext)
        return;

    // odom at the end of the scan
    nav_msgs::Odometry endOdomMsg;
    for (int i = 0; i < odomQueue.size(); ++i) {
        endOdomMsg = odomQueue[i];

        if (ROS_TIME(&endOdomMsg) < timeScanNext)
            continue;
        else
            break;
    }

    // bail out if the pose covariances of the two messages disagree
    if (int(round(startOdomMsg.pose.covariance[0])) != int(round(endOdomMsg.pose.covariance[0])))
        return;

    // transform at the start of the scan
    Eigen::Affine3f transBegin = pcl::getTransformation(startOdomMsg.pose.pose.position.x,
                                                        startOdomMsg.pose.pose.position.y,
                                                        startOdomMsg.pose.pose.position.z, roll, pitch, yaw);

    // transform at the end of the scan
    tf::quaternionMsgToTF(endOdomMsg.pose.pose.orientation, orientation);
    tf::Matrix3x3(orientation).getRPY(roll, pitch, yaw);
    Eigen::Affine3f transEnd = pcl::getTransformation(endOdomMsg.pose.pose.position.x,
                                                      endOdomMsg.pose.pose.position.y,
                                                      endOdomMsg.pose.pose.position.z, roll, pitch, yaw);

    // transform between the start and end of the scan, following LOAM
    Eigen::Affine3f transBt = transBegin.inverse() * transEnd;

    // the start-to-end odom increment, used later for deskewing
    float rollIncre, pitchIncre, yawIncre;
    pcl::getTranslationAndEulerAngles(transBt, odomIncreX, odomIncreY, odomIncreZ, rollIncre, pitchIncre, yawIncre);

    // mark as valid
    odomDeskewFlag = true;
}

After the above, every cloud frame has its neighboring odom and imu frames and their pose increments. projectPointCloud then projects the cloud into a range image and deskews it; this is the core of the module, detailed below.

void projectPointCloud() {
    // project the cloud into a range image
    int cloudSize = laserCloudIn->points.size();
    // range image projection
    for (int i = 0; i < cloudSize; ++i) {
        // iterate over the points, sorting them by ring
        PointType thisPoint;
        thisPoint.x = laserCloudIn->points[i].x;
        thisPoint.y = laserCloudIn->points[i].y;
        thisPoint.z = laserCloudIn->points[i].z;
        thisPoint.intensity = laserCloudIn->points[i].intensity;

        // without a ring channel this would have to be derived from the vertical angle
        int rowIdn = laserCloudIn->points[i].ring;
        if (rowIdn < 0 || rowIdn >= N_SCAN)
            continue;
        // horizontal angle of the point: how far the scan has swept determines
        // each point's column index in the range image and its timestamp
        float horizonAngle = atan2(thisPoint.x, thisPoint.y) * 180 / M_PI;
        // angular resolution, 360/1800
        float ang_res_x = 360.0 / float(Horizon_SCAN); // number of columns in the range image
        int columnIdn = -round((horizonAngle - 90.0) / ang_res_x) + Horizon_SCAN / 2; // column
        if (columnIdn >= Horizon_SCAN)
            columnIdn -= Horizon_SCAN;

        if (columnIdn < 0 || columnIdn >= Horizon_SCAN)
            continue;

        // distance from the point to the lidar; discard points that are too close
        float range = pointDistance(thisPoint);

        if (range < 1.0)
            continue;

        if (rangeMat.at<float>(rowIdn, columnIdn) != FLT_MAX)
            continue;

        // for the amsterdam dataset
        // if (range < 6.0 && rowIdn <= 7 && (columnIdn >= 1600 || columnIdn <= 200))
        //     continue;
        // if (thisPoint.z < -2.0)
        //     continue;

        // the pixel value of the range image stores the range
        rangeMat.at<float>(rowIdn, columnIdn) = range;

        // deskew / motion-compensate the point, using the time field of the lidar message
        thisPoint = deskewPoint(&thisPoint, laserCloudIn->points[i].time);
        // flat index, analogous to a pixel index in an image
        int index = columnIdn + rowIdn * Horizon_SCAN;
        // store the deskewed point into fullCloud
        fullCloud->points[index] = thisPoint;
    }
}

The deskewed points are all stored in fullCloud; the deskewing itself lives in deskewPoint, which works as follows:

PointType deskewPoint(PointType *point, double relTime) {
    // deskew each point according to its timestamp
    if (deskewFlag == -1 || cloudInfo.imuAvailable == false)
        return *point;

    // relTime is the point's time offset within the current frame
    double pointTime = timeScanCur + relTime;

    // compensate rotation with the imu: look up the rotation at the point's timestamp
    float rotXCur, rotYCur, rotZCur;
    findRotation(pointTime, &rotXCur, &rotYCur, &rotZCur);

    // compensate translation with the odom data
    float posXCur, posYCur, posZCur;
    findPosition(relTime, &posXCur, &posYCur, &posZCur);

    // for the first point of the frame
    if (firstPointFlag == true) {
        // build the start transform and invert it
        transStartInverse = (pcl::getTransformation(posXCur, posYCur, posZCur, rotXCur, rotYCur,
                                                    rotZCur)).inverse();
        firstPointFlag = false;
    }

    // transform points to start:
    // project each point back to the start time of the scan, following LOAM
    Eigen::Affine3f transFinal = pcl::getTransformation(posXCur, posYCur, posZCur, rotXCur, rotYCur, rotZCur);
    Eigen::Affine3f transBt = transStartInverse * transFinal;

    // the deskewed point
    PointType newPoint;
    newPoint.x = transBt(0, 0) * point->x + transBt(0, 1) * point->y + transBt(0, 2) * point->z + transBt(0, 3);
    newPoint.y = transBt(1, 0) * point->x + transBt(1, 1) * point->y + transBt(1, 2) * point->z + transBt(1, 3);
    newPoint.z = transBt(2, 0) * point->x + transBt(2, 1) * point->y + transBt(2, 2) * point->z + transBt(2, 3);
    newPoint.intensity = point->intensity;

    return newPoint;
}

For each point, rotation is compensated with imu data and translation with odom data. Since the points within a scan are ordered by time, the pose increment at each point's timestamp must be interpolated precisely from the imu and odom queues; this happens in findRotation and findPosition.

void findRotation(double pointTime, float *rotXCur, float *rotYCur, float *rotZCur) {
    *rotXCur = 0;
    *rotYCur = 0;
    *rotZCur = 0;

    // advance until the imu timestamp passes pointTime;
    // ideally pointTime ends up bracketed by two imu frames
    int imuPointerFront = 0;
    while (imuPointerFront < imuPointerCur) {
        if (pointTime < imuTime[imuPointerFront])
            break;
        ++imuPointerFront;
    }

    // if pointTime falls at the boundary of the imu queue, take the accumulated
    // rotation directly (already computed in imuDeskewInfo);
    // if it falls between two imu frames, interpolate by the time ratio
    if (pointTime > imuTime[imuPointerFront] || imuPointerFront == 0) {
        *rotXCur = imuRotX[imuPointerFront];
        *rotYCur = imuRotY[imuPointerFront];
        *rotZCur = imuRotZ[imuPointerFront];
    } else {
        // interpolate the rotation at the point's timestamp
        int imuPointerBack = imuPointerFront - 1;
        // back <= pointTime <= front:
        // weight by where pointTime sits between the two imu frames
        double ratioFront =
                (pointTime - imuTime[imuPointerBack]) / (imuTime[imuPointerFront] - imuTime[imuPointerBack]);
        double ratioBack = (imuTime[imuPointerFront] - pointTime) / (imuTime[imuPointerFront] - imuTime[imuPointerBack]);
        *rotXCur = imuRotX[imuPointerFront] * ratioFront + imuRotX[imuPointerBack] * ratioBack;
        *rotYCur = imuRotY[imuPointerFront] * ratioFront + imuRotY[imuPointerBack] * ratioBack;
        *rotZCur = imuRotZ[imuPointerFront] * ratioFront + imuRotZ[imuPointerBack] * ratioBack;
    }
}

void findPosition(double relTime, float *posXCur, float *posYCur, float *posZCur) {

    // the body is commented out: it uses the odometry increment and might help
    // at high speeds, but brings little benefit for slow-moving platforms
    *posXCur = 0;
    *posYCur = 0;
    *posZCur = 0;

    // If the sensor moves relatively slow, like walking speed, positional deskew seems to have little benefits. Thus code below is commented.

    // if (cloudInfo.odomAvailable == false || odomDeskewFlag == false)
    //     return;

    // float ratio = relTime / (timeScanNext - timeScanCur);

    // *posXCur = ratio * odomIncreX;
    // *posYCur = ratio * odomIncreY;
    // *posZCur = ratio * odomIncreZ;
}

Finally the points are extracted from the range image and published; the post skips that code, but a sketch follows.
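Briefly, cloudExtraction walks the range image row by row and records, per ring, the index range usable for feature extraction. A sketch reconstructed from the cloud_info fields shown earlier, not guaranteed to match the source line for line:

void cloudExtraction() {
    int count = 0;
    for (int i = 0; i < N_SCAN; ++i) {
        // leave a 5-point margin at both ends of each ring so that the
        // curvature window in featureExtraction never runs off the row
        cloudInfo.startRingIndex[i] = count - 1 + 5;
        for (int j = 0; j < Horizon_SCAN; ++j) {
            if (rangeMat.at<float>(i, j) != FLT_MAX) {
                cloudInfo.pointColInd[count] = j;                       // column in the range image
                cloudInfo.pointRange[count] = rangeMat.at<float>(i, j); // range of the point
                extractedCloud->push_back(fullCloud->points[j + i * Horizon_SCAN]);
                ++count;
            }
        }
        cloudInfo.endRingIndex[i] = count - 1 - 5;
    }
}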

Feature Extraction

In the feature-extraction module, each point's curvature is computed on the deskewed cloud, and edge and planar features are selected by curvature, following LOAM.

// subscribe only to the deskewed cloud; the logic again lives in the cloud callback
subLaserCloudInfo = nh.subscribe<lio_sam::cloud_info>("lio_sam/deskew/cloud_info", 1, &FeatureExtraction::laserCloudInfoHandler, this, ros::TransportHints().tcpNoDelay());

// two kinds of features are extracted: surface and corner
pubLaserCloudInfo = nh.advertise<lio_sam::cloud_info>("lio_sam/feature/cloud_info", 1);
pubCornerPoints = nh.advertise<sensor_msgs::PointCloud2>("lio_sam/feature/cloud_corner", 1);
pubSurfacePoints = nh.advertise<sensor_msgs::PointCloud2>("lio_sam/feature/cloud_surface", 1);

laserCloudInfoHandler computes the curvature, marks occluded and parallel points, extracts the features, and publishes the feature clouds.

void laserCloudInfoHandler(const lio_sam::cloud_infoConstPtr &msgIn) {
    cloudInfo = *msgIn;                                      // new cloud info
    cloudHeader = msgIn->header;                             // new cloud header
    pcl::fromROSMsg(msgIn->cloud_deskewed, *extractedCloud); // new cloud for extraction

    // compute each point's curvature
    calculateSmoothness();

    // mark occluded and parallel points
    markOccludedPoints();

    // extract surface and corner features
    extractFeatures();

    // publish the feature clouds
    publishFeatureCloud();
}

The curvature is the squared sum of the differences between a point and the five points on each side of it, describing the local roughness of the cloud. Here it uses range values instead of xyz distances, which makes little difference. Note that when iterating over the cloud, the points are ordered by scan time, so a point's neighbors in the array really are spatially adjacent.

void calculateSmoothness() {
    int cloudSize = extractedCloud->points.size();
    // Curvature: squared sum of range differences over the 5 points on each side,
    // stored in cloudCurvature.
    // Sum the ranges of the five points before and after, minus 10x the point's
    // own range: the difference captures the local undulation around the point.
    for (int i = 5; i < cloudSize - 5; i++) {
        float diffRange = cloudInfo.pointRange[i - 5] + cloudInfo.pointRange[i - 4]
                          + cloudInfo.pointRange[i - 3] + cloudInfo.pointRange[i - 2]
                          + cloudInfo.pointRange[i - 1] - cloudInfo.pointRange[i] * 10
                          + cloudInfo.pointRange[i + 1] + cloudInfo.pointRange[i + 2]
                          + cloudInfo.pointRange[i + 3] + cloudInfo.pointRange[i + 4]
                          + cloudInfo.pointRange[i + 5];

        cloudCurvature[i] = diffRange * diffRange; // diffX * diffX + diffY * diffY + diffZ * diffZ;

        cloudNeighborPicked[i] = 0;
        cloudLabel[i] = 0;
        // cloudSmoothness for sorting:
        // keep curvature together with its index, since sorting by curvature
        // later destroys the point order
        cloudSmoothness[i].value = cloudCurvature[i];
        cloudSmoothness[i].ind = i;
    }
}

Then occluded and parallel points are marked, following LOAM:

void markOccludedPoints() {
    // cloudNeighborPicked marks whether a point may be chosen as a feature
    int cloudSize = extractedCloud->points.size();
    // mark occluded points and parallel beam points
    for (int i = 5; i < cloudSize - 6; ++i) {
        // occluded points
        float depth1 = cloudInfo.pointRange[i];
        float depth2 = cloudInfo.pointRange[i + 1];
        // column difference in the range image
        int columnDiff = std::abs(int(cloudInfo.pointColInd[i + 1] - cloudInfo.pointColInd[i]));

        // occlusion and parallel-beam handling follows LOAM
        if (columnDiff < 10) {
            // 10 pixel diff in range image:
            // classify by the depth gap and set the picked flag to 1
            if (depth1 - depth2 > 0.3) {
                cloudNeighborPicked[i - 5] = 1;
                cloudNeighborPicked[i - 4] = 1;
                cloudNeighborPicked[i - 3] = 1;
                cloudNeighborPicked[i - 2] = 1;
                cloudNeighborPicked[i - 1] = 1;
                cloudNeighborPicked[i] = 1;
            } else if (depth2 - depth1 > 0.3) {
                cloudNeighborPicked[i + 1] = 1;
                cloudNeighborPicked[i + 2] = 1;
                cloudNeighborPicked[i + 3] = 1;
                cloudNeighborPicked[i + 4] = 1;
                cloudNeighborPicked[i + 5] = 1;
                cloudNeighborPicked[i + 6] = 1;
            }
        }
        // parallel beam:
        // use the depth differences to the left and right neighbors to decide
        // whether this point may be selected as a feature
        float diff1 = std::abs(float(cloudInfo.pointRange[i - 1] - cloudInfo.pointRange[i]));
        float diff2 = std::abs(float(cloudInfo.pointRange[i + 1] - cloudInfo.pointRange[i]));

        if (diff1 > 0.02 * cloudInfo.pointRange[i] && diff2 > 0.02 * cloudInfo.pointRange[i])
            cloudNeighborPicked[i] = 1;
    }
}

Feature extraction filters planar and edge points by curvature magnitude, suppresses clustered picks, and keeps the features evenly distributed in all directions by splitting each ring into sub-regions; the engineering tricks here deserve attention.

void extractFeatures() {
    cornerCloud->clear();
    surfaceCloud->clear();

    // Extract corner and surface features.
    // To spread features evenly in all directions, each ring is split into 6
    // sub-regions; within each sub-region the points are sorted by curvature,
    // with sp and ep the start and end indices of that segment,
    // from which corner and planar points are selected.
    pcl::PointCloud<PointType>::Ptr surfaceCloudScan(new pcl::PointCloud<PointType>());
    pcl::PointCloud<PointType>::Ptr surfaceCloudScanDS(new pcl::PointCloud<PointType>());

    for (int i = 0; i < N_SCAN; i++) {
        surfaceCloudScan->clear();

        // iterate over the 6 sub-regions of this ring
        for (int j = 0; j < 6; j++) {
            int sp = (cloudInfo.startRingIndex[i] * (6 - j) + cloudInfo.endRingIndex[i] * j) / 6;
            int ep = (cloudInfo.startRingIndex[i] * (5 - j) + cloudInfo.endRingIndex[i] * (j + 1)) / 6 - 1;

            if (sp >= ep)
                continue;

            // sort the points of each sub-region by curvature
            std::sort(cloudSmoothness.begin() + sp, cloudSmoothness.begin() + ep, by_value());

            int largestPickedNum = 0;
            for (int k = ep; k >= sp; k--) {
                int ind = cloudSmoothness[k].ind;
                // high-curvature points are edge points
                if (cloudNeighborPicked[ind] == 0 && cloudCurvature[ind] > edgeThreshold) {
                    largestPickedNum++;
                    if (largestPickedNum <= 20) { // keep up to 20 edge points per sub-region
                        cloudLabel[ind] = 1;
                        cornerCloud->push_back(extractedCloud->points[ind]);
                    } else {
                        break;
                    }
                    // Prevent features from clustering: for the 5 points after ind,
                    // check the column gap between consecutive points; while
                    // columnDiff <= 10, mark them as already picked.
                    cloudNeighborPicked[ind] = 1;
                    for (int l = 1; l <= 5; l++) {
                        int columnDiff = std::abs(int(cloudInfo.pointColInd[ind + l] - cloudInfo.pointColInd[ind + l - 1]));
                        if (columnDiff > 10)
                            break;
                        cloudNeighborPicked[ind + l] = 1;
                    }
                    for (int l = -1; l >= -5; l--) {
                        int columnDiff = std::abs(
                                int(cloudInfo.pointColInd[ind + l] - cloudInfo.pointColInd[ind + l + 1]));
                        if (columnDiff > 10)
                            break;
                        cloudNeighborPicked[ind + l] = 1;
                    }
                }
            }

            // mark planar points
            int smallestPickedNum = 0;
            for (int k = sp; k <= ep; k++) {
                int ind = cloudSmoothness[k].ind;
                if (cloudNeighborPicked[ind] == 0 && cloudCurvature[ind] < surfThreshold) {
                    // pick among the unmarked, low-curvature points
                    cloudLabel[ind] = -1;
                    cloudNeighborPicked[ind] = 1;

                    // prevent planar points from clustering
                    for (int l = 1; l <= 5; l++) {
                        int columnDiff = std::abs(
                                int(cloudInfo.pointColInd[ind + l] - cloudInfo.pointColInd[ind + l - 1]));
                        if (columnDiff > 10)
                            break;
                        cloudNeighborPicked[ind + l] = 1;
                    }
                    for (int l = -1; l >= -5; l--) {
                        int columnDiff = std::abs(
                                int(cloudInfo.pointColInd[ind + l] - cloudInfo.pointColInd[ind + l + 1]));
                        if (columnDiff > 10)
                            break;
                        cloudNeighborPicked[ind + l] = 1;
                    }
                }
            }

            // points whose label is <= 0 are treated as planar points
            for (int k = sp; k <= ep; k++) {
                if (cloudLabel[k] <= 0) {
                    surfaceCloudScan->push_back(extractedCloud->points[k]);
                }
            }
        }

        // downsample
        surfaceCloudScanDS->clear();
        downSizeFilter.setInputCloud(surfaceCloudScan);
        downSizeFilter.filter(*surfaceCloudScanDS);

        *surfaceCloud += *surfaceCloudScanDS;
    }
}

Map Optimization

This module lumps lidar odometry, loop closure, and global optimization together, which makes the code hard to read. It mainly does scan-to-map matching with point-to-line and point-to-plane constraints, and uses loop closures and the GPS odom for globally consistent optimization. Two extra threads handle loop closure and visualization: the loopClosureThread thread detects loops and runs loop optimization, while visualizeGlobalMapThread publishes, saves, and visualizes the global map.

// two threads: one detects loop closures at a fixed rate and adds constraint edges,
// the other publishes and saves the map
std::thread loopthread(&mapOptimization::loopClosureThread, &MO);
std::thread visualizeMapThread(&mapOptimization::visualizeGlobalMapThread, &MO);

The subscriptions use the GPS odom and the feature clouds, and the node finally publishes the optimized lidar odom. All the business logic again lives in laserCloudInfoHandler.

// Subscribers: the classified cloud_info and the gps, used for back-end
// optimization and loop closure.
// Note the gps arrives as a nav_msgs::Odometry message, produced by the
// robot_localization node fusing imu and gps.
// The callbacks only buffer data into queues.
subLaserCloudInfo = nh.subscribe<lio_sam::cloud_info>("lio_sam/feature/cloud_info", 1,
                                                      &mapOptimization::laserCloudInfoHandler, this,
                                                      ros::TransportHints().tcpNoDelay());
subGPS = nh.subscribe<nav_msgs::Odometry>(gpsTopic, 200, &mapOptimization::gpsHandler, this,
                                          ros::TransportHints().tcpNoDelay());

// Publishers: odometry and friends
pubKeyPoses = nh.advertise<sensor_msgs::PointCloud2>("lio_sam/mapping/trajectory", 1);
pubLaserCloudSurround = nh.advertise<sensor_msgs::PointCloud2>("lio_sam/mapping/map_global", 1); // global map
pubOdomAftMappedROS = nh.advertise<nav_msgs::Odometry>("lio_sam/mapping/odometry", 1);           // optimized odom
pubPath = nh.advertise<nav_msgs::Path>("lio_sam/mapping/path", 1);

// historical frames used for loop closure
pubHistoryKeyFrames = nh.advertise<sensor_msgs::PointCloud2>("lio_sam/mapping/icp_loop_closure_history_cloud", 1);
pubIcpKeyFrames = nh.advertise<sensor_msgs::PointCloud2>("lio_sam/mapping/icp_loop_closure_corrected_cloud", 1);

// local map
pubRecentKeyFrames = nh.advertise<sensor_msgs::PointCloud2>("lio_sam/mapping/map_local", 1);
pubRecentKeyFrame = nh.advertise<sensor_msgs::PointCloud2>("lio_sam/mapping/cloud_registered", 1);
pubCloudRegisteredRaw = nh.advertise<sensor_msgs::PointCloud2>("lio_sam/mapping/cloud_registered_raw", 1);

The overall flow: the projection module already found each cloud's neighboring imu and GPS odom frames and computed their pose increments, recorded in the odomAvailable and imuAvailable flags. The first step here is updating the initial pose; after that come extracting nearby keyframes, building the point-to-plane and point-to-line constraints plus GPS constraints, factor graph optimization, and updating the odom.

// update the map every 0.15 s
if (timeLaserCloudInfoLast - timeLastProcessing >= mappingProcessInterval) {

    timeLastProcessing = timeLaserCloudInfoLast;

    updateInitialGuess();          // initialize the pose from the imu preintegration result or the gps odom

    extractSurroundingKeyFrames(); // extract nearby keyframes (loop candidates) from the keyframe set

    downsampleCurrentScan();       // downsample the corner and surface clouds with different leaf sizes

    scan2MapOptimization();        // build point-to-plane and point-to-line residuals, optimize with Gauss-Newton

    saveKeyFramesAndFactor();      // add factors, save key poses, and so on

    correctPoses();                // update the poses

    // publish odom
    publishOdometry();             // publish the incrementally smoothed odom

    publishFrames();               // publish keyframe clouds and the like
}

Pose Initialization

In the pose-initialization part, the initial pose is stored in transformTobeMapped, holding xyz and the rpy angles. When the first cloud arrives, the attitude is initialized directly from the recorded imuRollInit and friends; I still have some questions here, to be filled in later.

void updateInitialGuess() {
    // Update the initial pose, from either the GPS odom or the previous pose;
    // stored in transformTobeMapped.
    // initialization
    if (cloudKeyPoses3D->points.empty()) {
        // first cloud: use the rpy integrated from this cloud's imu queue directly
        transformTobeMapped[0] = cloudInfo.imuRollInit;
        transformTobeMapped[1] = cloudInfo.imuPitchInit;
        transformTobeMapped[2] = cloudInfo.imuYawInit;

        if (!useImuHeadingInitialization)
            transformTobeMapped[2] = 0;

        // remember the initial transform
        lastImuTransformation = pcl::getTransformation(0, 0, 0, cloudInfo.imuRollInit, cloudInfo.imuPitchInit,
                                                       cloudInfo.imuYawInit); // save imu before return;
        return;
    }

    // use imu pre-integration estimation for pose guess:
    // if the gps odom is available, use the odom pose computed near this cloud
    // during deskewing as the initial value for the current pose
    if (cloudInfo.odomAvailable == true && cloudInfo.imuPreintegrationResetId == imuPreintegrationResetId) {
        transformTobeMapped[0] = cloudInfo.initialGuessRoll;
        transformTobeMapped[1] = cloudInfo.initialGuessPitch;
        transformTobeMapped[2] = cloudInfo.initialGuessYaw;

        transformTobeMapped[3] = cloudInfo.initialGuessX;
        transformTobeMapped[4] = cloudInfo.initialGuessY;
        transformTobeMapped[5] = cloudInfo.initialGuessZ;

        lastImuTransformation = pcl::getTransformation(0, 0, 0, cloudInfo.imuRollInit, cloudInfo.imuPitchInit,
                                                       cloudInfo.imuYawInit); // save imu before return;
        return;
    }

    // use imu incremental estimation for pose guess (only rotation):
    // if the imu is available, use the attitude integrated from the imu frames
    // around this cloud; only the rotation increment comes from here
    if (cloudInfo.imuAvailable == true) {
        Eigen::Affine3f transBack = pcl::getTransformation(0, 0, 0, cloudInfo.imuRollInit, cloudInfo.imuPitchInit,
                                                           cloudInfo.imuYawInit);
        Eigen::Affine3f transIncre = lastImuTransformation.inverse() * transBack;
        Eigen::Affine3f transTobe = trans2Affine3f(transformTobeMapped);
        Eigen::Affine3f transFinal = transTobe * transIncre;
        pcl::getTranslationAndEulerAngles(transFinal, transformTobeMapped[3], transformTobeMapped[4],
                                          transformTobeMapped[5], transformTobeMapped[0], transformTobeMapped[1], transformTobeMapped[2]);
        lastImuTransformation = pcl::getTransformation(0, 0, 0, cloudInfo.imuRollInit, cloudInfo.imuPitchInit,
                                                       cloudInfo.imuYawInit); // save imu before return;
        return;
    }
}

Local Map Optimization

After pose initialization, nearby keyframes (the local map) are extracted, then the point-to-line and point-to-plane constraints are built and solved with Gauss-Newton.

// this part matches LEGO-LOAM, so only a brief summary
void extractSurroundingKeyFrames() {
    if (cloudKeyPoses3D->points.empty() == true)
        return;

    // if a loop was detected, extract the loop frames; otherwise extract nearby clouds.
    // On the first pass loopClosureEnableFlag = false, so nearby keyframes are extracted
    if (loopClosureEnableFlag == true) {
        extractForLoopClosure();
    } else {
        extractNearby();
    }
}

// extract nearby cloud frames, both corner and surface; cloudKeyPoses3D holds the current position
void extractNearby() {
    pcl::PointCloud<PointType>::Ptr surroundingKeyPoses(new pcl::PointCloud<PointType>());
    pcl::PointCloud<PointType>::Ptr surroundingKeyPosesDS(new pcl::PointCloud<PointType>());
    std::vector<int> pointSearchInd;
    std::vector<float> pointSearchSqDis;

    // extract all the nearby key poses and downsample them: keyframes within the search radius (50 m)
    kdtreeSurroundingKeyPoses->setInputCloud(cloudKeyPoses3D); // create kd-tree
    kdtreeSurroundingKeyPoses->radiusSearch(cloudKeyPoses3D->back(), (double) surroundingKeyframeSearchRadius,
                                            pointSearchInd, pointSearchSqDis);
    // add the qualifying frames to surroundingKeyPoses
    for (int i = 0; i < pointSearchInd.size(); ++i) {
        int id = pointSearchInd[i];
        surroundingKeyPoses->push_back(cloudKeyPoses3D->points[id]);
    }

    downSizeFilterSurroundingKeyPoses.setInputCloud(surroundingKeyPoses);
    downSizeFilterSurroundingKeyPoses.filter(*surroundingKeyPosesDS);

    // also extract some latest key frames in case the robot rotates in one position:
    // keyframes from the last 10 s are added to surroundingKeyPosesDS as well
    int numPoses = cloudKeyPoses3D->size();
    for (int i = numPoses - 1; i >= 0; --i) {
        // include every pose from the last 10 s
        if (timeLaserCloudInfoLast - cloudKeyPoses6D->points[i].time < 10.0)
            surroundingKeyPosesDS->push_back(cloudKeyPoses3D->points[i]);
        else
            break;
    }

    extractCloud(surroundingKeyPosesDS);
}

The extracted local map ends up in laserCloudCornerFromMapDS and laserCloudSurfFromMapDS. scan2MapOptimization then builds the constraints and solves the optimization.

void scan2MapOptimization() {
    if (cloudKeyPoses3D->points.empty())
        return;

    // only proceed if there are enough features
    if (laserCloudCornerLastDSNum > edgeFeatureMinValidNum && laserCloudSurfLastDSNum > surfFeatureMinValidNum) {
        // build kd-trees over the two map clouds
        kdtreeCornerFromMap->setInputCloud(laserCloudCornerFromMapDS);
        kdtreeSurfFromMap->setInputCloud(laserCloudSurfFromMapDS);

        // up to 30 optimization iterations
        for (int iterCount = 0; iterCount < 30; iterCount++) {
            laserCloudOri->clear();
            coeffSel->clear();

            // point-to-plane and point-to-line residuals; written a bit differently from ALOAM
            cornerOptimization();
            surfOptimization();

            // combine the two kinds of residuals
            combineOptimizationCoeffs();

            // Gauss-Newton iteration
            if (LMOptimization(iterCount) == true)
                break;
        }

        // update the transform
        transformUpdate();
    } else {
        ROS_WARN("Not enough features! Only %d edge and %d planar features available.", laserCloudCornerLastDSNum,
                 laserCloudSurfLastDSNum);
    }
}

Below is the construction of the point-to-line and point-to-plane constraints. It is similar to ALOAM, but where ALOAM solves directly with the Ceres solver, this code keeps the hand-derived solution from Ji Zhang's LOAM and computes the coeff of each feature type itself.

void cornerOptimization() {
    updatePointAssociateToMap(); // prepare the transform that maps points into the map frame

    #pragma omp parallel for num_threads(numberOfCores)
    // iterate over the corner cloud and build point-to-line constraints
    for (int i = 0; i < laserCloudCornerLastDSNum; i++) {
        PointType pointOri, pointSel, coeff;
        std::vector<int> pointSearchInd;
        std::vector<float> pointSearchSqDis;

        // search the map for the 5 nearest neighbors of the current point
        pointOri = laserCloudCornerLastDS->points[i];
        pointAssociateToMap(&pointOri, &pointSel);
        kdtreeCornerFromMap->nearestKSearch(pointSel, 5, pointSearchInd, pointSearchSqDis);

        cv::Mat matA1(3, 3, CV_32F, cv::Scalar::all(0));
        cv::Mat matD1(1, 3, CV_32F, cv::Scalar::all(0));
        cv::Mat matV1(3, 3, CV_32F, cv::Scalar::all(0));

        // only proceed if even the farthest neighbor is within 1 m
        if (pointSearchSqDis[4] < 1.0) {
            float cx = 0, cy = 0, cz = 0;
            for (int j = 0; j < 5; j++) {
                cx += laserCloudCornerFromMapDS->points[pointSearchInd[j]].x;
                cy += laserCloudCornerFromMapDS->points[pointSearchInd[j]].y;
                cz += laserCloudCornerFromMapDS->points[pointSearchInd[j]].z;
            }
            // centroid of the neighbors
            cx /= 5;
            cy /= 5;
            cz /= 5;

            // covariance of the neighbors
            float a11 = 0, a12 = 0, a13 = 0, a22 = 0, a23 = 0, a33 = 0;
            for (int j = 0; j < 5; j++) {
                float ax = laserCloudCornerFromMapDS->points[pointSearchInd[j]].x - cx;
                float ay = laserCloudCornerFromMapDS->points[pointSearchInd[j]].y - cy;
                float az = laserCloudCornerFromMapDS->points[pointSearchInd[j]].z - cz;

                a11 += ax * ax;
                a12 += ax * ay;
                a13 += ax * az;
                a22 += ay * ay;
                a23 += ay * az;
                a33 += az * az;
            }
            a11 /= 5;
            a12 /= 5;
            a13 /= 5;
            a22 /= 5;
            a23 /= 5;
            a33 /= 5;

            matA1.at<float>(0, 0) = a11;
            matA1.at<float>(0, 1) = a12;
            matA1.at<float>(0, 2) = a13;
            matA1.at<float>(1, 0) = a12;
            matA1.at<float>(1, 1) = a22;
            matA1.at<float>(1, 2) = a23;
            matA1.at<float>(2, 0) = a13;
            matA1.at<float>(2, 1) = a23;
            matA1.at<float>(2, 2) = a33;

            // eigen-decompose the covariance: eigenvalues in matD1, eigenvectors in matV1
            cv::eigen(matA1, matD1, matV1);

            // if one eigenvalue dominates the other two, the neighbors form a line
            if (matD1.at<float>(0, 0) > 3 * matD1.at<float>(0, 1)) {
                float x0 = pointSel.x;
                float y0 = pointSel.y;
                float z0 = pointSel.z;
                // place two points on the line, 0.1 m apart along the principal eigenvector
                float x1 = cx + 0.1 * matV1.at<float>(0, 0);
                float y1 = cy + 0.1 * matV1.at<float>(0, 1);
                float z1 = cz + 0.1 * matV1.at<float>(0, 2);
                float x2 = cx - 0.1 * matV1.at<float>(0, 0);
                float y2 = cy - 0.1 * matV1.at<float>(0, 1);
                float z2 = cz - 0.1 * matV1.at<float>(0, 2);

                // same computation as in the odometry: distance from the point to the line
                float a012 = sqrt(((x0 - x1) * (y0 - y2) - (x0 - x2) * (y0 - y1)) *
                                  ((x0 - x1) * (y0 - y2) - (x0 - x2) * (y0 - y1))
                                  + ((x0 - x1) * (z0 - z2) - (x0 - x2) * (z0 - z1)) *
                                    ((x0 - x1) * (z0 - z2) - (x0 - x2) * (z0 - z1))
                                  + ((y0 - y1) * (z0 - z2) - (y0 - y2) * (z0 - z1)) *
                                    ((y0 - y1) * (z0 - z2) - (y0 - y2) * (z0 - z1)));

                float l12 = sqrt((x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2) + (z1 - z2) * (z1 - z2));

                float la = ((y1 - y2) * ((x0 - x1) * (y0 - y2) - (x0 - x2) * (y0 - y1))
                            + (z1 - z2) * ((x0 - x1) * (z0 - z2) - (x0 - x2) * (z0 - z1))) / a012 / l12;

                float lb = -((x1 - x2) * ((x0 - x1) * (y0 - y2) - (x0 - x2) * (y0 - y1))
                             - (z1 - z2) * ((y0 - y1) * (z0 - z2) - (y0 - y2) * (z0 - z1))) / a012 / l12;

                float lc = -((x1 - x2) * ((x0 - x1) * (z0 - z2) - (x0 - x2) * (z0 - z1))
                             + (y1 - y2) * ((y0 - y1) * (z0 - z2) - (y0 - y2) * (z0 - z1))) / a012 / l12;

                float ld2 = a012 / l12;

                // a simple robust kernel designed by the author
                float s = 1 - 0.9 * fabs(ld2);

                coeff.x = s * la;
                coeff.y = s * lb;
                coeff.z = s * lc;
                coeff.intensity = s * ld2;

                // at the end, s decides whether the point goes into laserCloudOri and coeffSel
                if (s > 0.1) {
                    laserCloudOriCornerVec[i] = pointOri;
                    coeffSelCornerVec[i] = coeff;
                    laserCloudOriCornerFlag[i] = true;
                }
            }
        }
    }
}

// point-to-plane constraints
void surfOptimization() {
    updatePointAssociateToMap();

    #pragma omp parallel for num_threads(numberOfCores)
    for (int i = 0; i < laserCloudSurfLastDSNum; i++) {
        PointType pointOri, pointSel, coeff;
        std::vector<int> pointSearchInd;
        std::vector<float> pointSearchSqDis;

        // find the 5 nearest neighbors and fit a plane through them
        pointOri = laserCloudSurfLastDS->points[i];
        pointAssociateToMap(&pointOri, &pointSel);
        kdtreeSurfFromMap->nearestKSearch(pointSel, 5, pointSearchInd, pointSearchSqDis);

        Eigen::Matrix<float, 5, 3> matA0;
        Eigen::Matrix<float, 5, 1> matB0;
        Eigen::Vector3f matX0;

        matA0.setZero(); // 5x3, holds the 5 neighbors
        matB0.fill(-1);
        matX0.setZero();

        // only consider neighbors within 1.0 m
        if (pointSearchSqDis[4] < 1.0) {
            for (int j = 0; j < 5; j++) {
                matA0(j, 0) = laserCloudSurfFromMapDS->points[pointSearchInd[j]].x;
                matA0(j, 1) = laserCloudSurfFromMapDS->points[pointSearchInd[j]].y;
                matA0(j, 2) = laserCloudSurfFromMapDS->points[pointSearchInd[j]].z;
            }

            // solve for the normal of the plane through the points of matA0
            matX0 = matA0.colPivHouseholderQr().solve(matB0);

            // plane parameters: ax + by + cz + d = 0
            float pa = matX0(0, 0);
            float pb = matX0(1, 0);
            float pc = matX0(2, 0);
            float pd = 1;

            float ps = sqrt(pa * pa + pb * pb + pc * pc);
            pa /= ps;
            pb /= ps;
            pc /= ps;
            pd /= ps;

            // sanity check: every neighbor must lie close enough to the fitted plane
            bool planeValid = true;
            for (int j = 0; j < 5; j++) {
                if (fabs(pa * laserCloudSurfFromMapDS->points[pointSearchInd[j]].x +
                         pb * laserCloudSurfFromMapDS->points[pointSearchInd[j]].y +
                         pc * laserCloudSurfFromMapDS->points[pointSearchInd[j]].z + pd) > 0.2) {
                    planeValid = false;
                    break;
                }
            }

            // the plane is valid
            if (planeValid) {
                float pd2 = pa * pointSel.x + pb * pointSel.y + pc * pointSel.z + pd;
                float s = 1 - 0.9 * fabs(pd2) / sqrt(sqrt(pointSel.x * pointSel.x + pointSel.y * pointSel.y + pointSel.z * pointSel.z));

                coeff.x = s * pa;
                coeff.y = s * pb;
                coeff.z = s * pc;
                coeff.intensity = s * pd2;

                // if the error is acceptable, store the point in laserCloudOri
                // and the corresponding coefficient vector in coeffSel
                if (s > 0.1) {
                    laserCloudOriSurfVec[i] = pointOri;
                    coeffSelSurfVec[i] = coeff;
                    laserCloudOriSurfFlag[i] = true;
                }
            }
        }
    }
}

Finally the two kinds of residuals are combined and optimized with Gauss-Newton. When judging whether the optimization has converged, the norm of the rotation update must be below 0.05 degrees and the norm of the translation update below 0.05 cm. (To be expanded later.)
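Concretely, each iteration below solves the Gauss-Newton normal equations (standard form, in my notation):

$$\mathbf{J}^{\top}\mathbf{J}\,\Delta\mathbf{x}=-\mathbf{J}^{\top}\mathbf{r},\qquad \mathbf{x}\leftarrow\mathbf{x}+\Delta\mathbf{x}$$

Each row of $\mathbf{J}$ is one residual's gradient with respect to (roll, pitch, yaw, x, y, z), which is the matA assembled below, and $\mathbf{r}$ stacks the weighted distances stored in coeff.intensity. On the first iteration, eigenvalues of $\mathbf{J}^{\top}\mathbf{J}$ below the threshold of 100 mark degenerate directions whose updates are zeroed out.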

void combineOptimizationCoeffs() {
    // gather both kinds of residuals and coefficients into laserCloudOri and coeffSel
    // for the joint optimization below
    // combine corner coeffs
    for (int i = 0; i < laserCloudCornerLastDSNum; ++i) {
        if (laserCloudOriCornerFlag[i] == true) {
            laserCloudOri->push_back(laserCloudOriCornerVec[i]);
            coeffSel->push_back(coeffSelCornerVec[i]);
        }
    }
    // combine surf coeffs
    for (int i = 0; i < laserCloudSurfLastDSNum; ++i) {
        if (laserCloudOriSurfFlag[i] == true) {
            laserCloudOri->push_back(laserCloudOriSurfVec[i]);
            coeffSel->push_back(coeffSelSurfVec[i]);
        }
    }
    // reset flags for the next frame
    std::fill(laserCloudOriCornerFlag.begin(), laserCloudOriCornerFlag.end(), false);
    std::fill(laserCloudOriSurfFlag.begin(), laserCloudOriSurfFlag.end(), false);
}

// Gauss-Newton
bool LMOptimization(int iterCount) {
    // This optimization is from the original loam_velodyne by Ji Zhang, need to cope with coordinate transformation
    // lidar <- camera      ---     camera <- lidar
    // x = z                ---     x = y
    // y = x                ---     y = z
    // z = y                ---     z = x
    // roll = yaw           ---     roll = pitch
    // pitch = roll         ---     pitch = yaw
    // yaw = pitch          ---     yaw = roll

    // Gauss-Newton optimization, following LOAM
    // lidar -> camera
    float srx = sin(transformTobeMapped[1]);
    float crx = cos(transformTobeMapped[1]);
    float sry = sin(transformTobeMapped[2]);
    float cry = cos(transformTobeMapped[2]);
    float srz = sin(transformTobeMapped[0]);
    float crz = cos(transformTobeMapped[0]);

    // require at least 50 selected feature points
    int laserCloudSelNum = laserCloudOri->size();
    if (laserCloudSelNum < 50) {
        return false;
    }

    cv::Mat matA(laserCloudSelNum, 6, CV_32F, cv::Scalar::all(0));
    cv::Mat matAt(6, laserCloudSelNum, CV_32F, cv::Scalar::all(0));
    cv::Mat matAtA(6, 6, CV_32F, cv::Scalar::all(0));
    cv::Mat matB(laserCloudSelNum, 1, CV_32F, cv::Scalar::all(0));
    cv::Mat matAtB(6, 1, CV_32F, cv::Scalar::all(0));
    cv::Mat matX(6, 1, CV_32F, cv::Scalar::all(0));
    cv::Mat matP(6, 6, CV_32F, cv::Scalar::all(0));

    PointType pointOri, coeff;

    for (int i = 0; i < laserCloudSelNum; i++) {
        // lidar -> camera
        pointOri.x = laserCloudOri->points[i].y;
        pointOri.y = laserCloudOri->points[i].z;
        pointOri.z = laserCloudOri->points[i].x;
        // lidar -> camera
        coeff.x = coeffSel->points[i].y;
        coeff.y = coeffSel->points[i].z;
        coeff.z = coeffSel->points[i].x;
        coeff.intensity = coeffSel->points[i].intensity;
        // in camera
        // Jacobian of the residual w.r.t. the rotation
        float arx = (crx * sry * srz * pointOri.x + crx * crz * sry * pointOri.y - srx * sry * pointOri.z) * coeff.x
                    + (-srx * srz * pointOri.x - crz * srx * pointOri.y - crx * pointOri.z) * coeff.y
                    + (crx * cry * srz * pointOri.x + crx * cry * crz * pointOri.y - cry * srx * pointOri.z) *
                      coeff.z;

        float ary = ((cry * srx * srz - crz * sry) * pointOri.x
                     + (sry * srz + cry * crz * srx) * pointOri.y + crx * cry * pointOri.z) * coeff.x
                    + ((-cry * crz - srx * sry * srz) * pointOri.x
                       + (cry * srz - crz * srx * sry) * pointOri.y - crx * sry * pointOri.z) * coeff.z;

        float arz =
                ((crz * srx * sry - cry * srz) * pointOri.x + (-cry * crz - srx * sry * srz) * pointOri.y) * coeff.x
                + (crx * crz * pointOri.x - crx * srz * pointOri.y) * coeff.y
                +
                ((sry * srz + cry * crz * srx) * pointOri.x + (crz * sry - cry * srx * srz) * pointOri.y) * coeff.z;
        // lidar -> camera
        matA.at<float>(i, 0) = arz;
        matA.at<float>(i, 1) = arx;
        matA.at<float>(i, 2) = ary;
        matA.at<float>(i, 3) = coeff.z;
        matA.at<float>(i, 4) = coeff.x;
        matA.at<float>(i, 5) = coeff.y;
        matB.at<float>(i, 0) = -coeff.intensity;
    }

    cv::transpose(matA, matAt);
    matAtA = matAt * matA;
    matAtB = matAt * matB;
    cv::solve(matAtA, matAtB, matX, cv::DECOMP_QR);

    // on the first iteration, check for degeneracy:
    // eigenvalues of J^T J below 100 mark degenerate directions;
    // matX is then remapped as matX = matP * matX2
    if (iterCount == 0) {
        cv::Mat matE(1, 6, CV_32F, cv::Scalar::all(0));
        cv::Mat matV(6, 6, CV_32F, cv::Scalar::all(0));
        cv::Mat matV2(6, 6, CV_32F, cv::Scalar::all(0));

        cv::eigen(matAtA, matE, matV);
        matV.copyTo(matV2);

        isDegenerate = false;
        float eignThre[6] = {100, 100, 100, 100, 100, 100};
        for (int i = 5; i >= 0; i--) {
            if (matE.at<float>(0, i) < eignThre[i]) {
                for (int j = 0; j < 6; j++) {
                    matV2.at<float>(i, j) = 0;
                }
                isDegenerate = true;
            } else {
                break;
            }
        }
        matP = matV.inv() * matV2;
    }

    if (isDegenerate) {
        cv::Mat matX2(6, 1, CV_32F, cv::Scalar::all(0));
        matX.copyTo(matX2);
        matX = matP * matX2;
    }

    transformTobeMapped[0] += matX.at<float>(0, 0);
    transformTobeMapped[1] += matX.at<float>(1, 0);
    transformTobeMapped[2] += matX.at<float>(2, 0);
    transformTobeMapped[3] += matX.at<float>(3, 0);
    transformTobeMapped[4] += matX.at<float>(4, 0);
    transformTobeMapped[5] += matX.at<float>(5, 0);

    float deltaR = sqrt(
            pow(pcl::rad2deg(matX.at<float>(0, 0)), 2) +
            pow(pcl::rad2deg(matX.at<float>(1, 0)), 2) +
            pow(pcl::rad2deg(matX.at<float>(2, 0)), 2));
    float deltaT = sqrt(
            pow(matX.at<float>(3, 0) * 100, 2) +
            pow(matX.at<float>(4, 0) * 100, 2) +
            pow(matX.at<float>(5, 0) * 100, 2));

    // converged when the rotation update is below 0.05 degrees
    // and the translation update below 0.05 cm
    if (deltaR < 0.05 && deltaT < 0.05) {
        return true; // converged
    }
    return false; // keep optimizing
}

After the optimization, the current pose is updated; roll and pitch are blended (slerped) toward the imu attitude from this cloud's imu queue to damp perturbations.

void transformUpdate() {
    // if the IMU is available, blend its attitude into transformTobeMapped
    if (cloudInfo.imuAvailable == true) {
        if (std::abs(cloudInfo.imuPitchInit) < 1.4) {
            double imuWeight = 0.05;
            tf::Quaternion imuQuaternion;
            tf::Quaternion transformQuaternion;
            double rollMid, pitchMid, yawMid;

            // slerp roll
            transformQuaternion.setRPY(transformTobeMapped[0], 0, 0);
            imuQuaternion.setRPY(cloudInfo.imuRollInit, 0, 0);
            // spherical linear interpolation
            tf::Matrix3x3(transformQuaternion.slerp(imuQuaternion, imuWeight)).getRPY(rollMid, pitchMid, yawMid);
            transformTobeMapped[0] = rollMid;

            // slerp pitch
            transformQuaternion.setRPY(0, transformTobeMapped[1], 0);
            imuQuaternion.setRPY(0, cloudInfo.imuPitchInit, 0);
            tf::Matrix3x3(transformQuaternion.slerp(imuQuaternion, imuWeight)).getRPY(rollMid, pitchMid, yawMid);
            transformTobeMapped[1] = pitchMid;
        }
    }

    transformTobeMapped[0] = constraintTransformation(transformTobeMapped[0], rotation_tollerance);
    transformTobeMapped[1] = constraintTransformation(transformTobeMapped[1], rotation_tollerance);
    transformTobeMapped[5] = constraintTransformation(transformTobeMapped[5], z_tollerance);
}

Incremental Smoothing of the Full Graph

Next, the lidar odometry and GPS factors are added and jointly optimized. Note that after optimization, the marginal covariance of each variable is extracted.

void saveKeyFramesAndFactor() {
    if (saveFrame() == false)
        return;
    // add the various factors and save the keyframe
    // odom factor
    addOdomFactor();

    // gps factor
    addGPSFactor();

    // cout << "****************************************************" << endl;
    // gtSAMgraph.print("GTSAM Graph:\n");

    // update iSAM
    isam->update(gtSAMgraph, initialEstimate);
    isam->update();

    // update multiple-times till converge
    if (aLoopIsClosed == true) {
        isam->update();
        isam->update();
        isam->update();
        isam->update();
        isam->update();
    }

    gtSAMgraph.resize(0);
    initialEstimate.clear();

    // save key poses
    PointType thisPose3D;
    PointTypePose thisPose6D;
    Pose3 latestEstimate;

    // the latest pose
    isamCurrentEstimate = isam->calculateEstimate();
    latestEstimate = isamCurrentEstimate.at<Pose3>(isamCurrentEstimate.size() - 1);
    // cout << "****************************************************" << endl;
    // isamCurrentEstimate.print("Current estimate: ");

    // keyframes keep being appended to cloudKeyPoses3D / cloudKeyPoses6D here
    thisPose3D.x = latestEstimate.translation().x();
    thisPose3D.y = latestEstimate.translation().y();
    thisPose3D.z = latestEstimate.translation().z();
    thisPose3D.intensity = cloudKeyPoses3D->size(); // this can be used as index
    cloudKeyPoses3D->push_back(thisPose3D);

    thisPose6D.x = thisPose3D.x;
    thisPose6D.y = thisPose3D.y;
    thisPose6D.z = thisPose3D.z;
    thisPose6D.intensity = thisPose3D.intensity; // this can be used as index
    thisPose6D.roll = latestEstimate.rotation().roll();
    thisPose6D.pitch = latestEstimate.rotation().pitch();
    thisPose6D.yaw = latestEstimate.rotation().yaw();
    thisPose6D.time = timeLaserCloudInfoLast;
    cloudKeyPoses6D->push_back(thisPose6D);

    // cout << "****************************************************" << endl;
    // cout << "Pose covariance:" << endl;
    // cout << isam->marginalCovariance(isamCurrentEstimate.size()-1) << endl << endl;
    // marginalize to get the covariance of the latest variable
    poseCovariance = isam->marginalCovariance(isamCurrentEstimate.size() - 1);

    // save updated transform
    transformTobeMapped[0] = latestEstimate.rotation().roll();
    transformTobeMapped[1] = latestEstimate.rotation().pitch();
    transformTobeMapped[2] = latestEstimate.rotation().yaw();
    transformTobeMapped[3] = latestEstimate.translation().x();
    transformTobeMapped[4] = latestEstimate.translation().y();
    transformTobeMapped[5] = latestEstimate.translation().z();

    // save all the received edge and surf points
    pcl::PointCloud<PointType>::Ptr thisCornerKeyFrame(new pcl::PointCloud<PointType>());
    pcl::PointCloud<PointType>::Ptr thisSurfKeyFrame(new pcl::PointCloud<PointType>());
    pcl::copyPointCloud(*laserCloudCornerLastDS, *thisCornerKeyFrame);
    pcl::copyPointCloud(*laserCloudSurfLastDS, *thisSurfKeyFrame);

    // save key frame cloud
    cornerCloudKeyFrames.push_back(thisCornerKeyFrame);
    surfCloudKeyFrames.push_back(thisSurfKeyFrame);

    // save path for visualization
    updatePath(thisPose6D);
}

The lidar odom enters as binary factors, while GPS factors are only added when the pose-estimate covariance is large: continually adding GPS factors whenever reception is available is unnecessary, because lidar-inertial odometry drifts very slowly. In practice, a GPS factor is added only when the estimated position covariance exceeds the covariance reported by the GPS. When a GPS factor is added, the aLoopIsClosed flag is set to true so that the factor graph optimization can run afterwards.

void addOdomFactor() {
    if (cloudKeyPoses3D->points.empty()) {
        // initialize the gtsam graph when the first frame arrives
        noiseModel::Diagonal::shared_ptr priorNoise = noiseModel::Diagonal::Variances(
                (Vector(6) << 1e-2, 1e-2, M_PI * M_PI, 1e8, 1e8, 1e8).finished()); // rad*rad, meter*meter
        // prior factor
        gtSAMgraph.add(PriorFactor<Pose3>(0, trans2gtsamPose(transformTobeMapped), priorNoise));
        initialEstimate.insert(0, trans2gtsamPose(transformTobeMapped));
    } else {
        noiseModel::Diagonal::shared_ptr odometryNoise = noiseModel::Diagonal::Variances(
                (Vector(6) << 1e-6, 1e-6, 1e-6, 1e-4, 1e-4, 1e-4).finished());
        gtsam::Pose3 poseFrom = pclPointTogtsamPose3(cloudKeyPoses6D->points.back());
        gtsam::Pose3 poseTo = trans2gtsamPose(transformTobeMapped);
        // binary (between) factor
        gtSAMgraph.add(BetweenFactor<Pose3>(cloudKeyPoses3D->size() - 1, cloudKeyPoses3D->size(), poseFrom.between(poseTo), odometryNoise));
        initialEstimate.insert(cloudKeyPoses3D->size(), poseTo); // insert the value
    }
}

void addGPSFactor() {
    if (gpsQueue.empty())
        return;

    // wait for system initialized and settles down
    if (cloudKeyPoses3D->points.empty())
        return;
    else {
        if (pointDistance(cloudKeyPoses3D->front(), cloudKeyPoses3D->back()) < 5.0)
            return;
    }

    // pose covariance small, no need to correct
    if (poseCovariance(3, 3) < poseCovThreshold && poseCovariance(4, 4) < poseCovThreshold)
        return;

    // only add gps factors when the pose covariance is large
    while (!gpsQueue.empty()) {
        // align timestamps
        if (gpsQueue.front().header.stamp.toSec() < timeLaserCloudInfoLast - 0.2) {
            // message too old
            gpsQueue.pop_front();
        } else if (gpsQueue.front().header.stamp.toSec() > timeLaserCloudInfoLast + 0.2) {
            // message too new
            break;
        } else {
            nav_msgs::Odometry thisGPS = gpsQueue.front();
            gpsQueue.pop_front();

            // GPS too noisy, skip
            float noise_x = thisGPS.pose.covariance[0];
            float noise_y = thisGPS.pose.covariance[7];
            float noise_z = thisGPS.pose.covariance[14];
            if (noise_x > gpsCovThreshold || noise_y > gpsCovThreshold)
                continue;

            float gps_x = thisGPS.pose.pose.position.x;
            float gps_y = thisGPS.pose.pose.position.y;
            float gps_z = thisGPS.pose.pose.position.z;
            if (!useGpsElevation) {
                gps_z = transformTobeMapped[5]; // gps altitude is usually unreliable
                noise_z = 0.01;
            }

            // GPS not properly initialized (0,0,0)
            if (abs(gps_x) < 1e-6 && abs(gps_y) < 1e-6)
                continue;

            // add the GPS factor
            gtsam::Vector Vector3(3);
            Vector3 << noise_x, noise_y, noise_z;
            noiseModel::Diagonal::shared_ptr gps_noise = noiseModel::Diagonal::Variances(Vector3); // noise definition
            gtsam::GPSFactor gps_factor(cloudKeyPoses3D->size(), gtsam::Point3(gps_x, gps_y, gps_z), gps_noise);
            gtSAMgraph.add(gps_factor);

            aLoopIsClosed = true;
            break;
        }
    }
}

Every time a new factor is added, the aLoopIsClosed flag is set to true, the factor graph optimization runs, the current poses are corrected, and finally the flag is reset and the preintegration marker imuPreintegrationResetId is incremented. imuPreintegrationResetId is stored in the first element of the lidar odom covariance to signal that the pose correction succeeded. It has two uses:

  • In the lidar-imu joint optimization module's odometryHandler, the incoming odom is unpacked to check whether the pose correction has completed; if not, the preintegration parameters must be reset (see the sketch below).
  • In the map optimization module's initial-pose update, it decides whether the predicted odom is usable.
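The first check is not shown in this article; a sketch of what it amounts to in imuPreintegration's odometryHandler, reconstructed under the assumption (consistent with odomDeskewInfo above) that the reset id rides in covariance[0]:

// inside imuPreintegration's odometryHandler (sketch, not verbatim source)
int currentResetId = round(odomMsg->pose.covariance[0]);
if (currentResetId != imuPreintegrationResetId) {
    // mapOptimization has corrected the poses since we last integrated:
    // discard the accumulated preintegration and start over
    resetParams();
    imuPreintegrationResetId = currentResetId;
    return;
}

correctPoses itself, which consumes the flag and bumps the id, follows: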
void correctPoses() {
    if (cloudKeyPoses3D->points.empty())
        return;

    if (aLoopIsClosed == true) {
        // clear path
        globalPath.poses.clear();
        // update the key poses
        int numPoses = isamCurrentEstimate.size();
        for (int i = 0; i < numPoses; ++i) {
            cloudKeyPoses3D->points[i].x = isamCurrentEstimate.at<Pose3>(i).translation().x();
            cloudKeyPoses3D->points[i].y = isamCurrentEstimate.at<Pose3>(i).translation().y();
            cloudKeyPoses3D->points[i].z = isamCurrentEstimate.at<Pose3>(i).translation().z();

            cloudKeyPoses6D->points[i].x = cloudKeyPoses3D->points[i].x;
            cloudKeyPoses6D->points[i].y = cloudKeyPoses3D->points[i].y;
            cloudKeyPoses6D->points[i].z = cloudKeyPoses3D->points[i].z;
            cloudKeyPoses6D->points[i].roll = isamCurrentEstimate.at<Pose3>(i).rotation().roll();
            cloudKeyPoses6D->points[i].pitch = isamCurrentEstimate.at<Pose3>(i).rotation().pitch();
            cloudKeyPoses6D->points[i].yaw = isamCurrentEstimate.at<Pose3>(i).rotation().yaw();

            updatePath(cloudKeyPoses6D->points[i]);
        }

        aLoopIsClosed = false;
        // ID for reseting IMU pre-integration
        ++imuPreintegrationResetId;
    }
}

Visualization and the rest are not covered here.

Loop Closure Detection

Loop detection is based on Euclidean distance; ICP registration then yields the relative pose between the loop frames, which goes into the factor graph as a binary factor, just like LEGO-LOAM. Finally the aLoopIsClosed flag is set to true, so the factor graph optimization will run in laserCloudInfoHandler.

void performLoopClosure() {
    if (cloudKeyPoses3D->points.empty() == true)
        return;

    int latestFrameIDLoopCloure; // id of the newest keyframe in the queue
    int closestHistoryFrameID;   // id of the closest historical keyframe
    if (detectLoopClosure(&latestFrameIDLoopCloure, &closestHistoryFrameID) == false)
        return;

    // a loop was detected: register the two clouds with ICP to get the final transform
    // ICP Settings
    pcl::IterativeClosestPoint<PointType, PointType> icp;
    icp.setMaxCorrespondenceDistance(100);
    icp.setMaximumIterations(100);
    icp.setTransformationEpsilon(1e-6);
    icp.setEuclideanFitnessEpsilon(1e-6);
    icp.setRANSACIterations(0);

    // Downsample map cloud
    pcl::PointCloud<PointType>::Ptr cloud_temp(new pcl::PointCloud<PointType>());
    downSizeFilterICP.setInputCloud(nearHistoryKeyFrameCloud);
    downSizeFilterICP.filter(*cloud_temp);
    *nearHistoryKeyFrameCloud = *cloud_temp;
    // publish history near key frames
    publishCloud(&pubHistoryKeyFrames, nearHistoryKeyFrameCloud, timeLaserInfoStamp, "odom");

    // Align clouds: match the loop frame against the local map
    icp.setInputSource(latestKeyFrameCloud);
    icp.setInputTarget(nearHistoryKeyFrameCloud);
    pcl::PointCloud<PointType>::Ptr unused_result(new pcl::PointCloud<PointType>());
    icp.align(*unused_result);

    // decide whether the match succeeded by thresholding the icp fitness score
    // std::cout << "ICP converg flag:" << icp.hasConverged() << ". Fitness score: " << icp.getFitnessScore() << std::endl;
    if (icp.hasConverged() == false || icp.getFitnessScore() > historyKeyframeFitnessScore)
        return;

    // publish corrected cloud
    if (pubIcpKeyFrames.getNumSubscribers() != 0) {
        pcl::PointCloud<PointType>::Ptr closed_cloud(new pcl::PointCloud<PointType>());
        pcl::transformPointCloud(*latestKeyFrameCloud, *closed_cloud, icp.getFinalTransformation());
        publishCloud(&pubIcpKeyFrames, closed_cloud, timeLaserInfoStamp, "odom");
    }

    // Get pose transformation
    float x, y, z, roll, pitch, yaw;
    Eigen::Affine3f correctionLidarFrame;
    // the frame-to-frame transform found by icp
    correctionLidarFrame = icp.getFinalTransformation();
    // transform from world origin to wrong pose
    Eigen::Affine3f tWrong = pclPointToAffine3f(cloudKeyPoses6D->points[latestFrameIDLoopCloure]);
    // transform from world origin to corrected pose
    Eigen::Affine3f tCorrect =
            correctionLidarFrame * tWrong; // pre-multiplying -> successive rotation about a fixed frame
    pcl::getTranslationAndEulerAngles(tCorrect, x, y, z, roll, pitch, yaw);

    // add the loop constraint to gtsam
    gtsam::Pose3 poseFrom = Pose3(Rot3::RzRyRx(roll, pitch, yaw), Point3(x, y, z));
    gtsam::Pose3 poseTo = pclPointTogtsamPose3(cloudKeyPoses6D->points[closestHistoryFrameID]);
    gtsam::Vector Vector6(6);
    float noiseScore = icp.getFitnessScore();
    Vector6 << noiseScore, noiseScore, noiseScore, noiseScore, noiseScore, noiseScore;
    noiseModel::Diagonal::shared_ptr constraintNoise = noiseModel::Diagonal::Variances(Vector6);

    // Add pose constraint
    std::lock_guard<std::mutex> lock(mtx);
    gtSAMgraph.add(BetweenFactor<Pose3>(latestFrameIDLoopCloure, closestHistoryFrameID, poseFrom.between(poseTo), constraintNoise));
    isam->update(gtSAMgraph);
    isam->update();
    isam->update();
    isam->update();
    isam->update();
    isam->update();
    gtSAMgraph.resize(0);

    // flag: a new factor was added, so the optimization will run
    aLoopIsClosed = true;
}

The detection itself is simple: a Euclidean-distance search plus a time-difference filter, identical to LeGO-LOAM.

bool detectLoopClosure(int *latestID, int *closestID) {
    int latestFrameIDLoopCloure;
    int closestHistoryFrameID;

    latestKeyFrameCloud->clear();
    nearHistoryKeyFrameCloud->clear();

    std::lock_guard<std::mutex> lock(mtx);

    // find the closest history key frame
    std::vector<int> pointSearchIndLoop;
    std::vector<float> pointSearchSqDisLoop;
    kdtreeHistoryKeyPoses->setInputCloud(cloudKeyPoses3D);
    kdtreeHistoryKeyPoses->radiusSearch(cloudKeyPoses3D->back(), historyKeyframeSearchRadius, pointSearchIndLoop, pointSearchSqDisLoop, 0);

    // the candidate must also be far enough away in time
    closestHistoryFrameID = -1;
    for (int i = 0; i < pointSearchIndLoop.size(); ++i) {
        int id = pointSearchIndLoop[i];
        if (abs(cloudKeyPoses6D->points[id].time - timeLaserCloudInfoLast) > historyKeyframeSearchTimeDiff) {
            closestHistoryFrameID = id;
            break;
        }
    }

    if (closestHistoryFrameID == -1)
        return false;

    if (cloudKeyPoses3D->size() - 1 == closestHistoryFrameID)
        return false;

    // save latest key frames
    latestFrameIDLoopCloure = cloudKeyPoses3D->size() - 1;
    *latestKeyFrameCloud += *transformPointCloud(cornerCloudKeyFrames[latestFrameIDLoopCloure],
                                                 &cloudKeyPoses6D->points[latestFrameIDLoopCloure]);
    *latestKeyFrameCloud += *transformPointCloud(surfCloudKeyFrames[latestFrameIDLoopCloure],
                                                 &cloudKeyPoses6D->points[latestFrameIDLoopCloure]);

    // save history near key frames
    bool nearFrameAvailable = false;
    for (int j = -historyKeyframeSearchNum; j <= historyKeyframeSearchNum; ++j) {
        if (closestHistoryFrameID + j < 0 || closestHistoryFrameID + j > latestFrameIDLoopCloure)
            continue;
        *nearHistoryKeyFrameCloud += *transformPointCloud(cornerCloudKeyFrames[closestHistoryFrameID + j], &cloudKeyPoses6D->points[closestHistoryFrameID + j]);
        *nearHistoryKeyFrameCloud += *transformPointCloud(surfCloudKeyFrames[closestHistoryFrameID + j], &cloudKeyPoses6D->points[closestHistoryFrameID + j]);
        nearFrameAvailable = true;
    }

    if (nearFrameAvailable == false)
        return false;

    *latestID = latestFrameIDLoopCloure;
    *closestID = closestHistoryFrameID;

    return true;
}

Joint Lidar-IMU Optimization

This module jointly optimizes the laser odometry and the IMU data, estimates the IMU bias, and produces a real-time odometry estimate (a local path). From the SLAM poses we can compute the relative pose between adjacent frames, which is the quantity to be estimated. To set up an optimization problem we also need a measurement and an error term: the measurement comes from integrating the IMU between the two frames, and minimizing the error between measurement and estimate iteratively refines the relative pose. This process is preintegration; its detailed derivation is left for a later post, but the standard quantities are summarized below.
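For reference, the standard preintegrated measurements between key frames $i$ and $j$ (the on-manifold form of Forster et al., which GTSAM's PreintegratedImuMeasurements implements; $\tilde{\boldsymbol{\omega}}_k$ and $\tilde{\mathbf{a}}_k$ are the raw gyroscope and accelerometer readings, $\mathbf{b}^g$ and $\mathbf{b}^a$ the biases, and measurement noise is omitted):

$$\Delta\mathbf{R}_{ij}=\prod_{k=i}^{j-1}\mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_k-\mathbf{b}^g_k)\,\Delta t\big),\qquad \Delta\mathbf{v}_{ij}=\sum_{k=i}^{j-1}\Delta\mathbf{R}_{ik}\,(\tilde{\mathbf{a}}_k-\mathbf{b}^a_k)\,\Delta t,$$

$$\Delta\mathbf{p}_{ij}=\sum_{k=i}^{j-1}\Big[\Delta\mathbf{v}_{ik}\,\Delta t+\tfrac{1}{2}\,\Delta\mathbf{R}_{ik}\,(\tilde{\mathbf{a}}_k-\mathbf{b}^a_k)\,\Delta t^2\Big].$$

These deltas depend only on the IMU readings and the bias estimate, not on the global states, so they can be computed once per interval and reused across optimization iterations.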

// subscribers: imu data and the laser odometry
// all the logic lives in the callbacks; the two streams are coupled: the optimized
// laser odometry corrects the preintegration, which in turn predicts a new high-rate odom
subImu = nh.subscribe<sensor_msgs::Imu>(imuTopic, 2000, &IMUPreintegration::imuHandler, this,
                                        ros::TransportHints().tcpNoDelay());
subOdometry = nh.subscribe<nav_msgs::Odometry>("lio_sam/mapping/odometry", 5,
                                               &IMUPreintegration::odometryHandler, this,
                                               ros::TransportHints().tcpNoDelay());

// publishers: the fused imu path and the odometry predicted after optimization
pubImuOdometry = nh.advertise<nav_msgs::Odometry>(odomTopic, 2000);
pubImuPath = nh.advertise<nav_msgs::Path>("lio_sam/imu/path", 1);

Note the initialization of the noise models and the bias here, implemented with GTSAM.

// gtsam parameter configuration used by the preintegration
boost::shared_ptr<gtsam::PreintegrationParams> p = gtsam::PreintegrationParams::MakeSharedU(imuGravity);
p->accelerometerCovariance = gtsam::Matrix33::Identity(3, 3) * pow(imuAccNoise, 2);  // acc white noise in continuous
p->gyroscopeCovariance = gtsam::Matrix33::Identity(3, 3) * pow(imuGyrNoise, 2);      // gyro white noise in continuous
p->integrationCovariance = gtsam::Matrix33::Identity(3, 3) * pow(1e-4, 2);           // error committed in integrating position from velocities
gtsam::imuBias::ConstantBias prior_imu_bias((gtsam::Vector(6) << 0, 0, 0, 0, 0, 0).finished());  // assume zero initial bias

priorPoseNoise = gtsam::noiseModel::Diagonal::Sigmas((gtsam::Vector(6)
        << 1e-2, 1e-2, 1e-2, 1e-2, 1e-2, 1e-2).finished());  // rad, rad, rad, m, m, m
priorVelNoise = gtsam::noiseModel::Isotropic::Sigma(3, 1e2);    // m/s
priorBiasNoise = gtsam::noiseModel::Isotropic::Sigma(6, 1e-3);  // 1e-2 ~ 1e-3 seems to be good
correctionNoise = gtsam::noiseModel::Isotropic::Sigma(6, 1e-2); // meter
noiseModelBetweenBias =
        (gtsam::Vector(6) << imuAccBiasN, imuAccBiasN, imuAccBiasN, imuGyrBiasN, imuGyrBiasN, imuGyrBiasN).finished();

// two integrators: one for the high-rate IMU thread, one for the optimization thread
imuIntegratorImu_ = new gtsam::PreintegratedImuMeasurements(p, prior_imu_bias);  // IMU message thread
imuIntegratorOpt_ = new gtsam::PreintegratedImuMeasurements(p, prior_imu_bias);  // optimization thread

imuHandler mainly maintains two double-ended queues of IMU data. Each message is transformed into the lidar frame, the time gap between consecutive IMU frames is computed, and preintegration then yields a predicted odometry. In GTSAM, integrateMeasurement adds one IMU measurement, while predict propagates the state to the next time step; the state here is a gtsam::NavState, i.e. pose (rotation, translation) plus velocity. Note that before any prediction can be made, the system must have been initialized in odometryHandler and at least one optimization must have run.

void imuHandler(const sensor_msgs::Imu::ConstPtr &imu_raw) {
    // transform the imu data into the lidar frame
    sensor_msgs::Imu thisImu = imuConverter(*imu_raw);
    // publish static tf map->odom
    tfMap2Odom.sendTransform(tf::StampedTransform(map_to_odom, thisImu.header.stamp, "map", "odom"));

    // two deques holding the imu data before and after optimization
    imuQueOpt.push_back(thisImu);
    imuQueImu.push_back(thisImu);

    // make sure one optimization has already run in odometryHandler before proceeding
    if (doneFirstOpt == false)
        return;

    // time step: 1/500 s for the first message, afterwards the gap between imu stamps
    double imuTime = ROS_TIME(&thisImu);
    double dt = (lastImuT_imu < 0) ? (1.0 / 500.0) : (imuTime - lastImuT_imu);
    lastImuT_imu = imuTime;

    // integrate this single imu message
    imuIntegratorImu_->integrateMeasurement(gtsam::Vector3(thisImu.linear_acceleration.x, thisImu.linear_acceleration.y,
                                                           thisImu.linear_acceleration.z),
                                            gtsam::Vector3(thisImu.angular_velocity.x,
                                                           thisImu.angular_velocity.y,
                                                           thisImu.angular_velocity.z), dt);

    // predict odometry from the preintegration result
    gtsam::NavState currentState = imuIntegratorImu_->predict(prevStateOdom, prevBiasOdom);

    // publish the predicted odometry
    nav_msgs::Odometry odometry;
    odometry.header.stamp = thisImu.header.stamp;
    odometry.header.frame_id = "odom";
    odometry.child_frame_id = "odom_imu";

    // transform the imu pose to the lidar frame: take the imu pose from the
    // prediction currentState, then apply the imu-to-lidar extrinsics
    gtsam::Pose3 imuPose = gtsam::Pose3(currentState.quaternion(), currentState.position());
    gtsam::Pose3 lidarPose = imuPose.compose(imu2Lidar);

    odometry.pose.pose.position.x = lidarPose.translation().x();
    odometry.pose.pose.position.y = lidarPose.translation().y();
    odometry.pose.pose.position.z = lidarPose.translation().z();
    odometry.pose.pose.orientation.x = lidarPose.rotation().toQuaternion().x();
    odometry.pose.pose.orientation.y = lidarPose.rotation().toQuaternion().y();
    odometry.pose.pose.orientation.z = lidarPose.rotation().toQuaternion().z();
    odometry.pose.pose.orientation.w = lidarPose.rotation().toQuaternion().w();

    odometry.twist.twist.linear.x = currentState.velocity().x();
    odometry.twist.twist.linear.y = currentState.velocity().y();
    odometry.twist.twist.linear.z = currentState.velocity().z();
    odometry.twist.twist.angular.x = thisImu.angular_velocity.x + prevBiasOdom.gyroscope().x();
    odometry.twist.twist.angular.y = thisImu.angular_velocity.y + prevBiasOdom.gyroscope().y();
    odometry.twist.twist.angular.z = thisImu.angular_velocity.z + prevBiasOdom.gyroscope().z();
    odometry.pose.covariance[0] = double(imuPreintegrationResetId);
    pubImuOdometry.publish(odometry);

    // publish the predicted imu path, keeping only the last 3 s of poses
    static nav_msgs::Path imuPath;
    static double last_path_time = -1;
    if (imuTime - last_path_time > 0.1) {
        last_path_time = imuTime;
        geometry_msgs::PoseStamped pose_stamped;
        pose_stamped.header.stamp = thisImu.header.stamp;
        pose_stamped.header.frame_id = "odom";
        pose_stamped.pose = odometry.pose.pose;
        imuPath.poses.push_back(pose_stamped);
        while (!imuPath.poses.empty() &&
               abs(imuPath.poses.front().header.stamp.toSec() - imuPath.poses.back().header.stamp.toSec()) > 3.0)
            imuPath.poses.erase(imuPath.poses.begin());
        if (pubImuPath.getNumSubscribers() != 0) {
            imuPath.header.stamp = thisImu.header.stamp;
            imuPath.header.frame_id = "odom";
            pubImuPath.publish(imuPath);
        }
    }

    // publish the odom->base_link transform
    tf::Transform tCur;
    tf::poseMsgToTF(odometry.pose.pose, tCur);
    tf::StampedTransform odom_2_baselink = tf::StampedTransform(tCur, thisImu.header.stamp, "odom", "base_link");
    tfOdom2BaseLink.sendTransform(odom_2_baselink);
}

odometryHandler transforms the laser odometry into the IMU frame for the computation. The prior factors here cover pose, velocity, and bias; the handler first takes the latest pose from the odometry message as the prior and aligns the timestamps.

double currentCorrectionTime = ROS_TIME(odomMsg);

// make sure we have imu data to integrate;
// the two callbacks cooperate: the imu callback already requires
// that one optimization has completed before it proceeds
if (imuQueOpt.empty())
    return;

// take the pose out of the laser odometry message
float p_x = odomMsg->pose.pose.position.x;
float p_y = odomMsg->pose.pose.position.y;
float p_z = odomMsg->pose.pose.position.z;
float r_x = odomMsg->pose.pose.orientation.x;
float r_y = odomMsg->pose.pose.orientation.y;
float r_z = odomMsg->pose.pose.orientation.z;
float r_w = odomMsg->pose.pose.orientation.w;
int currentResetId = round(odomMsg->pose.covariance[0]);
gtsam::Pose3 lidarPose = gtsam::Pose3(gtsam::Rot3::Quaternion(r_w, r_x, r_y, r_z),
                                      gtsam::Point3(p_x, p_y, p_z));

// the correction pose jumped: the reset id changed because a loop closure or
// GPS factor corrected the poses upstream, so reset the imu pre-integration
if (currentResetId != imuPreintegrationResetId) {
    resetParams();
    imuPreintegrationResetId = currentResetId;
}

On the first call the system must be initialized: prior factors (pose, velocity, bias) are added, one optimization is run, and the integrators are then reset with the estimated bias.

// 0. initialize system
// initialize the system when the first frame arrives
if (systemInitialized == false) {
    resetOptimization();  // reset the optimizer and graph containers

    // pop old IMU messages: we only need to keep the imu data around the
    // laser odom stamp; since imu is high-rate this is safe
    while (!imuQueOpt.empty()) {
        if (ROS_TIME(&imuQueOpt.front()) < currentCorrectionTime - delta_t) {
            lastImuT_opt = ROS_TIME(&imuQueOpt.front());
            imuQueOpt.pop_front();
        } else
            break;
    }
    // initial pose: transform the laser odom into the imu frame
    prevPose_ = lidarPose.compose(lidar2Imu);
    // prior factors for pose, velocity, and bias;
    // adding priors is essentially a prerequisite for graph optimization
    gtsam::PriorFactor<gtsam::Pose3> priorPose(X(0), prevPose_, priorPoseNoise);
    graphFactors.add(priorPose);
    // initial velocity
    prevVel_ = gtsam::Vector3(0, 0, 0);
    gtsam::PriorFactor<gtsam::Vector3> priorVel(V(0), prevVel_, priorVelNoise);
    graphFactors.add(priorVel);
    // initial bias
    prevBias_ = gtsam::imuBias::ConstantBias();
    gtsam::PriorFactor<gtsam::imuBias::ConstantBias> priorBias(B(0), prevBias_, priorBiasNoise);
    graphFactors.add(priorBias);
    // besides the factors, the graph also needs initial values for the nodes
    graphValues.insert(X(0), prevPose_);
    graphValues.insert(V(0), prevVel_);
    graphValues.insert(B(0), prevBias_);
    // optimize once
    optimizer.update(graphFactors, graphValues);
    // the local graph and values are cleared because the ISAM2 optimizer has
    // absorbed them during update(); they will be refilled for the next update
    graphFactors.resize(0);
    graphValues.clear();

    // reset the integrators with the prior bias
    imuIntegratorImu_->resetIntegrationAndSetBias(prevBias_);
    imuIntegratorOpt_->resetIntegrationAndSetBias(prevBias_);

    key = 1;  // counter
    systemInitialized = true;
    return;
}

The variable key acts as a counter: when it reaches 100, the whole factor graph is reset to bound the computation, carrying the last marginal covariances over as the new priors.

// reset graph for speed: when key reaches the threshold of 100, rebuild the
// graph to reduce computation, preserving the latest noise values
if (key == 100) {
    // get updated noise before reset
    gtsam::noiseModel::Gaussian::shared_ptr updatedPoseNoise = gtsam::noiseModel::Gaussian::Covariance(
            optimizer.marginalCovariance(X(key - 1)));
    gtsam::noiseModel::Gaussian::shared_ptr updatedVelNoise = gtsam::noiseModel::Gaussian::Covariance(
            optimizer.marginalCovariance(V(key - 1)));
    gtsam::noiseModel::Gaussian::shared_ptr updatedBiasNoise = gtsam::noiseModel::Gaussian::Covariance(
            optimizer.marginalCovariance(B(key - 1)));
    // reset graph
    resetOptimization();

    // after the reset, the process mirrors the initialization,
    // except that the carried-over noise models are used
    // add pose
    gtsam::PriorFactor<gtsam::Pose3> priorPose(X(0), prevPose_, updatedPoseNoise);
    graphFactors.add(priorPose);
    // add velocity
    gtsam::PriorFactor<gtsam::Vector3> priorVel(V(0), prevVel_, updatedVelNoise);
    graphFactors.add(priorVel);
    // add bias
    gtsam::PriorFactor<gtsam::imuBias::ConstantBias> priorBias(B(0), prevBias_, updatedBiasNoise);
    graphFactors.add(priorBias);
    // add values
    graphValues.insert(X(0), prevPose_);
    graphValues.insert(V(0), prevVel_);
    graphValues.insert(B(0), prevBias_);
    // optimize once, effectively re-initializing
    optimizer.update(graphFactors, graphValues);
    graphFactors.resize(0);
    graphValues.clear();

    key = 1;
}

Only then does the real optimization flow begin. The IMU messages between two laser frames are integrated to obtain the latest state, using only acceleration and angular velocity; the orientation from the required 9-axis IMU is not consumed here (it is used in the map optimization module instead). Once preintegration is done, an ImuFactor and a lidar odometry factor are added to the factor graph. A new validation function, failureDetection, checks the optimization result for abnormal velocity or bias values, e.g. |vel| > 10 or |bias| (acc/gyro) > 0.1; on failure, flags such as the initialization flag systemInitialized are reset.

// 1. integrate imu data and optimize
// here the main optimization flow begins
while (!imuQueOpt.empty()) {
    // pop and integrate imu data that is between two optimizations
    sensor_msgs::Imu *thisImu = &imuQueOpt.front();  // oldest imu message in the queue
    double imuTime = ROS_TIME(thisImu);
    if (imuTime < currentCorrectionTime - delta_t) {
        // dt is 1/500 s initially, afterwards the gap between consecutive imu stamps
        double dt = (lastImuT_opt < 0) ? (1.0 / 500.0) : (imuTime - lastImuT_opt);

        // preintegrate to obtain the new state; note only acceleration and angular
        // velocity are used -- the 9-axis Euler angles are never used in this node,
        // only in the map optimization module
        imuIntegratorOpt_->integrateMeasurement(
                gtsam::Vector3(thisImu->linear_acceleration.x, thisImu->linear_acceleration.y,
                               thisImu->linear_acceleration.z),
                gtsam::Vector3(thisImu->angular_velocity.x, thisImu->angular_velocity.y,
                               thisImu->angular_velocity.z), dt);

        // remember the previous stamp before popping the message
        lastImuT_opt = imuTime;
        imuQueOpt.pop_front();
    } else
        break;
}

// add imu factor to graph, now that the imu data between the
// two laser frames has been preintegrated
const gtsam::PreintegratedImuMeasurements
        &preint_imu = dynamic_cast<const gtsam::PreintegratedImuMeasurements &>(*imuIntegratorOpt_);
gtsam::ImuFactor imu_factor(X(key - 1), V(key - 1), X(key), V(key), B(key - 1), preint_imu);
graphFactors.add(imu_factor);
// add imu bias between factor
graphFactors.add(gtsam::BetweenFactor<gtsam::imuBias::ConstantBias>(B(key - 1), B(key), gtsam::imuBias::ConstantBias(), gtsam::noiseModel::Diagonal::Sigmas(sqrt(imuIntegratorOpt_->deltaTij()) * noiseModelBetweenBias)));
// add pose factor: this is the lidar odometry factor that accompanies
// the imu factor in the paper's factor-graph structure
gtsam::Pose3 curPose = lidarPose.compose(lidar2Imu);
gtsam::PriorFactor<gtsam::Pose3> pose_factor(X(key), curPose, correctionNoise);
graphFactors.add(pose_factor);
// insert predicted values, i.e. the graph nodes x0, x1, x2, ...
gtsam::NavState propState_ = imuIntegratorOpt_->predict(prevState_, prevBias_);
graphValues.insert(X(key), propState_.pose());
graphValues.insert(V(key), propState_.v());
graphValues.insert(B(key), prevBias_);

// optimize, then reset the containers
optimizer.update(graphFactors, graphValues);
optimizer.update();
graphFactors.resize(0);
graphValues.clear();
// Overwrite the beginning of the preintegration for the next step:
// the optimized results become the new starting values
gtsam::Values result = optimizer.calculateEstimate();
prevPose_ = result.at<gtsam::Pose3>(X(key));
prevVel_ = result.at<gtsam::Vector3>(V(key));
prevState_ = gtsam::NavState(prevPose_, prevVel_);
prevBias_ = result.at<gtsam::imuBias::ConstantBias>(B(key));
// Reset the optimization preintegration object.
imuIntegratorOpt_->resetIntegrationAndSetBias(prevBias_);
// check the optimization; reset the parameters on failure
if (failureDetection(prevVel_, prevBias_)) {
    resetParams();
    return;
}

The function failureDetection, which validates the optimization result, is quite useful here.

bool failureDetection(const gtsam::Vector3 &velCur, const gtsam::imuBias::ConstantBias &biasCur) {
    // detect a diverged preintegration and report it promptly so the system can be reset
    Eigen::Vector3f vel(velCur.x(), velCur.y(), velCur.z());
    if (vel.norm() > 10) {
        ROS_WARN("Large velocity, reset IMU-preintegration!");
        return true;
    }

    Eigen::Vector3f ba(biasCur.accelerometer().x(), biasCur.accelerometer().y(), biasCur.accelerometer().z());
    Eigen::Vector3f bg(biasCur.gyroscope().x(), biasCur.gyroscope().y(), biasCur.gyroscope().z());
    if (ba.norm() > 0.1 || bg.norm() > 0.1) {
        ROS_WARN("Large bias, reset IMU-preintegration!");
        return true;
    }

    return false;
}

To keep the odometry estimate real-time, the newest bias must be fetched immediately after every optimization triggered by a laser odometry message, and the measurements still in imuQueImu must be re-propagated with that bias. The point is that the high-rate integrator keeps predicting between optimizations; re-propagating its queued measurements with the freshly optimized bias keeps the real-time prediction consistent with the optimized state, preserving both latency and accuracy.

// 2. after optimization, re-propagate the imu odometry preintegration:
// to stay real-time, imuIntegratorImu_ picks up the newest bias right after each
// optimization, and the measurements in imuQueImu are re-propagated with it
prevStateOdom = prevState_;
prevBiasOdom = prevBias_;
// first pop imu messages older than the current correction stamp
double lastImuQT = -1;
while (!imuQueImu.empty() && ROS_TIME(&imuQueImu.front()) < currentCorrectionTime - delta_t) {
    lastImuQT = ROS_TIME(&imuQueImu.front());
    imuQueImu.pop_front();
}
// re-propagate
if (!imuQueImu.empty()) {
    // reset the integrator with the newly optimized bias
    imuIntegratorImu_->resetIntegrationAndSetBias(prevBiasOdom);
    // integrate the remaining imu messages from the beginning of this optimization;
    // the only difference from before is the updated bias on the line above
    for (int i = 0; i < (int) imuQueImu.size(); ++i) {
        sensor_msgs::Imu *thisImu = &imuQueImu[i];
        double imuTime = ROS_TIME(thisImu);
        double dt = (lastImuQT < 0) ? (1.0 / 500.0) : (imuTime - lastImuQT);

        // preintegrate
        imuIntegratorImu_->integrateMeasurement(
                gtsam::Vector3(thisImu->linear_acceleration.x, thisImu->linear_acceleration.y,
                               thisImu->linear_acceleration.z),
                gtsam::Vector3(thisImu->angular_velocity.x, thisImu->angular_velocity.y,
                               thisImu->angular_velocity.z), dt);
        lastImuQT = imuTime;
    }
}

Finally, once all of the above has succeeded, the counter key is incremented, and doneFirstOpt tells the IMU callback that one optimization has completed in odometryHandler.

++key;
doneFirstOpt = true;

To summarize this module: the laser odometry initializes the system -> the IMU predicts the odometry -> joint optimization -> bias re-propagation, a recursive predict-optimize-update loop; a condensed sketch of this flow follows.
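A condensed view of the control flow of imuPreintegration.cpp (comment-only pseudocode paraphrasing the code above, not verbatim):

// imuHandler(imu_raw):
//   convert to the lidar frame; push to imuQueOpt and imuQueImu;
//   if (!doneFirstOpt) return;                     // wait for the first optimization
//   imuIntegratorImu_->integrateMeasurement(acc, gyr, dt);
//   odom = imuIntegratorImu_->predict(prevStateOdom, prevBiasOdom);  // real-time output
//
// odometryHandler(lidar_odom):
//   if the reset id changed -> resetParams();
//   first call: add priors, optimize once, systemInitialized = true;
//   integrate imuQueOpt up to the odom stamp -> ImuFactor + bias BetweenFactor + pose prior;
//   optimizer.update(); read back prevPose_ / prevVel_ / prevBias_;
//   if (failureDetection(...)) resetParams();
//   re-propagate imuQueImu with the new bias; ++key; doneFirstOpt = true;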

Summary

In terms of practicality and accuracy, LIO-SAM is arguably the most complete lidar SLAM system of recent years. Its individual contributions are not especially novel and its sensor requirements are high, but its performance and accuracy are very good, and the code includes a number of useful engineering tricks. Several details still deserve deeper analysis and study, which I leave for follow-up posts.
