BA Optimization with g2o (Study Notes)

Contents

  • Preface
  • 1. A simple g2o example
  • 2. BA optimization with g2o
    • 2.1 Initializing the g2o solver
    • 2.2 Adding vertices
    • 2.3 Adding edges
    • 2.4 Running the optimization
    • 2.5 Retrieving the optimized values
    • 2.6 Saving to a .g2o file
  • 3. The complete program
  • References

Preface

Using g2o requires some knowledge of graph theory. In general, the parameters to be optimized are the vertices, and the edges describe the (measurement) relationships between the vertices.
BA, i.e. Bundle Adjustment, optimizes the camera poses and the feature points jointly in order to obtain the optimal state.

The parameters to be optimized are:
\pmb{x} = [\xi_1 \quad \xi_2 \ \cdots \ \xi_m \mid p_1 \quad p_2 \ \cdots \ p_n]^T
We use the classes provided by g2o to define them (we could also define our own), define the corresponding edges, and finally run the optimizer.
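
Here the \xi_i are the camera poses (as Lie-algebra elements) and the p_j are the 3D landmarks. Written out, BA minimizes the sum of squared reprojection errors over all pixel observations z_{ij}, with \pi(\cdot) the pinhole projection used by the reprojection edges later on (standard formulation, stated here for reference):

\min_{\xi,\,p} \ \frac{1}{2}\sum_{i,j}\left\| z_{ij} - \pi\!\left(\exp(\xi_i^{\wedge})\, p_j\right) \right\|^2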

1. A simple g2o example

The official example:

// g2o - General Graph Optimization
// Copyright (C) 2012 R. Kümmerle
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright notice,
//   this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright
//   notice, this list of conditions and the following disclaimer in the
//   documentation and/or other materials provided with the distribution.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
// IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
// TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
// PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
// TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
// PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
// LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
// NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

#include <Eigen/Core>
#include <iostream>

#include "g2o/stuff/sampler.h"
#include "g2o/stuff/command_args.h"
#include "g2o/core/sparse_optimizer.h"
#include "g2o/core/block_solver.h"
#include "g2o/core/solver.h"
#include "g2o/core/optimization_algorithm_levenberg.h"
#include "g2o/core/base_vertex.h"
#include "g2o/core/base_unary_edge.h"
#include "g2o/solvers/dense/linear_solver_dense.h"
#include <memory>
#include <boost/shared_array.hpp>
#include <boost/make_shared.hpp>
#include <boost/shared_ptr.hpp>

using namespace std;

/**
 * \brief the params, a, b, and lambda for a * exp(-lambda * t) + b
 */
class VertexParams : public g2o::BaseVertex<3, Eigen::Vector3d>
{
  public:
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW;
    VertexParams()
    {
    }

    virtual bool read(std::istream& /*is*/)
    {
      cerr << __PRETTY_FUNCTION__ << " not implemented yet" << endl;
      return false;
    }

    virtual bool write(std::ostream& /*os*/) const
    {
      cerr << __PRETTY_FUNCTION__ << " not implemented yet" << endl;
      return false;
    }

    virtual void setToOriginImpl()
    {
      cerr << __PRETTY_FUNCTION__ << " not implemented yet" << endl;
    }

    virtual void oplusImpl(const double* update)
    {
      Eigen::Vector3d::ConstMapType v(update);
      _estimate += v;
    }
};

/**
 * \brief measurement for a point on the curve
 *
 * Here the measurement is the point which lies on the curve.
 * The error function computes the difference between the curve
 * and the point.
 */
class EdgePointOnCurve : public g2o::BaseUnaryEdge<1, Eigen::Vector2d, VertexParams>
{
  public:
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW
    EdgePointOnCurve()
    {
    }

    virtual bool read(std::istream& /*is*/)
    {
      cerr << __PRETTY_FUNCTION__ << " not implemented yet" << endl;
      return false;
    }

    virtual bool write(std::ostream& /*os*/) const
    {
      cerr << __PRETTY_FUNCTION__ << " not implemented yet" << endl;
      return false;
    }

    void computeError()
    {
      const VertexParams* params = static_cast<const VertexParams*>(vertex(0));
      const double& a = params->estimate()(0);
      const double& b = params->estimate()(1);
      const double& lambda = params->estimate()(2);
      double fval = a * exp(-lambda * measurement()(0)) + b;
      _error(0) = fval - measurement()(1);
    }
};

int main(int argc, char** argv)
{
  int numPoints;
  int maxIterations;
  bool verbose;
  std::vector<int> gaugeList;
  string dumpFilename;
  g2o::CommandArgs arg;
  arg.param("dump", dumpFilename, "", "dump the points into a file");
  arg.param("numPoints", numPoints, 50, "number of points sampled from the curve");
  arg.param("i", maxIterations, 10, "perform n iterations");
  arg.param("v", verbose, true, "verbose output of the optimization process");
  arg.parseArgs(argc, argv);

  // generate random data
  double a = 2.;
  double b = 0.4;
  double lambda = 0.2;
  //std::unique_ptr<Eigen::Vector2d[]> points(new Eigen::Vector2d[numPoints]);
  boost::shared_array<Eigen::Vector2d> points(new Eigen::Vector2d[numPoints]);
  for (int i = 0; i < numPoints; ++i) {
    double x = g2o::Sampler::uniformRand(0, 10);
    double y = a * exp(-lambda * x) + b;
    // add Gaussian noise
    y += g2o::Sampler::gaussRand(0, 0.02);
    points[i].x() = x;
    points[i].y() = y;
  }

  if (dumpFilename.size() > 0) {
    ofstream fout(dumpFilename.c_str());
    for (int i = 0; i < numPoints; ++i)
      fout << points[i].transpose() << endl;
  }

  // some handy typedefs
  typedef g2o::BlockSolver< g2o::BlockSolverTraits<Eigen::Dynamic, Eigen::Dynamic> >  MyBlockSolver;
  typedef g2o::LinearSolverDense<MyBlockSolver::PoseMatrixType> MyLinearSolver;

  // setup the solver
  g2o::SparseOptimizer optimizer;
  optimizer.setVerbose(true);
  MyLinearSolver* linearSolver = new MyLinearSolver();
  MyBlockSolver* solver_ptr = new MyBlockSolver(linearSolver);
  g2o::OptimizationAlgorithmLevenberg* solver = new g2o::OptimizationAlgorithmLevenberg(solver_ptr);
  optimizer.setAlgorithm(solver);

  // build the optimization problem given the points
  // 1. add the parameter vertex
  VertexParams* params = new VertexParams();
  params->setId(0);
  params->setEstimate(Eigen::Vector3d(1,1,1)); // some initial value for the params
  optimizer.addVertex(params);
  // 2. add the points we measured to be on the curve
  for (int i = 0; i < numPoints; ++i) {
    EdgePointOnCurve* e = new EdgePointOnCurve;
    e->setInformation(Eigen::Matrix<double, 1, 1>::Identity());
    e->setVertex(0, params);
    e->setMeasurement(points[i]);
    optimizer.addEdge(e);
  }

  // perform the optimization
  optimizer.initializeOptimization();
  optimizer.setVerbose(verbose);
  optimizer.optimize(maxIterations);

  if (verbose)
    cout << endl;

  // print out the result
  cout << "Target curve" << endl;
  cout << "a * exp(-lambda * x) + b" << endl;
  cout << "Iterative least squares solution" << endl;
  cout << "a      = " << params->estimate()(0) << endl;
  cout << "b      = " << params->estimate()(1) << endl;
  cout << "lambda = " << params->estimate()(2) << endl;
  cout << endl;

  // clean up
  //delete[] points;
  return 0;
}

The curve-fitting program given by Gao Xiang in his book:

#include <iostream>
#include <g2o/core/base_vertex.h>
#include <g2o/core/base_unary_edge.h>
#include <g2o/core/block_solver.h>
#include <g2o/core/optimization_algorithm_levenberg.h>
#include <g2o/core/optimization_algorithm_gauss_newton.h>
#include <g2o/core/optimization_algorithm_dogleg.h>
#include <g2o/solvers/dense/linear_solver_dense.h>
#include <Eigen/Core>
#include <opencv2/core/core.hpp>
#include <cmath>
#include <chrono>
using namespace std;

// vertex of the curve model; template parameters: dimension of the optimized variable and its data type
class CurveFittingVertex: public g2o::BaseVertex<3, Eigen::Vector3d>
{
public:
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW
    virtual void setToOriginImpl() // reset
    {
        _estimate << 0, 0, 0;
    }

    virtual void oplusImpl( const double* update ) // update
    {
        _estimate += Eigen::Vector3d(update);
    }

    // read/write to disk: left empty
    virtual bool read( istream& in ) {}
    virtual bool write( ostream& out ) const {}
};

// error model; template parameters: measurement dimension, measurement type, and the type of the connected vertex
class CurveFittingEdge: public g2o::BaseUnaryEdge<1,double,CurveFittingVertex>
{
public:
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW
    CurveFittingEdge( double x ): BaseUnaryEdge(), _x(x) {}

    // compute the curve model error
    void computeError()
    {
        const CurveFittingVertex* v = static_cast<const CurveFittingVertex*> (_vertices[0]);
        const Eigen::Vector3d abc = v->estimate();
        _error(0,0) = measurement() - std::exp( abc(0,0)*_x*_x + abc(1,0)*_x + abc(2,0) );
    }
    virtual bool read( istream& in ) {}
    virtual bool write( ostream& out ) const {}

public:
    double _x;  // the x value; the y value is _measurement
};

int main( int argc, char** argv )
{
    double a=1.0, b=2.0, c=1.0;         // true parameter values
    int N=100;                          // number of data points
    double w_sigma=1.0;                 // sigma of the noise
    cv::RNG rng;                        // OpenCV random number generator
    double abc[3] = {0,0,0};            // estimates of the parameters a, b, c
    vector<double> x_data, y_data;      // the data

    cout << "generating data: " << endl;
    for ( int i=0; i<N; i++ )
    {
        double x = i/100.0;
        x_data.push_back ( x );
        y_data.push_back (exp ( a*x*x + b*x + c ) + rng.gaussian ( w_sigma ));
        cout << x_data[i] << " " << y_data[i] << endl;
    }

    // build the graph optimization; set up g2o first
    typedef g2o::BlockSolver< g2o::BlockSolverTraits<3,1> > Block;  // each error term optimizes a 3-dim variable; the error is 1-dim
    Block::LinearSolverType* linearSolver = new g2o::LinearSolverDense<Block::PoseMatrixType>(); // linear solver
    Block* solver_ptr = new Block( linearSolver );      // block solver

    // choose the gradient descent method: GN, LM or DogLeg
    g2o::OptimizationAlgorithmLevenberg* solver = new g2o::OptimizationAlgorithmLevenberg( solver_ptr );
    // g2o::OptimizationAlgorithmGaussNewton* solver = new g2o::OptimizationAlgorithmGaussNewton( solver_ptr );
    // g2o::OptimizationAlgorithmDogleg* solver = new g2o::OptimizationAlgorithmDogleg( solver_ptr );

    g2o::SparseOptimizer optimizer;     // the graph model
    optimizer.setAlgorithm( solver );   // set the algorithm
    optimizer.setVerbose( true );       // enable verbose output

    // add the vertex to the graph
    CurveFittingVertex* v = new CurveFittingVertex();
    v->setEstimate( Eigen::Vector3d(0,0,0) );
    v->setId(0);
    optimizer.addVertex( v );

    // add the edges to the graph
    for ( int i=0; i<N; i++ )
    {
        CurveFittingEdge* edge = new CurveFittingEdge( x_data[i] );
        edge->setId(i);
        edge->setVertex( 0, v );                // the vertex it connects to
        edge->setMeasurement( y_data[i] );      // the measured value
        edge->setInformation( Eigen::Matrix<double,1,1>::Identity()*1/(w_sigma*w_sigma) ); // information matrix: inverse of the covariance
        optimizer.addEdge( edge );
    }

    // run the optimization
    cout << "start optimization" << endl;
    chrono::steady_clock::time_point t1 = chrono::steady_clock::now();
    optimizer.initializeOptimization();
    optimizer.optimize(100);
    chrono::steady_clock::time_point t2 = chrono::steady_clock::now();
    chrono::duration<double> time_used = chrono::duration_cast<chrono::duration<double>>( t2-t1 );
    cout << "solve time cost = " << time_used.count() << " seconds. " << endl;

    // print the optimized values
    Eigen::Vector3d abc_estimate = v->estimate();
    cout << "estimated model: " << abc_estimate.transpose() << endl;

    return 0;
}
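
Note: newer g2o releases changed the block solver and the optimization algorithm constructors to take std::unique_ptr, so the raw-pointer setup used above will not compile there. Below is a minimal sketch of the equivalent setup for those versions (it assumes C++14 and #include <memory>; adjust to the g2o version actually installed):

    // sketch for newer g2o versions: ownership is passed via std::unique_ptr
    typedef g2o::BlockSolver< g2o::BlockSolverTraits<3,1> > Block;
    auto linearSolver = std::make_unique<g2o::LinearSolverDense<Block::PoseMatrixType>>();
    auto blockSolver  = std::make_unique<Block>(std::move(linearSolver));
    g2o::OptimizationAlgorithmLevenberg* solver =
        new g2o::OptimizationAlgorithmLevenberg(std::move(blockSolver));
    optimizer.setAlgorithm(solver);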

2. BA optimization with g2o

The basic approach:
use g2o::VertexSE3Expmap to define the camera pose vertices;
use g2o::VertexSBAPointXYZ to define the 3D feature point vertices;
use g2o::EdgeProjectXYZ2UV to define the edges;
add the vertices and edges to the optimizer;
set the camera parameters (see the snippet at the end of section 2.3);
optimize.

2.1 Initializing the g2o solver

    // construct the optimizer
    g2o::SparseOptimizer optimizer;
    // linear solver
    //g2o::BlockSolver_6_3::LinearSolverType* linearSolver = new g2o::LinearSolverDense<g2o::BlockSolver_6_3::PoseMatrixType>();
    g2o::BlockSolver_6_3::LinearSolverType* linearSolver = new g2o::LinearSolverCholmod<g2o::BlockSolver_6_3::PoseMatrixType>();
    // block solver
    g2o::BlockSolver_6_3* block_solver = new g2o::BlockSolver_6_3(linearSolver);
    // Levenberg-Marquardt algorithm
    g2o::OptimizationAlgorithmLevenberg* algorithm = new g2o::OptimizationAlgorithmLevenberg(block_solver);
    optimizer.setAlgorithm(algorithm);

2.2 Adding vertices

    // add the camera pose vertices
    for(int i = 0; i < num; i++)
    {
        g2o::VertexSE3Expmap* v = new g2o::VertexSE3Expmap;
        v->setId(i);
        v->setFixed(i==0);
        v->setEstimate(g2o::SE3Quat());
        optimizer.addVertex(v);
    }
    // add the feature point vertices
    for(int i = 0; i < points_1.size(); i++)
    {
        g2o::VertexSBAPointXYZ* v = new g2o::VertexSBAPointXYZ();
        Point2d cam = pixel2cam(points_1[i], K);
        v->setId(i + num);   // num pose vertices have already been added
        v->setEstimate(Eigen::Vector3d(cam.x, cam.y, 1));
        v->setMarginalized(true);   // marginalize: split the system into two blocks and solve the increments separately
        optimizer.addVertex(v);
    }
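
The pixel2cam helper used above converts a pixel coordinate into normalized camera coordinates using the intrinsics K. It is defined in the full program in section 3 and is reproduced here so the snippet is self-contained:

    // back-project a pixel onto the normalized image plane (z = 1)
    Point2d pixel2cam(const Point2d &p, const Mat &K)
    {
        return Point2d((p.x - K.at<double>(0, 2)) / K.at<double>(0, 0),
                       (p.y - K.at<double>(1, 2)) / K.at<double>(1, 1));
    }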

The parameters being added, schematically:
\pmb{x} = [\xi_1 \quad \xi_2 \ \cdots \ \xi_m \mid p_1 \quad p_2 \ \cdots \ p_n]^T

2.3 Adding edges

    g2o::EdgeProjectXYZ2UV* edge = new g2o::EdgeProjectXYZ2UV();
    edge->setVertex(0, dynamic_cast<g2o::VertexSBAPointXYZ*>(optimizer.vertex(i+2)));
    edge->setVertex(1, dynamic_cast<g2o::VertexSE3Expmap*>(optimizer.vertex(0)));
    edge->setMeasurement(Eigen::Vector2d(points_1[i].x, points_1[i].y));
    edge->setRobustKernel(new g2o::RobustKernelHuber());
    edge->setInformation(Eigen::Matrix2d::Identity());
    edge->setParameterId(0, 0);
    optimizer.addEdge(edge);
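
Note that edge->setParameterId(0, 0) refers to a g2o::CameraParameters object with id 0, which must be registered with the optimizer before the edges are added. This is the corresponding step, taken from the full program in section 3:

    // register the camera intrinsics: focal length, principal point, baseline = 0
    g2o::CameraParameters* camera = new g2o::CameraParameters(
        K.at<double>(0, 0),
        Eigen::Vector2d(K.at<double>(0, 2), K.at<double>(1, 2)),
        0);
    camera->setId(0);
    optimizer.addParameter(camera);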

2.4 Running the optimization

    optimizer.setVerbose(true);
    optimizer.initializeOptimization();
    optimizer.optimize(20);
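
Optionally, convergence can be checked by printing the graph size and the remaining error. This is only a small sketch; activeChi2() returns the summed squared error of the active edges:

    // sketch: inspect the graph size and the remaining error after optimization
    cout << "vertices: " << optimizer.vertices().size()
         << ", edges: "  << optimizer.edges().size() << endl;
    cout << "final chi2: " << optimizer.activeChi2() << endl;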

2.5 Retrieving the optimized values

    g2o::VertexSE3Expmap* v = dynamic_cast<g2o::VertexSE3Expmap*>( optimizer.vertex(1) );
    Eigen::Isometry3d pose = v->estimate();
    cout << "Pose=" << endl << pose.matrix() << endl;
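
The refined 3D points can be read back in the same way. A small sketch, using the vertex ids from the full program in section 3, where landmark i was given the id i + 2:

    // sketch: read back the optimized position of the i-th landmark
    g2o::VertexSBAPointXYZ* p = dynamic_cast<g2o::VertexSBAPointXYZ*>(optimizer.vertex(i + 2));
    Eigen::Vector3d point = p->estimate();
    cout << "point " << i << " = " << point.transpose() << endl;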

2.6 Saving to a .g2o file

 optimizer.save("ba.g2o");
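
The saved file can be inspected with the g2o_viewer tool (if g2o was built with it) or loaded back into an optimizer later. A minimal sketch, assuming #include <fstream> and a solver configured as in section 2.1:

    // sketch: reload the saved graph into another optimizer
    g2o::SparseOptimizer loaded;   // set its algorithm as in section 2.1 before optimizing again
    std::ifstream in("ba.g2o");
    loaded.load(in);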

3. The complete program

main.cpp source:

#include <iostream>

#include <g2o/core/base_vertex.h>
#include <g2o/core/base_unary_edge.h>
#include <g2o/core/sparse_optimizer.h>
#include <g2o/core/block_solver.h>
#include <g2o/core/solver.h>
#include <g2o/core/robust_kernel.h>
#include <g2o/core/robust_kernel_impl.h>
#include <g2o/core/optimization_algorithm_levenberg.h>
#include <g2o/core/optimization_algorithm_gauss_newton.h>
#include <g2o/core/optimization_algorithm_dogleg.h>
#include <g2o/solvers/dense/linear_solver_dense.h>
#include <g2o/solvers/eigen/linear_solver_eigen.h>
#include <g2o/solvers/cholmod/linear_solver_cholmod.h>
#include <g2o/types/sba/types_six_dof_expmap.h>
#include <g2o/types/slam3d/se3quat.h>

#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/cudafeatures2d.hpp>
#include <opencv2/cudaimgproc.hpp>

#include <Eigen/Core>

#include <vector>
#include <chrono>

using namespace std;
using namespace cv;

void find_feature_matches_GPU(const Mat &img_1, const Mat &img_2,
                              std::vector<KeyPoint> &keypoints_1,
                              std::vector<KeyPoint> &keypoints_2,
                              std::vector<DMatch> &matches);
void find_feature_matches_CPU(const Mat &img_1, const Mat &img_2,
                              std::vector<KeyPoint> &keypoints_1,
                              std::vector<KeyPoint> &keypoints_2,
                              std::vector<DMatch> &matches);
void BASolver(vector<Point2f> &points_1, vector<Point2f> &points_2, Mat &K);

// camera intrinsics
Mat K = (Mat_<double>(3, 3) << 520.9, 0, 325.1, 0, 521.0, 249.7, 0, 0, 1);

// convert a pixel coordinate to normalized camera coordinates
Point2d pixel2cam(const Point2d &p, const Mat &K)
{
    return Point2d((p.x - K.at<double>(0, 2)) / K.at<double>(0, 0),
                   (p.y - K.at<double>(1, 2)) / K.at<double>(1, 1));
}

int main(int argc, char** argv)
{
    if(argc != 3)
    {
        cout << "Usage: **/testOptimizerG2o ./1.png ./2.png\n";
        return 0;
    }
    std::string path1 = argv[1];
    std::string path2 = argv[2];

    Mat img_1 = imread(path1);
    Mat img_2 = imread(path2);

    vector<KeyPoint> keypoints_1, keypoints_2;
    vector<Point2f> points_1, points_2;
    vector<DMatch> matches;

    chrono::steady_clock::time_point t1 = chrono::steady_clock::now();
    // use GPU
    //find_feature_matches_GPU(img_1, img_2, keypoints_1, keypoints_2, matches);
    // use CPU
    find_feature_matches_CPU(img_1, img_2, keypoints_1, keypoints_2, matches);
    chrono::steady_clock::time_point t2 = chrono::steady_clock::now();
    chrono::duration<double> delay_time = chrono::duration_cast<chrono::duration<double>>(t2 - t1); // seconds
    cout << "Matching took: " << delay_time.count() << " s" << endl;

    for(auto m : matches)
    {
        points_1.push_back(keypoints_1[m.queryIdx].pt);
        points_2.push_back(keypoints_2[m.trainIdx].pt);
    }
    cout << "Number of matched feature points: " << points_1.size() << "  " << points_2.size() << endl;

    // BA optimization
    BASolver(points_1, points_2, K);

    Mat img_keypoints;
    drawMatches(img_1, keypoints_1, img_2, keypoints_2, matches, img_keypoints);
    imshow("matches", img_keypoints);
    waitKey(0);
    return 0;
}

void find_feature_matches_GPU(const Mat &img_1, const Mat &img_2,
                              std::vector<KeyPoint> &keypoints_1,
                              std::vector<KeyPoint> &keypoints_2,
                              std::vector<DMatch> &matches)
{
    cuda::Stream myStream;
    // extract features with GPU acceleration
    cuda::GpuMat d_img1, d_img2;
    d_img1.upload(img_1, myStream);
    d_img2.upload(img_2, myStream);
    cuda::GpuMat d_img1Gray, d_img2Gray;
    cuda::cvtColor(d_img1, d_img1Gray, COLOR_BGR2GRAY, 0, myStream);
    cuda::cvtColor(d_img2, d_img2Gray, COLOR_BGR2GRAY, 0, myStream);

    //-- initialization
    cuda::GpuMat d_keypoints1, d_keypoints2;
    cuda::GpuMat descriptors_1, descriptors_2, descriptors_1_32F, descriptors_2_32F;

    // create the ORB detector
    Ptr<cuda::ORB> detector_ORB = cuda::ORB::create();

    //-- step 1: detect Oriented FAST corners and compute BRIEF descriptors
    detector_ORB->detectAndComputeAsync(d_img1Gray, cuda::GpuMat(), d_keypoints1, descriptors_1, false, myStream);
    detector_ORB->convert(d_keypoints1, keypoints_1);
    descriptors_1.convertTo(descriptors_1_32F, CV_32F);

    detector_ORB->detectAndComputeAsync(d_img2Gray, cuda::GpuMat(), d_keypoints2, descriptors_2, false, myStream);
    detector_ORB->convert(d_keypoints2, keypoints_2);
    descriptors_2.convertTo(descriptors_2_32F, CV_32F);

    //-- step 2: match the descriptors of the two images
    //************************** matching with the match() method **************************
    // uncomment to use this method instead
    /*
    vector<DMatch> match;
    Ptr<cuda::DescriptorMatcher> matcher = cuda::DescriptorMatcher::createBFMatcher(NORM_L2);
    // BFMatcher matcher ( NORM_HAMMING );
    matcher->match(descriptors_1_32F, descriptors_2_32F, match);
    //-- step 3: filter the matches
    double min_dist = 1000, max_dist = 0;
    for (int i = 0; i < descriptors_1.rows; i++) {
        double dist = match[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }
    cout << "-- Max dist : " << max_dist << endl;
    cout << "-- Min dist : " << min_dist << endl;
    for (int i = 0; i < descriptors_1.rows; i++)
    {
        if (match[i].distance <= 0.7*max_dist)
        {
            matches.push_back(match[i]);
        }
    }
    */
    //************************** matching with knnMatch() (works better) **************************
    vector< vector<DMatch> > knnmatch;
    Ptr<cuda::DescriptorMatcher> matcher = cuda::DescriptorMatcher::createBFMatcher(NORM_L2);
    // BFMatcher matcher ( NORM_HAMMING );
    matcher->knnMatch(descriptors_1_32F, descriptors_2_32F, knnmatch, 2);
    //-- step 3: filter the matches (ratio test)
    for (int i = 0; i < knnmatch.size(); i++)
    {
        if (knnmatch[i][0].distance <= 0.8*knnmatch[i][1].distance)
        {
            matches.push_back(knnmatch[i][0]);
        }
    }
}

void find_feature_matches_CPU(const Mat &img_1, const Mat &img_2,
                              std::vector<KeyPoint> &keypoints_1,
                              std::vector<KeyPoint> &keypoints_2,
                              std::vector<DMatch> &matches)
{
    //-- initialization
    Mat descriptors_1, descriptors_2;
    Mat imgGray_1, imgGray_2;
    cvtColor(img_1, imgGray_1, COLOR_BGR2GRAY);
    cvtColor(img_2, imgGray_2, COLOR_BGR2GRAY);

    // create the ORB detector
    Ptr<ORB> detector_ORB = ORB::create();
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");

    //-- step 1: detect Oriented FAST corners and compute BRIEF descriptors
    detector_ORB->detectAndCompute(imgGray_1, Mat(), keypoints_1, descriptors_1);
    detector_ORB->detectAndCompute(imgGray_2, Mat(), keypoints_2, descriptors_2);

    //-- step 2: match the BRIEF descriptors of the two images using the Hamming distance
    vector<DMatch> match;
    // BFMatcher matcher ( NORM_HAMMING );
    matcher->match(descriptors_1, descriptors_2, match);

    //-- step 3: filter the matches
    double min_dist = 1000, max_dist = 0;
    for (int i = 0; i < descriptors_1.rows; i++)
    {
        double dist = match[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }
    cout << "-- Max dist : " << max_dist << endl;
    cout << "-- Min dist : " << min_dist << endl;
    for (int i = 0; i < descriptors_1.rows; i++)
    {
        if (match[i].distance <= 0.4*max_dist)
        {
            matches.push_back(match[i]);
        }
    }
}

void BASolver(vector<Point2f> &points_1, vector<Point2f> &points_2, Mat &K)
{
    //**************************** BA optimization ****************************
    // construct the optimizer
    g2o::SparseOptimizer optimizer;
    // linear solver
    //g2o::BlockSolver_6_3::LinearSolverType* linearSolver = new g2o::LinearSolverDense<g2o::BlockSolver_6_3::PoseMatrixType>();
    g2o::BlockSolver_6_3::LinearSolverType* linearSolver = new g2o::LinearSolverCholmod<g2o::BlockSolver_6_3::PoseMatrixType>();
    // block solver
    g2o::BlockSolver_6_3* block_solver = new g2o::BlockSolver_6_3(linearSolver);
    // Levenberg-Marquardt algorithm
    g2o::OptimizationAlgorithmLevenberg* algorithm = new g2o::OptimizationAlgorithmLevenberg(block_solver);
    optimizer.setAlgorithm(algorithm);
    optimizer.setVerbose(true);

    // Adding vertices means adding the parameters to optimize; here both the poses and the feature points are optimized.
    // An edge is the relationship between two parameters, here the projection that maps one onto the other.

    // add the camera pose vertices
    for(int i = 0; i < 2; i++)
    {
        g2o::VertexSE3Expmap* v = new g2o::VertexSE3Expmap;
        v->setId(i);
        v->setFixed(i==0);
        v->setEstimate(g2o::SE3Quat());
        optimizer.addVertex(v);
    }
    // add the feature point vertices
    for(int i = 0; i < points_1.size(); i++)
    {
        g2o::VertexSBAPointXYZ* v = new g2o::VertexSBAPointXYZ();
        Point2d cam = pixel2cam(points_1[i], K);
        v->setId(i + 2);   // two pose vertices have already been added
        v->setEstimate(Eigen::Vector3d(cam.x, cam.y, 1));
        v->setMarginalized(true);   // marginalize: split the system into two blocks and solve the increments separately
        optimizer.addVertex(v);
    }

    // add the camera parameters
    g2o::CameraParameters* camera = new g2o::CameraParameters(
        K.at<double>(0, 0),
        Eigen::Vector2d(K.at<double>(0, 2), K.at<double>(1, 2)),
        0);
    camera->setId(0);
    optimizer.addParameter(camera);

    // add the edges for frame 1 and frame 2
    for(size_t i = 0; i < points_1.size(); i++)
    {
        g2o::EdgeProjectXYZ2UV* edge = new g2o::EdgeProjectXYZ2UV();
        edge->setVertex(0, dynamic_cast<g2o::VertexSBAPointXYZ*>(optimizer.vertex(i+2)));
        edge->setVertex(1, dynamic_cast<g2o::VertexSE3Expmap*>(optimizer.vertex(0)));
        edge->setMeasurement(Eigen::Vector2d(points_1[i].x, points_1[i].y));
        edge->setRobustKernel(new g2o::RobustKernelHuber());
        edge->setInformation(Eigen::Matrix2d::Identity());
        edge->setParameterId(0, 0);   // this line is required
        optimizer.addEdge(edge);
    }
    for(size_t i = 0; i < points_2.size(); i++)
    {
        g2o::EdgeProjectXYZ2UV* edge = new g2o::EdgeProjectXYZ2UV();
        edge->setVertex(0, dynamic_cast<g2o::VertexSBAPointXYZ*>(optimizer.vertex(i+2)));
        edge->setVertex(1, dynamic_cast<g2o::VertexSE3Expmap*>(optimizer.vertex(1)));
        edge->setMeasurement(Eigen::Vector2d(points_2[i].x, points_2[i].y));
        edge->setRobustKernel(new g2o::RobustKernelHuber());
        edge->setInformation(Eigen::Matrix2d::Identity());
        edge->setParameterId(0, 0);
        optimizer.addEdge(edge);
    }

    optimizer.setVerbose(false);
    optimizer.initializeOptimization();
    optimizer.optimize(20);

    // the transformation matrices
    g2o::VertexSE3Expmap* v1 = dynamic_cast<g2o::VertexSE3Expmap*>(optimizer.vertex(0));
    g2o::VertexSE3Expmap* v2 = dynamic_cast<g2o::VertexSE3Expmap*>(optimizer.vertex(1));
    Eigen::Isometry3d pose1 = v1->estimate();
    Eigen::Isometry3d pose2 = v2->estimate();
    cout << "The Pose1 of frame 1 =\n" << pose1.matrix() << endl;
    cout << "The Pose2 of frame 2 (i.e. frame 1 to frame 2) =\n" << pose2.matrix() << endl;
    //**************************** BA optimization ****************************
}

CMakeLists.txt:

cmake_minimum_required(VERSION 3.5)

project(testOptimizerG2o LANGUAGES CXX)

set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

set( CMAKE_BUILD_TYPE "Release" )
set( CMAKE_CXX_FLAGS "-std=c++11 -O3" )
set(CMAKE_INCLUDE_CURRENT_DIR ON)

list( APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake_modules )

# OpenCV
find_package( OpenCV 4.3.0 REQUIRED)
# Eigen3
find_package( Eigen3 REQUIRED )
# G2O
find_package( G2O REQUIRED )
# Sophus
find_package( Sophus REQUIRED)
# Cholmod
find_package( Cholmod REQUIRED)

INCLUDE_DIRECTORIES(
    ${CHOLMOD_INCLUDE_DIR}
    ${OpenCV_INCLUDE_DIRS}
    ${EIGEN3_INCLUDE_DIRS}
    ${Sophus_INCLUDE_DIRS}
)

add_executable(testOptimizerG2o main.cpp)

TARGET_LINK_LIBRARIES(testOptimizerG2o
    ${OpenCV_LIBS}
    ${Sophus_LIBRARIES}
    ${CHOLMOD_LIBRARIES}
    g2o_core g2o_types_slam3d g2o_solver_csparse g2o_stuff g2o_csparse_extension g2o_types_sim3 g2o_types_sba
)

message(STATUS "OPENCV is :${OpenCV_INCLUDE_DIRS}")
message(STATUS "OPENCV version is :${OpenCV_VERSION}")
message(STATUS "the cholmod is ${CHOLMOD_INCLUDE_DIR} ${CHOLMOD_LIBRARIES}")

The complete files can be found in the reference links.

References

https://www.cnblogs.com/gaoxiang12/p/5304272.html

https://github.com/zouyuelin/SLAM_Learning_notes/tree/main/testOptimizerG2o
