Continuing from the previous post, we now analyze the functions of System and their implementations, going through them one by one. As before, we trim the less important parts, such as log output and error handling; if you want the full details, please read the source code.
System::System(const string &strVocFile, const string &strSettingsFile, const eSensor sensor, const bool bUseViewer)
    : mSensor(sensor), mpViewer(static_cast<Viewer*>(NULL)), mbReset(false),
      mbActivateLocalizationMode(false), mbDeactivateLocalizationMode(false)
{
cv::FileStorage fsSettings(strSettingsFile.c_str(), cv::FileStorage::READ); // Read the settings file; failure handling omitted
mpVocabulary = new ORBVocabulary(); // Load the bag-of-words vocabulary; load-error handling omitted
bool bVocLoad = mpVocabulary->loadFromTextFile(strVocFile);
mpKeyFrameDatabase = new KeyFrameDatabase(*mpVocabulary); // Keyframe database
mpMap = new Map(); // Create the map
//Create Drawers. These are used by the Viewer
mpFrameDrawer = new FrameDrawer(mpMap); // Draws the current frame
mpMapDrawer = new MapDrawer(mpMap, strSettingsFile); // Draws the map, configured from the settings file
mpTracker = new Tracking(this, mpVocabulary, mpFrameDrawer, mpMapDrawer,
                         mpMap, mpKeyFrameDatabase, strSettingsFile, mSensor); // Tracking object; no new thread, it runs in the caller's (main) thread
mpLocalMapper = new LocalMapping(mpMap, mSensor==MONOCULAR); // Local mapping object
mptLocalMapping = new thread(&ORB_SLAM2::LocalMapping::Run, mpLocalMapper); // Launch its worker thread
// Create the loop-closing object and launch its thread
mpLoopCloser = new LoopClosing(mpMap, mpKeyFrameDatabase, mpVocabulary, mSensor!=MONOCULAR);
mptLoopClosing = new thread(&ORB_SLAM2::LoopClosing::Run, mpLoopCloser);
// Create the viewer and launch its thread
if(bUseViewer)
{
mpViewer = new Viewer(this, mpFrameDrawer,mpMapDrawer,mpTracker,strSettingsFile);
mptViewer = new thread(&Viewer::Run, mpViewer);
mpTracker->SetViewer(mpViewer); // Hand the viewer to the tracker; at this point it appears unused, since the viewer is mainly for displaying map-related content
}
// Wire the components to each other across threads
mpTracker->SetLocalMapper(mpLocalMapper);
mpTracker->SetLoopClosing(mpLoopCloser);
mpLocalMapper->SetTracker(mpTracker);
mpLocalMapper->SetLoopCloser(mpLoopCloser);
mpLoopCloser->SetTracker(mpTracker);
mpLoopCloser->SetLocalMapper(mpLocalMapper);
}
This analysis shows that System, as the manager class of the whole library, is responsible for creating and coordinating all the components; it does not implement any of the algorithms itself.
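Before looking at the member functions, it helps to see how a client typically drives System. The following is a minimal monocular driver, a sketch modeled after the example programs shipped with ORB-SLAM2; the file paths are placeholders, and the image/timestamp lists are assumed to be filled elsewhere.
#include <string>
#include <vector>
#include <opencv2/opencv.hpp>
#include "System.h"

int main()
{
    // Constructing System runs the constructor analyzed above:
    // it loads the vocabulary and spawns the worker threads.
    ORB_SLAM2::System SLAM("ORBvoc.txt", "camera.yaml",          // placeholder paths
                           ORB_SLAM2::System::MONOCULAR, true);

    std::vector<std::string> vImageNames; // assumed filled elsewhere
    std::vector<double> vTimestamps;      // assumed filled elsewhere

    for(size_t i = 0; i < vImageNames.size(); i++)
    {
        cv::Mat im = cv::imread(vImageNames[i], cv::IMREAD_UNCHANGED);
        // Tcw is the world-to-camera pose; an empty matrix means tracking failed
        cv::Mat Tcw = SLAM.TrackMonocular(im, vTimestamps[i]);
    }

    SLAM.Shutdown(); // stop all worker threads
    SLAM.SaveKeyFrameTrajectoryTUM("KeyFrameTrajectory.txt");
    return 0;
}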
Next come the three tracking entry points:
// Process the given stereo frame. Images must be synchronized and rectified.
// Input images: RGB (CV_8UC3) or grayscale (CV_8U). RGB is converted to grayscale.
// Returns the camera pose (empty if tracking fails).
cv::Mat TrackStereo(const cv::Mat &imLeft, const cv::Mat &imRight, const double &timestamp);
// Process the given rgbd frame. Depthmap must be registered to the RGB frame.
// Input image: RGB (CV_8UC3) or grayscale (CV_8U). RGB is converted to grayscale.
// Input depthmap: Float (CV_32F).
// Returns the camera pose (empty if tracking fails).
cv::Mat TrackRGBD(const cv::Mat &im, const cv::Mat &depthmap, const double &timestamp);
// Process the given monocular frame
// Input image: RGB (CV_8UC3) or grayscale (CV_8U). RGB is converted to grayscale.
// Returns the camera pose (empty if tracking fails).
cv::Mat TrackMonocular(const cv::Mat &im, const double &timestamp);
// The English comments are concise and clear, so they are kept as-is.
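From the outside, calling the stereo and RGB-D variants looks much the same as the monocular one. As a sketch only (the image paths and timestamp are placeholders, and SLAM is assumed to be a System constructed with the STEREO sensor), feeding one synchronized, rectified stereo pair might look like:
cv::Mat imLeft  = cv::imread("left/000000.png",  cv::IMREAD_UNCHANGED); // placeholder path
cv::Mat imRight = cv::imread("right/000000.png", cv::IMREAD_UNCHANGED); // placeholder path
double timestamp = 0.0; // frame timestamp in seconds
cv::Mat Tcw = SLAM.TrackStereo(imLeft, imRight, timestamp);
if(Tcw.empty())
{
    // tracking failed on this frame
}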
These three functions serve the different sensor configurations. Let us first look at the monocular implementation:
cv::Mat System::TrackMonocular(const cv::Mat &im, const double &timestamp)
{
// Check mode change
{
unique_lock<mutex> lock(mMutexMode); // Lock guarding the mode-switch flags
if(mbActivateLocalizationMode)
{
mpLocalMapper->RequestStop();
// Wait until Local Mapping has effectively stopped
while(!mpLocalMapper->isStopped())
{
usleep(1000);
}
mpTracker->InformOnlyTracking(true);
mbActivateLocalizationMode = false;
}
if(mbDeactivateLocalizationMode)
{
mpTracker->InformOnlyTracking(false);
mpLocalMapper->Release();
mbDeactivateLocalizationMode = false;
}
}
cv::Mat Tcw = mpTracker->GrabImageMonocular(im,timestamp); // The core tracking call, and the function we will analyze in depth later
unique_lock<mutex> lock2(mMutexState);
mTrackingState = mpTracker->mState;
mTrackedMapPoints = mpTracker->mCurrentFrame.mvpMapPoints;
mTrackedKeyPointsUn = mpTracker->mCurrentFrame.mvKeysUn;
return Tcw;
}
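The mbActivateLocalizationMode and mbDeactivateLocalizationMode flags polled above are set by a pair of public System methods. As far as I recall from System.cc, each is just a flag write under the same mMutexMode lock (check the source for the exact code):
void System::ActivateLocalizationMode()
{
    unique_lock<mutex> lock(mMutexMode);
    mbActivateLocalizationMode = true;
}

void System::DeactivateLocalizationMode()
{
    unique_lock<mutex> lock(mMutexMode);
    mbDeactivateLocalizationMode = true;
}
Because both the writers and the reader take mMutexMode, a mode switch requested from another thread takes effect cleanly at the start of the next frame rather than in the middle of one.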
Over these first two posts we have seen the main flow of ORB-SLAM2; in the third post we will move on to analyzing the core implementation.