A Visual Localization Method Based on Indoor Signs
Abstract: A visual localization method is proposed for localizing intelligent vehicles and mobile robots in indoor traffic scenes. The method exploits the various signs already present in indoor environments and builds on the boosted efficient binary local image descriptor (BEBLID) algorithm, which is extended with the ability to characterize a whole image as a holistic feature. Localization is divided into an offline stage and an online stage. In the offline stage, a scene sign map is constructed. The online stage consists of three steps. First, holistic BEBLID features are matched, and the nearest sign node and nearest image are identified with the k-nearest-neighbor (KNN) method. Second, one-to-one correspondences between key points are established by local BEBLID feature matching. Third, the current position is computed by metric calculation using the sign coordinate information stored in the scene sign map. Experiments were conducted in three kinds of indoor scenes: a teaching building, an office building, and an indoor parking lot. The recognition rate of scene signs reached 90%, and the average localization error was less than 1 m. Compared with traditional methods on the same test set, the proposed method improves the relative recognition rate by about 10%, which verifies its effectiveness.
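As a rough illustration of the first online step described in the abstract (matching a holistic binary descriptor of the current frame against the scene sign map, then picking the nearest node by KNN), the following is a minimal pure-Python sketch. The 256-bit packed descriptor layout and the majority-vote scheme are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Hamming distance between two packed binary descriptors (uint8 arrays)."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def nearest_node(query, map_descs, map_nodes, k=3):
    """Pick the sign node whose stored images dominate the k nearest
    holistic descriptors (majority vote among the KNN matches)."""
    dists = [hamming(query, d) for d in map_descs]
    knn = np.argsort(dists)[:k]            # indices of the k closest map images
    votes = [map_nodes[i] for i in knn]    # node label of each neighbor
    return max(set(votes), key=votes.count)
```

In the actual method the query would be an improved holistic BEBLID feature of the whole camera frame; here any binary descriptor packed into bytes (e.g. 256 bits in 32 uint8 values) works the same way.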
Key words: indoor localization / holistic feature / visual localization / BEBLID feature
Table 1. Comparison of computational efficiency (unit: ms)

Method             Scene 1    Scene 2
Proposed method    92.0       95.3
ORB[15]            92.7       96.4

Table 2. Localization error in the class-1 scene

Scene      Mean error/m    Std. deviation/m    Error < 1 m/%    Time/ms
Scene 1    0.80            1.38                87               152.0
Scene 2    0.82            1.41                87               -
[1] YASSIN A, NASSER Y, AWAD M, et al. Recent advances in indoor localization: a survey on theoretical approaches and applications[J]. IEEE Communications Surveys & Tutorials, 2017, 19(99): 1327-1346.
[2] LI B, MUNOZ J P, RONG X, et al. Vision-based mobile indoor assistive navigation aid for blind people[J]. IEEE Transactions on Mobile Computing, 2019, 18(3): 702-714. doi: 10.1109/TMC.2018.2842751
[3] ZOU H, CHEN C L, LI M, et al. Adversarial learning-enabled automatic WiFi indoor radio map construction and adaptation with mobile robot[J]. IEEE Internet of Things Journal, 2020, 7(8): 6946-6954. doi: 10.1109/JIOT.2020.2979413
[4] HUANG Y, ZHAO J, HE X, et al. Vision-based semantic mapping and localization for autonomous indoor parking[C]. 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China: IEEE, 2018.
[5] LAOUDIAS C, MOREIRA A, KIM S, et al. A survey of enabling technologies for network localization, tracking, and navigation[J]. IEEE Communications Surveys & Tutorials, 2018, 20(4): 3607-3644.
[6] HERNÁNDEZ N, HUSSEIN A, CRUZADO D, et al. Applying low cost WiFi-based localization to in-campus autonomous vehicles[C]. 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan: IEEE, 2017.
[7] ZHAO Guoqi, YANG Ming, WANG Bing, et al. Mobile robot seamless localization based on smart device in indoor and outdoor environments[J]. Journal of Shanghai Jiaotong University, 2018, 52(1): 13-19. (in Chinese)
[8] WANG W, MARELLI D, FU M. Multiple-vehicle localization using maximum likelihood Kalman filtering and ultra-wideband signals[J]. IEEE Sensors Journal, 2021, 21(4): 4949-4956. doi: 10.1109/JSEN.2020.3031377
[9] WANG Boyuan, LIU Xuelin, YU Baoguo, et al. Improved weighted k-nearest neighbor algorithm for WiFi fingerprint positioning[J]. Journal of Xidian University, 2019, 46(5): 41-47. (in Chinese)
[10] YANG Bao, ZHANG Pengfei, LI Junjie, et al. An indoor positioning and navigation technology based on bluetooth[J]. Science of Surveying and Mapping, 2019, 44(6): 89-95. (in Chinese)
[11] SADRUDDIN H, MAHMOUD A, ATIA M M. Enhancing body-mounted LiDAR SLAM using an IMU-based pedestrian dead reckoning (PDR) model[C]. 2020 IEEE 63rd International Midwest Symposium on Circuits and Systems (MWSCAS), Springfield, MA, USA: IEEE, 2020.
[12] CAMPOS C, ELVIRA R, RODRÍGUEZ J J G, et al. ORB-SLAM3: an accurate open-source library for visual, visual-inertial, and multi-map SLAM[R/OL]. (2021-05)[2021-10-08]. https://ieeexplore.ieee.org/document/9440682.
[13] RUBLEE E, RABAUD V, KONOLIGE K, et al. ORB: an efficient alternative to SIFT or SURF[C]. IEEE International Conference on Computer Vision, Barcelona, Spain: IEEE, 2011.
[14] MUR-ARTAL R, MONTIEL J M M, TARDOS J D. ORB-SLAM: a versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 2015, 31(5): 1147-1163. doi: 10.1109/TRO.2015.2463671
[15] HU Yuezhi, LI Na, HU Zhaozheng, et al. Fast sign recognition based on ORB holistic feature and k-nearest neighbor method[J]. Journal of Transport Information and Safety, 2016, 34(1): 23-29. (in Chinese). doi: 10.3963/j.issn.1674-4861.2016.01.006
[16] TAO Qianwen, HU Zhaozheng, HUANG Gang, et al. High-accuracy vision-based indoor positioning using building safety evacuation signs[J]. Journal of Transport Information and Safety, 2018, 36(2): 39-46+60. (in Chinese). doi: 10.3963/j.issn.1674-4861.2018.02.006
[17] BAY H, TUYTELAARS T, VAN GOOL L. SURF: speeded up robust features[C]. European Conference on Computer Vision, Graz, Austria: ECCV, 2006.
[18] ELLOUMI W, LATOUI A, CANALS R, et al. Indoor pedestrian localization with a smartphone: a comparison of inertial and vision-based methods[J]. IEEE Sensors Journal, 2016, 16(13): 5376-5388. doi: 10.1109/JSEN.2016.2565899
[19] SUÁREZ I, SFEIR G, BUENAPOSADA J M, et al. BEBLID: boosted efficient binary local image descriptor[J]. Pattern Recognition Letters, 2020, 133: 366-372.
[20] ZHANG Zhengyou. A flexible new technique for camera calibration[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11): 1330-1334. doi: 10.1109/34.888718