Localization is an important technology for smart services such as autonomous surveillance, disinfection, or delivery robots in future distributed indoor IoT applications. Visual-based localization (VBL) is a promising self-localization approach that identifies a robot's location in an indoor or underground 3D space by using its camera to scan and match the robot's surrounding objects and scenes. In this study, we present a pictorial planar surface based 3D object localization framework with two object detection methods, ArPico and PicPose. ArPico detects and recognizes framed pictures by converting them into binary marker codes for matching with known codes in a library. PicPose detects the pictorial planar surface of an object in a camera view and produces the pose output by matching the feature points in the view with those in the original picture, computing the homography that maps the object onto its actual location in the 3D real-world map. It then uses the corner points on the picture's border to identify the camera's pose in 3D space.

We have built an autonomous moving robot that can localize itself using its on-board camera and the PicPose technology. Our experimental study shows that the localization methods are practical, have very good accuracy, and can be used for real-time robot navigation. The pictorial planar object based VBL framework presented in this paper has three components: (a) offline object learning and library creation, (b) library and map management, and (c) real-time device localization. This framework is applicable to both the ArPico and PicPose methods. The ArPico method detects and recognizes framed pictures (Figure 1b) before converting them into vectors of binary blocks (similar to ArUco markers) for matching with known marker codes in a library. In contrast, the PicPose method detects the pictorial planar surface (Figure 1c) of an object without requiring it to have any specific shape or frame; it uses the picture's corner points to identify the camera's pose relative to the picture's actual position stored in the 3D map. PicPose extracts the feature points of a pictorial planar object from a camera view and produces the camera's pose by matching those feature points on the planar surface of a 3D object with its real-world 3D model stored in the 3D map.

Compared with earlier localization works, our VBL methods have the following benefits. To improve localization speed and precision, we have designed two new techniques for pictorial planar object recognition. First, we have developed an algorithm that reduces the number of feature points extracted from a robot view before matching them to a pictorial object in the library: by inspecting the relative positions of feature points and selecting just a sufficient number of useful points, we reduce the number and improve the quality of the feature points used for picture matching. Our study shows that this technique can reduce the average matching time by 50%. The second new technique filters out weak matched feature point (FP) pairs. Two filters inspect the matching result produced by FLANN and reduce the matched FP pairs to the useful ones before they enter the homography calculation. These filters can drastically decrease the FP matching output (e.g., from 1656 pairs to 48 pairs in an example in Section 5.3), yielding a large saving in the average pose homography calculation time (from 30 ms to 1 ms, or about 95%, in our experiment). Together, the two techniques significantly improve state-of-the-art object matching technologies.

Using cameras for localization has been shown to be an effective and low-cost approach. The location of a camera can be derived from its relative position to pre-located visible objects or surfaces with known patterns, which is the foundation of marker-based localization for indoor devices. Several marker-based pose estimation approaches have been proposed to calculate the location of a camera using specific 2D markers, which are extensively designed as monochrome codes with specific patterns for fast detection and recognition. These methods detect and recognize markers from a camera view with high efficiency, and they have been used in 3D indoor model building, location estimation, and navigation for UAVs and robots. An important goal of these projects is to design markers with specific patterns so that systems can identify the marker patterns and poses very efficiently. However, markers (e.g., Figure 1a) produced by specially designed algorithms may look odd in most indoor living environments.
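The homography step that PicPose relies on maps matched feature points in the library picture onto their positions in the camera view, and the picture's corner points are then carried through the same mapping. A minimal pure-NumPy sketch of that step, using the classic Direct Linear Transform (the paper's own pose computation against the 3D map is not shown here, and the function names are illustrative):

```python
import numpy as np

def find_homography(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points
    via the Direct Linear Transform (DLT). src, dst: (N, 2), N >= 4."""
    A = []
    for (x, y), (u, v) in zip(np.asarray(src, float), np.asarray(dst, float)):
        # Each correspondence contributes two rows of the linear system A h = 0.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pts):
    """Apply homography H to (N, 2) points, e.g., a picture's corner points."""
    pts = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))]) @ H.T
    return pts[:, :2] / pts[:, 2:3]
```

With the matched-pair filtering described above, far fewer (but cleaner) correspondences enter this calculation, which is where the reported 30 ms to 1 ms saving arises.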
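The excerpt does not specify the two FLANN-output filters, but a standard way to drop weak matched FP pairs of the kind described is Lowe's ratio test: a pair is kept only when its best match is clearly better than the second-best candidate. A hedged sketch with hypothetical descriptor arrays (`desc_view`, `desc_ref` and the 0.75 ratio are illustrative, not the paper's parameters):

```python
import numpy as np

def filter_weak_matches(desc_view, desc_ref, ratio=0.75):
    """Keep feature-point (FP) pairs passing Lowe's ratio test.

    desc_view: (N, D) descriptors extracted from the camera view
    desc_ref:  (M, D) descriptors of the library picture
    Returns a list of (view_index, ref_index) surviving pairs.
    """
    # All pairwise Euclidean distances between view and reference descriptors.
    d = np.linalg.norm(desc_view[:, None, :] - desc_ref[None, :, :], axis=2)
    pairs = []
    for i, row in enumerate(d):
        order = np.argsort(row)
        best, second = row[order[0]], row[order[1]]
        # Weak pair: the best match is barely better than the runner-up.
        if best < ratio * second:
            pairs.append((i, int(order[0])))
    return pairs
```

Cutting the candidate pairs this aggressively (the paper reports 1656 down to 48 in one example) is what makes the subsequent homography calculation cheap.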
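ArPico's exact encoding is not given in this excerpt; a plausible minimal sketch of the ArUco-like idea, converting a rectified framed picture into a vector of binary blocks and matching it against a code library by Hamming distance (the grid size, function names, and threshold below are assumptions for illustration):

```python
import numpy as np

def picture_to_code(gray, grid=4):
    """Average each grid cell of a rectified grayscale picture and
    threshold it into one bit, yielding an ArUco-style code vector."""
    h, w = gray.shape
    bits = []
    for r in range(grid):
        for c in range(grid):
            cell = gray[r * h // grid:(r + 1) * h // grid,
                        c * w // grid:(c + 1) * w // grid]
            bits.append(1 if cell.mean() > 127 else 0)
    return np.array(bits, dtype=np.uint8)

def match_code(code, library):
    """Return the library key whose stored code is closest in Hamming distance."""
    return min(library, key=lambda k: int(np.sum(code != library[k])))
```

Hamming-distance matching keeps recognition robust to a few misread blocks, which is one reason binary-block markers can be detected so efficiently.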