Vision-based measurement technology benefits high-quality manufacturing through improved dimensional precision, tighter geometric tolerances, and increased product yield. The monocular 3D structured-light sensing method is popular for online part inspection because it can reach micrometer-level depth accuracy. However, the line-of-sight requirement of a single-viewpoint vision system often fails when occlusion arises from the object's surface structure, such as edges, slopes, and holes. To address this issue, a multi-view 3D structured-light vision system is proposed in this paper to achieve high accuracy, i.e., high Z-direction repeatability, while reducing the probability of occlusion during mechanical dimension measurement. The main contributions of this paper include the use of industrial cameras with high resolution and high frame rates to achieve high-precision 3D reconstruction, and a multi-wavelength (heterodyne) phase-unwrapping method for high-precision phase calculation. By combining multiple industrial cameras, the system overcomes field-of-view occlusions, thereby broadening the 3D reconstruction field of view. The system achieves a Z-axis repeatability of 0.48 μm.
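The multi-wavelength (heterodyne) approach mentioned above can be illustrated with a minimal two-wavelength sketch: the difference of two wrapped phases measured at nearby fringe wavelengths yields a beat phase with a much longer equivalent wavelength, which is then used to recover the fringe order of the finer pattern. The function below is an illustrative NumPy implementation of this standard two-frequency scheme, not the specific algorithm of the paper; variable names and the noiseless setting are assumptions.

```python
import numpy as np

def heterodyne_unwrap(phi1, phi2, lam1, lam2):
    """Two-wavelength heterodyne phase unwrapping (illustrative sketch).

    phi1, phi2 : wrapped phases in [0, 2*pi), measured with fringe
                 wavelengths lam1 < lam2 (in pixels).
    Returns the unwrapped phase of the lam1 pattern, valid while the
    true phase stays within one period of the beat wavelength lam12.
    """
    # Equivalent (beat) wavelength of the two fringe patterns.
    lam12 = lam1 * lam2 / (lam2 - lam1)
    # Beat phase: phase difference, rewrapped into [0, 2*pi).
    phi12 = np.mod(phi1 - phi2, 2 * np.pi)
    # Fringe order of the lam1 pattern implied by the beat phase.
    k = np.round(((lam12 / lam1) * phi12 - phi1) / (2 * np.pi))
    return phi1 + 2 * np.pi * k
```

For example, with lam1 = 20 and lam2 = 25 pixels the beat wavelength is 100 pixels, so the fine phase can be unwrapped unambiguously over a 100-pixel range, five times the range of the fine pattern alone; in a measurement system the same step is applied per pixel to the phase maps decoded from the projected fringe images.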
Three-dimensional reconstruction technology plays an important role in indoor scenes by converting the objects and structures of indoor environments into accurate 3D models from multi-view RGB images. It has a wide range of applications in fields such as virtual reality, augmented reality, indoor navigation, and game development. Existing methods based on multi-view RGB images have made significant progress in 3D reconstruction: they possess good expressive power and generalization performance, and they handle complex geometric shapes and textures effectively. Although indoor scenes pose challenges such as lighting variation, occlusion, and texture loss, these challenges can be addressed through deep neural networks, neural implicit surface representations, and related techniques. Indoor 3D reconstruction based on multi-view RGB images therefore has a promising future: it provides immersive and interactive virtual experiences and brings convenience and innovation to indoor navigation, interior design, and virtual tours. As the technology evolves, these image-based reconstruction methods will be further improved to deliver higher-quality and more accurate solutions for indoor scene reconstruction.
With the rapid popularization of mobile devices and the wide deployment of various sensors, scene perception on mobile devices has become central to location-based services such as navigation and augmented reality (AR). Advances in deep learning have greatly improved machines' visual perception of scenes. The basic framework of visual scene perception, the related technologies, and their specific application to AR navigation are introduced, and future technical directions are proposed. An application (app) is designed to improve the practical performance of AR navigation. The app comprises three modules: navigation-map generation, a cloud navigation algorithm, and the client. The navigation-map generation tool works offline; the cloud stores the navigation map and provides navigation algorithms to the terminal; and the terminal performs local real-time positioning and AR path rendering.
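The offline/cloud/client split described above can be sketched in code. Everything here is hypothetical, for illustration only: the abstract does not specify data structures or algorithms, so the map layout, the class and method names, and the breadth-first route search are all assumptions standing in for the unspecified cloud navigation algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class NavigationMap:
    """Map built offline: a walkable-path graph between named waypoints.

    A real map would also carry visual features for relocalization;
    this field set is a hypothetical minimum.
    """
    path_graph: dict = field(default_factory=dict)  # node -> reachable nodes

class CloudService:
    """Cloud side: stores the navigation map and answers route queries."""

    def __init__(self, nav_map: NavigationMap):
        self.nav_map = nav_map

    def plan_route(self, start: str, goal: str):
        # Breadth-first search over the waypoint graph (illustrative
        # stand-in for the unspecified cloud navigation algorithm).
        frontier, seen = [[start]], {start}
        while frontier:
            path = frontier.pop(0)
            if path[-1] == goal:
                return path
            for nxt in self.nav_map.path_graph.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None

class Client:
    """Terminal side: local positioning and AR path rendering."""

    def __init__(self, service: CloudService):
        self.service = service

    def navigate(self, start: str, goal: str):
        route = self.service.plan_route(start, goal)
        # A real client would relocalize each camera frame and render
        # the route as AR overlays; here we only return the waypoints.
        return route
```

The point of the sketch is the division of labor: map construction happens offline, route planning lives behind a cloud service, and the client only consumes the planned route while handling real-time positioning and rendering locally.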