

The key to realizing ARHS is the effective fusion of real and virtual scenes; their organic combination can further improve the comprehensibility of the scene and achieve its "augmentation". Three methods have been used in AR to prove the realizability of scene fusion.

The first is the model-based method, which reconstructs a 3D model of the real scene on a computer, exports the model data to the virtual-scene rendering software, and renders it together with the virtual scene to achieve the fusion effect. This method was first proposed by Breen in 1996, but it was difficult to realize owing to the technical limitations of the time.

The second is the depth-based method, which determines the occlusion relationship according to the depth value of the target point and usually displays only the information near that point. A video see-through AR system based on this method can solve the occlusion problem between a real scene and computer-generated objects: the system calculates the depth value of each pixel with a stereo matching algorithm and compares the depth values to determine the positional relationship between the real and virtual scenes.

The third is the image-analysis-based method, which exploits the strengths of an edge detection algorithm while each image is marked manually: the edges of the real-scene image are detected and an accurate contour is drawn, and the occlusion relationship between the real and virtual scenes is then marked by hand. With the continuous development of algorithms, especially the rise of neural networks, contour extraction has gradually become intelligent and manual marking has been automated, which greatly improves the practicability of this method.
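The depth-based method described above amounts to a per-pixel z-test: the real scene's depth comes from stereo matching (depth = focal length × baseline / disparity), and a virtual pixel is drawn only where it lies in front of the real surface. The following is a minimal NumPy sketch; the focal length, baseline, disparity values, and placeholder images are all hypothetical numbers for illustration, not parameters from the paper's system.

```python
import numpy as np

# Hypothetical stereo parameters: focal length f (pixels), baseline b (metres).
f, b = 800.0, 0.1

# Toy 2x2 disparity map of the real scene, as produced by a stereo matching step.
disparity = np.array([[40.0, 20.0],
                      [10.0,  5.0]])

# Depth from disparity: Z = f * b / d  (metres per pixel).
real_depth = f * b / disparity

# Depth of the virtual object at each pixel (e.g. from the renderer's z-buffer);
# here a flat object 6 m away.
virtual_depth = np.full_like(real_depth, 6.0)

# Per-pixel z-test: the virtual pixel is shown only where it is nearer than the real scene.
show_virtual = virtual_depth < real_depth

# Composite placeholder images according to the occlusion mask.
real_rgb = np.zeros(real_depth.shape + (3,))     # black stands in for the camera image
virtual_rgb = np.ones(real_depth.shape + (3,))   # white stands in for the rendered object
fused = np.where(show_virtual[..., None], virtual_rgb, real_rgb)
```

With these numbers the top row of the real scene is nearer than 6 m, so it occludes the virtual object there, while the bottom row is farther away and the virtual object is drawn in front.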
Yunpeng Liu1†, Tao Jing1†, Qiang Qu1, Ping Zhang2, Pei Li3, Qian Yang4, Xiaoyu Jiang1* and Xingpeng Yan1*

1Department of Information Communication, Army Academy of Armored Forces, Beijing, China. 2Center of Vocational Education, Army Academy of Armored Forces, Beijing, China. 3R and D Center for Intelligent Control and Advanced Manufacturing, Research Institute of Tsinghua University in Shenzhen, Shenzhen, China.

Holographic stereogram is a research hotspot in the field of three-dimensional (3D) display. In this paper, an augmented reality-holographic stereogram (ARHS) based on 3D reconstruction is proposed. It can reconstruct the light field information of real and virtual scenes at the same time, further improving the comprehensibility of the scene and achieving the "augmentation" of the scene. First, the point cloud data are generated by VisualSFM software, and then the 3D mesh model is reconstructed by MeshLab software. The obtained scene model and the virtual scene are rendered simultaneously to obtain the fused real-and-virtual scene. Analysis of the experimental results shows that the proposed method can effectively realize the augmented reality-holographic stereogram.

Holographic stereogram (HS) is a research hotspot in the field of three-dimensional (3D) display, providing a flexible and efficient means of displaying 3D scenes, and it is widely used in the military, publicity, commerce, and other fields. Using discrete 2D images with parallax information as the input, the 3D reconstruction of a scene can be obtained after image processing, stereoscopic exposure, and development and fixing. An HS cannot show all the information of the scene but is limited to a certain viewing angle (less than 180°). Moreover, an HS does not record the depth information of the scene space, but people can still perceive 3D cues, which depends on the binocular parallax effect. HS discretizes and approximates the continuous 3D light field, which greatly reduces the amount of data. In addition, the scene is not limited to real-world objects but can also be a 3D model rendered by computer. The diversified scene selection of HS not only enriches its expressive ability but also makes the realization of the augmented reality-holographic stereogram (ARHS) possible. ARHS reconstructs the light field information of real and virtual scenes at the same time: the real-scene data are sampled by a camera, and the virtual scene is rendered by computer software or a program.
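The binocular parallax effect that lets viewers perceive depth from the discrete 2D parallax images can be illustrated with a small sketch. Under a pinhole model, a camera translated horizontally by a step t projects a point at depth Z with a view-to-view image shift of f·t/Z, so nearer points shift more between adjacent views; all numbers below (focal length, step size, point depths) are hypothetical values for illustration, not the paper's capture parameters.

```python
import numpy as np

# Hypothetical capture setup: focal length f (pixels); the camera slides along x
# in steps of t (metres) to record the discrete parallax views.
f, t = 800.0, 0.02
n_views = 5
cam_x = np.arange(n_views) * t          # camera positions of the discrete views

# Two scene points at world x = 0: one near (Z = 1 m), one far (Z = 4 m).
points = np.array([[0.0, 1.0],          # (X, Z) pairs
                   [0.0, 4.0]])

# Pinhole projection of each point in each view: u = f * (X - cam_x) / Z.
u = f * (points[:, 0][:, None] - cam_x[None, :]) / points[:, 1][:, None]

# Parallax shift between adjacent views is f * t / Z: larger for the nearer point.
shift_near = u[0, 0] - u[0, 1]          # f * t / 1
shift_far = u[1, 0] - u[1, 1]           # f * t / 4
```

The differing per-view shifts of near and far points are exactly the parallax cue that the exposed stereogram presents to the two eyes, which is why depth is perceived even though no explicit depth map is recorded.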
