Augmented Virtual Environment (AVE) or virtual-reality fusion systems fuse dynamic videos with static three-dimensional (3D) models of a virtual environment, providing an effective way to visualize and understand multichannel surveillance systems. However, texture distortion caused by viewpoint changes in such systems is a critical issue that needs to be addressed. To minimize texture fusion distortion, this paper presents a novel virtual environment system that operates in two phases, an offline phase and an online phase, to dynamically fuse multiple surveillance videos with a virtual 3D scene. In the offline phase, a static virtual environment is obtained by performing a 3D photogrammetric reconstruction from input images of the scene. In the online phase, the virtual environment is augmented by fusing multiple videos through one of two strategies: dynamically mapping the images of each video onto a 3D model of the virtual environment, or extracting moving objects and representing them as billboards. The system can be used to visualize a 3D environment from any viewpoint, augmented by real-time videos. Experiments and user studies in different scenarios demonstrate the superiority of our system.
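The first fusion strategy described above (dynamically mapping video images onto the 3D model) is typically realized by projective texture mapping: each model vertex is projected through the surveillance camera's intrinsic and extrinsic parameters to obtain texture coordinates into the current video frame. The following is a minimal sketch of that core step under standard pinhole-camera assumptions; the function name and parameters are illustrative, not part of the paper's implementation.

```python
import numpy as np

def projective_texture_uvs(vertices, K, R, t, frame_w, frame_h):
    """Project 3D model vertices into a camera's video frame to obtain
    per-vertex texture coordinates (projective texture mapping sketch).

    vertices : (N, 3) world-space vertex positions
    K        : (3, 3) camera intrinsic matrix
    R, t     : camera extrinsics (world -> camera rotation and translation)
    Returns normalized (u, v) coordinates and a validity mask.
    """
    # Transform world-space vertices into the camera frame.
    cam = (R @ vertices.T + t.reshape(3, 1)).T          # (N, 3)
    # Only vertices in front of the camera can receive video texture.
    visible = cam[:, 2] > 1e-6
    # Perspective projection with intrinsics K, then dehomogenize.
    proj = (K @ cam.T).T                                 # (N, 3)
    uv = proj[:, :2] / proj[:, 2:3]                      # pixel coordinates
    # Normalize pixel coordinates to [0, 1] texture space.
    uvn = uv / np.array([frame_w, frame_h], dtype=float)
    # Mask out vertices behind the camera or outside the frame; a full
    # system would also handle occlusion (e.g. via a depth/shadow map).
    inside = (visible
              & (uvn[:, 0] >= 0) & (uvn[:, 0] <= 1)
              & (uvn[:, 1] >= 0) & (uvn[:, 1] <= 1))
    return uvn, inside
```

At render time the resulting (u, v) coordinates index into the latest decoded video frame, so the textured model updates as the video plays; vertices failing the mask would fall back to the static reconstruction's texture.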
Bibliographical note
Funding Information:
NSFC, Grant/Award Numbers: U21A20515, 61972388; Shenzhen Science and Technology Program, Grant/Award Numbers: JCYJ20180507182222355, GJHZ20210705141402008. This work is supported in part by NSFC (U21A20515 and 61972388) and the Shenzhen Science and Technology Program (JCYJ20180507182222355 and GJHZ20210705141402008).
© 2022 John Wiley & Sons Ltd.
- augmented virtual environments
- video fusion
- video surveillance
- virtual environments
- virtual-reality fusion
ASJC Scopus subject areas
- Computer Graphics and Computer-Aided Design