Semantic Modeling of Outdoor Scenes for the Creation of Virtual Environments and Simulations

Date
2019-01-08
Authors
Chen, Meida
McAlinden, Ryan
Spicer, Ryan
Soibelman, Lucio
Abstract
Efforts from both academia and industry have adopted photogrammetric techniques to generate visually compelling 3D models for the creation of virtual environments and simulations. However, the resulting meshes do not contain semantic information for distinguishing between objects. To allow both user- and system-level interaction with the meshes, and to enhance the visual acuity of the scene, classifying the generated point clouds and associated meshes is a necessary step. This paper presents a point cloud/mesh classification and segmentation framework. The proposed framework provides a novel way of extracting object information, such as individual tree locations and related features, while accounting for the data quality issues present in photogrammetry-generated point clouds. A case study was conducted to evaluate the proposed framework using data collected at the University of Southern California.
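The abstract does not detail how individual tree locations are extracted. As a rough, hedged illustration of the general idea only (not the authors' method), the sketch below rasterizes vegetation-classified points into a simple canopy-height grid and detects local maxima as candidate tree locations; the function name, grid cell size, smoothing, and height threshold are all illustrative assumptions.

import numpy as np
from scipy import ndimage


def tree_locations(veg_points, cell=0.5, min_height=3.0, window=5):
    """Hypothetical sketch: candidate tree positions from vegetation points.

    veg_points: (N, 3) array of x, y, z coordinates already classified
    as vegetation. Returns an (M, 2) array of estimated x, y locations.
    """
    x, y, z = veg_points[:, 0], veg_points[:, 1], veg_points[:, 2]

    # Rasterize: keep the highest point per grid cell as a crude
    # canopy-height proxy (heights measured relative to the lowest point,
    # not a true ground model).
    cols = ((x - x.min()) / cell).astype(int)
    rows = ((y - y.min()) / cell).astype(int)
    chm = np.full((rows.max() + 1, cols.max() + 1), -np.inf)
    np.maximum.at(chm, (rows, cols), z - z.min())
    chm[np.isneginf(chm)] = 0.0

    # Smooth slightly, then mark cells that equal the local maximum within
    # a moving window and exceed the minimum canopy height.
    smoothed = ndimage.gaussian_filter(chm, sigma=1.0)
    local_max = ndimage.maximum_filter(smoothed, size=window)
    peaks = (smoothed == local_max) & (smoothed > min_height)

    # Convert peak grid indices back to world coordinates (cell centers).
    peak_rows, peak_cols = np.nonzero(peaks)
    return np.column_stack((x.min() + (peak_cols + 0.5) * cell,
                            y.min() + (peak_rows + 0.5) * cell))

In the paper's framework, the point cloud would first be classified so that only vegetation points feed such a step, and the detected locations would then seed the extraction of per-tree features; those details are described in the paper itself, not in this sketch.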
Keywords
Smart City Digital Twins; Decision Analytics, Mobile Services, and Service Science; Point cloud segmentation; Individual tree location identification; Point cloud feature extraction; Mesh segmentation; Creation of virtual environments and simulations