Semantic Modeling of Outdoor Scenes for the Creation of Virtual Environments and Simulations
dc.contributor.author Chen, Meida
dc.contributor.author McAlinden, Ryan
dc.contributor.author Spicer, Ryan
dc.contributor.author Soibelman, Lucio
dc.date.accessioned 2019-01-02T23:58:47Z
dc.date.available 2019-01-02T23:58:47Z
dc.date.issued 2019-01-08
dc.description.abstract Efforts in both academia and industry have adopted photogrammetric techniques to generate visually compelling 3D models for the creation of virtual environments and simulations. However, the generated meshes do not contain semantic information for distinguishing between objects. To allow both user- and system-level interaction with the meshes, and to enhance the visual acuity of the scene, classifying the generated point clouds and associated meshes is a necessary step. This paper presents a point cloud/mesh classification and segmentation framework. The proposed framework provides a novel way of extracting object information (i.e., individual tree locations and related features) while accounting for the data-quality issues present in photogrammetrically generated point clouds. A case study was conducted using data collected at the University of Southern California to evaluate the proposed framework.
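The abstract's individual-tree-extraction step can be illustrated with a minimal sketch: given points already labeled "tree" by a semantic classifier (an assumption; the paper's own classifier and parameters are not specified here), cluster them in the ground plane and report each cluster centroid as an estimated tree location. The function name, the single-linkage clustering, and the `radius` threshold are all illustrative choices, not the paper's method.

```python
import numpy as np

def estimate_tree_locations(xy, radius=2.0):
    """Group tree-classified points into individual trees by naive
    single-linkage clustering in the XY plane (union-find, O(n^2)),
    then return one centroid per cluster as the tree location.

    xy     : (n, 2) array of ground-plane coordinates of tree points
    radius : points closer than this (in the same units as xy, e.g.
             meters) are treated as belonging to the same tree
    """
    n = len(xy)
    parent = list(range(n))

    def find(i):  # union-find root lookup with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Link every pair of points within `radius` of each other.
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(xy[i] - xy[j]) <= radius:
                parent[find(i)] = find(j)

    # Collect cluster members and average them into centroids.
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(xy[i])
    return [np.mean(pts, axis=0) for pts in clusters.values()]
```

For real photogrammetric clouds (noisy, with the data-quality issues the abstract notes), a density-based method such as DBSCAN would be a more robust drop-in for the pairwise linkage above.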
dc.format.extent 10 pages
dc.identifier.doi 10.24251/HICSS.2019.236
dc.identifier.isbn 978-0-9981331-2-6
dc.language.iso eng
dc.relation.ispartof Proceedings of the 52nd Hawaii International Conference on System Sciences
dc.rights Attribution-NonCommercial-NoDerivatives 4.0 International
dc.subject Smart City Digital Twins
dc.subject Decision Analytics, Mobile Services, and Service Science
dc.subject Point cloud segmentation
dc.subject Individual tree location identification
dc.subject Point cloud feature extraction
dc.subject Mesh segmentation
dc.subject Creation of virtual environments and simulations
dc.title Semantic Modeling of Outdoor Scenes for the Creation of Virtual Environments and Simulations
dc.type Conference Paper
dc.type.dcmi Text
Original bundle: 1 file, 1.74 MB, Adobe Portable Document Format