Semantic Modeling of Outdoor Scenes for the Creation of Virtual Environments and Simulations

File: 0194.pdf (1.79 MB, Adobe PDF)

Item Summary

Title: Semantic Modeling of Outdoor Scenes for the Creation of Virtual Environments and Simulations
Authors: Chen, Meida
McAlinden, Ryan
Spicer, Ryan
Soibelman, Lucio
Keywords: Smart City Digital Twins
Decision Analytics, Mobile Services, and Service Science
Point cloud segmentation; Individual tree location identification; Point cloud feature extraction; Mesh segmentation; Creation of virtual environments and simulations
Date Issued: 08 Jan 2019
Abstract: Efforts from both academia and industry have adopted photogrammetric techniques to generate visually compelling 3D models for the creation of virtual environments and simulations. However, the generated meshes do not contain semantic information for distinguishing between objects. To allow both user- and system-level interaction with the meshes, and to enhance the visual acuity of the scene, classifying the generated point clouds and associated meshes is a necessary step. This paper presents a point cloud/mesh classification and segmentation framework. The proposed framework provides a novel way of extracting object information, i.e., individual tree locations and related features, while accounting for the data quality issues present in a photogrammetry-generated point cloud. A case study was conducted using data collected at the University of Southern California to evaluate the proposed framework.
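The classification-and-extraction idea summarized in the abstract can be illustrated with a deliberately simplified sketch. This is not the authors' method — their framework uses learned features and handles photogrammetric noise — but a minimal stand-in: label points as ground or vegetation by height above a ground plane, then estimate individual tree locations by grouping vegetation points on a coarse XY grid. All function names, thresholds, and the synthetic scene below are hypothetical.

```python
import numpy as np
from collections import defaultdict

def segment_points(points, ground_z=0.0, min_tree_height=2.0):
    """Label each 3D point: 0 = ground, 1 = vegetation.

    A toy height-threshold classifier; a real framework would use
    richer geometric and color features to cope with noisy
    photogrammetric point clouds.
    """
    return np.where(points[:, 2] - ground_z > min_tree_height, 1, 0)

def tree_locations(points, labels, cell=5.0):
    """Estimate individual tree locations by grouping vegetation
    points into coarse XY grid cells and averaging each occupied
    cell (a crude stand-in for proper clustering such as DBSCAN)."""
    buckets = defaultdict(list)
    for p in points[labels == 1]:
        buckets[(int(p[0] // cell), int(p[1] // cell))].append(p[:2])
    return np.array([np.mean(b, axis=0) for b in buckets.values()])

# Tiny synthetic scene: a flat ground plane plus two "trees".
rng = np.random.default_rng(0)
ground = np.c_[rng.uniform(0, 20, (200, 2)), rng.normal(0.0, 0.05, 200)]
tree_a = np.c_[rng.normal(2.5, 0.4, (50, 2)), rng.uniform(3, 6, 50)]
tree_b = np.c_[rng.normal(17.0, 0.4, (50, 2)), rng.uniform(3, 6, 50)]
cloud = np.vstack([ground, tree_a, tree_b])

labels = segment_points(cloud)
centers = tree_locations(cloud, labels)
```

On the synthetic scene, all 100 tree points are labeled vegetation and the grid grouping recovers one center near each planted tree; with real photogrammetric data, both steps would need to be replaced by the kind of feature-based classification the paper proposes.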
Pages/Duration: 10 pages
Rights: Attribution-NonCommercial-NoDerivatives 4.0 International
Appears in Collections: Smart City Digital Twins

