Please use this identifier to cite or link to this item: http://hdl.handle.net/10125/4515

Supporting linguistic research using generic automatic audio/video analysis

File: 06schreerschneider.pdf (742.1 kB, Adobe PDF)

Item Summary

dc.contributor.author Schreer, Oliver
dc.contributor.author Schneider, Daniel
dc.date.accessioned 2012-07-05T23:32:41Z
dc.date.available 2012-07-05T23:32:41Z
dc.date.issued 2012-08
dc.identifier.citation Schreer, Oliver and Daniel Schneider. 2012. Supporting linguistic research using generic automatic audio/video analysis. In Frank Seifart, Geoffrey Haig, Nikolaus P. Himmelmann, Dagmar Jung, Anna Margetts, and Paul Trilsbeek (eds). 2012. Potentials of Language Documentation: Methods, Analyses, and Utilization. 46-53. Honolulu: University of Hawai'i Press.
dc.identifier.isbn 978-0-9856211-0-0
dc.identifier.uri http://hdl.handle.net/10125/4515
dc.description.abstract Automatic analysis can speed up the annotation process and free up human resources, which can then be spent on theorizing instead of tedious annotation tasks. We will describe selected automatic tools that support the most time-consuming steps in annotation, such as speech and speaker segmentation, time alignment of existing transcripts, automatic scene analysis with respect to camera motion, face/person detection, and the tracking of head and hands as well as the resulting gesture analysis.
dc.description.sponsorship National Foreign Language Resource Center
dc.publisher University of Hawai'i Press
dc.relation.ispartofseries LD&C Special Publication
dc.rights Creative Commons Attribution Non-Commercial Share Alike License
dc.title Supporting linguistic research using generic automatic audio/video analysis
prism.startingpage 46
prism.endingpage 53
Appears in Collections: LD&C Special Publication No. 3: Potentials of Language Documentation: Methods, Analyses, and Utilization


