Supporting linguistic research using generic automatic audio/video analysis
Title: Supporting linguistic research using generic automatic audio/video analysis
Date Issued: Aug 2012
Publisher: University of Hawai'i Press
Citation: Schreer, Oliver and Daniel Schneider. 2012. Supporting linguistic research using generic automatic audio/video analysis. In Frank Seifart, Geoffrey Haig, Nikolaus P. Himmelmann, Dagmar Jung, Anna Margetts, and Paul Trilsbeek (eds.), Potentials of Language Documentation: Methods, Analyses, and Utilization, 46-53. Honolulu: University of Hawai'i Press.
Series: LD&C Special Publication
Abstract: Automatic analysis can speed up the annotation process and free up human resources, which can then be spent on theorizing instead of tedious annotation tasks. We describe selected automatic tools that support the most time-consuming steps in annotation: speech and speaker segmentation, time alignment of existing transcripts, automatic scene analysis with respect to camera motion, face/person detection, and the tracking of head and hands, together with the resulting gesture analysis.
Rights: Creative Commons Attribution Non-Commercial Share Alike License
Appears in Collections: LD&C Special Publication No. 3: Potentials of Language Documentation: Methods, Analyses, and Utilization