Sonorising and visualizing archive records
|Issue Date:||04 Mar 2017|
|Description:||Audio records of the world’s smaller languages are rare and have a corresponding value, both to the broader world community and to the speakers or their descendants looking for representation of their culture on the web. The Pacific and Regional Archive for Digital Sources in Endangered Cultures (PARADISEC) holds records in more than 1,000 languages. Over half of those records are in audio or video format and, while we provide various finding aids using textual metadata, the experience of listening to these items is what gives a sense of the diversity of small languages recorded over the past half century by researchers and speakers. We have developed a system for presenting media and time-aligned text online (http://eopas.org) and are working on an online annotation system that will allow enrichment, particularly of orphaned recordings for which we have little or no metadata. We have also developed a visualization of this system using augmented reality that allows users to point a device at a map of the region and to hear or see media playing texts in the language of that area. In this presentation we will outline these tools and describe a museum display we are currently building that uses virtual reality to present snippets of media via a geographic interface.
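The time-aligned text described above pairs a media timeline with transcript segments. One minimal way to model such data and look up the segment playing at a given moment is sketched below; the segment texts and field layout are illustrative placeholders, not the actual EOPAS data model.

```python
import bisect

# Hypothetical time-aligned transcript: (start_sec, end_sec, text) tuples,
# sorted by start time. Texts here are placeholders, not real language data.
segments = [
    (0.0, 2.5, "segment one"),
    (2.5, 5.0, "segment two"),
    (5.0, 8.2, "segment three"),
]

# Precompute start times for fast binary search during playback.
starts = [s[0] for s in segments]

def segment_at(t):
    """Return the transcript segment active at playback time t, or None."""
    i = bisect.bisect_right(starts, t) - 1
    if i >= 0 and segments[i][0] <= t < segments[i][1]:
        return segments[i]
    return None
```

A media player would call `segment_at` on each time update to highlight the current line of transcript.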
In our virtual reality display, users immerse themselves in a language landscape, navigating, for example, to Vanuatu or PNG to sample something of the massive linguistic diversity in those countries. A user moves across a representation of a geographic region and sees markers on the landscape representing the metadata of the relevant PARADISEC materials for that location: number of speakers, and amount and diversity of material held. Audio and text appear when the user gazes at a marker. The visualization can be experienced inside a virtual reality headset such as Google Cardboard or the HTC Vive.
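Gaze-triggered playback of the kind described above is commonly implemented by testing whether a marker falls within a small cone around the user's view direction. A minimal sketch of that selection logic follows; the marker identifiers, positions, and cone angle are all illustrative assumptions, not the actual display code.

```python
import math

# Hypothetical markers: a PARADISEC-style identifier plus a position in the
# landscape. Both identifiers and coordinates are illustrative only.
markers = [
    {"id": "AC1-001", "pos": (10.0, 0.0, 5.0)},
    {"id": "NT1-042", "pos": (-3.0, 0.0, 8.0)},
]

def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def gazed_marker(eye, gaze_dir, max_angle_deg=5.0):
    """Return the marker nearest the gaze ray within a small cone, or None."""
    gaze_dir = normalize(gaze_dir)
    best, best_angle = None, max_angle_deg
    for m in markers:
        # Unit vector from the viewer's eye toward the marker.
        to_m = normalize(tuple(p - e for p, e in zip(m["pos"], eye)))
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(gaze_dir, to_m))))
        angle = math.degrees(math.acos(dot))
        if angle <= best_angle:
            best, best_angle = m, angle
    return best
```

When `gazed_marker` returns a marker, the display would start the audio snippet and show the text associated with that item's identifier.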
Such a visualization could be adapted to represent the holdings of other kinds of archives, so the discussion in this paper has implications for digital humanities more generally.
This paper will discuss the reasons why we might want to explore this kind of archival data via audiovisual interaction. In the case of our PARADISEC map visualization it is not essential that a user come away with perfect recall of what exactly the archive holds, but rather with a sense of its richness, and increased motivation for using the archive in the future. As each item in the sonorisation has its PARADISEC identifier, it is always possible to drill deeper than the short audio grab provided.
|Appears in Collections:||5th International Conference on Language Documentation and Conservation (ICLDC)|