Augmented Songbook: an Augmented Reality Educational Application for Raising Music Awareness
From LRDE
- Authors
- Marçal Rusiñol, Joseph Chazalon, Katerine Diaz-Chito
- Journal
- Multimedia Tools and Applications
- Type
- article
- Projects
- Olena
- Keywords
- Image
- Date
- 2017-06-29
Abstract
This paper presents the development of an Augmented Reality mobile application which aims at raising young children's awareness of abstract concepts of music. Such concepts include, for instance, musical notation and the concept of rhythm. Recent studies in Augmented Reality for education suggest that such technologies have multiple benefits for students, including younger ones. As mobile document image acquisition and processing gains maturity on mobile platforms, we explore how it is possible to build a markerless and real-time application to augment physical documents with didactic animations and interactive content. Given a standard image processing pipeline, we compare the performance of different local descriptors at two key stages of the process. Results suggest alternatives to the SIFT local descriptors, regarding both result quality and computational efficiency, for document model identification and perspective transform estimation. All experiments are performed on an original and public dataset we introduce here.
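The perspective transform estimation stage mentioned in the abstract can be illustrated with a small, self-contained sketch: given matched keypoints between a document model and a camera frame, a 3×3 homography H maps model coordinates to frame coordinates. The code below solves the exact 4-point case with the Direct Linear Transform; a real pipeline such as the one described in the paper would estimate H robustly (e.g. with RANSAC) over many noisy descriptor matches. All function names here are illustrative, not taken from the paper.

```python
def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography_from_4_points(src, dst):
    """Estimate H (with h33 fixed to 1) from 4 point correspondences.

    Each correspondence (x, y) -> (u, v) yields two linear equations:
      h1*x + h2*y + h3 - u*(h7*x + h8*y) = u
      h4*x + h5*y + h6 - v*(h7*x + h8*y) = v
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve_linear(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_homography(H, pt):
    """Map a 2D point through H using homogeneous coordinates."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Map the unit square (document model corners) onto a skewed quadrilateral
# (the document as seen by the camera).
src = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
dst = [(2.0, 2.0), (4.0, 2.0), (5.0, 4.0), (1.0, 4.0)]
H = homography_from_4_points(src, dst)
```

Once H is known, any point of the model page (for instance the anchor of an animation) can be projected into the camera frame with `apply_homography`, which is what makes markerless augmentation possible.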
Documents
Bibtex (lrde.bib)
@Article{rusinol.17.mtap,
  title    = {Augmented Songbook: an Augmented Reality Educational Application for Raising Music Awareness},
  author   = {Rusi{\~{n}}ol, Mar{\c{c}}al and Chazalon, Joseph and Diaz-Chito, Katerine},
  journal  = {Multimedia Tools and Applications},
  year     = {2018},
  volume   = {77},
  number   = {11},
  pages    = {13773--13798},
  month    = jun,
  abstract = {This paper presents the development of an Augmented Reality mobile application which aims at raising young children's awareness of abstract concepts of music. Such concepts include, for instance, musical notation and the concept of rhythm. Recent studies in Augmented Reality for education suggest that such technologies have multiple benefits for students, including younger ones. As mobile document image acquisition and processing gains maturity on mobile platforms, we explore how it is possible to build a markerless and real-time application to augment physical documents with didactic animations and interactive content. Given a standard image processing pipeline, we compare the performance of different local descriptors at two key stages of the process. Results suggest alternatives to the SIFT local descriptors, regarding both result quality and computational efficiency, for document model identification and perspective transform estimation. All experiments are performed on an original and public dataset we introduce here.},
  doi      = {10.1007/s11042-017-4991-4}
}