Augmented Songbook: an Augmented Reality Educational Application for Raising Music Awareness

From LRDE


Abstract

This paper presents the development of an Augmented Reality mobile application which aims at familiarizing young children with abstract concepts of music, such as musical notation or rhythm. Recent studies in Augmented Reality for education suggest that such technologies have multiple benefits for students, including younger ones. As mobile document image acquisition and processing gains maturity on mobile platforms, we explore how it is possible to build a markerless and real-time application to augment physical documents with didactic animations and interactive content. Given a standard image processing pipeline, we compare the performance of different local descriptors at two key stages of the process. Results suggest alternatives to the SIFT local descriptors, regarding both result quality and computational efficiency, for document model identification as well as perspective transform estimation. All experiments are performed on an original and public dataset we introduce here.
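The second pipeline stage named in the abstract, perspective transform estimation, amounts to fitting a homography between the reference document model and the live camera frame, using point correspondences obtained from local descriptor matching. The paper compares descriptors for this task; as a minimal illustration only (not the authors' implementation), the sketch below fits a homography from four exact correspondences with the classic Direct Linear Transform in NumPy:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 perspective transform (homography) mapping
    src points to dst points with the Direct Linear Transform (DLT).
    src, dst: (N, 2) arrays of corresponding points, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(A)
    # The homography is the null vector of A: last row of V^T in the SVD.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so that H[2, 2] == 1

# Demo with a known ground-truth homography applied to a unit square:
H_true = np.array([[1.2, 0.1, 5.0],
                   [0.05, 0.9, -3.0],
                   [0.001, 0.002, 1.0]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
pts = np.hstack([src, np.ones((4, 1))]) @ H_true.T  # homogeneous projection
dst = pts[:, :2] / pts[:, 2:]                       # back to Cartesian
H_est = estimate_homography(src, dst)               # recovers H_true
```

In a real markerless AR setting the correspondences come from noisy descriptor matches, so the fit is typically wrapped in a robust estimator such as RANSAC rather than run on four hand-picked points.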


Bibtex (lrde.bib)

@Article{	  rusinol.17.mtap,
  title		= {Augmented Songbook: an Augmented Reality Educational
		  Application for Raising Music Awareness},
  author	= {Rusi{\~{n}}ol, Mar{\c{c}}al and Chazalon, Joseph and
		  Diaz-Chito, Katerine},
  journal	= {Multimedia Tools and Applications},
  year		= {2017},
  month		= {Jul},
  abstract	= {This paper presents the development of an Augmented
		  Reality mobile application which aims at familiarizing
		  young children with abstract concepts of music, such as
		  musical notation or rhythm. Recent studies in Augmented
		  Reality for education suggest that such technologies have
		  multiple benefits for students, including younger ones. As
		  mobile document image acquisition and processing gains
		  maturity on mobile platforms, we explore how it is
		  possible to build a markerless and real-time application
		  to augment physical documents with didactic animations
		  and interactive content. Given a standard image processing
		  pipeline, we compare the performance of different local
		  descriptors at two key stages of the process. Results
		  suggest alternatives to the SIFT local descriptors,
		  regarding both result quality and computational
		  efficiency, for document model identification as well as
		  perspective transform estimation. All experiments are
		  performed on an original and public dataset we introduce
		  here.},
  day		= {12},
  doi		= {10.1007/s11042-017-4991-4}
}