Feature Selection and Composition using PyOracle

A system is described which uses the Audio Oracle algorithm for music analysis and machine improvisation. Some improvements on previous Factor Oracle-based systems are presented, including automatic model calibration based on measures from Music Information Dynamics, facilities for compositional structuring and automation, and an audio-based query mode which uses the input signal to influence the output of the generative system.
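The Factor Oracle construction that underlies such systems can be sketched briefly. The following is an illustrative Python implementation of factor-oracle building and random-walk generation over a symbol sequence; it is not PyOracle's actual API, and the function names and the continuation probability `p` are assumptions for the sketch.

```python
import random

def build_oracle(seq):
    """Build a factor oracle: forward transitions plus suffix links."""
    n = len(seq)
    trans = [dict() for _ in range(n + 1)]  # state -> {symbol: next state}
    sfx = [-1] + [0] * n                    # suffix link for each state
    for i in range(1, n + 1):
        c = seq[i - 1]
        trans[i - 1][c] = i                 # primary forward transition
        k = sfx[i - 1]
        while k > -1 and c not in trans[k]:
            trans[k][c] = i                 # extra forward transitions
            k = sfx[k]
        sfx[i] = 0 if k == -1 else trans[k][c]
    return trans, sfx

def improvise(seq, trans, sfx, length, p=0.8, seed=0):
    """Random walk: continue forward with probability p, else recombine
    at an earlier context by following the suffix link."""
    rng = random.Random(seed)
    out, i = [], 0
    for _ in range(length):
        if i < len(seq) and rng.random() < p:
            out.append(seq[i])              # replay the next original symbol
            i += 1
        else:
            j = max(sfx[i], 0)              # jump to a shared earlier context
            c = rng.choice(sorted(trans[j]))
            out.append(c)
            i = trans[j][c]
    return out
```

With `p` close to 1 the walk replays the original sequence; lower values recombine fragments more freely, which is the basic trade-off such improvisation systems expose.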

Singing Voice Database: The role of the singing acoustic cues in the perception of broad affect dimensions

From the Proceedings of the 10th International Symposium on Computer Music Multidisciplinary Research, Marseille, France, October 15-18, 2013: an experiment is presented that investigates the role of acoustic correlates of the singing voice in the perception of broad affect dimensions, using the two-dimensional model of affect.

DistributeDJ: a mobile group music making toolkit

Technology-enhanced group music making can offer intriguing insights into musical interaction, suggest new ideas about performance, and more easily give non-expert musicians the opportunity to participate. Collaborative improvisations often have a looser structure than traditionally composed, score-based music, but in most cases there are still structural frameworks to be followed. DistributeDJ, a group music service, is proposed to aid the creation and execution of flexible collaborative mobile music performances and to attract broader audience participation. Designed to run through an Internet client, the system provides a communication layer between a central performance server and multiple clients, which are used to actively manipulate the contents of the delivered audio through each participant's mobile interface.
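One plausible shape for such a server–client communication layer is a small JSON message schema that clients use to report parameter changes. The sketch below is purely illustrative: the `ControlMessage` fields and helper names are hypothetical, not taken from the paper.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical control message a mobile client might send to the
# performance server; field names are illustrative, not DistributeDJ's.
@dataclass
class ControlMessage:
    client_id: str       # which participant's device sent this
    parameter: str       # e.g. "filter_cutoff" or "loop_index"
    value: float         # normalized parameter value in [0, 1]
    timestamp_ms: int    # client-side time for ordering/latency handling

def encode(msg: ControlMessage) -> str:
    """Serialize a control message to a JSON string for the wire."""
    return json.dumps(asdict(msg))

def decode(raw: str) -> ControlMessage:
    """Reconstruct a control message from its JSON representation."""
    return ControlMessage(**json.loads(raw))
```

JSON keeps the protocol transport-agnostic, so the same messages could travel over WebSockets, HTTP, or OSC-over-UDP wrappers depending on how the client is deployed.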

Expressions: Inter-Professional Culture via Coactive Digital Humanities Platform

This paper explores an integrated inter-professional collaborative model and technology for a breakthrough digital humanities platform called "Expressions." The platform was co-created by the Center for Research in Entertainment & Learning (CREL) at Calit2, UCSD, in partnership with the Visual Exchange Network (VEN), a hybrid research and production group for upstream multi-venue media.

Audio Oracle analysis of Musical Information Rate

This paper presents a method for analyzing changes in information content in music based on an audio representation called Audio Oracle (AO). Using compression properties of AO, we estimate the amount of information that passes between the past and the present at every instant in a musical signal. This formulation extends the notion of Information Rate (IR) to individual sequences and allows an optimal estimation of the AO threshold parameter. We show that changes in IR correspond to significant musical structures, such as sections in a sonata form. Relations to musical perception and applications to composition and improvisation are also discussed.
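The core IR idea, that the information the past carries about the present equals the coding gain from conditioning on the past, can be illustrated with an ordinary compressor standing in for the AO-based coder. The sketch below uses `zlib` as a generic stand-in, so it is not the paper's estimator, only an illustration of the quantity being measured.

```python
import zlib

def code_len(data: bytes) -> int:
    """Coding length in bytes under a generic compressor (AO stand-in)."""
    return len(zlib.compress(data, 9))

def information_rate(signal: bytes):
    """IR(t) ~ C(past) + C(present) - C(past + present):
    the coding bytes saved when the past helps predict the present symbol."""
    ir = []
    for t in range(1, len(signal) + 1):
        past, present = signal[:t - 1], signal[t - 1:t]
        ir.append(code_len(past) + code_len(present) - code_len(past + present))
    return ir
```

Plotting such a curve over a piece and looking for sustained drops or rises is one way to visualize the kind of sectional structure the paper reports; the AO-based estimator replaces the generic compressor with a model adapted to the audio itself.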