Session 9A: Computational Music Analysis

Friday 19 September/Saturday 20 September
Session Convenor: David Meredith

Friday 19 September
10.30     David Meredith
Introduction


Subsession 1
Chair: David Meredith                                                     

10.45     Tom Collins
Inter-Opus Analyses of Beethoven’s Piano Sonatas

11.15     Tillman Weyde
Melodic Prediction and Polyphonic Structure Analysis

11.45     coffee break                                          


Subsession 2
Chair: Tillman Weyde                              

12.15     Christina Anagnostopoulou
Computational Music Analysis of Children’s Keyboard Improvisations

12.45     Teppo Ahonen/Janne Lahti/Kjell Lemström/Simo Linkola
Intelligent Digital Music Score Book: CATNIP

13.15     lunch break

                                               
Subsession 3
Chair: Kjell Lemström                 

14.45     Agustín Martorell
Systematic Set-Class Surface Analysis: A Hierarchical Multi-Scale Approach

15.15     David Meredith
Music Analysis and Point-Set Compression

15.45     coffee break



Subsession 4
Chair: Agustín Martorell

16.15     Gissel Velarde/David Meredith
Melodic Pattern Discovery by Structural Analysis via Wavelets and Clustering Techniques

16.45     Matevž Pesek/Aleš Leonardis/Matija Marolt
Compositional Hierarchical Model for Pattern Discovery in Music

17.15     break

17.30     David Meredith
Open Discussion 1: Evaluating Music Analysis Algorithms

In recent years, many different algorithms have been proposed for generating a wide variety of structural descriptions (e.g., hierarchical analyses, analyses of harmony, tonality, counterpoint, thematic structure, etc.) from a range of musical ‘surfaces’ (MIDI, audio, MusicXML). When several algorithms exist for the same task, how can we best evaluate them? Should we be concerned with the generalizability of such algorithms – is it important whether the same algorithm can easily be adapted to several distinct tasks? Or should we be concerned only with finding specialized algorithms tailored to specific tasks? Should we be concerned only with the output of algorithms, or also with the method by which that output is produced? What counts as ‘ground truth’? How should we compare algorithm output with such ground truth? Clearly, the answers to these questions depend on the user’s or the algorithm designer’s motivation. If one is motivated by a desire for a better understanding of the psychological processes underlying music cognition, then one may be more interested in whether an algorithm can easily be applied to several different tasks that are traditionally considered to require musical ‘intelligence’. On the other hand, if one’s goal is to carry out a specific task as reliably as possible, then one may require an algorithm that performs even better than human experts.
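
As a purely illustrative aside, and not something drawn from any of the talks in this session, the sketch below shows one very simple way of scoring an algorithm’s output against a ground-truth annotation: exact-match precision, recall and F1 over discovered point-set patterns. The representation of patterns as sets of (onset, pitch) pairs and the all-or-nothing match criterion are assumptions made here for brevity; established evaluations (e.g., the MIREX pattern-discovery task) use more graded measures.

# Minimal sketch (illustrative only): scoring discovered patterns against a
# ground-truth annotation with exact-match precision, recall and F1.
# Patterns are assumed to be frozensets of (onset, pitch) pairs.

def precision_recall_f1(discovered, ground_truth):
    """Return (precision, recall, F1) under an exact-match criterion."""
    matched_d = sum(1 for p in discovered if p in ground_truth)
    matched_g = sum(1 for q in ground_truth if q in discovered)
    precision = matched_d / len(discovered) if discovered else 0.0
    recall = matched_g / len(ground_truth) if ground_truth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

if __name__ == "__main__":
    ground_truth = [frozenset({(0, 60), (1, 62), (2, 64)}),   # hypothetical annotated theme
                    frozenset({(4, 67), (5, 65)})]            # hypothetical annotated motif
    discovered = [frozenset({(0, 60), (1, 62), (2, 64)}),     # found: matches the theme
                  frozenset({(8, 72), (9, 74)})]              # found: spurious pattern
    print(precision_recall_f1(discovered, ground_truth))      # (0.5, 0.5, 0.5)

In practice, of course, deciding what should count as a match between an algorithm’s output and an annotation is itself one of the open questions raised above.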

18.30     close



Saturday 20 September

Subsession 5

Chair: David Meredith                             

10.30     Alex McLean/Victor Padilla/Alan Marsden/Kia Ng
Data for Music Analysis from Optical Music Recognition: Prospects for Improvement Using Multiple Sources

11.00     David Rizo
Interactive Music Analysis

11.30     coffee break


Subsession 6
Chair: Alex McLean
                                   
12.00     Anja Volk
Rhythmic Patterns as Constituents of the Ragtime Genre

12.30     Emilios Cambouropoulos/Maximos Kaliakatsos-Papakostas/Costas Tsougras
The General Chord Type Representation: An Algorithm for Root Finding and Chord Labelling in Diverse Harmonic Idioms

13.00     lunch break

                                               
Subsession 7
Chair: Emilios Cambouropoulos                            

14.30     Mathieu Giraud
Can a Computer Understand Musical Forms?

15.00     Alan Marsden
Do Performers Disambiguate Structure?

15.30     coffee break
                                          


Subsession 8
Chair: David Meredith                             

16.00     Keiji Hirata/Satoshi Tojo/Alan Marsden/Masatoshi Hamanaka
Music Analyzer that Can Handle Context Dependency

16.30     Alan Marsden
Open Discussion 2: Is Analysis a Matter of Discovery of Structure, Ascription of Structure, or Negotiation of Structure?

Underlying much computational music analysis is an assumption that analysis is a process of discovering the structures which are latent in a piece of music: the computer is useful because it is fast and accurate at discovery; the software is correct to the degree that the analyses it generates match the proper structures. Is this a safe assumption? Or is it safe only for certain kinds of analysis? Or is it unsafe but still a useful starting-point for computational analysis? The only utterly safe claim about analysis is that music analysts ascribe a structure to a piece of music. If I say a song has AABA structure, I am not necessarily reporting something definite about the piece. You might say that it has the form of a pear. While fictional in a sense, these ascriptions are not necessarily fantasy: we have reasons for our claims, and we might find each other’s ascriptions informative. What would software which ascribed structure to music be like? Would it be useful? Perhaps analysis is best regarded as a product of negotiation, either between analysts or between the analyst and the score, the sound or the musical experience: an analyst proposes a structure and ‘tries it out’ on the music. What should be the objectives of computational analysis if analysis is negotiation? Would a computational approach have wider acceptance if the software were a partner or mediator in negotiating an analysis rather than an oracle? What would analytical software in this paradigm be like?

17.30     close