Abstract for Abrams, Gwilliams and Marantz poster

While there is general consensus that fundamental frequency, spectral content, and musical context contribute to pitch perception, it is currently unclear which aspects of perceived musical pitch are neurally encoded during early auditory processing. To investigate this, we recorded brain responses to two types of tones: i) pure tones consisting of the fundamental frequency only (F0); ii) complex tones of five partials (integer multiples of, but not including, F0). Participants listened to musical tone sequences ranging from 220 to 624 Hz (the notes of the A, C, and Eb major scales) while magnetoencephalography (MEG) was recorded. Although the two tone-types have non-overlapping spectral content, they are perceived as the same pitch, creating an orthogonal relationship between the sensory input and the perceptual output. Multivariate analyses were used to decode frequency and tone-type from the activity across MEG sensors. High decoding accuracy across time would suggest that these features are indeed encoded in the spatial pattern of neural responses to musical pitch. We found that a classifier trained at ~50 ms after tone onset could accurately decode whether a listener was presented with a pure or a complex tone, based on the spatial pattern of activity. At 100 ms, we could decode F0 from both tone-types, even for complex tones in which F0 was absent. Further, a classifier trained on pure tones accurately predicted the frequency of the complex tones, suggesting that the missing fundamental is restored at this latency. From 200-300 ms, tone-type decoding accuracy increased, and the F0 spatial pattern no longer generalised from one tone-type to the other. In sum, separable response components appear to track the spectral content of musical tones as well as the present, or restored, fundamental frequency. Overall, this suggests that central aspects of musical pitch perception are indeed encoded in early auditory neural responses.
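The time-resolved decoding approach described above can be sketched as follows. This is a minimal illustration on synthetic data shaped like MEG epochs (trials x sensors x timepoints), not the authors' actual pipeline: the classifier choice, dimensions, effect size, and variable names are all assumptions for demonstration. A separate classifier is trained and cross-validated at each timepoint on the spatial pattern across sensors.

```python
# Hypothetical sketch of time-resolved MEG decoding (assumed pipeline,
# not the authors' actual analysis). Synthetic data stand in for real epochs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 64, 50  # illustrative dimensions

# Simulate epochs in which tone-type (0 = pure, 1 = complex) drives a
# fixed spatial pattern across sensors from timepoint 10 onward.
tone_type = rng.integers(0, 2, n_trials)
X = rng.normal(size=(n_trials, n_sensors, n_times))
pattern = rng.normal(size=n_sensors)
X[:, :, 10:] += 0.5 * tone_type[:, None, None] * pattern[:, None]

# Train and cross-validate a separate classifier at each timepoint,
# yielding a decoding-accuracy time course.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = np.array([
    cross_val_score(clf, X[:, :, t], tone_type, cv=5).mean()
    for t in range(n_times)
])

print("accuracy before effect onset: %.2f" % scores[:10].mean())  # near chance (0.5)
print("accuracy after effect onset:  %.2f" % scores[10:].mean())  # well above chance
```

The cross-tone-type generalisation reported in the abstract would follow the same logic, except the frequency classifier is fit on pure-tone trials only and then scored on complex-tone trials at each timepoint.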