In two previous blog posts we discussed a mixed picture of findings on the relationship between audio quality and real-world usage/popularity of audio files on the website Freesound. In one of our Web experiments, Audiobattle, we found that the number of downloads for recordings of birdsong predicted independent ratings of quality reasonably well. In a follow-up experiment, however, we found that this effect did not generalise well to other categories of sound – there was almost no relationship between quality ratings and the number of plays or downloads for recordings of thunderstorms or church bells, for example.
Audio quality research often involves manipulating a known facet of a recording (such as distortion level, bit rate, and so on) and seeing what effect it has on people's ratings of quality. Unfortunately, however, the simple act of requesting a rating of quality can change the way people would normally listen to the recording. Recently we've been considering alternative ways of approaching this problem.
If, for instance, we could find another measure that predicted quality reasonably well, we might not have to ask directly for people's ratings. And if this implicit measure of quality could be found quickly and freely, in data that already exists, we might have any number of new and exciting avenues to pursue.
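To make the idea concrete, here is a minimal sketch (with invented numbers, not our actual data) of how one might check whether an implicit measure such as download counts tracks explicit quality ratings, using Spearman rank correlation:

```python
# Hypothetical sketch: does an implicit popularity measure (download counts)
# track explicit quality ratings? Spearman rank correlation is a natural
# choice because downloads are heavily skewed, so we only care whether the
# *ordering* of recordings agrees, not the raw values.

def ranks(values):
    """Assign 1-based average ranks to values, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented example: downloads vs. mean quality rating for five recordings.
downloads = [1200, 300, 4500, 800, 150]
ratings = [3.8, 2.1, 4.6, 3.2, 2.4]
rho = spearman(downloads, ratings)  # close to 1 would suggest downloads
                                    # could stand in for quality ratings
```

A strong, consistent correlation across sound categories would be the evidence needed before treating downloads as a free proxy for quality – and, as the experiments above showed, that consistency is exactly what is in doubt.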
We are interested in how people perceive the quality of user-generated content, and to help us understand this better we are currently carrying out an experiment comparing YouTube clips of Glastonbury. If you would like to take part, please click here. It's quite interesting how different devices and positions in the audience can make such a big difference to the sound.
Also, from a sound engineering perspective, providing good-quality sound to the whole audience is a very difficult task: you need to be part engineer, part meteorologist, as the weather can have such a huge effect on the sound. Read Prof. Cox's blog for more information.