You too choose two YouTubes…

In two previous blog posts we discussed a mixed picture of findings on the relationship between audio quality and the real-world usage and popularity of audio files on the website Freesound. In one of our web experiments, Audiobattle, we found that the number of downloads for recordings of birdsong predicted independent ratings of quality reasonably well. In a follow-up experiment, however, we found that this effect did not generalise well to other categories of sound: there was almost no relationship between quality ratings and the number of plays or downloads for recordings of thunderstorms or church bells, for example.

For our next web experiment, Qualitube, we reasoned that people might find it easier to compare samples if they were recordings of the same event. Continue reading

Following the he(a)rd: How much should we trust the crowd when it comes to quality in audio?

Audio quality research often involves manipulating a known facet of a recording (such as distortion level, bit rate, and so on) and seeing what effect it has on people’s ratings of quality. Unfortunately, however, the simple act of requesting a rating of quality can change the way people would normally listen to the recording. Recently we’ve been considering alternative ways of approaching this problem.

If, for instance, we could find another measure that predicted quality reasonably well, we might not have to ask directly for people’s ratings. And if this implicit measure of quality could be found quickly and freely, in data that already exists, we might have any number of new and exciting avenues to pursue.

Continue reading