In an earlier blog post we described a Web experiment in which we asked participants to compare and rate sounds tagged with “bird song” (or “birdsong”) on Freesound.org. We then compared the quality ratings we obtained with the Freesound metadata for each sample (such as average rating, number of downloads, etc.). We found that 33% of the variance in quality ratings could be explained by the number of downloads per day of each sound. An interesting finding: it hinted at a rough-and-ready method for quickly sorting sets of audio into good and poor quality.
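The “33% of variance explained” figure corresponds to the R² statistic of a simple regression of quality ratings on downloads per day. As a minimal sketch of how such a figure is computed, here is an ordinary least-squares R² in plain Python; the data below is entirely invented for illustration, not the experiment's actual results.

```python
# Hypothetical sketch: R^2 of a simple linear regression of mean
# quality rating on downloads per day. All data here is invented.

def r_squared(x, y):
    """R^2 of an ordinary least-squares fit y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx            # slope
    a = my - b * mx          # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Invented example: downloads/day vs. mean listener quality rating (1-5)
downloads_per_day = [0.2, 0.5, 1.1, 2.3, 3.0, 4.8, 6.5]
quality_rating = [2.1, 2.4, 3.0, 2.8, 3.6, 3.9, 4.2]

print(r_squared(downloads_per_day, quality_rating))
```

An R² of 0.33 would mean that a third of the spread in quality ratings tracks the download rate, leaving the remaining two thirds to other factors.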
In a recent blog post I discussed the possibility of using a proxy measure for quality. Rather than asking about quality directly, could we find another metric in existing (ideally free and accessible) data about user behaviour that we could rely on to predict quality?
Audio quality research often involves manipulating a known facet of a recording (such as distortion level, bit rate, and so on) and measuring its effect on people’s ratings of quality. Unfortunately, however, the simple act of requesting a quality rating can change the way people would normally listen to the recording. Recently we’ve been considering alternative ways of approaching this problem.
If, for instance, we could find another measure that predicted quality reasonably well, we might not have to ask for people’s ratings directly. And if this implicit measure of quality could be obtained quickly and freely, from data that already exists, we might have any number of new and exciting avenues to pursue.