In two previous blog posts we discussed a mixed picture of findings for the relationship between audio quality and real-world usage/popularity of audio files on the website Freesound. In one of our Web experiments, Audiobattle, we found that the number of downloads for recordings of birdsong predicted independent ratings of quality reasonably well. In a follow-up experiment, however, we found that this effect did not generalise well to other categories of sound: there was almost no relationship between quality ratings and the number of plays or downloads for recordings of thunderstorms or church bells, for example.
In an earlier blog we wrote about a Web experiment where we asked participants to compare and rate sounds tagged with “bird song” (or “birdsong”) on Freesound.org. We then compared the quality ratings we had obtained with the Freesound metadata for each sample (such as average rating, number of downloads, etc.). We found that 33% of the variance in quality ratings could be explained by the number of downloads per day of the sounds. This was an interesting finding: it hinted at a rough-and-ready method for quickly sorting sets of audio into good and poor audio quality.
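For anyone curious how “variance explained” is computed in a simple two-variable case like this, here is a minimal sketch in Python. The numbers are made up purely for illustration (they are not the study’s data), and the variable names are hypothetical; the point is just that R² is the squared Pearson correlation between the proxy metric (downloads per day) and the quality ratings.

```python
import numpy as np

# Hypothetical, illustrative values only -- not the real Freesound data.
# downloads_per_day: a sound's download count normalised by days online.
# quality_ratings: mean quality scores gathered in the Web experiment.
downloads_per_day = np.array([0.5, 1.2, 3.4, 0.8, 2.1, 4.0, 1.7, 0.3])
quality_ratings = np.array([2.1, 2.8, 4.2, 2.5, 3.6, 4.5, 3.1, 1.9])

# Pearson correlation between the proxy metric and rated quality.
r = np.corrcoef(downloads_per_day, quality_ratings)[0, 1]

# R^2 is the proportion of variance in quality ratings explained by
# the proxy; a value around 0.33 would match the 33% reported above.
print(f"r = {r:.2f}, variance explained (R^2) = {r**2:.2%}")
```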
In a recent blog post I discussed the possibility of using a proxy measure for quality. Rather than asking about quality directly, could we find another metric, drawn from existing (ideally free and accessible) data about user behaviour, that we could rely on to predict quality?