You too choose two YouTubes…

In two previous blog posts we discussed a mixed picture of findings for the relationship between audio quality and real-world usage/popularity of audio files on the website Freesound. In one of our Web experiments, Audiobattle, we found that the number of downloads for recordings of birdsong predicted independent ratings of quality reasonably well. In a follow-up experiment, however, we found that this effect did not generalise well to other categories of sound: there was almost no relationship between quality ratings and the number of plays or downloads for recordings of thunderstorms or church bells, for example.
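The kind of relationship described above can be checked with a rank correlation between download counts and mean quality ratings. A minimal sketch, using purely illustrative numbers (not data from the Audiobattle study) and a hand-rolled Spearman's rho so no external libraries are needed:

```python
# Hypothetical sketch: rank-correlate download counts with quality ratings.
# The sample numbers below are made up for illustration only.

def rank(values):
    """1-based average ranks; tied values share their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: the Pearson correlation of the two rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

downloads = [532, 48, 210, 870, 15, 129]   # plays/downloads per recording
ratings   = [3.5, 2.3, 4.1, 4.6, 1.9, 3.0] # mean quality rating per recording

print(round(spearman(downloads, ratings), 3))  # → 0.943
```

A rank correlation is a natural fit here because download counts are heavily skewed (a few recordings dominate), so only the ordering, not the raw magnitudes, should drive the comparison.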

For our next Web test, Qualitube, we reasoned that people might find it easier to compare samples if they were recordings of the same event. Continue reading

Institute of Acoustics: Sound Recording Techniques – Our presentation.

Today (Wed 26th March, 2014) Trevor is presenting some of our recent findings on the effect of distortion on perceived quality in music, as part of the Institute of Acoustics’ Sound Recording Techniques event.

Our talk is titled “How distortion affects the perceived quality of music: Psychoacoustic experiments”, and the slides can be found here (PowerPoint slides in .pdf format).

Quality from quantities – how the wisdom of crowds might help to predict audio quality.

In a recent blog post I discussed the possibility of using a proxy measure for quality. Rather than asking about quality directly, could we find another metric, in existing (and ideally free and accessible) data about user behaviour, that we could rely on to predict quality?

We decided to investigate the idea empirically using samples and data from Freesound.org. Continue reading

Following the he(a)rd: How much should we trust the crowd when it comes to quality in audio?

Audio quality research often involves manipulating a known facet of a recording (such as distortion level, bit rate, and so on) and measuring what effect this has on people’s ratings of quality. Unfortunately, however, the simple act of requesting a rating of quality can change the way people would normally listen to the recording. Recently we’ve been considering alternative ways of approaching this problem.

If, for instance, we could find another measure that predicted quality reasonably well, we might not have to ask for people’s ratings directly. And if this implicit measure of quality could be found quickly and freely, in data that already exists, we might have any number of new and exciting avenues to pursue.

Continue reading

What you told us about recording audio with mobile phones (and what your phone says about you…).

Early in the project we put a survey on the web to ask questions about where and how people make audio recordings, and what they make recordings of. We also wanted to know what issues people reported as having the biggest impact on audio quality in their recordings (you can still take part in the survey by clicking here; it only takes a couple of minutes). Three months on, over 150 people have taken part and we have begun to analyse the data. One of many interesting trends to emerge is a series of differences between iPhones and other brands of mobile phone. Continue reading