Level-matched listening and whether you can adjust it by ear.

crimsonhawk47

New member
Whenever I use effects or replace a sample, I compare the before and after to the original, as you should.

24/192 Music Downloads are Very Silly Indeed

This article talked a lot about misconceptions in audio. You can find Loudness Tricks on the left-hand sidebar.

In that section, he says that we consciously notice a 1 dB volume difference and subconsciously notice 0.2 dB. He says that true level-matched listening isn't possible just by adjusting the volumes of two sounds until they sound the same...

...But that seems like BS. Is that to say all the engineers using level-matched listening are really just doing a useless test?
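For a sense of scale, the dB figures quoted above correspond to tiny amplitude ratios. A minimal Python sketch (the helper name is just for illustration, not from the article):

```python
import math

def db_difference(rms_a, rms_b):
    """Level difference in dB between two signals, given their RMS values."""
    return 20 * math.log10(rms_a / rms_b)

# The amplitude ratio corresponding to a 0.2 dB step:
ratio = 10 ** (0.2 / 20)
print(round(ratio, 4))                    # 1.0233 -- about a 2.3% change in amplitude
print(round(db_difference(2.0, 1.0), 2))  # 6.02 -- doubling amplitude is ~6 dB
```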

I command discussion.
 
Well, we'll set your "commandment" aside for the time being. Just don't get ta feelin' like y'all all that...

Of course, as almost always, the answer is "it depends" - on a lot of factors: the source material itself, what you're listening on, how you've level-matched, and the effects of psychoacoustics/"apparent volume", among others.

Can't comment on the article yet; it does sound like a fascinating topic and I'll eventually get to it and report back. Hopefully BC and some of our other experienced producers and mods will see this and weigh in after reading as well.

GJ
 
I am in the midst of rereading Bob Katz's Mastering Audio (2nd edition), so I will reference things that are fresh in my mind from this book as we pass through.

The discussion on golden ears makes perfect sense - any exceptional individual in any field of endeavour is a combination of training, practice and some natural ability/predisposition within that field

About a third of the way down the page, I found what I consider to be a disingenuous statement: that in playing back 24/192 files we will be playing back the whole spectrum (20 Hz to 95 kHz, more or less) that such a file can represent.

This is disingenuous because

- we do not have transducers that can play back anything much above 25 kHz within the typical home/professional listening arena
- any playback hardware/software would ordinarily incorporate a low-pass filter set to the upper boundary of human hearing (i.e. the LPF would have a corner frequency of ca. 22 kHz anyway)
- the tests provided are also disingenuous, as they include shifting entire tracks up by 24 kHz (above the LPF corner frequency of typical home playback systems), which has the potential to introduce distortions into the track before playback, in frequency ranges we cannot perceive anyway

Whilst it is true that a sampling rate of 192 kHz is intended to allow frequencies up to ca. 96 kHz to be sampled, there is usually no appreciable content above 20 kHz (there may be some higher-order harmonics present in some sounds, but they would be negligible in terms of meaningfulness at playback). Most of the equipment in a domestic playback chain is not capable of reproducing the frequencies supposedly captured at 192 kHz and would treat any such content as invisible (rather than fold it down against some non-existent lower Nyquist frequency), especially if the LPF is doing its job at 22 kHz.
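The Nyquist and fold-down behaviour mentioned above is easy to sketch. Assuming an idealised sampler with no anti-alias filter in front of it (the helper names are mine, for illustration):

```python
def nyquist(sample_rate_hz):
    """Highest frequency representable at a given sample rate."""
    return sample_rate_hz / 2

def alias_of(freq_hz, sample_rate_hz):
    """Frequency an unfiltered input tone folds down to after sampling."""
    f = freq_hz % sample_rate_hz
    return f if f <= sample_rate_hz / 2 else sample_rate_hz - f

print(nyquist(192_000))          # 96000.0 -- far above anything audible
print(alias_of(30_000, 44_100))  # 14100 -- a 30 kHz tone folds into the audible band
```

This is exactly why the LPF matters: at 44.1 kHz, ultrasonic content that isn't filtered out before sampling reappears as an audible alias, whereas properly filtered content simply never makes it into the file.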

The other arguments about bit depth are relevant: at playback, after finalisation of the project, anything more than 16 bits is wasted.
- in most cases the AD/DA section of your interface is only going to give you 20 bits or less of real resolution, so it will effectively be reducing your bit depth
- Bob Katz (amongst many other engineers) recommends that for production (anything prior to finalisation) using 24 bits is imperative, to ensure that no rounding errors are introduced
- he also recommends that you dither only once, when committing the final finished project after mastering
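The roughly 6 dB-per-bit rule underlying the 16-vs-24-bit argument can be checked directly; a quick sketch:

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of linear PCM: 20*log10(2^bits), ~6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3 -- the 96 dB figure quoted for CD audio
print(round(dynamic_range_db(24), 1))  # 144.5 -- why 24 bits gives headroom in production
```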

On dynamic range:
- Katz also points out that 16-bit audio can have a much greater perceptible range than the theoretical range of 96 dB, pretty much showing the same response as this article
- most DAWs, whilst nominally at a 24-bit project bit depth, effectively use double precision to do their processing (48 bits is typical inside Pro Tools) but continually convert between 24 bits and 48 bits at the interface to plug-ins - i.e. source (48 bits) -> plug-in (down-convert to 24 bits on input, up-convert to 48 bits at output) -> next plug-in, and so it goes
- the above is not to be confused with using a 32-bit floating-point file format; different beast, different purpose
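As a toy illustration of why fixed-point quantization at generous bit depths stays benign (this is a simplified model, not Pro Tools' actual internal arithmetic):

```python
def quantize(x, bits):
    """Round a sample in [-1, 1) to the nearest step of a signed fixed-point grid."""
    step = 2.0 ** -(bits - 1)
    return round(x / step) * step

x = 0.3333333
# At 24 bits the error is at most half a least-significant bit (~6e-8):
print(abs(quantize(x, 24) - x) <= 2 ** -24)                # True
# Re-quantizing to the same grid changes nothing further:
print(quantize(quantize(x, 48), 48) == quantize(x, 48))    # True
```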

The paper from the Boston Audio Society is a reasonable attack on the assertions made by high-def audio pundits and only confirms that during playback of the completed project we do not need more than 16/44.1

The second paper is more controversial, in my opinion, as the target of the investigation was to determine whether ultrasonics could be detected - i.e. they were not applying an LPF to the signal before playback. What they found was that the intermodulation distortion from trying to reproduce the higher-order odd harmonics (11th, 13th, 15th, 17th and 19th of a 2 kHz tone, i.e. 22 kHz through 38 kHz) on a single speaker introduced perceptible even harmonics (2nd, 4th, 6th, 8th, 10th, 12th, 14th and 16th) between each odd harmonic. That made it possible to distinguish between stimuli presented on a single speaker, versus the imperceptibility of the same stimuli presented through an independent speaker for each of the target harmonics. In other words, unless non-linearities were introduced during reproduction, the signals could not be detected reliably by the test subjects.

For me, the bigger problem is that with the small number of participants involved, there is no validity to any statistical analysis applied to the results. The 79.4% correct-response criterion is chosen as the point where the experimenters believe that the responses are due to more than chance (guessing).
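The harmonic arithmetic in that description can be sketched quickly. This only computes first-order difference tones between the stimulus harmonics; the paper reports more intermodulation products than this toy calculation shows:

```python
import itertools

fundamental = 2_000  # Hz, the tone used in the paper described above

# The ultrasonic odd harmonics used as stimuli (11th through 19th):
odd_harmonics = [n * fundamental for n in (11, 13, 15, 17, 19)]
print(odd_harmonics)  # [22000, 26000, 30000, 34000, 38000]

# Non-linearity in a single driver produces difference tones between pairs
# of these harmonics; they land on even-harmonic frequencies, all of which
# sit squarely inside the audible band:
diffs = sorted({b - a for a, b in itertools.combinations(odd_harmonics, 2)})
print(diffs)  # [4000, 8000, 12000, 16000]
```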

And finally we get to your nugget above - discrimination at 0.1 dB or 0.2 dB to avoid inconsistencies in loudness:
- the numbers quoted are not for general production but for the calibration of equipment used for testing within an experimental research centre (or for mastering within a production facility).

That is to say, they are not mentioned to challenge the mix engineer, but to remind us of the different purposes equipment may be used for, and the degree to which equipment used for experimental presentation should be calibrated to ensure consistent presentation of stimuli:
- i.e. the article is not about how to balance mixes, but about how we compare, and therefore how we perceive differences when making comparisons
- if we compare apples to oranges (or worse, apples to cheese) then we will not get the results we would expect from comparing apples to other apples
 
Well, that is such a waterfall of knowledge I can't comprehend that I'm just going to keep using 44.1/24 bits, then dither down at mastering like I always do.

But I wasn't talking about balancing mixes, I was talking about comparisons too. Like EQing something to the point that there's a noticeable dip in volume, adjusting the output, and bypassing it repeatedly so the levels are the same and I can make a better assessment of the before and after. Are you saying this 0.2 dB claim is only for very specific tests with very specific machines?
 
I command discussion.
I haven't read the link, but on a mastering (and mixing) front, have level matched by ear 100% of the time for a looooong time.
Any meters won't give you the full story, but with good monitoring, good critical listening skills and attention paid to detail, your ears will. gl
 
@crimsonhawk47: no, I am saying that 0.2 dB generally has no place outside of the mastering room or the test room, as you cannot reliably hear such small shifts in intensity without a significant amount of training and exposure to material treated in that way.

If you were to apply it to an EQ situation and attempt to balance audio-in vs audio-out to less than a 0.2 dB difference, you would definitely still hear the change in frequency response, because the balancing would not affect what the EQ has been used to alter.
 
I'm pretty sure OP is talking about gain staging, bandcoach. How to properly determine if the output signal is matching the input signal through the plugin, I'm assuming? He's wondering if you should do it by ear or by meter.

I'm actually curious about this too. I used to do it by meter, but then a teacher at school told me to do it by ear.

So now I just do all my EQing (always subtractive), then boost it with makeup gain back to what SOUNDS like (by ear) the same volume as the signal coming in... it's just cleaner.
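For what it's worth, the by-meter version of that workflow amounts to simple RMS matching. A toy sketch (not any particular plugin's makeup-gain algorithm):

```python
import math

def rms(samples):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def makeup_gain(dry, wet):
    """Linear gain that returns the processed signal to the dry RMS level."""
    return rms(dry) / rms(wet)

dry = [0.5, -0.5, 0.5, -0.5]      # signal before the EQ
wet = [0.25, -0.25, 0.25, -0.25]  # same signal after a subtractive cut
g = makeup_gain(dry, wet)
matched = [s * g for s in wet]
print(g)             # 2.0
print(rms(matched))  # 0.5, back to the dry level
```

The catch, and arguably the thread's whole point: an RMS meter weights all frequencies equally, while our ears do not, so two signals matched this way can still differ in perceived loudness.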
 

Nope - the question relates to a specific part of a well-written online attack on using a 192 kHz sampling rate with a bit depth of 24 bits.

Read the article 24/192 Music Downloads are Very Silly Indeed, then find the papers referenced in it (if you have an AES subscription), and you will begin to understand the issues that were raised and how the OP has, perhaps, taken it/them out of context.
 