Clean Trick 4 your Mix

So when sites (most notably soundonsound) use terms like "frequency masking", is that just a fake concept to get people really thinking about something else?



I believe that the benefits from the concept of frequency masking are better applied to lossy audio compression (i.e., MP3) than audio production.



One of the ways that MP3 cuts down on file size is by combining adjacent frequencies which might "mask" each other,
specifically in areas where the algorithm determines the change will be least audible.
A slightly simplified example:
The codec might decide that 4073Hz, 4074Hz, and 4075Hz are similar enough to be indistinguishable and round them all off to 4074Hz.
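A toy numpy sketch of that idea (the 100-bin band and the 40 dB cutoff here are made-up numbers, not taken from any real codec's psychoacoustic model): a quiet tone sitting close to a much louder one is judged masked and thrown away.

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs  # one second

# A loud tone at 4074 Hz plus a much quieter neighbor at 4100 Hz
loud = np.sin(2 * np.pi * 4074 * t)
quiet = 0.001 * np.sin(2 * np.pi * 4100 * t)  # about 60 dB down
spectrum = np.fft.rfft(loud + quiet)  # 1 Hz per bin at this length
mags_db = 20 * np.log10(np.abs(spectrum) + 1e-12)

# Crude stand-in for a psychoacoustic model: inside a narrow band around
# the strongest component, discard anything more than 40 dB below it.
peak_bin = int(np.argmax(mags_db))
band = slice(max(peak_bin - 50, 0), peak_bin + 50)
kill = np.zeros(len(spectrum), dtype=bool)
kill[band] = mags_db[band] < mags_db[peak_bin] - 40

masked = spectrum.copy()
masked[kill] = 0  # the quiet neighbor at 4100 Hz is among the casualties
print(f"bins discarded near the peak: {kill.sum()}")
```

Real codecs work per critical band with far more careful thresholds, but the principle is the same: energy the model predicts you can't hear doesn't get spent bits.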



A lot of the detail to differentiate one signal from another tends to come from transient information, especially the higher frequencies.
This, surprisingly, holds true even for many bass instruments.
Hence, "masking" isn't as big a problem in the fundamental frequency ranges as many new mixers assume.
In fact, many new mixers will cut an instrument at its "fundamental" frequency in an attempt to prevent it from "masking" another instrument,
without realizing that they're increasing the level of the detail relative to the level of the fundamental.
Once they re-balance the faders, they end up with an audio track which does the opposite of what they intended it to do.
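A quick numerical sketch of that trap (the signal choices are purely illustrative): cut the fundamental, bring the fader back up to the old overall level, and the detail ends up louder relative to the fundamental than it was before.

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs  # one second of audio

# Hypothetical instrument: a strong fundamental plus quieter high "detail"
fundamental = np.sin(2 * np.pi * 100 * t)
detail = 0.2 * np.sin(2 * np.pi * 3000 * t)
track = fundamental + detail

def rms(x):
    return np.sqrt(np.mean(x ** 2))

# Step 1: "cut the fundamental" by 12 dB to fight imagined masking
cut_gain = 10 ** (-12 / 20)
cut_track = cut_gain * fundamental + detail

# Step 2: re-balance the fader so the track sits at its old overall level
fader = rms(track) / rms(cut_track)

before = rms(detail) / rms(fundamental)
after = rms(detail * fader) / rms(cut_gain * fundamental * fader)
print(f"detail-to-fundamental ratio: {before:.2f} before, {after:.2f} after")
```

The fader gain cancels out of the ratio, so the detail-to-fundamental balance shifts by exactly the 12 dB that was cut.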



It seems to me that you might be over-thinking the technical aspect of mixing.
If you'd just watch some lessons from some great mixing engineers and see how and why they do the things they do, I think it might relax you a bit.
I've been watching lessons for a very long time now, and continue to do so every day.
Even if I run into a poor lesson where I don't learn anything (or see some incorrect info),
I always walk away having experienced another person's perspective on the mixing process.
It was a big turning point for me back when I started to do this.



-Ki
Salem Beats
 
the problem is that the people writing it do not understand that they have taken a psycho-acoustics term and tried to apply it to the real physical world; they are not meant to interact in that way.

The term in perceptual-psych argot is about why some sounds may be overshadowed by others: specifically, lower frequencies hiding higher frequencies once certain intensity levels are reached. The practical upshot, however, is that we never live in the idealised world of such studies and experiments, so the outcomes are never as clear-cut, and certainly not applicable to mixing as a general rule.
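As a toy illustration of that asymmetry (the slopes and offset below are invented for illustration, not taken from any standard spreading function): a loud masker raises the audibility threshold far more for frequencies above it than for frequencies below it.

```python
import numpy as np

# Toy simultaneous-masking threshold on a Bark-like frequency axis.
def masking_threshold(masker_bark, masker_level_db, bark_axis):
    d = bark_axis - masker_bark
    # Steep roll-off below the masker, shallow above it: a masker hides
    # higher frequencies more readily than lower ones.
    spread = np.where(d < 0, 27 * d, -10 * d)
    return masker_level_db + spread - 10

bark = np.linspace(0, 24, 241)
thr = masking_threshold(masker_bark=8.0, masker_level_db=80.0, bark_axis=bark)

# A 40 dB probe two Bark ABOVE the masker falls under the threshold,
# while the same probe two Bark BELOW it stays audible.
above = thr[np.argmin(np.abs(bark - 10.0))]
below = thr[np.argmin(np.abs(bark - 6.0))]
print(f"threshold 2 Bark above masker: {above:.0f} dB; 2 Bark below: {below:.0f} dB")
```

In a lab you can measure those curves precisely; in a dense mix, with dozens of moving maskers, the tidy picture above never survives intact, which is the point being made.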

It's damn unfortunate that they'd use it in that context, as nobody like me would assume it only applies to a romanticized version of sound.

At the same time, there's something freeing about knowing you've been using an entire aspect of mixing the wrong way. That gets me excited for the next beat.
 
it is not so much romanticised as idealised, and as Salem notes, it is used as part of the codec for MP3 compression technologies. Although, back when it was first announced, and even now, some 20-odd years on, I still find it difficult to accept that the codec can remove material that by rights can't be heard anyway.
 
the problem is that the people writing it do not understand that they have taken a psycho-acoustics term and tried to apply it to the real physical world; they are not meant to interact in that way.


The term in perceptual-psych argot is about why some sounds may be overshadowed by others: specifically, lower frequencies hiding higher frequencies once certain intensity levels are reached. The practical upshot, however, is that we never live in the idealised world of such studies and experiments, so the outcomes are never as clear-cut, and certainly not applicable to mixing as a general rule.


Yeah... Essentially, in layman's terms / real world scenario:


Q: "hey, why didn't I hear my phone ring while I was at the concert?"
A: "because you couldn't hear it over the band."




Q: "hey, I'm just minding my own business and playing my acoustic guitar sitting on the airport runway but I can't hear my guitar so good. What's up with that?"
A: "because those planes are too loud."




Q: "hey, I'm trying to tell you about this thing here in the machine room of this ocean liner but you are not even paying attention!"
A: "dude, I didn't even know you were talking to me because I can't even hear a word you're saying over all this noise."
 
it is not so much romanticised as idealised

I wouldn't even say it is idealized...

I'd say someone took the term without really understanding it and applied it to some made-up concept totally separate from the actual meaning of the term...

...as unfortunately seems to happen way too often with new mixers.
 
I wouldn't even say it is idealized...

I'd say someone took the term without really understanding it and applied it to some made-up concept totally separate from the actual meaning of the term...

...as unfortunately seems to happen way too often with new mixers.


Well I can see why. I hear it everywhere.

I hear it on recording revolution, soundonsound, I just heard it on the fabfilter q demo. In the last one, he even specifically said psychoacoustics but still talked about it like it is practical.

Surely the people who design these products know what they're talking about in the first place (especially if they are really well-renowned plugins)?
 
Well I can see why. I hear it everywhere.

I hear it on recording revolution, soundonsound, I just heard it on the fabfilter q demo. In the last one, he even specifically said psychoacoustics but still talked about it like it is practical.

Surely the people who design these products know what they're talking about in the first place (especially if they are really well-renowned plugins)?

1. People don't know what they are talking about.

2. People will use anything as a marketing buzzword, regardless of whether it is used correctly.

3. Magazines/websites need to fill pages, and that is their priority.

4. Manufacturers want to market their products so it seems like they fill some void and solve some problem.

5. One person says something, then everybody jumps on the bandwagon.
 
1. People don't know what they are talking about.

2. People will use anything as a marketing buzzword, regardless of whether it is used correctly.

3. Magazines/websites need to fill pages, and that is their priority.

4. Manufacturers want to market their products so it seems like they fill some void and solve some problem.

5. One person says something, then everybody jumps on the bandwagon.

dvyce, you make a lot of good points about how you do not cut frequencies out of one instrument to make "room" for another: that it is a myth, and the problem there would more likely be with the faders or the arrangement.

I have, however, seen some other articles by reputable people (for instance another mod here) who seem to be alluding to just what you are refuting. For example, here is an article at pro audio files:

"Myth #2: Subtractive EQ Sounds Smoother

Ultimately the truth to this is based more on application than reality. It tends to be easier to mix additively – boosting up things you want more of.
The problem with this is that it leads to a lot of compensational boosting. By that I mean boosting up lots of frequency ranges when really we just wanted to hear less of one frequency range. Or we will boost up a frequency because we aren’t hearing enough of it, when in reality there’s something from another instrument that’s getting in the way."

source: Mythbusters: Subtractive vs. Additive EQ

On masking he says

"Masking is only an issue when you want two things to be of equal role importance that share significant content in the same frequency area. And then, removing as little as possible from one of the elements will provide a fuller sound which may be preferable to a bigger cut, which will lead to a more open sound."

I also go to ModernMixing.com, and the dude there seems to get great mixes and appears reputable. He talks about cutting frequencies in the vocal to make it "fit in the mix" and said "you might have to cut some other areas to make the vocals fit in there".

I understand you can do subtractive EQ just for the sake of the instrument but he seems to be implying he is opening up space in the mix for the vocal by cutting some freqs in the vocal and/or the backing track in some cases.
source: https://www.youtube.com/watch?v=VRJHvrmo7Lw

Am I missing something? Any clarification?
 
In the article, Myth 1 is poorly explained: the two signals will not be equivalent, because the phase shift introduced by boosting will be different from that introduced by cutting, as they will be operating in different frequency bands. That is the nature of most fixed-center/fixed-bandwidth EQ: the phase shifts adhere to those bands, and changing the emphasis from cut to boost, or vice versa, will introduce different types of phase shift in different frequency regions. The caveat of using the exact same EQ device, later in the explanation, is a crutch on which to base the assertion, and it fails with any other type of EQ applied to the signals; i.e., it asks us to limit our acceptance of the explanation to a very specific set of circumstances rather than the broader set of circumstances we usually find ourselves in.
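The boost-vs-cut phase difference is easy to check numerically. Below is a sketch using the widely circulated RBJ audio-EQ-cookbook peaking biquad (a generic textbook filter, not any particular plugin's EQ): a +6 dB boost and a -6 dB cut at the same center frequency produce different, in fact mirrored, phase responses.

```python
import numpy as np

# Peaking-EQ biquad coefficients per the RBJ audio-EQ cookbook
def peaking_coeffs(fs, f0, q, gain_db):
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

def phase_at(b, a, fs, f):
    """Phase response of H(z) = B(z^-1)/A(z^-1) at frequency f."""
    w = np.exp(-1j * 2 * np.pi * f / fs)  # z^-1 on the unit circle
    h = np.polyval(b[::-1], w) / np.polyval(a[::-1], w)
    return np.angle(h)

fs = 48000
b_boost, a_boost = peaking_coeffs(fs, f0=1000, q=1.0, gain_db=+6)
b_cut, a_cut = peaking_coeffs(fs, f0=1000, q=1.0, gain_db=-6)

p_boost = phase_at(b_boost, a_boost, fs, 800)  # probe below the center
p_cut = phase_at(b_cut, a_cut, fs, 800)
print(f"phase at 800 Hz: boost {p_boost:+.3f} rad, cut {p_cut:+.3f} rad")
```

For this particular filter the cut happens to be the exact inverse of the boost, so the phases mirror each other; with two different EQ designs the responses diverge even further, which is the broader point above.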
 
This has been a very useful thread. I do have one final question regarding the "masking"

Is it BS for Bass frequencies? Traditional voicing is more sparse with bass frequencies, but that's more because they are different notes. Are producers layering two bass sounds with the same notes under each other?
 
This has been a very useful thread. I do have one final question regarding the "masking"

Is it BS for Bass frequencies? Traditional voicing is more sparse with bass frequencies, but that's more because they are different notes. Are producers layering two bass sounds with the same notes under each other?

I'm not sure what you are asking.
 
This has been a very useful thread. I do have one final question regarding the "masking"

Is it BS for Bass frequencies? Traditional voicing is more sparse with bass frequencies, but that's more because they are different notes. Are producers layering two bass sounds with the same notes under each other?

Can't say I understand exactly what you're asking either, but maybe this info will answer your question anyway. When I'm mixing and the bass is too thin, I personally go a couple of different ways depending on the situation. You can boost the low end on both your low-end sounds and on the master track with an EQ, and simply bass-boosting your low end can beef it up a little. Of course you can also layer sounds, but I find this to be my least favorite approach because it increases my poly count and can add some extra mud too.

Then, after trying those parts of my process, if it's still not good enough I'll put EQUO specifically on it (usually on the master track) to continue boosting the low end while using EQUO's features to keep the track balanced.

But, crimsonhawk, if this info doesn't help you, sorry >.<
 
I layer different bass sounds with the same notes "under" each other.

I don't understand how this relates to the idea of "masking"...

You are talking about layering sounds?

You are asking whether some people layer sounds on top of each other? The same line with an added sound playing the part?

Sure, people sometimes do that if they want to... and people don't do it if they don't want to.

It has absolutely nothing to do with "masking" as a concept... but sometimes people do that... and sometimes they don't.
 
I have no idea how it relates either, I was just answering his question about whether or not people do that. :O

Although I am pretty sure that he's familiar with the concept of layering sounds together, so now that I think about it, he must have meant something slightly different...
 
I have no idea how it relates either, I was just answering his question about whether or not people do that. :O

Although I am pretty sure that he's familiar with the concept of layering sounds together, so now that I think about it, he must have meant something slightly different...

Oops... When I replied to you earlier, I thought I was quoting and responding to the OP clarifying what he was talking about.
 
I'm talking about how two sub or bass sources playing different notes sound horrid together, like if you loaded a bass sample and played a chord.

So we've established that frequency masking is a psychoacoustic theory, but is it practical in the bass areas? Or do these sound bad simply because of mud? To your point on orchestras, and how they're an example of how frequency masking isn't real: they are still spacing out the bass instruments.

The more specific part was not whether producers layer instruments in general; it was whether they layer their bass instruments (say, a sub bass under a bass guitar).
 