Salem Beats
Ki from Salem-Beats.com
So when sites use terms like frequency masking (most notably soundonsound) is that just a fake concept to get people REALLY thinking about something else?
I believe that the benefits from the concept of frequency masking are better applied to lossy audio compression (i.e., MP3) than audio production.
One of the ways that MP3 cuts down on file size is by merging adjacent frequency content that would "mask" itself anyway,
specifically in areas where the algorithm determines that the resulting loss will be least audible.
A slightly simplified example:
The codec might decide that 4073Hz, 4074Hz, and 4075Hz are similar enough to be indistinguishable and round them all off to 4074Hz.
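To make that idea concrete, here is a toy sketch of simultaneous frequency masking, loosely in the spirit of the psychoacoustic models lossy codecs use. This is NOT the actual MP3 algorithm; the 12 dB margin and 100 Hz bandwidth are invented numbers for illustration only.

```python
def audible(tones, margin_db=12.0, bandwidth_hz=100.0):
    """Keep only the tones that are not masked by a much louder neighbor.

    tones: list of (freq_hz, level_db) pairs.
    A tone is treated as masked (and droppable by the codec) when another
    tone within bandwidth_hz is at least margin_db louder than it.
    Both thresholds are invented for this sketch.
    """
    keep = []
    for i, (f, lvl) in enumerate(tones):
        masked = any(
            j != i
            and abs(g - f) <= bandwidth_hz
            and (g_lvl - lvl) >= margin_db
            for j, (g, g_lvl) in enumerate(tones)
        )
        if not masked:
            keep.append((f, lvl))
    return keep

# A quiet tone at 4073 Hz sits right next to a much louder one at 4074 Hz,
# so the sketch drops it; the distant 440 Hz tone is unaffected.
print(audible([(4074.0, -6.0), (4073.0, -30.0), (440.0, -10.0)]))
```

A real codec works on critical bands and spreading functions rather than individual tone pairs, but the principle is the same: content the model predicts you can't hear gets fewer (or zero) bits.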
A lot of the detail to differentiate one signal from another tends to come from transient information, especially the higher frequencies.
This, surprisingly, holds true even for many bass instruments.
Hence, "masking" isn't as big a problem in the fundamental frequency ranges as many new mixers assume.
In fact, many new mixers will cut an instrument at its "fundamental" frequency in an attempt to prevent it from "masking" another instrument,
without realizing that they're increasing the level of the detail relative to the level of the fundamental.
Once they re-balance the faders to restore the overall level, they end up with a track that does the opposite of what they intended.
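The arithmetic behind that trap can be shown with a two-component toy model of an instrument: a fundamental plus one "detail" harmonic, with invented amplitudes. Cutting the fundamental by 6 dB and then pushing the fader back up to the original overall level leaves the detail exactly 6 dB hotter relative to the fundamental than it started.

```python
import math

def rms(*amps):
    """Root-mean-square of a set of linear amplitudes."""
    return math.sqrt(sum(a * a for a in amps))

def db(ratio):
    """Linear amplitude ratio expressed in decibels."""
    return 20.0 * math.log10(ratio)

# Invented two-component instrument: fundamental energy plus upper "detail".
fundamental = 1.0
harmonic = 0.25

before = db(harmonic / fundamental)  # detail relative to fundamental

# EQ: cut the fundamental by 6 dB.
cut_fund = fundamental * 10 ** (-6.0 / 20.0)

# Fader: re-balance so the overall RMS level matches the original.
gain = rms(fundamental, harmonic) / rms(cut_fund, harmonic)
new_fund = cut_fund * gain
new_harm = harmonic * gain

after = db(new_harm / new_fund)

# The detail is now ~6 dB louder relative to the fundamental,
# and louder in absolute terms too, because the fader came back up.
print(round(after - before, 2))
```

The relative shift equals the EQ cut regardless of where the fader lands; the fader move just turns that relative boost into an absolute one.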
It seems to me that you might be over-thinking the technical aspect of mixing.
If you'd just watch some lessons from some great mixing engineers and see how and why they do the things they do, I think it might relax you a bit.
I've been watching lessons for a very long time now, and continue to do so every day.
Even if I run into a poor lesson where I don't learn anything (or see some incorrect info),
I always walk away having experienced another person's perspective on the mixing process.
It was a big turning point for me back when I started to do this.
-Ki
Salem Beats