What does 'clashing frequencies' actually mean?

riceball

New member
Tutorials and books tell you, 'if this instrument and that instrument share the same frequencies they will clash' or 'make sure you choose your instruments wisely to avoid clashing frequencies'. I never see any of these tutorials go into detail, though, about what 'clashing' really is (when it comes to audio). Is it that when you play two instruments with similar frequencies at once they both fight to be the 'main focus'? I tested what it sounds like when you don't sidechain a sub to a kick and I felt no difference: I heard the sub ducking when sidechained, but without it I did not hear any 'clashing'. It just sounded like a solid sub being played with a kick.
 

Frequency clashing, also called frequency masking, is when signals rather close in RMS and peak level occupy the same frequency range at the same time on the same speaker, and hence to some degree cancel each other out in terms of perception. Frequency masking can also occur in stereo between the speakers, which basically happens when you only have stereo tracks panned 100% L and 100% R. The headroom is used up just as much, but the amount of information about each sound source that reaches the brain is smaller, so less level and less detail are perceived. When you push a mix that contains a lot of frequency masking with compressors in an A/B scenario, you push the content harder than you otherwise would to compensate for the lack of perceived loudness, and in this way you further unbalance the peaks of the content, so certain frequency ranges end up perceived as extra loud.
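To make the 'cancel each other out' part concrete, here is a minimal numpy sketch (my own illustration; the 55 Hz tone and the phase offsets are made-up numbers). It shows two signals sharing the same frequency summing to less energy than either one alone. Psychoacoustic masking inside the ear is a further effect on top of this, but the level loss is the part you can measure directly:

```python
import numpy as np

fs = 44100                       # sample rate in Hz
t = np.arange(fs) / fs           # one second of time

# Two "sound sources" sharing the same frequency: a 55 Hz sine and a
# second 55 Hz sine arriving half a cycle late (180 degrees out of phase).
a = np.sin(2 * np.pi * 55 * t)
b = np.sin(2 * np.pi * 55 * t + np.pi)

def rms(x):
    return np.sqrt(np.mean(x ** 2))

print(f"RMS of a alone:  {rms(a):.3f}")      # ~0.707
print(f"RMS of b alone:  {rms(b):.3f}")      # ~0.707
print(f"RMS of a + b:    {rms(a + b):.3f}")  # ~0.000 -- full cancellation

# A quarter-cycle offset loses only part of the energy instead:
b90 = np.sin(2 * np.pi * 55 * t + np.pi / 2)
print(f"RMS at 90 deg:   {rms(a + b90):.3f}")  # ~1.0 instead of the 1.414 of two in-phase sines
```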

In production you get this issue when you select sound sources for the arrangement that sound similar, when you play the same tones on them hitting at the same time, and when too many sound sources sustain notes that are too long (e.g., pads).

In recording you get this issue when you use microphone pairs that are overall too close in their frequency response and when you position the microphones in a phase-cancelling way.
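On the mic-positioning point: two mics picking up the same source at different distances sum into a comb filter. A small sketch of that (the 1 ms path difference, roughly 34 cm of extra distance, is an assumed number for illustration), showing the notches that appear when the two are mixed:

```python
import numpy as np

fs = 48000
delay_s = 0.001                          # second mic hears the source 1 ms later
# Impulse response of "mic1 + mic2": an impulse plus a delayed impulse.
h = np.zeros(fs // 10)                   # 100 ms is plenty for this sketch
h[0] = 1.0
h[int(round(delay_s * fs))] = 1.0

H = np.abs(np.fft.rfft(h))               # magnitude response of the summed pair
freqs = np.fft.rfftfreq(len(h), 1 / fs)
for f in (250, 500, 1000, 1500, 2000):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f:5d} Hz gain: {H[idx]:.3f}")  # notches at 500 and 1500 Hz, peaks at 1000 and 2000 Hz
```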

In mixing you commonly get this issue, in terms of stereo perception, in dense arrangements: when the pan faders on the stereo tracks are set too far apart, when the pan faders of the mono tracks are set too close to the center, when the stereo image is built without involving the time dimension, when you do not apply the compression that is required, and when the bulk of the processing happens on the overall/group level rather than on the more individual level. The result sounds closed and weak in the stereo field. Frequency masking also destabilizes the sound sources in the stereo field in terms of L/R localization: they start to fluctuate in the stereo field, which reduces perceived separation and makes the mix less comfortable to play back.
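Since so much of this comes down to where the pan pots sit, here is a sketch of the constant-power pan law that most DAW pan pots approximate (my illustration; the -3 dB center law is one common choice, not the only one). It shows what a pan pot actually does to a mono track and why the source keeps roughly constant perceived level as it moves:

```python
import numpy as np

def constant_power_pan(mono, pos):
    """Pan a mono signal with a constant-power (-3 dB center) law.

    pos: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    """
    theta = (pos + 1.0) * np.pi / 4.0    # map [-1, 1] onto [0, pi/2]
    return np.cos(theta) * mono, np.sin(theta) * mono

tone = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
for pos in (-1.0, -0.5, 0.0, 0.5, 1.0):
    left, right = constant_power_pan(tone, pos)
    total_power = np.mean(left ** 2) + np.mean(right ** 2)
    print(f"pan {pos:+.1f}: total power {total_power:.3f}")  # stays ~0.500 at every position
```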

To avoid this issue you can do mono compatibility checks, and the earlier the better. The issue is best addressed by making moves that reduce it in each phase of the music creation process. Boosting a distinctive resonance in each sound source also helps, since it makes each source more unique in its frequency content and moves the dominant frequencies of the sources further apart. The RMS and peaks of signals should also, in general, get both absolute and relative handling. Pad-type sound sources, rooms with long reverb times and compressors with long release times all keep the same frequencies active for a long duration on the same speaker, so they should be carefully controlled to make sure they do not create a lot of frequency masking. When you have long streams of the same frequencies running in parallel on the same speaker, you add to the issue: these streams sort of form a mask, and the RMS and peaks of other sound sources get dimmed out inside of them, amounting to lost perceived signal.
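A mono compatibility check can be as simple as summing L and R and seeing what survives. A rough sketch of the idea (the metric names and the demo signals are my own, not a standard):

```python
import numpy as np

def mono_compatibility(left, right):
    """Rough mono-compatibility metrics for a stereo pair.

    Returns the L/R correlation (+1 identical, 0 unrelated, -1 fully
    out of phase) and the level change when summing to mono, in dB.
    """
    corr = np.corrcoef(left, right)[0, 1]
    mono = 0.5 * (left + right)
    stereo_power = 0.5 * (np.mean(left ** 2) + np.mean(right ** 2))
    mono_power = np.mean(mono ** 2)
    loss_db = 10 * np.log10((mono_power + 1e-12) / stereo_power)
    return corr, loss_db

# Demo: an out-of-phase source disappears completely in mono.
t = np.arange(44100) / 44100
sig = np.sin(2 * np.pi * 220 * t)
print(mono_compatibility(sig, sig))    # corr ~ +1.0, ~0 dB change
print(mono_compatibility(sig, -sig))   # corr ~ -1.0, massive loss
```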

By compressing the sound sources before the bulk of the panning has been done, you hand a great amount of unmasking power to the pan knobs, which is great. But do make sure most of the masking issues are dealt with before mixing and mastering; in fact, do not start mixing a recording that is full of frequency masking issues. Force the bulk of the audio quality into the production process and allow enough creativity into the music creation, so that all elements are more unique naturally, for a creative reason. Think about when all of the sound sources hit inside the production and make each side communicate with the other side in terms of rhythm. And stay smart about which sound source plays which tones when. Use dynamic processing to constantly combat the issue.
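On the dynamic processing point, and since the original question mentioned sidechaining a sub to a kick: here is a toy sketch of what a sidechain ducker does (my own simplified illustration, not how any particular plugin works; the depth, attack and release values are made up). The kick drives an envelope that pulls the sub down exactly when the two would otherwise fight over the same low range:

```python
import numpy as np

def sidechain_duck(sub, kick, fs, depth_db=-9.0, attack_ms=2.0, release_ms=120.0):
    """Toy sidechain ducker: pull `sub` down whenever `kick` has energy.

    A fast-attack / slow-release envelope follows the kick, and that
    envelope drives a gain reduction of up to `depth_db` on the sub.
    """
    atk = np.exp(-1.0 / (attack_ms * 1e-3 * fs))
    rel = np.exp(-1.0 / (release_ms * 1e-3 * fs))
    env = np.zeros_like(kick)
    level = 0.0
    for i, x in enumerate(np.abs(kick)):
        coeff = atk if x > level else rel       # fast up, slow down
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    env /= env.max() + 1e-12                    # normalize to 0..1
    gain = 10 ** ((depth_db * env) / 20.0)      # 0 dB when quiet, depth_db at peaks
    return sub * gain

fs = 44100
t = np.arange(fs) / fs
sub = 0.5 * np.sin(2 * np.pi * 50 * t)          # steady 50 Hz sub
kick = np.zeros(fs)
for beat in (0.0, 0.5):                         # two kick hits, half a second apart
    n = int(beat * fs)
    kick[n:n + 2000] = np.sin(2 * np.pi * 60 * t[:2000]) * np.exp(-40 * t[:2000])
ducked = sidechain_duck(sub, kick, fs)
```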

Part of my engineering philosophy is to work differently with vertical quality dimensions as opposed to horizontal ones that span broadly. With vertical quality dimensions I box the overall quality I want to achieve into smaller quality pieces and address each such piece with a particular technique and particular gear. With horizontal quality dimensions I try to achieve those as early as possible in the music creation process and maintain them as well as possible from there, often with various protection strategies applied. Such is the case with, for instance, stereo imaging and frequency masking: I protect the frequencies and the stereo image by narrowing down the stereo image, and when I am done I remove that protection. I use these limitations to my advantage during the A/B, so that when version B is better I can remove a number of other limitations I have applied and make it even better from there. I have a lot of options for quality improvements left when the final master version B is better than my references.

Each quality dimension has associated protections/limitations, and my aim is to make version B sound better with those enabled, so that I can disable them once version B is better and really push it that much further. For instance, I have an EQ filter sitting on the master bus that boosts the 200 Hz and 4 kHz areas, which raises the bar for how soft the mix needs to get. This means that when I think version B is softer than the reference, I remove this filter and a set of other filters, and now the mix is incredibly soft. I apply a similar technique during the readiness evaluation stages. If something is ready even when I have tried to make it appear not to be, then I know it is ready. But if something appears not to be ready, and on top of that I have tried to make it appear not ready and notice it is far from ready, then it is that much more obvious that it is not ready. These decisions are very important. So much of great production stems from knowing that you truly have the quality you think you have.
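For the curious, a peaking boost like the one described can be sketched with the standard RBJ audio-EQ-cookbook biquad (the +3 dB gain and Q of 1 below are placeholder values of mine; the post does not say what settings are used):

```python
import numpy as np
from scipy.signal import lfilter

def peaking_biquad(fs, f0, gain_db, q=1.0):
    """Peaking EQ coefficients (b, a) from the RBJ audio-EQ cookbook."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 44100
mix = np.random.default_rng(0).standard_normal(fs)  # stand-in for the mix bus

# Hypothetical "protection" curve: +3 dB bumps at 200 Hz and 4 kHz.
for f0 in (200.0, 4000.0):
    b, a = peaking_biquad(fs, f0, gain_db=3.0, q=1.0)
    mix = lfilter(b, a, mix)
```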

My main goal with a final master is to be very satisfied with the final result, then remove all of my temporary processing and go "wow, sweet".

Because much of my engineering philosophy is A/B based, I also put a lot of focus into having A/B references that I can use to check particular qualities. That I find to be incredibly important.
 
Sounds that share frequencies don't clash; actually it's the opposite. That's pretty much the definition of harmony in harmony theory.
 