Clean Trick 4 your Mix

I can't think of a situation where I couldn't actually hear one sound over another and it wasn't a levels issue...

It would probably be more of an issue of the mix being "muddy"... the solution would be to roll off and cut frequencies that are unnecessary... for example, if you have a piano track, you can probably roll off everything at least up to 200 Hz and you wouldn't even notice the effect when the piano was soloed (even more so in the mix).
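
If you want to try that kind of low roll-off outside of a DAW, here's a minimal Python sketch -- numpy/scipy/soundfile are assumed to be installed, and the file names and the 200 Hz cutoff are placeholders, not a rule:

```python
# Minimal low roll-off (high-pass) sketch, assuming numpy/scipy/soundfile.
# File names and the 200 Hz cutoff are illustrative only.
import soundfile as sf
from scipy.signal import butter, sosfilt

audio, sr = sf.read("piano.wav")  # hypothetical input file

# 4th-order Butterworth high-pass at 200 Hz (second-order sections for stability)
sos = butter(4, 200, btype="highpass", fs=sr, output="sos")

# Filter along the time axis (works for mono or multi-channel files)
filtered = sosfilt(sos, audio, axis=0)

sf.write("piano_lowcut.wav", filtered, sr)
```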

After you remove the garbage that clutters and you boost some areas of the fundamental frequencies, you will have created more definition in your sounds and, as a result, in your whole mix.

Also, no two sounds (unless they are the same sound) actually share "the same frequencies"... AND there is MUCH more to a sound's timbre than where it falls on a spectrum analyzer.


If you are not hearing things, I'd venture to guess that it is really an issue of your mix being bad overall... look at other sounds that you think are not related to the sound you can't hear.


Can't get definition in your kick drum? Think it is your bass that is "masking" it? Think about how those unnecessary frequencies below 200 Hz in the piano are ****ing with your kick drum... you can't HEAR those low freqs in the piano track, but there may be subsonic noise that is moving your speakers. Just because you can't hear it doesn't mean it's not there, and it doesn't mean it can't overdrive your master bus without you realizing it.
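
If you'd rather check that than take it on faith, here's a quick sketch of measuring how much energy a track carries below 200 Hz, assuming numpy + soundfile (the file name and the 200 Hz split are placeholders):

```python
# Check how much of a track's energy sits below 200 Hz, even if you can't
# hear it. Assumes numpy + soundfile; file name is a placeholder.
import numpy as np
import soundfile as sf

audio, sr = sf.read("piano.wav")
mono = audio.mean(axis=1) if audio.ndim > 1 else audio

spec = np.abs(np.fft.rfft(mono)) ** 2          # power spectrum
freqs = np.fft.rfftfreq(len(mono), 1 / sr)

low = spec[freqs < 200.0].sum()
print(f"{100 * low / spec.sum():.1f}% of the energy sits below 200 Hz")
```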



And remember... just because a track is THERE in your mix doesn't mean it SHOULD be there. Just because you played it doesn't mean it is magic gold. If it's not helping, it's hurting. If a part is not blending, it's conflicting... and I'll bet you it's not an EQ issue.

Ok. This is actually clearing up a lot of misconceptions for me...

So we're saying that muddiness isn't really frequency masking so much as a specific problem in the low mids, and that it does benefit from subtractive EQ, much like a resonance or boominess would.

We're also saying (and I do my best to remember this sometimes) that while clarity is good, it's better to be able to hear the important parts and tuck the less important parts under.

Now there's also the discussion of when high-pass filtering/low shelving is unnecessary, and I'm starting to think I have misconceptions about what mud actually sounds like. It seems like, while people caution against using high-pass filters unnecessarily, it shouldn't be a problem if it's not affecting the actual tone of the sound: like if you bring a high-pass filter up until you hear a change, and then dial it back about 5 Hz. I don't see how that could be very detrimental to a mix.

Now I'm starting to think my mix problem is that I'm carving the crap out of my sounds (which is pretty common for amateurs), but I always thought I was being conservative. I would sweep around with the EQ for a frequency I felt was masking other frequencies, and then I would cut it (usually below 1000 Hz).

I don't want everyone to think I'm posting this mix repeatedly to just be like HEY LOOK AT MY MIX ADAKSDJAKJAWKJSDA. It's really a hindsight thing.

View attachment Warm bounce_08.mp3

In hindsight, for one, although I'm a fan of LCR, I probably should've panned some sounds, as there is almost no stereo information until the strings come in, and LCR panning seems useless when I only have mono elements going on for the most part (and not many of them).

More importantly, that kick is EQ'd to crap, and it really doesn't seem to be coming through well.

Do you think this is more a problem of the kick I chose, or what? People I play this beat for say it's dope, but they also are not mixing engineers and wouldn't care anyway.
 
This thread has been... cute. Lol.

I often have an EQ on every track as part of Alloy 2's all-in-one channel strip.
However, I hardly do much more with it than remove some rumble, sometimes remove some "boxiness" (lower midrange resonance buildups that make it sound like the vocalist has caught a cold) when present in a vocal, and do some high-end boosting with a gentle curve on lead vocals that need to stand out more.

The biggest problem with EQ is that it's "dumb" -- it just sits there with a particular set of parameters even if those parameters aren't needed for an entire mix. For example, on vocals, a vocalist (especially a rapper) tends to change his distance from the microphone, at least ever so slightly. Thanks to the proximity effect and some potential comb filtering side effects (think "Reflexion Filters" or small "treated" closets), the actual "tone" of a vocal often changes throughout a take.

I find that when tone-shaping is required, multi-band compression is usually a more effective option than EQ.
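
To make that concrete, here's a toy sketch of band-split tone-shaping in Python, assuming numpy/scipy. The crossover points, threshold, ratio, and time constants are made-up illustration values, and this is not Alloy 2's or any specific plugin's algorithm -- a real multi-band compressor does far more:

```python
# Toy 3-band tone-shaping sketch, assuming numpy/scipy.
# Crossovers (200 Hz / 4 kHz), threshold, ratio, and time constants are
# illustrative only. Note: simple Butterworth splits are not phase-matched;
# real crossovers typically use Linkwitz-Riley filters.
import numpy as np
from scipy.signal import butter, sosfilt

def split_bands(x, sr, lo=200.0, hi=4000.0):
    """Split a mono signal into low / mid / high bands."""
    low  = sosfilt(butter(4, lo, "lowpass", fs=sr, output="sos"), x)
    mid  = sosfilt(butter(4, [lo, hi], "bandpass", fs=sr, output="sos"), x)
    high = sosfilt(butter(4, hi, "highpass", fs=sr, output="sos"), x)
    return low, mid, high

def compress(x, sr, threshold_db=-24.0, ratio=3.0, attack=0.01, release=0.1):
    """Very simple peak-envelope compressor applied to one band."""
    env = np.zeros_like(x)
    a_att = np.exp(-1.0 / (attack * sr))
    a_rel = np.exp(-1.0 / (release * sr))
    level = 0.0
    for i, s in enumerate(np.abs(x)):
        coeff = a_att if s > level else a_rel
        level = coeff * level + (1.0 - coeff) * s
        env[i] = level
    env_db = 20.0 * np.log10(np.maximum(env, 1e-9))
    over_db = np.maximum(env_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)
    return x * 10.0 ** (gain_db / 20.0)

def tone_shape(x, sr):
    """Tame only the mids, leaving lows and highs untouched."""
    low, mid, high = split_bands(x, sr)
    return low + compress(mid, sr) + high
```

Unlike a static EQ cut, the gain reduction here only kicks in when the band actually gets loud, which is the point being made above.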

EQ is the extreme weapon of "NEVER" and "ALWAYS", and any good production should be dynamic enough in nature that there should be very few times when you would want to "always" or "never" do something with a particular track.

-Ki
Salem Beats
 
What about a Pultec or Massive Passive emulator?

My perspective is that you should hear the tracks, not the EQ. When I use an EQ, it's usually with the "wide strokes", "Low-Q", "Neve"-ish mentality.
When I stray from this, it's the exception rather than the rule and it's usually to implement some sort of intentional special effect.

Having a "clean mix" doesn't boil down to one or two quick tricks or a few outstanding pieces of gear -- rather, it's the skillful application of a carefully-planned strategy.
Having the vision and perspective to plan a strategy at the outset and stick to it through the finish is what sets good mixing/mastering engineers apart from the rest.

-Ki
Salem Beats
 
And by "hear the track, not the eq," You mean eq with transparency?

What if it wasn't your track and the kick just REALLY REALLY SUCKED (like in my example above)?
 
What if it wasn't your track and the kick just REALLY REALLY SUCKED (like in my example above)?

If the kick track was programmed on a DAW and could be easily swapped, I'd look around for a different kick sample.

With the specific kick in your example, I'd play around with a compressor, distortion unit, exciter, and transient modulator before considering an EQ. Each of these will shape the overall timbre of your kick.

Oftentimes, people start with their tools out and try to force them to fit into the process.

They start out by grabbing an EQ and thinking to themselves, "OK, how can I use this EQ to make this sound better?"
These people usually end up making a bunch of unnecessary EQ moves, and only end up liking the result because the sum total of the EQ changes makes the sound louder.

Alternatively, you can ask yourself, "Do I like this sound? Why or why not? What's good about this sound? What could be better about this sound?"
Then, once you've identified your goal, you scan your list of tools and come up with a few that you think could get you there.
You could legitimately go through an entire mix without using a single EQ if that's what the mix happened to call for.

-Ki
Salem Beats
 
I'm still a little skeptical on the subject of mid range and space...

Only because you're the only person here who seems to be saying it, dvyce. That is the only reason I am challenging it, as I don't want to just believe the first guy that posted it.

You can have frequency masking in places other than the low mids and lower. You don't want everything bright either, otherwise the details of some important instruments will get masked.

So if you're saying frequencies can get masked and you just aren't "carving out space", then maybe we just have different definitions. Because if I have mud, and I cut out low mids, frankly that IS carving out space. That's saying hey, these two instruments are clashing, and I want one to have more low mids than the other.

I do like the approach that you EQ based on how you want the instrument to sound, but the idea that "carving out space is bullshit" doesn't seem right... But the way you proposed it originally was like you were saying not to do it in a graphical sense (like saying "Oh, I'll just cut this violin at 250 Hz and then boost this guitar at 250 Hz"), which of course is bad.

Last thing, regarding a sound's "timbre" and how things can't operate in the same space: that doesn't really sound right either. First of all, you can absolutely have instruments occupying the same range; they aren't in completely different ranges just because they don't have the same timbre. The timbre comes from harmonics and formants, which are parts of the frequency spectrum.

Nobody should stumble upon this and take what I'm saying for fact. I'm just laying down the points for a constructive argument that will actually help me.
 
@crimsonhawk47: dvyce is not the only one here at fp saying it, he is just the only one in this thread saying it

I have told you in several recent threads that most of the eq issues that people think they have disappear if they fix their levels, their panning and their voice/melodic tone allocation first
 
@crimsonhawke: dvyce is not the only one here at fp saying it, he is just the only one in this thread saying it

I have told you in several recent threads that most of the eq issues that people think they have disappear if they fix their levels, their panning and their voice/melodic tone allocation first

Sorry, I usually just hear you say arrangement is the go-to.

crimsonhawke is an alter ego for the record
 
Ik. I was saying that that was your main point in recent threads.

I mean this idea of NOT cutting out space is truly a revelation for me (if I can actually put it to work).

I already tried it a couple hours ago on that mix, and I'm pretty satisfied except with the strings.
 
Ehhhh.... and this is supposed to mean what?

Well, okay, I'll use some recent vocals I mixed as an example. So I rolled off the pointless low end that's just a bunch of mess down in the really low range. Then, to widen its stereo presence, I added reverb and a stereo shaper, then did a little balancing with EQUO. After all that there are little bits of stuff left in the low end, and sometimes I'm just lazy and throw another EQ sweep in a separate plugin afterwards, granted I could do it in EQUO all the same.
 
This is a very interesting thread... I wonder whether it's better to set your master at -6 or 0. I've read a lot that -6 dB is the best level to set your mix to, so you can make a better master afterwards, and in the end it will be at 0 ofc. But my theory is that it's best to put everything together at 0 and then send your track to someone who can master it with analog equipment.
 
I'm still a little skeptical on the subject of mid range and space...

Only because you're the only person here who seems to be saying it, dvyce. That is the only reason I am challenging it, as I don't want to just believe the first guy that posted it.

You need to read more stuff from reputable people. I am not the first one to say this. This is not some crazy technique I invented. This is normal mixing stuff.



You can have frequency masking in places other than the low mids and lower. You don't want everything bright either, otherwise the details of some important instruments will get masked.

"Frequency masking" is not a real thing.

and...

If your sounds are "bright" then they are "bright".

If you don't want everything "bright", then don't use only "bright" sounds.

EQ will not change where your instruments lay in the frequency spectrum.

If all you have are high pianos, flutes, female vocals, glockenspiels, high synths and violins, EQ won't change your high synths into bass synths, your violins into cellos, your high piano notes into low ones, or your glockenspiels into church bells.

The instruments are where they are.

If your piano is too bright, you can cut some of the high end... but you don't need to cut the high end to make "room" for the flutes.

They don't "mask" each other.

get it?






So if you're saying frequencies can get masked and you just aren't "carving out space", then maybe we just have different definitions.

I am NOT.

I am saying frequencies DO NOT get masked.





Because if I have mud, and I cut out low mids, frankly that IS carving out space. That's saying hey, these two instruments are clashing, and I want one to have more low mids than the other.

That is NOT carving out space... space for what?

It is not saying two instruments are "clashing".



You can have a solo piano that sounds "muddy".



Let's say you are playing notes somewhere around the middle range of the instrument.

The microphone will pick up sounds from the instrument and the environment that you don't even notice, because they are unnecessary to the sound of the instrument you are trying to capture.

The whole piano vibrates when you play it, so you get unwanted low resonance from the notes you are playing and incidental vibrations from the other strings you are not playing. You roll off low frequencies to remove that unnecessary garbage (which you may not even be able to hear with your ear) without affecting the intended sound of the instrument. You've removed low rumble and dissonance that muddies up your sound.

And you may bring up some upper mids because it makes the piano sound more pleasing.

You just made the piano less "muddy" but you did not do it for the purpose of "making space" for something else.

You don't need to remove that low end to "make space" for a bass.

You remove the low end because it is detrimental to the piano sound on its own.


Instruments that work together do not "clash"... and instruments that "clash" do not work together.

You can have a kick and a bass without them "clashing"... you can have a piano and a vocal without them "clashing"...

and, further, it is about the particular sound in the particular situation... different situations will call for blending sounds differently. Maybe in one song you want things subtle... sometimes you want things to stand out.


I do like the approach that you EQ based on how you want the instrument to sound, but the idea that "carving out space is bullshit" doesn't seem right... But the way you proposed it originally was like you were saying not to do it in a graphical sense (like saying "Oh, I'll just cut this violin at 250 Hz and then boost this guitar at 250 Hz"), which of course is bad.

That is not how I proposed it originally. I did mention the phenomenon of those ridiculous frequency charts people like to post online as "guidelines for mixing"... which are bullshit.

but everything I am saying is simply: you don't need to "carve space" from one instrument to "make room" for another, regardless of how you want to look at it.



Last thing, regarding a sound's "timbre" and how things can't operate in the same space: that doesn't really sound right either. First of all, you can absolutely have instruments occupying the same range; they aren't in completely different ranges just because they don't have the same timbre. The timbre comes from harmonics and formants, which are parts of the frequency spectrum.

I surely did not say that.

What I DID say is that the character of a sound is much more than the "frequencies" that show up on a spectrum analyzer... and that all those other things (harmonics, formants, overtones, etc) are what make the character of a sound.

...and, you know what? I will go so far as to say that having different instruments occupying the same frequency range will make your recording sound BETTER... rather than "masking" anything, it will bring out the best of the song and give you a fuller, more harmonious mix.




SO...

The sounds are the sounds.

Regardless of how you EQ them, they are still the same sounds.

Once you start cutting into the relevant frequency ranges that the instruments occupy, they will just begin to sound "filtered"...

...and if you cut frequencies that are unnecessary, that is all about the sound itself rather than some fictitious concept of "making space".

If you think you have too many high freq sounds, then you used too many high freq sounds. Simple.



A "full sounding mix" comes from using sounds that represent the highs, mids and lows... this comes from using a wide range of sounds. You don't force sounds into different frequency ranges by eq'ing them... you only remove or accentuate what's there already. You don't need to take from one to make room for another. And you can't, with eq, create mass where none exists.
 
Alright, this clears up a lot, but I just have one question

I surely did not say that.

What I DID say is that the character of a sound is much more than the "frequencies" that show up on a spectrum analyzer... and that all those other things (harmonics, formants, overtones, etc) are what make the character of a sound.

Don't formants and harmonics show up in a spectrum analyzer? They may not be as large, but that should just depend on the input gain.
 
Alright, this clears up a lot, but I just have one question



Don't formants and harmonics show up in a spectrum analyzer? They may not be as large, but that should just depend on the input gain.


A frequency spectrum analyser just shows that "something" is there... it doesn't show you "what" is there...


To make a color analogy: it shows you "light" or "dark"... but doesn't tell you if it is light or dark "red", "blue", "green", "purple", "magenta", "mauve", "cyan", "periwinkle", etc.
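
A tiny numpy experiment illustrates the point: two signals can put up the exact same picture on an analyzer while being different waveforms, because the magnitude spectrum throws the phase relationships away. (The frequencies and amplitudes here are arbitrary.)

```python
# Two signals with identical magnitude spectra but different waveforms,
# because an analyzer discards phase. Assumes numpy; values are arbitrary.
import numpy as np

sr = 44100
t = np.arange(sr) / sr
f0 = 220.0

# Same three harmonics, same amplitudes -- only the phases differ.
a = np.zeros_like(t)
b = np.zeros_like(t)
rng = np.random.default_rng(0)
for k, amp in [(1, 1.0), (2, 0.5), (3, 0.25)]:
    a += amp * np.sin(2 * np.pi * k * f0 * t)
    b += amp * np.sin(2 * np.pi * k * f0 * t + rng.uniform(0, 2 * np.pi))

mag_a = np.abs(np.fft.rfft(a))
mag_b = np.abs(np.fft.rfft(b))
print(np.allclose(mag_a, mag_b, atol=1e-3))  # True: analyzer views match
print(np.allclose(a, b))                     # False: the waveforms don't
```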


Not that it matters...

Which is, by the way, my whole point...

It doesn't matter.

You can't see music.

Listen to your song and mix it so it sounds good.
 
This is a very interesting thread... I wonder whether it's better to set your master at -6 or 0. I've read a lot that -6 dB is the best level to set your mix to, so you can make a better master afterwards, and in the end it will be at 0 ofc. But my theory is that it's best to put everything together at 0 and then send your track to someone who can master it with analog equipment.

Doesn't really matter too much what you set it to as long as you're not clipping -- even if you're clipping in the playback domain, the information isn't lost if you're sending a 32-bit floating-point WAV file (the volume just needs to be turned down by the recipient).

The long story short on this is that while your audio content may be clipping your speakers, a floating-point WAV file with high bit depth is actually able to represent and hold values which go beyond what your playback system can reproduce.
Don't believe me? Try this test:
Overdrive a mix WAY too loud, render it to 32-bit FP, and then re-import it (and turn the clip's volume down). You'll notice that the clipping distortion goes away when playing back the imported render.
Now, do the same thing but render it to 16-bit integer @ 44.1kHz, and do the same test (re-import and turn down the clip's volume). You'll notice that the clipping distortion remains when playing back the imported render.
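
If you'd like to verify this outside a DAW, here's a minimal sketch of the same test in Python, assuming numpy and the soundfile library (file names are placeholders):

```python
# Sketch of the float-vs-integer clipping test, assuming numpy + soundfile.
import numpy as np
import soundfile as sf

sr = 44100
t = np.arange(sr) / sr
loud = 2.0 * np.sin(2 * np.pi * 440 * t)  # peaks at +6 dBFS, "way too loud"

sf.write("loud_float.wav", loud, sr, subtype="FLOAT")   # 32-bit float
sf.write("loud_int.wav",   loud, sr, subtype="PCM_16")  # 16-bit integer

back_float, _ = sf.read("loud_float.wav")
back_int, _   = sf.read("loud_int.wav")

# "Turning the clip down" recovers the float render intact...
print(np.allclose(back_float * 0.5, loud * 0.5, atol=1e-6))  # True
# ...but the integer render can't represent samples above full scale,
# so the waveform was flattened at 0 dBFS on write.
print(back_int.max())  # ~1.0
```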

Sorry that a lot of this gets a little technical; basically, just avoid clipping in your DAW -- that's the advice I give to the average layman. A peaking 32-bit WAV file will sound distorted on playback, but it doesn't actually lose any of the information that goes above 0 dB -- the mastering engineer can simply turn the audio clip down to "regain" that information.

Furthermore, your DAW's internal mixing path is also 32-bit, which is why you can get away with blasting a track to distortion levels, bussing it to another track, and then afterwards "fixing" the distortion by turning the fader down on the track you're sending to.

Of course, all of this applies only to DAW routing/exporting of digital audio information -- plugins themselves (especially vintage-modeled ones) have expectations about audio levels, and may react differently depending on the levels being fed to them. Many Waves vintage modeling plugins, for example, expect to be fed signals that hover around -20 dBFS to -18 dBFS most of the time. Feeding these modeled plugins different levels will result in a different type of response from the processing.
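
As a rough sketch of checking where a signal sits relative to that ballpark, assuming numpy (the -18 dBFS RMS target is just the figure mentioned above, and RMS is only one of several ways to measure "level"):

```python
# Quick level check / gain-staging sketch, assuming numpy.
# The -18 dBFS RMS target is just the ballpark figure mentioned above.
import numpy as np

def rms_dbfs(x):
    """RMS level of a signal (full scale = 1.0) in dBFS."""
    rms = np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(max(rms, 1e-12))

def gain_to_target(x, target_db=-18.0):
    """Linear gain that would bring the signal's RMS to the target level."""
    return 10.0 ** ((target_db - rms_dbfs(x)) / 20.0)
```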

-Ki
Salem Beats
 
So when sites use terms like frequency masking (most notably soundonsound) is that just a fake concept to get people REALLY thinking about something else?
 
The problem is that the people writing it do not understand that they have taken a psychoacoustics term and tried to apply it to the real physical world; the two are not meant to interact in that way.

The term, in perceptual-psychology argot, is about why some sounds may be overshadowed by others: it describes lower frequencies hiding higher frequencies when certain intensity levels are reached. The practical upshot, however, is that we never live in the idealised world of such studies and experiments, so the outcomes are never as clear cut and certainly not applicable to mixing as a general rule.
 