Maximizing Headroom

Yuno

Loudness Warrior
I personally think this is a bigger deal than most people realize. When people complain about not getting that "commercially loud" CD, they usually forget this step, which in my opinion is crucial.

For the new mixers/producers: headroom is essentially how many dB below 0 dBFS you leave for the mastering engineer to play with. Now why is this important? Well, let's use an example:
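To make that concrete, here's a minimal Python sketch (my illustration, not anything a poster in this thread used) that measures headroom as the distance in dB between a mix's loudest peak and 0 dBFS, assuming NumPy and float samples normalized to ±1.0:

```python
import numpy as np

def headroom_db(samples):
    """dB between the loudest sample peak and 0 dBFS (digital full scale)."""
    peak = np.max(np.abs(samples))
    return -20 * np.log10(peak)

# A mix peaking at half of full scale leaves about 6 dB
# for the mastering engineer to play with.
mix = np.array([0.1, -0.5, 0.3])
print(round(headroom_db(mix), 1))  # 6.0
```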

Say you've just finished mixing a track you've been working on for a while. You send it to your buddy who has his own mastering studio with all the bells 'n' whistles. So you're thinking to yourself, "damn, look at all this gear; after my wonderful mixing job and my track going through all this analog equipment, I'll have a song they can play on the radio for sure." So your friend gets back to you and shows you what he's done with your track. It definitely seems louder to you at first and you pop in another CD for reference. Immediately, the CD comes off as louder AND even a bit cleaner sounding. You think then, "HOW CAN THIS BE? After all my strenuous mixing sessions? After all that expensive mastering equipment? HOW?! I mean, it doesn't sound bad, but it's still not quite competitive with commercial CDs."

Well, let me tell you how: the person in this story likely failed to maximize headroom. Little did he know, somewhere along the line he failed to remove the DC offset (there are a few articles about that in the forums), and there were stray frequency bands eating up his headroom.

How do you regain headroom? A few ways.

One is to remove DC offset, of course, although the drawback is that the loudness of the sample might end up inconsistently higher or lower than usual.
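As a rough illustration of why DC offset costs headroom, here's a sketch (my own, assuming NumPy, and using the simplest possible offset removal: subtracting the mean of the whole file, which a real DC filter approximates with a very low high-pass):

```python
import numpy as np

def remove_dc(samples):
    """Crude DC removal: subtract the average value of the whole file."""
    return samples - np.mean(samples)

sr = 44100
t = np.arange(sr) / sr
# A 100 Hz tone riding on a +0.2 DC offset: the offset pushes the
# positive peaks 0.2 closer to clipping without adding any loudness.
signal = 0.7 * np.sin(2 * np.pi * 100 * t) + 0.2

peak_before = np.max(np.abs(signal))            # ~0.9
peak_after = np.max(np.abs(remove_dc(signal)))  # ~0.7
gained_db = 20 * np.log10(peak_before / peak_after)
print(round(gained_db, 2))  # ~2.18 dB of headroom regained
```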

The second is my favorite and works great, in some cases getting you as much as 2 extra dB of headroom: filtering. There are occasions where a synth or instrument makes noise lower or higher in the frequency spectrum than where the main sound sits. I'm talking about the practically inaudible parts of the spectrum. If you're recording a guitar, most of the sound will be in the mids and high mids. Sometimes you might get unwanted noise from the sub-150 Hz range or amp noise up around 20 kHz. Removing this noise not only sounds cleaner but also clears up a bunch of headroom. Previously, when your friend was maximizing your track's loudness, he was boosting this pesky noise along with everything else. With the noise removed he can push the level further, and the part you actually hear comes out noticeably louder.
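The filtering idea can be sketched like this (an illustration only, assuming NumPy/SciPy; the 30 Hz "rumble" and the 150 Hz cutoff are made-up stand-ins for the sub-150 Hz noise described):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def highpass(samples, sr, cutoff_hz, order=4):
    """Steep high-pass to strip sub rumble below cutoff_hz."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, samples)

sr = 44100
t = np.arange(sr) / sr
guitar = 0.6 * np.sin(2 * np.pi * 440 * t)  # the part you actually hear
rumble = 0.3 * np.sin(2 * np.pi * 30 * t)   # sub-bass junk eating headroom
mix = guitar + rumble

cleaned = highpass(mix, sr, cutoff_hz=150)

peak_raw = np.max(np.abs(mix))
peak_clean = np.max(np.abs(cleaned[sr // 4:]))  # skip the filter's startup transient
gained_db = 20 * np.log10(peak_raw / peak_clean)
```

With these made-up levels the filter buys roughly 3 dB of peak headroom while leaving the 440 Hz content essentially untouched; the exact number obviously depends on how much out-of-band energy the real track has.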

When talking about maximizing headroom, it really is a matter of "putting everything in its place" as far as EQ goes.

Any other thoughts?
 

Abterra

Chilled out Beatsmith
Good post. Many producers think that their job stops when the song is written and that they're not responsible for how it sounds. In reality, you should be paying attention to things like headroom, gain staging, dithering, and most importantly, just general mixing and mastering practice. Control your sound!
 

Yuno

Loudness Warrior
Thanks guys. My only fear is that this thread will go unnoticed by those who need to see it but oh well, that's life.
 

everbeatz

mixing engineer
Good post. Another tip I live by: it's usually better to take away than to add when EQing.

IMHO, that's a common misconception. Why is it 'worse' to add when EQing? I'm really curious to hear the logic behind that. It's actually quite the opposite - about audio signal coloration « Variety Of Sound

"This may lead to some serious issues when for example on each and every track most signal resonances were removed by such steep filtering effects which is a common misconception in mixing audio. It leads not only to rather flat and boring signals but also introduces significant phasing issues as described above as long as no linear phase EQ is used (which introduces other problems and is not discussed here) and as an overall result the mix gets fluffy and lacks definition. As a side note, this also shows that the urban legend that cutting the frequency is always prefered opposed to boost some other frequency parts holds not true in general."
 

krushing

Moderator
IMHO, that's a common misconception. Why is it 'worse' to add when EQing? I'm really curious to hear the logic behind that. It's actually quite the opposite - about audio signal coloration « Variety Of Sound

"This may lead to some serious issues when for example on each and every track most signal resonances were removed by such steep filtering effects which is a common misconception in mixing audio. It leads not only to rather flat and boring signals but also introduces significant phasing issues as described above as long as no linear phase EQ is used (which introduces other problems and is not discussed here) and as an overall result the mix gets fluffy and lacks definition. As a side note, this also shows that the urban legend that cutting the frequency is always prefered opposed to boost some other frequency parts holds not true in general."

I think there are a few levels of fundamental misunderstanding when it comes to EQing. First is the level where one doesn't understand that boosting an instrument to make it better cut (no pun intended) through a muddy mix is probably only gonna add to the muddiness. Then, after that's understood, newbies who don't really understand why it's often (not always) better to make cuts in other tracks, rather than to just liberally boost everything, take this as a literal truth. Which, of course, leads to all these lack-of-definition issues if and when it's overdone. I don't claim to be an expert (I'm just an amateur), but moderation is the key word here. People want easy 1-2-3 answers and one-button solutions, but the reality is that mixing is more often than not a balancing act. If there was a way to automatically make things sound good in every context, there wouldn't be a need for mixing engineers.

My favourite mixing tool is called "common sense". If something sounds off, you think about why it's off instead of slapping a bunch of plugins on it and hoping that the problem goes away because someone recommended said plugins on an internet forum (probably without listening to the example you provided). I can see it's frustrating for people just starting out that the answer to almost all common mix problems starts with "it depends", but that's how it goes.
 

everbeatz

mixing engineer
People want easy 1-2-3 answers and one-button solutions, but the reality is that mixing is more often than not a balancing act. If there was a way to automatically make things sound good in every context, there wouldn't be a need for mixing engineers.

Quoted for truth
 
One, is to remove DC Offset of course although the drawback to this is the loudness of that sample might be inconsistently higher or lower than usually.

The second is my favorite and works great, in some cases getting you as much as 2 extra dB's a headroom: filtering. There are some occasions where a synth or instrument might be making noise lower or higher in the freq spectrum than where the main sound is coming from. I'm talking about the inaudible parts of the freq spec. If you're recording a guitar, most of the sound will be in the mid-high mid range. Sometimes you might get unwanted noise from the sub 150 ranger or amp noise up in the 20k's. Removing this noise not only sounds cleaner but clears up a bunch of headroom. Previously, when your friend was maximizing your track's loudness he also made loud this pesky noise. By removing that noise he can crank it up even more and the audible noise becomes noticeably louder.
High end is rarely a problem for headroom, though any time you're cutting *noise* out you're improving the sound. But when you're talking about acoustic power, low frequencies eat up *huge* amounts of it compared to high frequencies (including DC, which is basically just a frequency of Zero).

The single best tip I've learned as a mix engineer is to put a high pass (low cut) filter on *everything*. Obviously, you shouldn't cut things which are crucial to the sound of any particular instrument, but you'd be surprised how much you can cut without noticing. Sure, if you solo the instrument and A/B the low cut, you'll notice - but in the *mix*, where it really matters, you can cut a lot of low end out and your mixes will be better for it.

For example - I do a lot of rock, and if my electric guitar players knew I was shamelessly rolling off everything below 200-250 Hz sometimes, they'd kill me. But it makes room for the kick and bass guitar, and it cuts out all those high-power low frequencies that nobody hears over the bass guitar anyway; they're just eating up headroom, so why keep them? Heck, I even roll off the bass guitar at around 40-50 Hz, sometimes higher (with a smooth roll-off, 6-12 dB/octave, not really a chop). Oh, by the way, they all think the mixes sound awesome - what they don't know can't hurt them, right? :-)

Disclaimer: make sure you do your low-cutting on speakers or headphones that produce those frequencies well already - you don't want to cut something you meant to keep, just because you can't hear it.
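That "smooth roll off, 6-12 dB/octave, not really a chop" can be sketched with a low-order Butterworth high-pass (an illustration only, assuming SciPy; the 45 Hz corner and 30 Hz test tone are made-up numbers). A 1st-order filter gives the 6 dB/oct slope, so content below the corner is tamed rather than removed:

```python
import numpy as np
from scipy.signal import butter, sosfilt

sr = 44100
# 1st-order Butterworth high-pass = a gentle 6 dB/octave slope
sos = butter(1, 45, btype="highpass", fs=sr, output="sos")

t = np.arange(sr) / sr
sub = np.sin(2 * np.pi * 30 * t)   # content below the 45 Hz corner
rolled = sosfilt(sos, sub)

# Steady-state level at 30 Hz: tamed by ~5 dB, not chopped off
level = np.max(np.abs(rolled[sr // 2:]))
print(round(20 * np.log10(level), 1))  # -5.1
```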

On a side note, the same goes for other frequencies - not so much for the sake of headroom, but in a mixing sense. If you have two instruments, one dominant at 600 Hz and the other dominant up around 1 kHz but with some 600 Hz content that's masking the first, do a little cutting (no more than 6 dB, with a Q of no more than 3 or 4) at 600 Hz on the second to make room for the first in the mix. That's really half the battle in mixing: getting the instruments out of each other's way so you can hear everything clearly.
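A 600 Hz cut like that can be sketched with a standard peaking-EQ biquad (this uses the well-known Audio EQ Cookbook formulas, not any particular plugin; NumPy/SciPy assumed, and the two test tones are made-up stand-ins for the two instruments):

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q):
    """Peaking-EQ biquad coefficients (Audio EQ Cookbook form)."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

fs = 44100
# Cut 6 dB at 600 Hz with a Q of 3 on the track doing the masking
b, a = peaking_eq(fs, f0=600, gain_db=-6.0, q=3.0)

t = np.arange(fs) / fs
cut_level = np.max(np.abs(lfilter(b, a, np.sin(2 * np.pi * 600 * t))[fs // 2:]))
kept_level = np.max(np.abs(lfilter(b, a, np.sin(2 * np.pi * 1000 * t))[fs // 2:]))
# cut_level: ~0.50, the full 6 dB cut right at 600 Hz
# kept_level: stays above 0.9, i.e. well under 1 dB of cut at 1 kHz
```

The narrow Q is what keeps the 1 kHz content nearly untouched while the masking 600 Hz content drops out of the way.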

Here's a practical example: I mixed this band originally in 2004, and the original masters were all pumpy and distorted because I was trying to smash them to get them up to the volume of professional mixes. I did very little low-cutting, so it stands to reason that my mix bus compressor and limiter would be overloaded and sound like crap. Seven years later, I remixed the album, and obviously I did more than just low cut - but it makes me want to puke, there's so much clarity.

Check out the sample I have on my website. The track alternates about every 8-10 seconds between original and remixed versions. I think you'll be able to tell which is which :-)
http://www.rosemaryln.com/audio/portfolio/HearThis.mp3
 

moses

hardliner
"HOW CAN THIS BE? After all my strenuous mixing sessions? After all that expensive mastering equipment? HOW?! I mean, it doesn't sound bad, but it's still not quite competitive with commercial CDs."

Well let me tell you how; the person in this story likely failed to maximize headroom. Little did he know, somewhere along the line he failed to remove the DC Offset (there's a few articles about that in the forums) and there were stray frequency bands that were eating up his headroom.

I'd say the mix was bad. Nothing more. These issues have nothing to do with a missing DIY premaster of the stereo file. It was just a mediocre mix (and most probably a mediocre recording/production as well).

Something was seriously wrong with the core production if you still have DC issues after the mixing stage. Something's even more wrong if you don't have any dynamics left after the mix stage, and that's definitely not related to the spectral content; it's related to a stupid use of compressors and limiters all over the place during the mix.

"Headroom" has absolutely no meaning in a digital mix. A digital mix has no "headroom" by design. You can't maximize something that doesn't exist ;) . So please, just use the proper term "dynamic range". Headroom is a term used in analogue environments.

The fact is, a proper mix needs exactly the opposite: it should have a high dynamic range from the start. The dynamic range then gets reduced during the mastering stage. The other way around is just a sign of failure in the production and mix.


One, is to remove DC Offset of course although the drawback to this is the loudness of that sample might be inconsistently higher or lower than usually.

I'd search for a DC issue in the original files. Having severe DC content in the stereo bounce means that you made several wrong decisions before you even started to mix.

On a side note, the DC offset is not related to the dynamic range. It's just that limiters severely suffer when they have to control an asymmetric waveform: They overcompress the smaller side of the wave - which finally reduces loudness. But certain frequency content "eating up" room for other frequency content is a myth and mathematically wrong.
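A quick numeric check of that asymmetry (my own illustration, assuming NumPy): a waveform can have exactly zero mean, i.e. no DC offset at all, and still have very different positive and negative peaks, which is the situation where a limiter set by the taller side works the smaller side too hard:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
# Fundamental plus a second harmonic phased so the peaks skew upward
wave = 0.6 * np.sin(2 * np.pi * 100 * t) - 0.3 * np.cos(2 * np.pi * 200 * t)

dc = np.mean(wave)        # ~0.0: no DC offset
pos_peak = np.max(wave)   # ~0.9
neg_peak = -np.min(wave)  # ~0.45: half the positive peak
```

A limiter whose gain reduction is driven by the 0.9 positive peaks then pushes the smaller 0.45 side down further than it needs to go, which is the over-compression described above.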


The second is my favorite and works great, in some cases getting you as much as 2 extra dB's a headroom: filtering.

Besides the fact that a DC filter is a filter as well, I don't really see how this will help to increase loudness. Fact is, the final limiter will have to reduce the peaks by 2-3 dB more than it would without the high-pass filtering. This won't help the limiter achieve a higher loudness; instead, it will have more work to do.



I'm not saying you shouldn't EQ your material, but it is really a bad idea to filter/EQ for the sake of naive (and wrong) technical reasons (i.e. "low frequencies eat up more room, so I'll cut them away" or "let's cut the highs away, they don't seem to be there anyway").

People should visit real concerts more often. A jazz band, orchestra or big band isn't EQed or compressed at all and still sounds much, much louder, crisper, more transparent, more impressive and more exciting than anything we know. Search for problems at the source - that's where they ALWAYS are.
 

BrokenScythe

New member
The single best tip I've learned as a mix engineer is to put a high pass (low cut) filter on *everything*. Obviously, you shouldn't cut things which are crucial to the sound of any particular instrument, but you'd be surprised how much you can cut without noticing. Sure, if you solo the instrument and A/B the low cut, you'll notice - but in the *mix*, where it really matters, you can cut a lot of low end out and your mixes will be better for it.

This is interesting. Usually I throw on an EQ and cut frequencies of, say, a kick on the opposite end only. (So for the kick I do a steep roll-off at about 500 Hz.)
So I've been doing this all wrong :P (Pretty sure it shows too; often my kicks can sound a bit boxy.)
Thanks for this


People should visit real concerts more often. A jazz band, orchestra or big band isn't EQed or compressed at all and still sounds much much louder, crispier and transparent, more impressive and more exciting than anything we know. Search problems at the source - this is where they ALWAYS are.

Thing is... aren't live sound and digital sound completely different in terms of mixing? In live sound there's no muddiness from the bass and kick clashing(?). EQ in a live setting is all about making everything sound fuller, right? (Boosting frequencies for each instrument where they shine.)
Digital, I believe, is more about control, because the sound is all being mixed together. It has to mesh because it's all sharing the same plane of frequencies.
A band doesn't really have that issue. They have open, natural space. No processors or digital bottlenecks to hurdle.
At least that's what I've been told... could be very wrong. Correct me if so.
 

Yuno

Loudness Warrior
I completely agree; there's also a huge difference between playing live in a jazz club or a concert hall compared to a dance club. At least in my very, very limited experience, live EQ is more about "making things work together" whereas mixing EQ is much more surgical. And live you aren't concerned about mid/side like you are during a mix sesh.

And @moses, I probably made a bigger issue out of DC offset than I should have. It's generally good to remove the offset regardless, but if the offset you encounter is big, you should just go back and try to fix it. And I've never been in a truly non-digital mixing environment, and I'm relatively new to this game. Up to this point, everyone I've talked to has just called the space you leave for the mastering engineer "headroom." Thanks for the clarification.
 

moses

hardliner
Digital, I believe, is more about control because the sound is all being mixed together. It has to mesh because it's all sharing the same plane of frequencies.

...and you really think that it isn't the case in real-life (i.e. air pressure)?! The same technical rules and restrictions apply, there is no difference.

My point is, fix it at the source before even thinking about audio processing. Most audio engineers 15-20 years ago had nearly no processors at all, their use was cumbersome. But they still got a great sound. :)
 

BrokenScythe

New member
...and you really think that it isn't the case in real-life (i.e. air pressure)?! The same technical rules and restrictions apply, there is no difference.

My point is, fix it at the source before even thinking about audio processing. Most audio engineers 15-20 years ago had nearly no processors at all, their use was cumbersome. But they still got a great sound. :)

Not quite, IMO. I noticed that in another post you mentioned harmonics.
In a live setting, harmonics between different instruments don't interfere with each other; they complement each other.
In a digital setting the same can be said, BUT if you have too many harmonics using up the same space, that's when you get muddiness. Again, this is why there's such a fixation on EQing: in a live setting you have the luxury of being able to boost EQ to make instruments stand out, while in a digital setting you make room using EQ so that each instrument can fit within the limited spectrum. I only know the live part because my pops has been a live sound engineer for quite some time. I never see my dad remove frequencies, and if he did, it was only because the musician or vocalist was failing to play with proper tone or skill (i.e. the damn drummer is smacking the snare way too loud, or the vocalist is piercing ears in the upper range).
I agree, though, that simply finding the right sort of sound from the get-go solves most of this issue. But oftentimes resources and quality for most of us are limited. I don't expect to have a sound for everything because I haven't paid for that privilege. Therefore we've got to make do with what we've got, and what we've got is the ability to mix and shape sound as best we can.
Also, you and I both know audio engineers back then had a lot more resourcefulness and skill than most of us today. :)
 