Whether or Not to Gain Stage

r51888

New member
For the majority of my mixes I find myself gain staging (inserting a gain plugin into the channel strip) on all of my tracks. I often see people pull the channel faders all the way down and start there instead. Though I prefer my method, would I be saving time with the fader approach? I'd like to know what some of you prefer: a gain plugin, or pulling the faders down?
 
Yeah, you'd be saving time and headroom. I usually only add a gain plugin if something is way too quiet. I prefer to pull my faders down, since we tend to think that things sound better when they are louder (which can lead to some crappy mix decisions).

There isn't anything wrong with your method as long as you aren't clipping or killing your ears.
 
I prefer to keep my front sounds at -6 dB,

so I have 6 dB of space to work with on my master,

and I keep my master volume at 20% because I don't really like working too loud.
 
For the majority of my mixes I find myself gain staging (inserting a gain plugin into the channel strip) on all of my tracks. I often see people pull the channel faders all the way down and start there instead. Though I prefer my method, would I be saving time with the fader approach? I'd like to know what some of you prefer: a gain plugin, or pulling the faders down?

It is the same thing. Gain and volume are identical processes in the software domain: both are multiplications of the samples, with some added rounding error. The only difference is that putting the gain as the first pre-fader plugin allows the incoming signal to be adjusted in level before it hits the effects routed on the channel, and depending on your moves on that gain plugin, this can lower the output quality of those effects. A gain increase on such a gain plugin followed by a volume decrease with the fader, or the other way around, is a pure waste of signal.

Therefore, by default it is not optimal to have to adjust the level before the mixing process starts. The volume level of each sound source should be as close to the final level as possible in terms of balance and gain, with enough headroom on the channels carrying the most signal. Pulling the volume faders down is the worst thing you can do.

Strictly speaking of gain staging (including panning), it is most optimal to route the signal out to the hardware domain.
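
To make the "same multiplication" point concrete, here is a minimal NumPy sketch (purely illustrative; it doesn't quote any DAW's actual engine). A gain plugin and a fader both scale every sample by a linear factor, so a +6 dB boost followed by a -6 dB cut cancels to within floating-point rounding:

```python
import numpy as np

def db_to_linear(db):
    """Convert a gain in dB to a linear multiplier."""
    return 10.0 ** (db / 20.0)

# One second of a 440 Hz test tone at 48 kHz.
sr = 48000
t = np.arange(sr) / sr
x = 0.5 * np.sin(2 * np.pi * 440 * t)

# "Gain plugin" and "fader" are the same operation: a per-sample multiply.
boosted = x * db_to_linear(+6.0)         # +6 dB pre-fader gain
restored = boosted * db_to_linear(-6.0)  # -6 dB on the fader

# The round trip lands back on the input to within float rounding.
print(np.max(np.abs(restored - x)))      # on the order of 1e-16
```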
 

Do you really believe in this?
 

It depends a lot on many things, though. Modern DAWs with 64-bit internal precision in the mixer are not subject to some of this, because the rounding resolves to the same exact values at that bit depth; even though there is a theoretical imperfection there, it has no practical impact once the signal is in digital form and is being processed internally at 64-bit precision. But relative to hardware there are significant differences.
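
For anyone who wants to see this rather than take it on faith, here is a hypothetical sketch (the gain values and stage count are made up for illustration) that pushes the same signal through 100 random gain stages in float32 and float64 and measures what the rounding leaves behind:

```python
import numpy as np

rng = np.random.default_rng(0)
x64 = rng.uniform(-1.0, 1.0, 48000)   # float64 "mix bus" signal
x32 = x64.astype(np.float32)          # the same signal in a float32 engine

# 100 random gain stages of up to +/-6 dB each.
gains = 10.0 ** (rng.uniform(-6.0, 6.0, 100) / 20.0)

y64, y32 = x64.copy(), x32.copy()
for g in gains:
    y64 *= g
    y32 *= np.float32(g)

# Undo the net gain and measure the residue of all that rounding.
net = np.prod(gains)
print(np.max(np.abs(y64 / net - x64)))              # float64: ~1e-16 territory
print(np.max(np.abs(y32 / np.float32(net) - x64)))  # float32: larger, still tiny
```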
 
From a mathematical point of view, gain staging is useless for the floating-point audio engines of modern DAWs. But since faders have their best precision around 0 dB, it's easier to bring the definitive gain close to this point: it's easier to use 200 pixels of fader stroke for a 10 dB gain variation than 20 pixels for the same variation from -40 to -50 dB. Gain staging is only important for the fixed-point arithmetic of the A/D and D/A converters. It's key for the analog domain.
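
Putting that example into numbers (the pixel counts are illustrative, not any specific DAW's fader law):

```python
# 200 px of fader travel covering 10 dB around unity,
# versus 20 px covering the 10 dB between -40 and -50.
print(10 / 200)   # 0.05 dB per pixel near 0 dB
print(10 / 20)    # 0.50 dB per pixel down at -40..-50: ten times coarser
```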
 

Most engineers out there are victims of poor gain staging. Most take the approach of gain staging ITB, use too low a sample rate, use limiters without oversampling, leave the master fader at unity gain, add dithering, noise shaping, etc. It's devastating to a good-sounding mix.
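
For readers unfamiliar with one of those terms: dithering just means adding a tiny amount of noise before reducing bit depth, trading quantization distortion that is correlated with the signal for benign steady noise. A textbook TPDF sketch (not any particular plugin's implementation):

```python
import numpy as np

def quantize_16bit(x, dither=True):
    """Round a float signal to 16-bit steps, optionally with TPDF dither first."""
    q = 1.0 / 32768.0   # one 16-bit step at full scale
    if dither:
        rng = np.random.default_rng(0)
        # TPDF dither: the sum of two uniform noises, +/-1 LSB peak overall.
        x = x + rng.uniform(-q/2, q/2, x.shape) + rng.uniform(-q/2, q/2, x.shape)
    return np.round(x / q) * q

# A very quiet tone near the 16-bit noise floor shows the difference best.
sr = 48000
x = 1e-3 * np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
plain = quantize_16bit(x, dither=False)   # distortion correlated with the signal
dithered = quantize_16bit(x, dither=True) # distortion traded for steady noise
```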
 
You mean most engineers just kill their audio when attempting to master their own mix. Mixing and mastering are two different jobs.
 

Kind of. In my view, mastering should not take place in software at all, other than feeding the signal unsummed out to the hardware domain. It is also OK to do various monitoring/analysis on the resulting final digital stereo track, because that leaves the stereo track untouched at that point.
 
Hardware can be digital...
No analog solution can compete with a digital limiter. Not to mention that any analog treatment requires two additional conversions (D/A and A/D) in a digital production loop.
I'm a 100% ITB guy when no vinyl is involved.
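
The core of such a limiter is simple enough to sketch (a toy version with arbitrary parameter values, not production code). The lookahead is the trick that gives digital limiters their edge: the gain is already down when the peak arrives, so nothing above the ceiling slips through:

```python
import numpy as np

def lookahead_limiter(x, ceiling=0.9, lookahead=64, release=0.9995):
    """Toy brickwall limiter: drop gain as soon as a peak enters the
    lookahead window, recover slowly once it has passed."""
    out = np.empty_like(x)
    padded = np.concatenate([x, np.zeros(lookahead)])
    gain = 1.0
    for n in range(len(x)):
        window_peak = np.max(np.abs(padded[n:n + lookahead + 1]))
        target = ceiling / window_peak if window_peak > ceiling else 1.0
        gain = min(target, gain / release)  # attack instantly, release slowly
        out[n] = x[n] * gain
    return out

# A 220 Hz tone with a short spike well above the ceiling.
sr = 48000
x = 0.7 * np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
x[24000:24010] = 1.5
print(np.max(np.abs(lookahead_limiter(x))))  # never exceeds 0.9
```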
 

It depends on your goals also. The way I see it, these days you have a couple of seconds to hook the listener. Whatever point in the playback of the final master the listener happens to click on for a quick preview, even if it is a "low point", that point still needs to be so high in quality that you really have to be extreme about what you allow into the mix. For me that means the artifacts that take place in the time dimension of a software mix are too many artifacts.

With equalizers/compressors/limiters I can't allow a peak on the leading edge of the transients to gradually build up from the various software filters; I really want the circuitry of the hardware to counter all of that, which it also does. So for me it is mostly the time-domain artifacts that keep me from doing much in software. I have to have a high-quality time dimension so that I get the depth and warmth I need from the hardware delays, but I also need it to get a great sense of the rhythm of the track; the rhythm is what makes the listener feel the song is cool. It also makes the track sound more "real", for lack of a better term, and when that "real" hits the listener it is a temporary minor anti-gravity effect and a sense of great warmth.
 
Do you use an analog tape machine for delays? If you're using regular digital delay hardware, you're only adding two more converters to the signal path compared with the simple delay plug-in provided by your DAW.
The only known time-domain artifact is the pre-ringing produced by linear-phase filters (FIR). Minimum-phase filters (IIR) behave exactly like their analog counterparts. Poorly designed dynamics processors suffer from aliasing problems; that's history now for most plug-in developers.
The slew rate of the signal is ENTIRELY determined by the digital sampling rate. Processing in the analog domain can't change this limitation.
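
The FIR/IIR point is easy to check with a short SciPy sketch (filter order and cutoff chosen arbitrarily): the linear-phase FIR's impulse response is symmetric around its group delay, so it rings before the main peak, while the minimum-phase IIR only rings afterwards, like its analog counterpart:

```python
import numpy as np
from scipy import signal

sr = 48000
fir = signal.firwin(257, 2000, fs=sr)  # linear-phase FIR lowpass: symmetric taps
b, a = signal.butter(4, 2000, fs=sr)   # minimum-phase IIR lowpass (Butterworth)

impulse = np.zeros(1024)
impulse[0] = 1.0
fir_ir = signal.lfilter(fir, [1.0], impulse)
iir_ir = signal.lfilter(b, a, impulse)

# The FIR peak sits at the centre of the symmetric taps (pre-ringing before it);
# the IIR peak sits right at the start, with ringing only after it.
print(np.argmax(np.abs(fir_ir)))   # 128
print(np.argmax(np.abs(iir_ir)))   # close to 0
```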
 