Welp, I just had my conventions about mixing shattered.

Kwizstrumental

extra noobish
I'm new to this place, but figured I would join to get more insight and tips.

Anyway... when making a beat, what I've been doing is using my ears, Sennheiser headphones, and a Behringer Xenyx 802 mixer to judge how loud the kick, snare, hi-hat, samples, instruments, etc. are...

After I get everything the way I want, I export to a file and drop that file into Adobe Audition so I can hit it with Normalize... (I usually would normalize it to 98%). I thought to myself, "Hmm, maybe I should load some songs by people I listen to and check their levels to use as a reference." I loaded up a song by Cgats ("Remember") and a song by Ryuken vs. (Greydon Square and Cgats' collaboration project; the song was "Epoch"). When listening to and watching the levels on "Remember" I saw that everything was hitting red, but nothing was really distorting. I have always thought that red is bad, that you don't ~ever~ want to go red. But the song sounded just the same as it did when listening to it on my mp3 player or on my computer.

Loaded up "Epoch" and looked at that one; this one had everything flattened but was hovering just around -2 dB on the meter. No red, but both songs' volumes, as far as the beat and the vocals go, sound pretty much the same... I felt pretty baffled by this and went to Google to ask what the standard is for normalizing tracks, and wound up at an Ableton forum where the consensus is that you actually don't want to use normalization for the final mix; you want to use a Hard Limiter instead.

I am familiar with the Hard Limiter, but I have always just used that to flatten out vocal snippets, like for dialogue samples. I'd normalize the dialogue sample to 98%, hit it with a Medium setting in Hard Limiter, then normalize it to something like 50%, so that when I'd drop the dialogue snips into the multi-track in Audition (I'd use Pro Tools, but I'm not on a Mac and I don't have an M-Box, so Audition it is), they'd all have a pretty uniform volume. In my usual experience, I wouldn't use Hard Limit on a beat, because it made stuff sound kinda flat. But I guess that is what everyone else does and is what you're ~supposed~ to do?

What is correct? What are the steps?

If I hard limit a track ~now~ will it cause problems later on when the vocalist adds their lyrics, is stuff going to clip or be hard to hear?

And I did hit my own track with Hard Limit, then decided to use Normalize to "0 dB" instead of the usual "98%" - when I did that, my track was hitting red, which bugs me... it's still ingrained in my brain that "red is bad, red means clipping and you don't wanna clip." Decided to do 99%, just so it wouldn't be hitting red all the time...

Anyway, yeah. I just want to know what standard procedure is and ~why~ hard limiting is preferred over just normalizing the "final mix" up to 98% or whatever. Like, is using the Hard Limit going to negatively affect things like the dialogue samples, or prevent the vocals from meshing well with the track later on? I always thought that Normalize was better because it preserved the "dynamics."
 
When listening to and watching the levels on "Remember" I saw that everything was hitting red, but nothing was really distorting. I have always thought that red is bad, that you don't ~ever~ want to go red. But the song sounded just the same as it did when listening to it on my mp3 player or on my computer.

Sounds like you're listening to an mp3 that's been converted from a mastered .wav file.
An mp3 that's been converted from a high-gain .wav file will produce intersample peaks, causing meters to go into the red.
Mp3 conversion will also apply a HPF and LPF, so on a 192 kbps mp3 everything above 16-17 kHz is usually gone.
Chances are that the mastered .wav file of the same song doesn't peak into the red.
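If you want to see this for yourself, here's a rough Python/numpy sketch of the idea: oversample the decoded file and compare the plain sample peak to the approximate reconstructed (true) peak. The file name is just a placeholder, and this is only an approximation of a real true-peak meter.

```python
# Rough sketch of spotting intersample peaks, assuming a 16-bit or float PCM .wav.
# "mix.wav" is a placeholder, not a real file from this thread.
import numpy as np
from scipy.io import wavfile
from scipy.signal import resample_poly

rate, data = wavfile.read("mix.wav")
x = data.astype(np.float64)
if data.dtype == np.int16:
    x /= 32768.0                      # scale 16-bit samples to -1.0 .. 1.0

sample_peak = np.max(np.abs(x))
true_peak = np.max(np.abs(resample_poly(x, up=4, down=1, axis=0)))  # 4x oversampling

def dbfs(p):
    return 20 * np.log10(p)

print(f"sample peak:       {dbfs(sample_peak):+.2f} dBFS")
print(f"approx. true peak: {dbfs(true_peak):+.2f} dBFS")  # can exceed 0 dBFS
```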


If I hard limit a track ~now~ will it cause problems later on when the vocalist adds their lyrics, is stuff going to clip or be hard to hear?
It's best to deliver a 2-track for overdubs that hasn't been hard limited, although you might want to make a copy of the track with a limiter on it to shop the beat around.

Any distortion will accumulate down the line, so if the beat is hard limited, then has vocals overdubbed on top, then is mastered again, then converted to lossy, it's not always a great outcome, and it ties the hands of the engineer doing the overdubs, mixing, and final mastering. gl
 

I see --- keep one version that has gone through the Hard Limiter, as a way to give the listener an idea of what the actual end sound of the beat will be without a vocal track, but keep a backup version that ~hasn't~ gone through the Hard Limiter, so that when the vocals are added, it won't have accumulated distortion.

Now, another question - is it better to Hard Limit a vocal track to squash it flat ~before~ adding it into the Multi-track session in Adobe Audition, or is it better to leave the vocals just normalized to 50% and judge their volume and relation to the beat - then apply Hard Limiter to the whole mixdown in post after it's been exported?

Anyway...

I think I've been learning ~a lot~ ... up until Den Kokoro and Zpu-Zilla suggested that I use side-chaining to duck the bassline under the kick drum, I had no frickin' idea you were supposed to do that. >.>

I made a file for myself to help me remember the whole process for Sidechaining in Reason.
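Since side-chaining came up, here's a very rough numpy sketch of the concept only (this is not how Reason implements it, and the kick/bass arrays are just dummy signals): follow the kick with a simple envelope, then use that envelope to pull the bass down whenever the kick hits.

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr

# Dummy material: a 55 Hz bassline and a kick "thump" that hits twice a second.
bass = 0.5 * np.sin(2 * np.pi * 55 * t)
kick = np.exp(-40 * (t % 0.5)) * np.sin(2 * np.pi * 60 * t)

# Envelope follower on the kick: fast attack, slower release.
env = np.zeros_like(kick)
attack, release = 0.01, 0.0005
for i in range(1, len(kick)):
    target = abs(kick[i])
    coeff = attack if target > env[i - 1] else release
    env[i] = env[i - 1] + coeff * (target - env[i - 1])

# Duck the bass by up to ~12 dB while the kick envelope is high.
gain = 1.0 - 0.75 * (env / env.max())
ducked_bass = bass * gain

print("bass gain at a kick hit:     ", round(gain[int(0.02 * sr)], 2))
print("bass gain between kick hits: ", round(gain[int(0.4 * sr)], 2))
```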

I also wrote down each of my individual steps I took for mastering the track and saved it to a TXT so I'd have it to remind me what I did so I don't forget.

I can't post links, since I'm a newb here, but if you wanna hear my track, search Kwizstrumental - Thanatos on Soundcloud.
 
Now, another question - is it better to Hard Limit a vocal track to squash it flat ~before~ adding it into the Multi-track session in Adobe Audition, or is it better to leave the vocals just normalized to 50% and judge their volume and relation to the beat - then apply Hard Limiter to the whole mixdown in post after it's been exported?

Don't squash the vocals with a limiter. If you need to get a more consistent volume or an overall reduction in dynamic range for a vocal, just use compression. Apply limiting as a final step to the entire track.

Also, is the song you are referring to actually clipping? Sometimes meters in DAWs can look like they are clipping when approaching or at 0 dB. In Adobe Premiere Pro, my tracks that are mastered and peaking at 0 dB show up as clipping despite never going above 0 dB (for any file type). I'm assuming the same happens in Audition, but I don't use it. My thought is that it's a "noob-proof" way of preventing clipping in the conversion or export to formats like mp3. I use FL Studio, and this is what's done with its native limiter, whose "0 dB ceiling" is actually -0.1 or -0.2 dB.
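For what it's worth, here's a quick Python sketch of how those Audition-style "percent of full scale" normalize values map to dBFS; it shows why normalizing to 98-99% and a -0.1 or -0.2 dB limiter ceiling land in roughly the same place. It's just the conversion formula, nothing specific to either program.

```python
# Convert between "percent of full scale" and dBFS (and back).
import math

def pct_to_dbfs(pct):
    return 20 * math.log10(pct / 100.0)

def dbfs_to_pct(db):
    return 100.0 * 10 ** (db / 20.0)

for pct in (98, 99, 100):
    print(f"{pct}% of full scale = {pct_to_dbfs(pct):+.2f} dBFS")
for db in (-0.1, -0.2):
    print(f"{db} dBFS = {dbfs_to_pct(db):.1f}% of full scale")
```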
 

"Don't squash the vocals with a limiter."

I assume that this should also be the case for the dialogue samples? I guess I will try making another version with all the individual dialogue snips just normalized to 50% and ~not~ hard limited, and see how that works out.

As for the question about clipping, my own track in the Audition multi-track view wasn't clipping, but everything else I was listening to was... D12 - Devil's Night, Muse - Supermassive Black Hole... apparently everybody sets their mix to 0 dB, which drives me nuts cuz I hate seeing it "clip" even if it isn't actually clipping / distorting...

So I guess the lesson to learn here is to ~not~ squash anything until the final version (after dialogue snips and vocals), and keep a backup of the non-Hard Limited version around, but use the Hard Limited version as the "shop" mix.

I'm still unsure of whether I should use the 85.9% normalization that was apparently used on Ryuken vs. "Epoch" (I'm weird, I kept changing the normalization by trial and error until I found a value that didn't change the file, so I could determine what level of normalization whoever mixed the song used, or at least a close approximation of it) or whether to go for 0 dB. I kinda prefer the former, as I don't see it go red.

[Attachment: Epoch.png]

[Attachment: Devils_Night.png]
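If it helps, rather than hunting for the value by trial and error, you can just read the peak level of the reference file directly. Here's a minimal Python sketch; the file name is a placeholder, and it assumes a 16-bit or float .wav.

```python
# Report the sample peak of a reference file in dBFS and as a percent of full scale.
import numpy as np
from scipy.io import wavfile

rate, data = wavfile.read("reference.wav")   # placeholder file name
x = data.astype(np.float64)
if data.dtype == np.int16:
    x /= 32768.0                             # scale 16-bit samples to -1.0 .. 1.0

peak = np.max(np.abs(x))
print(f"sample peak: {20 * np.log10(peak):.2f} dBFS  ({peak * 100:.1f}% of full scale)")
```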
 
- The tracks you showed look terribly over-compressed - you lose sonic integrity to get things that loud

- The loudness war is nearly over - YouTube, iTunes, and Spotify all reduce the volume of songs that are too loud, meaning you add distortion and cloudiness (by pushing the level too far) for nothing

- I would never use a limiter within a project - limiting is saved for the last step in mastering, after the whole project is finished

- I would never use the volume normalization feature at all - try not to view it as how your songs should relate to the maximum volume: view it as balancing a mix to sound good, then boosting the volume of that mix in mastering until you reach diminishing returns

What to do:
- Compress an instrument if the volume of that instrument is a little inconsistent (automate volume if it's very inconsistent)
- Balance the volume of all the instruments according to your tastes, not according to a mathematical formula
- Compress a group of instruments together in a bus if you want them to sound more glued together
- If you're making your own master, lightly compress the whole mix with a slow attack and slow release (multiband compressor could be desirable here)
- If you're making your own master, lightly limit the whole mix (see the rough sketch after this list)
- Save 1 dB at the top, to give room for mp3 converters to not clip during conversion
- Aim to never have one compressor or limiter apply more than 3 dB of gain reduction
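Here's the rough sketch mentioned above, just to illustrate the normalize-vs-limit difference in plain numpy: peak normalization only rescales the mix (dynamics untouched), while limiting pushes extra gain into a ceiling and flattens whatever sticks out. np.clip is a crude stand-in for a real mastering limiter, and the "mix" is random noise rather than actual audio.

```python
# Minimal sketch of "normalize vs. limit"; nothing DAW-specific here.
import numpy as np

def db_to_lin(db):
    return 10 ** (db / 20.0)

def peak_dbfs(x):
    return 20 * np.log10(np.max(np.abs(x)))

mix = np.random.uniform(-0.5, 0.5, 44100)        # stand-in for a mixdown

# Normalize: scale so the loudest sample lands exactly at -1 dBFS.
normalized = mix * (db_to_lin(-1.0) / np.max(np.abs(mix)))

# Limit: push 3 dB more gain into the same -1 dBFS ceiling and clamp.
ceiling = db_to_lin(-1.0)
pushed = normalized * db_to_lin(3.0)
limited = np.clip(pushed, -ceiling, ceiling)

print(f"normalized peak:    {peak_dbfs(normalized):+.2f} dBFS")                 # -1.00
print(f"limited peak:       {peak_dbfs(limited):+.2f} dBFS")                    # -1.00
print(f"max gain reduction: {peak_dbfs(pushed) - peak_dbfs(limited):.2f} dB")   #  3.00
```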
 
I tried to use Pro Tools once (well, more than once, since we had to use it for our assignments); I got interested in it because the Electronic Music class in the MIDI lab at my community college was using it (8 LE, I think?), along with Reason 4.0 and Ableton 8. I went on eBay and bought one of those Mbox things like they had in the MIDI lab, hooked that up, got Pro Tools on Windows, and tried to output audio from my PC into the Mbox and then listen to that through headphones... the audio was absolutely bereft of any low end, so I was like "eh, never mind." This, of course, was also before I got a Behringer Xenyx 802 (the stations in the MIDI lab all had Eurorack MX602A mixers; I had never heard my Sennheisers produce actual head-vibrating bass until I was in that classroom), so maybe if I tried hooking it up again, I'd actually have bass this time... I dunno what I did with the Mbox, I think I wound up giving it to my teacher cuz I didn't think I had a use for it... either that or it's in a box somewhere.

Pro Tools did run somewhat okay on my computer, but the GUI is quite overwhelming.
 
"I would never use a limiter within a project - limiting is saved for the last step in mastering, after the whole project is finished."

I took Adrian's advice and went back and re-did all my dialogue snippets without the hard limiting. I'd post the link to my completed mix, but I'm a newb on this forum so I don't have that privilege yet. (Kwizstrumental on Soundcloud.)
 
DAWs don't impart a signature sound. Pro Tools doesn't sound inherently better or worse than other DAWs.

The Pro Tools interface has changed a fair amount from 8 LE, but you may still find it intimidating. It may take a little longer to learn than some easier platforms. I hear FL Studio is easy to learn, though I'm glad that's not my DAW of choice.


Bass content is dramatically affected by your speakers or headphones. Proper amplification contributes (including your headphone amplifier for your headphones), and converters may contribute to a degree as well. Working through a crappy Mbox (and I had one) doesn't make your final product sound worse, but it may subtly change the way you perceive it while working on it, influencing you to make different decisions.

I think your bass and Mbox experience are poor reasons to discard Pro Tools. But not caring for the interface, preferring an easier learning curve, or biasing your workflow more towards virtual instruments over audio tracks are all valid reasons.
 
"Bass content is dramatically affected by your speakers or headphones." ~ Up until I plugged my Sennies into the Eurorack mixers at school I had never heard ~real~ bass out of them, I was always plugging them in to my computer's main jack, so I'd never heard them the way they were supposed to be, with power. The Xenyx 802 makes mixing, and listening to music in general, a lot more enjoyable. As for the M-Box, I think I may have given that to Clay (my teacher at Clackamas Community) - I tried looking for it in my boxes of stuff but couldn't find it, oh well. In retrospect, after experiencing the sound from the Xenyx 802, I wish I had kept it and gave Pro Tools another shot. I know PT has some good stuff in it that Adobe Audition doesn't, but fer the most part now, I'm just creating / mixing my beats in Reason and Ableton and using Audition for adding any dialogue samples I want.

"I hear FL Studio is easy to learn, though I'm glad that's not my DAW of choice."

Ah... FL Studio, I tried that once also; I couldn't figure out where / how to place samples until I looked up a tutorial, and even after that, it was still confusing and not straightforward.
 