Kwizstrumental
I'm new to this place, but figured I would join to get more insight and tips.
Anyway... when making a beat, what I've been doing is using my ears, Sennheiser headphones, and a Behringer Xenyx 802 mixer to judge how loud the kick, snare, hi-hats, samples, instruments, etc. are...
After I get everything the way I want, I export to a file and drop that file into Adobe Audition so I can hit it with Normalize (I usually normalize to 98%). I thought to myself, "Hmm, maybe I should load some songs by people I listen to and check their levels as a reference." I loaded up a song by Cgats ("Remember") and a song by Ryuken vs. (Greydon Square and Cgats' collaboration project; the song was "Epoch"). Listening to "Remember" and watching the meters, I saw that everything was hitting red, but nothing was really distorting. I had always thought that red is bad, that you don't ~ever~ want to go red. But the song sounded just the same as it did on my mp3 player or my computer.
Loaded up "Epoch" and looked at that one: everything was flattened but hovering just around -2 dB on the meter. No red, but both songs' volumes, beat and vocals alike, sound pretty much the same... I felt pretty baffled by this and went to Google to ask what the standard is for normalizing tracks. I wound up at an Ableton forum, and the consensus there is that you actually don't want to use normalization for the final mix; you want to use a Hard Limiter instead.
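For anyone following along, "normalize to 98%" here just means peak normalization: scale the whole file by one factor so its loudest sample lands at 98% of full scale. This isn't Audition's actual implementation, just a minimal numpy sketch assuming float samples in the -1..1 range:

```python
import numpy as np

def peak_normalize(samples, target=0.98):
    """Scale the whole signal so its single loudest peak sits at `target`
    (98% of full scale, mirroring the Audition setting)."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return samples  # silence: nothing to scale
    return samples * (target / peak)

# A quiet mix whose loudest sample is 0.5:
mix = np.array([0.1, -0.5, 0.3])
out = peak_normalize(mix)
# Every sample gets the same gain, so the gaps between loud and quiet
# parts (the "dynamics") are preserved exactly.
```

Because it's one uniform gain change, the loudest peak dictates how far everything else can come up, which is why a track with one big drum hit can still sound quiet after normalizing.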
I am familiar with the Hard Limiter, but I have always just used it to flatten out vocal snippets, like dialogue samples. I'd normalize the dialogue sample to 98%, hit it with the Medium preset in Hard Limiter, then normalize it to something like 50%, so that when I dropped the dialogue snips into the multitrack in Audition (I'd use Pro Tools, but I'm not on a Mac and I don't have an M-Box, so Audition it is), they'd all have a pretty uniform volume. In my experience, I wouldn't use Hard Limit on a beat, because it made stuff sound kinda flat. But I guess that is what everyone else does and is what you're ~supposed~ to do?
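That normalize → limit → normalize chain can be sketched in numpy. The clamp below is a crude brick-wall limiter, not Audition's Hard Limiter (which uses look-ahead gain reduction rather than clipping), but the effect on dynamics is the same idea: peaks get squashed, quieter material passes through, and afterwards every clip can be set to the same 50% peak. The 0.7 ceiling is an arbitrary stand-in for the Medium preset:

```python
import numpy as np

def peak_normalize(samples, target):
    peak = np.max(np.abs(samples))
    return samples if peak == 0 else samples * (target / peak)

def hard_limit(samples, ceiling=0.7):
    # Crude brick-wall: clamp anything past the ceiling.
    return np.clip(samples, -ceiling, ceiling)

def level_dialogue(clip):
    # The chain from the post: normalize to 98%, limit, normalize to 50%.
    return peak_normalize(hard_limit(peak_normalize(clip, 0.98)), 0.50)

a = np.array([0.02, -0.9, 0.1])   # mostly quiet, one loud spike
b = np.array([0.4, -0.45, 0.38])  # uniformly medium level
# After the chain, both clips peak at exactly 0.50, which is why the
# snips sit at a uniform volume on the multitrack.
```

It also shows why limited material sounds "flat": clip `b` ends up with all three samples pinned at the ceiling before the final normalize.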
What is correct? What are the steps?
If I hard limit a track ~now~, will it cause problems later on when the vocalist adds their lyrics? Is stuff going to clip or be hard to hear?
And I did hit my own track with Hard Limit, then decided to Normalize to "0 dB" instead of the usual "98%" - when I did that, my track was hitting red, which bugs me... it's still ingrained in my brain that "red is bad, red means clipping, and you don't wanna clip." Decided to do 99%, just so it wouldn't be hitting red all the time...
Anyway, yeah. I just want to know what the standard procedure is and ~why~ hard limiting is preferred over just normalizing the "final mix" up to 98% or whatever. Is using the Hard Limit going to negatively affect stuff like the dialogue samples, or keep the vocals from meshing well with the track later on? I always thought Normalize was better because it preserved the "dynamics."
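To make the question concrete: the usual argument for limiting is that once the peaks are tamed, the rest of the mix can be turned up further, so the *average* level rises even though both versions peak at the same 98%. A toy comparison (made-up sample values, simple clamp standing in for a real limiter):

```python
import numpy as np

# A toy "mix": body around 0.3 with drum peaks at 0.9.
mix = np.array([0.3, -0.3, 0.9, 0.3, -0.9, 0.3])

# Option A: normalize only. Peaks go to 0.98; the body rises in proportion.
norm_only = mix * (0.98 / np.max(np.abs(mix)))

# Option B: limit the peaks first, THEN normalize. The drum hits no longer
# "use up" the headroom, so the body comes up further.
limited = np.clip(mix, -0.6, 0.6)
limit_then_norm = limited * (0.98 / np.max(np.abs(limited)))

avg_a = np.mean(np.abs(norm_only))
avg_b = np.mean(np.abs(limit_then_norm))
# avg_b > avg_a: limiting trades peak dynamics for a louder average level,
# which is also exactly why heavily limited material sounds "flat".
```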