What's The Point In Layering?

dannydawiz

New member
What's the point in layering instruments?

Is there a point in layering two of the same sounds?

For example... I have a synth right now that needs more highs. What's the difference between just adding more highs with an EQ vs making a duplicate track, filtering out the lows, and using the volume fader to increase the highs on the layer?

Or another situation...

What's the difference between duplicating a track as a layer vs just using the OSC2 function in your synth and programming it that way?
 
For example... I have a synth right now that needs more highs. What's the difference between just adding more highs with an EQ vs making a duplicate track, filtering out the lows, and using the volume fader to increase the highs on the layer?

This isn't strictly necessary, and yes, the process you describe isn't really useful on its own. That said, boosting frequencies with an EQ is never a good idea in my book: plenty of people like to boost, but it adds information and artifacts that aren't there in the source. Turning everything else down and pushing the fader up gives you a higher quality sound.
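To put some rough numbers on the fader-vs-boost question, here's a minimal sketch (the two-band split and the 6 dB figure are just made-up values for illustration): in a purely linear digital chain the two moves come out numerically the same, so any practical difference is about gain staging into whatever non-linear stages come afterwards.

```python
# Hypothetical two-band comparison: "+6 dB on the highs" vs
# "-6 dB on the lows, then +6 dB on the channel fader".
def db_to_gain(db):
    return 10 ** (db / 20.0)

low, high = 1.0, 1.0  # arbitrary starting levels for the two bands

# Option A: boost the highs by 6 dB with an EQ
a_low, a_high = low, high * db_to_gain(+6)

# Option B: cut the lows by 6 dB, then push the whole channel up 6 dB
b_low = low * db_to_gain(-6) * db_to_gain(+6)
b_high = high * db_to_gain(+6)

print(a_low, a_high)   # 1.0, ~2.0
print(b_low, b_high)   # 1.0, ~2.0  (same balance and levels in a linear chain)
```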

Layering encompasses the use of octaves, intervals, detunes and whatever else you can think up. Using the same instrument and same performance will do nothing but make the sound louder. The purpose of layering is to fill up the spectrum. For example, a single lead high up can sound pretty boring and isolated from the track. Playing the same melody an octave lower can fill out that melody, make it more powerful and simultaneously glue it to the rest of the track for that "full" sound.

As for using a second oscillator in a synth: that's a great idea! Frankly, a synth only running one octave gets boring, so I'd recommend doubling up (or quadrupling, depending on the synth) to enrich and thicken the sound, or detuning one oscillator slightly. Whatever you wish, it's creative freedom. And two oscillators running different waveforms at different phases can in turn be duplicated and layered at different octaves for another full-sounding effect.
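Here's a tiny sketch of what that octave doubling and slight detune look like in signal terms (the note, the 10-cent detune and the mix levels are arbitrary numbers purely for illustration):

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr  # one second

def saw(freq, t):
    # naive (aliasing) sawtooth - fine for a quick illustration
    return 2.0 * ((freq * t) % 1.0) - 1.0

f = 440.0                                   # the lead note
lead      = saw(f, t)
octave_dn = saw(f / 2, t)                   # same line an octave lower
detuned   = saw(f * 2 ** (10 / 1200), t)    # +10 cents for width/thickness

# sum the layers at whatever balance you like, then normalise
layered = (lead + 0.7 * octave_dn + 0.5 * detuned) / 2.2
```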

It's all creative freedom, but too full absolutely does exist and it is incredibly easy to get caught up with too many sounds going on.
 
What's the point in layering instruments?

Is there a point in layering two of the same sounds?

For example... I have a synth right now that needs more highs. What's the difference between just adding more highs with an EQ vs making a duplicate track, filtering out the lows, and using the volume fader to increase the highs on the layer?

I wouldn't do either of those, as you won't really be boosting the highs the way you want, only adding distortions. Complementary EQ, as in the second half of your suggestion, does work, but it needs to be more than just the same sound with the volume turned up and the lows cut out.

Or another situation...

What's the difference between duplicating a track as a layer vs just using the OSC2 function in your synth and programming it that way?

duplication is never just about exactly the same notes with the same sound or the same sound with an additional oscillator (which can add harmonics but maybe not in the way you are seeking)

more simply

layering is orchestration

- different or the same sounds playing in different octaves (or some interval harmonically related to the underlying chord progression) but at different points in the stereo spread

e.g. a flute could be doubled at the octave above by the piccolo and the octave below by the clarinet to create a much more interesting composite tone, which will sound very different to two flutes playing the same notes, or even a flute and clarinet playing the same notes - the piccolo adds the high sparkle and the clarinet adds a low foundation

so explore complementary sounds (sounds with different characteristics) and complimentary sounds (sounds that assist each other)
 
Why is it then that whenever I see people layering sounds they always use a separate track for the layer?

For example...

Let's pretend this is one synth on ONE track.

OSC1 - Saw Wave -1
OSC2 - Square Wave 0

Now let's pretend we have two synths on TWO tracks.

Track #1 - OSC1 - Saw Wave -1
Track #2 - OSC1 - Square Wave 0

What exactly is the difference between these two sounds? If all layering can essentially be done on ONE track using the OSC2 and OSC3 functions, why do I always see people using separate tracks for their layers?

EDIT: I suppose you can rephrase this question.

What's the point in layering BY using a completely separate track vs using the OSC1/2/3 functions?
 
different tracks allow you to tailor your eq (complementary eq - taking stuff away) for each of the sounds.

layering using 3 oscs lets you build a complex sound, but you may or may not be able to filter (simple eq) each part so that it stands out - massive has two filters that can be run in parallel or series, and you can route the output of each osc to either filter or both; at the end of the day, though, you still only have two fairly basic filters to sculpt the sound in a rough manner. better to have three separate channels and apply appropriate eq to each one
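as a rough illustration of the "separate channels, separate eq" point, here's a minimal sketch (numpy/scipy, with made-up layers and a made-up 500 Hz crossover) where each layer gets its own carve before they're summed:

```python
import numpy as np
from scipy.signal import butter, sosfilt

sr = 44100
t = np.arange(sr) / sr
rng = np.random.default_rng(0)

# two hypothetical layers: a low sine and a bright noisy stand-in
layer_a = np.sin(2 * np.pi * 110 * t)       # low layer
layer_b = 0.3 * rng.standard_normal(sr)     # bright layer

# complementary eq on separate channels: carve space rather than boost
keep_lows  = butter(4, 500, btype="lowpass",  fs=sr, output="sos")
keep_highs = butter(4, 500, btype="highpass", fs=sr, output="sos")

mix = sosfilt(keep_lows, layer_a) + sosfilt(keep_highs, layer_b)
```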
 
That clarifies my question, thank you. :)

On a side note... when layering sounds, do you generally give each sound its own EQ spectrum, or is it acceptable to share an EQ spectrum between two sounds?

For example...

Sound 1: 500-1000 Hz
Sound 2: 500-1000 Hz
 
the only answer I can give is that each track and set of sounds will be different - there are no hard and fast rules on what to do with your eq, only general guidelines, the main one being: take away what you don't need from the other channels rather than add what you do need in one channel (doing the latter over many channels will only make a mess, in my experience)
 
I think the "classic" take on layering is simply to combine two distinctly different timbres into one sound - a snappy kick and a boomy kick, for example, into a snappy and boomy kick (for a very simple example). Layering two instances of the same synth, one osc each isn't much different from just using two oscs in the first place (although this'll depend a lot on the synth in question), unless, as pointed out, you wanna process them completely differently.
 
There's nothing wrong with EQ boosts if you know what you're doing and like the resulting sound. With enough headroom and a bit of limiting, you can get a certain type of sound that you can't get from attenuation. People try to invent rules all the time, but if boosting was bad, they'd make it impossible to do in all the plugins and consoles and DAWs. Beware the urban myths.
 
As great as layering can be, it's important to work your layers into the mix in a fairly minimal fashion. Layering can add great width to your sounds, especially if the sounds are ever so slightly different and it creates a small sense of phasing. Go too far, though, and you end up with music and beats that are closer to noise than pieces of art.
 
There's nothing wrong with EQ boosts if you know what you're doing and like the resulting sound. With enough headroom and a bit of limiting, you can get a certain type of sound that you can't get from attenuation. People try to invent rules all the time, but if boosting was bad, they'd make it impossible to do in all the plugins and consoles and DAWs. Beware the urban myths.

I agree to a point; however, almost all pro engineers say the same thing: "use eq to take stuff away before you try to add stuff - each layer can be eq'd/bandpassed to add to the whole". They (and I) say this because the anomalies (phase shifts and resonant bumps) added by subtractive eq are less noticeable than those added by additive eq.
 
I think you have to make a couple of distinctions, or else the advice you get will be misleading. The first is: is this sound design or mixing? And the second: what is the content?

Since you're talking about changing the synth patch, it sounds like this is more of a sound design question. In that case the advice of sound engineers is irrelevant. They're speaking from the perspective of taking given content and making it sound better without changing its essence. That's completely different from sound design; it just doesn't apply. It also overlooks what type of content those engineers are talking about. They might be (and probably are) talking about recorded instruments... which they may well have miked themselves, choosing the recording environment and parameters to get the sound they want. That could have added 6 dB of anything. So it's dangerous to take advice from recording and mix engineers without knowing their full workflow. I'm pretty sure I've heard Eddie Kramer talk about doing loads of positive EQ on a mix... in a lecture. Anyway...

Secondly, it depends on what the content is - for example, a vocal versus a synth. With a vocal you may well worry about subtle phase problems being introduced, because you might want the vocal to sound natural (or not), but worrying about what an EQ might do to a sound that came out of a synth full of filters jumping all over the place is kinda bizarre. It's a synthetic sound created by those exact distortions. Also, going back to what I said above about recording environments: those environments are there to add phase distortions and resonances; that's a lot of what reverberation is.

As mentioned, the one thing you definitely need to listen out for is beating. It sounds awful and makes mixing harder, I think. It's one of the reasons I don't like chorus plugins... that said, layering sounds is one way I'd avoid using them. If you're layering high frequencies then you might not have to worry about them.
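For what it's worth, here's a minimal sketch of the beating itself (the frequencies are arbitrary): two layers meant to be the same note but a few Hz apart sum to a signal whose level pumps at the difference frequency.

```python
import numpy as np

sr = 44100
t = np.arange(sr * 2) / sr          # two seconds

# two layers nominally on the same note, 3 Hz apart
a = np.sin(2 * np.pi * 220.0 * t)
b = np.sin(2 * np.pi * 223.0 * t)

mix = a + b
# the combined level rises and falls |223 - 220| = 3 times per second,
# which is the slow pulsing you hear when detuned layers beat against each other
```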
 