Some advice from a pro

DarkRed

New member
Ok, this thread was created purely to help engineers to suffer less with their mixing and mastering. As a pro I read and see tons of stuff that is simply turning guys that are otherwise good engineers, into newbies. In this thread I will list some of the misconceptions I've noticed out there over the years, that simply make productions bad sounding.

- Volume faders are no EQs

The number one cause for bad sounding productions out there, is the volume fader. Some smart ass engineer out there thought that EQs cause phase, hence you should use the volume fader instead. That is the worst idea ever. A volume fader is a volume fader. An EQ is an EQ. In fact, you use the EQ so that you do not have to use the volume fader.

- Expanders are more important than compressors

Compressors have become the number one topic among engineers. But pros like me focus on something else. Expanders. And the difference: Compressors make mixes more dense which causes a more heavy listening experience. Expanders make mixes less dense, which causes a less heavy listening experience. Guess which one of those impacts win... You guessed it right!

- Volume faders are polarity balancers

Most non pro mixes I know have one thing in common: They do not improve over time. The main reason for this is that the polarities within the mix are left totally unbalanced.

- The left ear and the right ear are not the same

Because engineers assume that the left ear and the right ear are completely the same, what they do is to set the pan knobs of the tracks at a precise fixed position and leave it there. Pros know that the left ear and the right ear are not the same, hence they always automate the pan knobs.

- Dither the final print? Nope...

Dithering the final signal is like putting the death touch on the mix. Whoever invented the dither for printing the final master must have been evil.

- 1 print per playback format, not 1 print for all playback formats

Engineers might spend tons of time with stuff prior to the final print. Then comes time to print the master and all of a sudden the engineers turn incredibly sloppy. Do not when you are done with the master suddenly waste all of your transients on the entire mix by not printing explicitly to the target playback format.

- Work at highest possible sample rate? Of course...

If phase is not on your mind yet, it will be...

- Hardware kisses your signal, software eats your signal

Newbies think like this: Since hardware is expensive, software sounds good. Honestly. If you are in a great sounding recording room and you compare a grand piano vs. a 4 MB 16-bit PCM sound, 100% of the persons asked will say the grand piano sounds more beautiful. It is the same with hardware vs. software. Don't use software to create a beautiful sound, when it is hardware that does that.

And the final advice...

- We focus on silence, not on loudness
 
Hmmmmm. Going to need more than just your word on what "pro engineers" do. Maybe concrete rationale or actual evidence? Example: you say dithering is "putting the death touch" on a mix, but you don't say why. Without elaboration, just about your whole post comes off as your own ideas as opposed to something that others in the industry know/utilize.

On dithering, it actually does serve a purpose....

On software vs hardware (specifically your comment on not using digital to obtain a beautiful sound)... I'm not even sure how to respond to that without sounding somewhat snobby. Just remember that "beauty" is based on preference and perception. Both of those vary from person to person.
 
+1
 
i have a bunch of these i collected last month from a few blog posts. the artists who said them are in bold. have fun :

2. “Avoid redundancy – There’s no need to have two simultaneous chorded instruments with loud highs… having snappy, fast decaying highs on rhythmical elements can go a long way.” – Madeon

3. “Good engineering comes with time. Meaningful composition should come first.” – Madeon

4. “You can use Ozone’s stereo imaging and take frequencies above seven thousand [7 kHz], or even a little bit lower, and you can widen everything up there, so that the mix starts to sound a lot wider.” – Skrillex

5. “It’s all about the three pieces that make a really nice drum sound. You need a nice transient in the beginning, and then the note around the 200-hertz frequency that gives it that boof, and then a tail, which can be anything.” – Skrillex

8. “TAKE BREAKS. come back to it with fresh ears. thats when you will notice if things arent right. I take short breaks all the time and then listen back, sometimes I will even take notes during the first listen after a break.” – Seven Lions

10. “DEPICT THE THINGS THAT YOU LIKE. SERIOUSLY! TAKE ALL THE ASPECTS OF YOUR TASTE AND COMBINE THEM. I KNOW THAT MAY SEEM OBVIOUS OR TOO SIMPLE, BUT IT’S WORTH EMPHASIZING BECAUSE HONESTY IN ART IS SO FUNDAMENTAL THAT IT’S OFTEN OVERLOOKED.” – Porter Robinson

13. “getting the relative volume levels of each instrument correct is a more important task than EQing. new producers often prefer a sound after it’s been EQed and in many cases it’s only because the levels have changed” – Porter Robinson

14. "with synths you can compress the living daylights out of synths and just lift it. You can just play 'em as loud as you can and it still won't over clash the music. But the 808's and kicks are really, really important." – Porter Robinson

17. “Here’s something I like to think about concerning loudness… You have a very discrete amount of digital headroom with which to fill before you begin to clip/distort right? So as you begin to layer layer layer layer sounds, essentially you are necessarily bringing the volume of each individual noise DOWN to make room. This has a significant effect on “perceived loudness.” The easiest example to see this in action is to listen to an artist like Arojack — dude often writes tracks that are simply drums and a lead synth. As a result his music is often noticeably “louder” than something like ours (for example) even tho we are both filling up the same amount of digital space. The rule of thumb then… Is simpler is often louder.” – The M Machine

22. “we leave the sound design tricks to the VERY end of the song. it’s the last thing we do. we focus on the melody, harmony, song structure, vibe, transitions… everything. then once that’s good we go in and work on the details to make it sonically come to life” – The Glitch Mob

24. “always solo your tracks and make sure to cut out the low frequency of stuff if it doesnt need to be there. you’d be surprised how many samples or synths have hidden low frequency noise that is mudding up your mixes.” – The Glitch Mob

26. “Every single channel in every track will have EQ, and most will have compression… Subtractive EQing is about the most powerful weapon you have in sound treatment.” – Nick Thayer

27. “parallel compression allows you to draw out different characteristics of a sound and combine them together. For instance say you want a snare to have a very snappy attack, but you also want it to have some body to it. If you parallel compress it with one of the paths set to a 10ms attack and the other set to around 150ms, that enables you to blend body with attack.” – Nick Thayer

31. “use transient shapers! i think my music started sounding good on the moment i started using them” – Nitro Fun

33. “ for the majority of a session the speakers would be at a middle point between loud and quiet, so that it’s not hurting the ears but still creating a vibe.” – Chase & Status

35. “Generally we make… we’ll make an A and a B section, so this might be the breakdown or the general theme of the song, then we’ll make a drop as well – and the drop will be just a lot more stripped back” – What So Not

38. “I usually sit down at the start with either a chord progression or drumbeat, depending on where my head is at the time. If it’s a chord progression, that means the track is probably going to be really focused around the melody of the track.” – Flume

41. Re: how to get crisp sounds, “a good basic sound and then some multiband compression and EQ’ing” – Pegboard Nerds

45. “Headphones are nice, but I wouldn’t recommend doing final mixes on them. We usually check tunes on a laptop too, to hear if the main sounds come across without sub.” – Noisia

46. “1. The lower the frequency the more power it requires in your master bus so cut out any bass not needed. 2. Try to bring out the thing that the track is saying. It might take a while to figure out what this is exactly, but once you do you can focus your whole mixing process around that as opposed to following standard rules.” – Noisia

47. "Send emails. Find promotion channels on YouTube and SoundCloud and send out emails to the ones you think would be interested! It doesn’t hurt, and don’t get discouraged if they don’t reply, keep producing." - Sushi Killer

48. "There are many ways to achieve a wider sounding mix. One of our favourite ways of getting a sound to fill out a wider portion of the stereo field is by using a stereo delay (not good idea on the mix bus but it works well on individual sounds). To do this, pick a stereo delay that lets you control the delay times of both the left and right channels individually. The idea is to set the delay time between the two channels at very small intervals, typically we will go with something like 5ms on the left channel and 8ms on the right. The effect works really great on backing vocals and guitars." - Televisor
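Televisor's stereo-delay trick above is easy to sketch in code. Here's a minimal, hypothetical numpy version: the 5 ms / 8 ms delay times come straight from the quote, but the helper name and the 50/50 wet/dry split are my own illustrative assumptions.

```python
import numpy as np

def widen_with_stereo_delay(mono, sr=44100, left_ms=5.0, right_ms=8.0, wet=0.5):
    """Widen a mono signal by mixing in copies delayed differently per channel.

    Hypothetical helper illustrating the tip: short, unequal L/R delays
    decorrelate the two channels slightly, which the ear reads as width.
    As the quote says, meant for individual sounds, not the mix bus.
    """
    def delay(sig, ms):
        n = int(sr * ms / 1000.0)            # delay length in samples
        return np.concatenate([np.zeros(n), sig])[: len(sig)]

    left = (1 - wet) * mono + wet * delay(mono, left_ms)
    right = (1 - wet) * mono + wet * delay(mono, right_ms)
    return np.stack([left, right])           # shape: (2, n_samples)

# usage: a mono noise burst becomes a slightly decorrelated stereo pair
mono = np.random.default_rng(0).standard_normal(44100)
stereo = widen_with_stereo_delay(mono)
```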
 
There are top level mix engineers who, when you hear their work, make you go "wow, that's nice".
Then you actually see them in the process and they don't know what a second release parameter does on a compressor, and a multitude of other things...
and you think "wtf, how can they be so simple, how do they not know every tool in their craft, how do they do it?"

They have vision. They see sounds bro.
No, but really, they know what sounds good.

Many people also have different hearing test results at specific frequencies. The way they mix is shaped by this as well.

They know what sounds good, and that on the way there, etched-in-stone rules are broken tablets. "Dozer".
 
Just to demonstrate the differences of opinion, I disagree with most (all???) of what was said. I think it’s very safe to say I’m a pro. Lord knows I’ve had enough songs on the radio to qualify. I mix over 100 records a year and produce a dozen more, I'm a voting member of the Grammys, etc. Anyone curious about what I’ve done or who wants to hear what my advice sounds like is welcome to visit my website and nose around: www.vonpimpenstein.com . My comments are in red.

Ok, this thread was created purely to help engineers to suffer less with their mixing and mastering. As a pro I read and see tons of stuff that is simply turning guys that are otherwise good engineers, into newbies. In this thread I will list some of the misconceptions I've noticed out there over the years, that simply make productions bad sounding.

- Volume faders are no EQs

The number one cause for bad sounding productions out there, is the volume fader. Some smart ass engineer out there thought that EQs cause phase, hence you should use the volume fader instead. That is the worst idea ever. A volume fader is a volume fader. An EQ is an EQ. In fact, you use the EQ so that you do not have to use the volume fader.

I would agree that the No.1 cause of bad mixes is the fader, but that’s because 95% of good mixing is good balance. The reason smart ass engineers out there thought EQ causes phase is because IT DOES. That’s how EQ works. You can use EQ so you don’t have to use the volume fader, but more often than not in a good mix you use the volume fader so that you can use less EQ.
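Anyone who wants to verify the "EQ causes phase" point rather than take either poster's word for it can check it numerically. Below is a sketch using the well-known RBJ Audio-EQ-Cookbook peaking (bell) biquad; the 1 kHz / +6 dB / Q=1 settings are my own illustrative choice, not anything from the thread. A boosted bell shows a clear frequency-dependent phase shift; the identical filter set to 0 dB is a wire with none.

```python
import numpy as np

def peaking_eq(f0, gain_db, q, fs):
    """RBJ Audio-EQ-Cookbook peaking (bell) biquad, returned as (b, a)."""
    big_a = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * big_a, -2 * np.cos(w0), 1 - alpha * big_a])
    a = np.array([1 + alpha / big_a, -2 * np.cos(w0), 1 - alpha / big_a])
    return b / a[0], a / a[0]

def phase_response(b, a, n=4096):
    """Phase of the biquad from DC to Nyquist (hand-rolled freqz)."""
    w = np.linspace(0, np.pi, n)
    zinv = np.exp(-1j * w)                   # z^-1 on the unit circle
    h = (b[0] + b[1] * zinv + b[2] * zinv**2) / \
        (a[0] + a[1] * zinv + a[2] * zinv**2)
    return np.angle(h)

# +6 dB bell at 1 kHz: the boost arrives with a phase shift around f0
phase_boost = phase_response(*peaking_eq(1000.0, 6.0, 1.0, 44100.0))

# the very same filter set flat (0 dB) shifts nothing
phase_flat = phase_response(*peaking_eq(1000.0, 0.0, 1.0, 44100.0))
```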

- Expanders are more important than compressors

Compressors have become the number one topic among engineers. But pros like me focus on something else. Expanders. And the difference: Compressors make mixes more dense which causes a more heavy listening experience. Expanders make mixes less dense, which causes a less heavy listening experience. Guess which one of those impacts win... You guessed it right!

I don’t think pros focus more on expanders than compressors. Back in the day with analog tape we had to run around throwing expanders and gates on every freakin’ track to avoid tape hiss. We don’t have to do that anymore. This idea also ignores the fact that much of the time we are using compressors to INCREASE dynamic range (kick and snare are obvious examples).

- Volume faders are polarity balancers

Most non pro mixes I know have one thing in common: They do not improve over time. The main reason for this is that the polarities within the mix are left totally unbalanced.

Volume faders have absolutely zero to do with polarity. Zero.
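The point that faders scale amplitude and never touch polarity takes three lines of numpy to demonstrate. A made-up example: two signals from one source where one copy is polarity-inverted (a flipped cable, a bottom-snare mic).

```python
import numpy as np

# Two copies of the same 100 Hz source, one with inverted polarity.
t = np.linspace(0, 1, 44100, endpoint=False)
top = np.sin(2 * np.pi * 100 * t)
bottom = -top                        # polarity-inverted copy

# Summed at equal gain they cancel completely.
summed_full = top + bottom

# Pulling a fader down only scales the signal -- the pair is still
# out of polarity, you just get partial cancellation instead.
summed_faded = top + 0.5 * bottom

# Only an actual polarity flip fixes the relationship.
fixed = top + (-1.0 * bottom)
```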

- The left ear and the right ear are not the same

Because engineers assume that the left ear and the right ear are completely the same, what they do is to set the pan knobs of the tracks at a precise fixed position and leave it there. Pros know that the left ear and the right ear are not the same, hence they always automate the pan knobs.

The left and right ears are generally the same or extremely close, barring injury, excess hearing damage to one ear vs. the other (i.e. if you shoot guns, you probably have worse hearing in one ear, etc.), or some kind of congenital defect. Otherwise, they track very closely. I take very good care of my ears and also have them tested every few years, and the results have always been within the margin of error from ear to ear. I’ve had many conversations with doctors about ears and I’ve never heard any of them say that there is a fundamental difference – quite the contrary, they say that barring some REASON they should generally hear the same. In fact, many tests rely on a difference in hearing between two ears to indicate a problem.

That aside, pros don’t always automate their pan knobs. I’d say 99% of the time pan knobs are static. You only automate a pan if it actually needs to move around. I’d say the vast majority of Billboard 100 singles have less than a couple pan pots automated.


- Dither the final print? Nope...

Dithering the final signal is like putting the death touch on the mix. Whoever invented the dither for printing the final master must have been evil.

Dithering is a complicated subject due to the math involved. But suffice to say, if your noise floor is below the least significant bit of the final bit depth, then you will lose dynamic range and clarity if you do not dither. When going down to 16 bits, I would consider dithering mandatory (with an exception I’ll get to). Going from 32 to 24 bit, I suppose you could argue that it doesn’t help any noticeable amount – but the converse is true in that it doesn’t hurt any noticeable amount. Remember, dither is NOTHING BUT HISS. If your noise floor is greater than -90dB, then there is no point in dithering from 24 bits to 16 bits, because you already have enough dither (provided that you are not introducing further processing that could lower that noise floor). So for example, if you are mixing on an analog console it is very likely that your noise floor is already worse than -90dB, and surely if you are using tape. But in a DAW environment, that is unlikely. But again, dither is just hiss. So if dither is the death touch, then so was all that hiss from tape, or hiss from analog consoles, etc.
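The "you lose clarity below the least significant bit" part is easy to see numerically. A sketch, working in units of one LSB of the target bit depth: a tone at 0.4 LSB peak rounds away to pure silence without dither, while with standard TPDF dither (two uniform noises summed) it survives the quantization, traded for a constant hiss floor.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 44100
t = np.arange(n) / 44100.0

# A tone whose peak is 0.4 LSB: below the least significant bit.
x = 0.4 * np.sin(2 * np.pi * 440 * t)

# Rounding without dither: every sample is inside +/-0.5 LSB,
# so the entire signal quantizes to silence.
undithered = np.round(x)

# TPDF dither (sum of two +/-0.5 LSB uniform noises) added before
# rounding: the tone survives as signal-plus-hiss.
tpdf = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)
dithered = np.round(x + tpdf)
```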

I print 24bit mixes and my dither is perpetually on.


- 1 print per playback format, not 1 print for all playback formats

Engineers might spend tons of time with stuff prior to the final print. Then comes time to print the master and all of a sudden the engineers turn incredibly sloppy. Do not when you are done with the master suddenly waste all of your transients on the entire mix by not printing explicitly to the target playback format.

There was a time when vinyl needed a separate mix for some songs because of the difficulty of the needle tracking low frequencies that weren’t in mono. So back then sometimes you would print an alternate for vinyl with your crazy stereo bass in mono. This was particularly true for club records when everyone was trying to make their record louder than everyone else’s, which is really difficult with low end stereo information. When cutting a master for vinyl, they would often sum the low end into mono anyway, but if you did it in the mix you had a little more control. That said, printing another version was more the exception than the rule. Even in mastering, with virtually all consumer playback being digital, the mastering is the same for different versions with the exception of ‘Mastered for iTunes’, which is really more about encoding than anything else.
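The low-end mono summing described here (often done on a cutting lathe with an "elliptical EQ") can be sketched as mid/side processing: encode to mid/side, high-pass the side channel so everything below the corner frequency collapses to mono, and decode back. The helper name and the 150 Hz corner are illustrative assumptions, not a quoted mastering spec; scipy is assumed for the filter.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def mono_the_lows(left, right, fc=150.0, fs=44100.0):
    """Collapse stereo content below fc to mono via mid/side.

    Sketch of the 'elliptical EQ' move: the mid channel passes
    untouched, the side channel is high-passed, so low frequencies
    end up identical in both speakers (needle-friendly).
    """
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    sos = butter(4, fc, btype="highpass", fs=fs, output="sos")
    side_hp = sosfilt(sos, side)
    return mid + side_hp, mid - side_hp

# usage: fully out-of-phase 50 Hz "stereo bass" -- exactly what a
# needle can't track -- gets collapsed toward silence/mono.
t = np.arange(44100) / 44100.0
left = np.sin(2 * np.pi * 50 * t)
right = -np.sin(2 * np.pi * 50 * t)
new_l, new_r = mono_the_lows(left, right)
```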

- Work at highest possible sample rate? Of course...

If phase is not on your mind yet, it will be...

Higher sample rates do not change phase. They can decrease or increase jitter during AD/DA conversion. But they will not affect phase. High sample rates, contrary to urban myth, do not INCREASE audio resolution in a properly band-limited system; they only add more data points. More data points can be good or bad depending, but the key is the whole ‘properly band-limited system’. High sample rates can quite often cause things to sound worse because in all the crazy math that goes on in a DAW, there is often something that isn’t properly band-limited (and with all these plugins, it’s very hard to know how or if the band-limiting is being coded). This can cause intermodulation distortion when using non-linear processes. So higher doesn’t necessarily mean better. It definitely does not mean more or less phase. And most pros I know are working at 44.1k the majority of the time.
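The band-limiting point is worth a concrete demo. A sketch (my own toy example, not anything from the thread): cubing a signal, like a crude saturator plugin with no oversampling, generates a 3rd harmonic. For a 15 kHz tone at 44.1k that harmonic would sit at 45 kHz, which can't exist above Nyquist, so it folds back (aliases) down to |45000 - 44100| = 900 Hz, right in the audible midrange.

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs                  # 1 second -> 1 Hz-wide FFT bins
x = np.sin(2 * np.pi * 15000 * t)       # 15 kHz tone, legal at 44.1k

# Non-linear process with no band-limiting: sin^3 contains components
# at 15 kHz (amplitude 3/4) and 45 kHz (amplitude 1/4). The 45 kHz
# part aliases to 900 Hz in the sampled signal.
y = x ** 3
spec = np.abs(np.fft.rfft(y))           # bin index == frequency in Hz
```

Running the same non-linearity at a higher rate (or oversampled inside the plugin) would push that 3rd harmonic below the higher Nyquist, which is the legitimate argument for oversampled non-linear processing, as distinct from "more resolution".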

- Hardware kisses your signal, software eats your signal

Newbies think like this: Since hardware is expensive, software sounds good. Honestly. If you are in a great sounding recording room and you compare a grand piano vs. a 4 MB 16-bit PCM sound, 100% of the persons asked will say the grand piano sounds more beautiful. It is the same with hardware vs. software. Don't use software to create a beautiful sound, when it is hardware that does that.

I don’t think anyone is making piano libraries at 16 bit these days… but when it comes to INSTRUMENTS, yes, a real instrument has nuances that typically can’t be captured with a virtual one. But when it comes to processors, that’s a different story. There are a lot of very very very good plugins. And most of them do not exhibit the hiss, DC offset, hum, and other crud that is frequently in their hardware brethren. Not to mention that hardware is often poorly maintained, which causes more issues, or doesn’t work at all. And these days recall is CRITICAL. Gone are the days when you mixed a record and the label stopped by to approve it that day. Now getting revision notes a week later after 80 people have listened to the thing is normal. It is so infuriating trying to recall a mix with tons of analog gear because you simply can’t recall it perfectly most of the time (which is why people used to print stem mixes back in the day).

And the final advice...

- We focus on silence, not on loudness
 
+1 Thank you Chris for correcting this.
 
Just to demonstrate the differences of opinion, I disagree with most (all???) of what was said. I think it’s very safe to say I’m a pro. Lord knows I’ve had enough songs on the radio to qualify. I mix over 100 records a year and produce a dozen more, I'm a voting member of the Grammys, etc. Anyone curious of what I’ve done or wants to hear what my advice sounds like is welcome to visit my website and nose around: www.vonpimpenstein.com . My comments are in red.

Ok, this thread was created purely to help engineers to suffer less with their mixing and mastering. As a pro I read and see tons of stuff that is simply turning guys that are otherwise good engineers, into newbies. In this thread I will list some of the misconceptions I've noticed out there over the years, that simply make productions bad sounding.

- Volume faders are no EQs

The number one cause for bad sounding productions out there, is the volume fader. Some smart ass engineer out there thought that EQs cause phase, hence you should use the volume fader instead. That is the worst idea ever. A volume fader is a volume fader. An EQ is an EQ. In fact, you use the EQ so that you do not have to use the volume fader.

I would agree that the No.1 cause of bad mixes is the fader, but that’s because 95% of good mixing is good balance. The reason smart ass engineers out there thought EQ causes phase is because IT DOES. That’s how EQ works. You can use EQ so you don’t have to use the volume fader, but more often than not in a good mix you use the volume fader so that you can use less EQ.

- Expanders are more important than compressors

Compressors have become the number one topic among engineers. But pros like me focus on something else. Expanders. And the difference: Compressors make mixes more dense which causes a more heavy listening experience. Expanders make mixes less dense, which causes a less heavy listening experience. Guess which one of those impacts win... You guessed it right!

I don’t think pros focus more on expanders than compressors. Back in the day with analog tape we have to run around throwing expanders and gates on every freakin’ track to avoid tape hiss. We don’t have to do that anymore. This idea also ignores the fact that much of the time we are using compressors to INCREASE dynamic range (kick and snare are obvious examples).

- Volume faders are polarity balancers

Most non pro mixes I know have one thing in common: They do not improve over time. The main reason for this is that the polarities within the mix are left totally unbalanced.

Volume faders have absolutely zero to do with polarity. Zero.

- The left ear and the right ear are not the same

Because engineers assume that the left ear and the right ear are completely the same, what they do is to set the pan knobs of the tracks at a precise fixed position and leave it there. Pros know that the left ear and the right ear are not the same, hence they always automate the pan knobs.

The left and right ears are generally the same or extremely close, barring injury, excess hearing damage to one ear vs. the other (i.e. if you shoot guns, you probably have worse hearing in one ear), or some kind of birth defect. Otherwise, they track very closely. I take very good care of my ears and have them tested every few years, and the results have always been within the margin of error from ear to ear. I’ve had many conversations with doctors about ears and I’ve never heard any of them say that there is a fundamental difference – quite the contrary, they say that barring some REASON, the two should generally hear the same. In fact, many tests rely on a difference in hearing between the two ears to indicate a problem.

That aside, pros don’t always automate their pan knobs. I’d say 99% of the time pan knobs are static. You only automate a pan if it actually needs to move around. I’d say the vast majority of Billboard 100 singles have fewer than a couple of automated pan pots.


- Dither the final print? Nope...

Dithering the final signal is like putting the death touch on the mix. Whoever invented the dither for printing the final master must have been evil.

Dithering is a complicated subject due to the math involved. But suffice it to say, if your noise floor is below the least significant bit of the final bit depth, then you will lose dynamic range and clarity if you do not dither. When going down to 16 bits, I would consider dithering mandatory (with an exception I’ll get to). Going from 32-bit to 24-bit, I suppose you could argue that it doesn’t help any noticeable amount – but the converse is true in that it doesn’t hurt any noticeable amount either. Remember, dither is NOTHING BUT HISS. If your noise floor is greater than -90 dB, then there is no point in dithering from 24 bits to 16 bits, because you already have enough dither (provided that you are not introducing further processing that could lower that noise floor). For example, if you are mixing on an analog console it is very likely that your noise floor is already worse than -90 dB, and certainly if you are using tape. But in a DAW environment, that is unlikely. Again, dither is just hiss. So if dither is the death touch, then so was all that hiss from tape, or hiss from analog consoles, etc.

I print 24bit mixes and my dither is perpetually on.
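Since this one trips people up, here's a small numpy demonstration of the dynamic-range point, with made-up values: a 1 kHz tone sitting below one 16-bit LSB simply vanishes when truncated, but survives (buried in hiss) when TPDF dither is added before rounding.

```python
import numpy as np

rng = np.random.default_rng(0)
lsb = 1 / 2 ** 15                    # one 16-bit LSB for full scale of +/-1.0
n = np.arange(48000)
x = 0.4 * lsb * np.sin(2 * np.pi * 1000 * n / 48000)   # tone *below* 1 LSB

# Reduce to 16 bits without dither: the tone rounds to digital silence.
undithered = np.round(x / lsb) * lsb

# TPDF dither (two uniform +/-0.5 LSB sources) added before the same rounding:
tpdf = rng.uniform(-0.5, 0.5, n.size) + rng.uniform(-0.5, 0.5, n.size)
dithered = np.round(x / lsb + tpdf) * lsb

print(np.abs(undithered).max())        # -> 0.0: the tone is simply gone
print(np.corrcoef(x, dithered)[0, 1])  # clearly positive: the tone lives on inside the hiss
```

Which is the whole point: dither costs you a little hiss, and buys back material that plain truncation would delete outright.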


- 1 print per playback format, not 1 print for all playback formats

Engineers might spend tons of time with stuff prior to the final print. Then comes time to print the master and all of a sudden the engineers turn incredibly sloppy. Do not when you are done with the master suddenly waste all of your transients on the entire mix by not printing explicitly to the target playback format.

There was a time when vinyl needed a separate mix for some songs because of the difficulty of the needle tracking low frequencies that weren’t in mono. So back then you would sometimes print an alternate for vinyl with your crazy stereo bass summed to mono. This was particularly true for club records, when everyone was trying to make their record louder than everyone else’s, which is really difficult with stereo low-end information. When cutting a master for vinyl, they would often sum the low end into mono anyway, but if you did it in the mix you had a little more control. That said, printing another version was more the exception than the rule. Even in mastering, with virtually all consumer playback being digital, the mastering is the same for different versions, with the exception of ‘Mastered for iTunes’, which is really more about encoding than anything else.
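The "sum the low end into mono" move can be sketched in a few lines of numpy (a crude brickwall FFT split – the elliptical filters on a real lathe are analog, and the 150 Hz cutoff here is just a placeholder):

```python
import numpy as np

def mono_bass(left, right, fs=44100, cutoff=150.0):
    # Below `cutoff`, replace both channels with their mid (mono sum),
    # leaving everything above the cutoff untouched.
    L, R = np.fft.rfft(left), np.fft.rfft(right)
    f = np.fft.rfftfreq(len(left), 1 / fs)
    mid = 0.5 * (L + R)
    low = f < cutoff
    L[low] = mid[low]
    R[low] = mid[low]
    return np.fft.irfft(L, len(left)), np.fft.irfft(R, len(left))

# Fully out-of-phase 60 Hz "stereo bass" is exactly what makes a cutting
# needle jump; after the fold-down it cancels to (near) silence.
t = np.arange(44100) / 44100
bass = np.sin(2 * np.pi * 60 * t)
l, r = mono_bass(bass, -bass)
print(np.abs(l).max())   # tiny: the out-of-phase low end is gone
```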

- Work at highest possible sample rate? Of course...

If phase is not on your mind yet, it will be...

Higher sample rates do not change phase. They can decrease or increase jitter during AD/DA conversion, but they will not affect phase. High sample rates, contrary to urban myth, do not INCREASE audio resolution in a properly band-limited system; they only add more data points. More data points can be good or bad depending, but the key is the whole ‘properly band-limited system’. High sample rates can quite often make things sound worse, because in all the crazy math that goes on in a DAW there is often something that isn’t properly band-limited (and with all these plugins, it’s very hard to know how or if the band-limiting is being coded). This can cause intermodulation distortion when using non-linear processes. So higher doesn’t necessarily mean better. It definitely does not mean more or less phase. And most pros I know are working at 44.1k the majority of the time.
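Here's a deliberately broken numpy example of that band-limiting point (frequencies chosen arbitrarily): square a 15 kHz tone at 44.1 kHz with no oversampling, and the 30 kHz harmonic has nowhere to go, so it folds back down as an inharmonic 14.1 kHz tone.

```python
import numpy as np

fs = 44100
n = np.arange(fs)                      # 1 second of samples
x = np.sin(2 * np.pi * 15000 * n / fs)

y = x ** 2                             # non-linear process, no band-limiting

spec = np.abs(np.fft.rfft(y)) / len(y)
f = np.fft.rfftfreq(len(y), 1 / fs)
# x^2 contains a component at 2*15000 = 30000 Hz; that can't exist at
# fs = 44.1k, so it aliases to 44100 - 30000 = 14100 Hz instead.
peak = f[np.argmax(spec[1:]) + 1]      # strongest non-DC component
print(peak)                            # -> 14100.0
```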

- Hardware kisses your signal, software eats your signal

Newbies think like this: Since hardware is expensive, software sounds good. Honestly. If you are in a great sounding recording room and you compare a grand piano vs. a 4 MB 16-bit PCM sound, 100% of the persons asked will say the grand piano sounds more beautiful. It is the same with hardware vs. software. Don't use software to create a beautiful sound, when it is hardware that does that.

I don’t think anyone is making piano libraries at 16-bit these days… but when it comes to INSTRUMENTS, yes, a real instrument has nuances that typically can’t be captured with a virtual one. When it comes to processors, though, that’s a different story. There are a lot of very very very good plugins. And most of them do not exhibit the hiss, DC offset, hum, and other crud that is frequently in their hardware brethren. Not to mention that hardware is often poorly maintained, which causes more issues, or doesn’t work at all. And these days recall is CRITICAL. Gone are the days when you mixed a record and the label stopped by to approve it that day. Now getting revision notes a week later, after 80 people have listened to the thing, is normal. It is so infuriating trying to recall a mix with tons of analog gear, because you simply can’t recall it perfectly most of the time (which is why people used to print stem mixes back in the day).

And the final advice...

- We focus on silence, not on loudness

Thank you for all of this clarification. I'm relieved to know that I agree with the parts I'm not confused about. But I have a few questions.

1)
Dithering is a complicated subject due to the math involved. But suffice it to say, if your noise floor is below the least significant bit of the final bit depth, then you will lose dynamic range and clarity if you do not dither.

Will you though? Aren't we talking about the difference between aliasing and a reconstructed signal + hiss? Granted, that's an argument for dithering every single time (who prefers aliasing?), but aren't we talking about an insanely quiet signal? Someone would have to very carefully turn up their speakers to hear the quietest parts of a mix without blowing their head off on the loud ones. In that sense, does it matter? I still dither every time, but I suppose some people might even find it cool if a record fades out and they hear a bizarre artifact in the background.

2) You talk about how EQ uses phase, and it does, and I realized this is a perfect opportunity to ask someone really knowledgeable about linear-phase EQs. Are they actually phase free? I thought an EQ used phase to change the spectrum. How can an EQ do that while being phase free? Are they hype, or is there a legitimate reason for them? I've had it explained to me a few times and I didn't really get it.
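The closest I've come to following the explanations is a sketch like this (numpy, with a toy 31-tap moving-average filter standing in for a real linear-phase EQ): a symmetric FIR isn't phase free at all, it just delays every frequency by the same number of samples, so nothing smears *relative* to anything else.

```python
import numpy as np

h = np.ones(31) / 31.0                 # symmetric taps = "linear phase"
assert np.allclose(h, h[::-1])

fs = 48000.0
freqs = np.linspace(100.0, 1200.0, 8)  # stay inside the filter's first lobe
w = 2 * np.pi * freqs / fs
# Evaluate H(e^{jw}) = sum_n h[n] * e^{-jwn} at each test frequency.
H = np.array([(h * np.exp(-1j * wi * np.arange(h.size))).sum() for wi in w])

delay = -np.angle(H) / w               # phase delay in samples
print(delay)                           # ~15 samples at every frequency
```

So "linear phase" trades the frequency-dependent phase shift of a minimum-phase EQ for one flat constant latency (plus pre-ringing), not for zero phase.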

3)
There are a lot of very very very good plugins.

You're stranded on an island and you have to mix records for coconuts. I know, it's some crazy Twilight Zone episode. You've got a shack made of wood from the almighty island god with perfect acoustics and you've got your favorite pair of monitors.

What are your top survival plugins to get you some of those sweet sweet coconuts?
 
You can mix hits on stock plugins.
Dither sound, EQ phase – not huge deal killers.

Room acoustics and music theory are king.
(Although if you're good at referencing, you could do 90% of the mix on desktop computer full-range speakers. Even a Logitech system.)

Now someone fly me to their place and teach me more music theory. I get to learning it and get sidetracked every time. ADHD, bro.
 

Oh I know that. I don't know if you're replying to me or not so sorry if you aren't.

I'm not biting my nails about my plugins, dithering, or EQ phase. I just want to know because I like knowing. Minimum-phase EQs have never failed me before. I've never been overwhelmed by hiss below -90 dB from dithering, and one of my favorite songs was done with all stock plugins. But I run into people that say these are make-or-break for mixes, and I just wanna know so I can steer them toward a better path (like room acoustics).
 