Oh man, I'm going to answer all of this. (because I want to be kind and help out)
so the limiter wasn't clipping much at all, maybe 3 dB tops on the highest snare and bass kicks. Here's the new one with no clipping and no limiting
If you are talking about the last limiter, that's a lot. Try to make it peak somewhere around -0.3 to -0.0 dBFS, then use the last limiter to add the ceiling at, say, -0.4 dBFS for the mp3 print and maybe -0.2 dBFS for the CD. That's extremely mild limiting.
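(For reference, dBFS maps to linear amplitude as 10^(dB/20), so a -0.4 dBFS ceiling sits at roughly 0.955 of full scale. Here's a minimal, illustrative Python sketch of that conversion and a hard ceiling; a real limiter applies gain reduction with attack/release rather than clipping like this.)

```python
import numpy as np

def db_to_linear(db):
    """Convert a dBFS value to linear amplitude (full scale = 1.0)."""
    return 10.0 ** (db / 20.0)

def hard_ceiling(samples, ceiling_db=-0.4):
    """Clamp peaks to the ceiling. Illustrative only: a real limiter
    uses gain reduction with attack/release instead of hard clipping."""
    ceiling = db_to_linear(ceiling_db)
    return np.clip(samples, -ceiling, ceiling)

print(db_to_linear(-0.4))   # ~0.955, the mp3-print ceiling in linear terms
print(db_to_linear(-0.2))   # ~0.977, the CD ceiling mentioned above
```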
when you dither, is it only ever on your master limiter? where else could you even dither? and what would be the reason you chose 16-bit dither instead of 24-bit dither? or vice versa?
You should not dither at all, ever; just let the internal processing do that. You should instead route the signal to the hardware domain and then print the audio straight to the final playback format.
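Just so the terminology is clear (this isn't a recommendation to add it by hand): dithering means adding a tiny amount of noise right before reducing bit depth, e.g. down to 16-bit, so that truncation distortion turns into plain broadband noise. A rough Python sketch, assuming numpy, of what that internal processing step does:

```python
import numpy as np

def to_int16_with_tpdf_dither(x):
    """Reduce float audio (range -1.0..1.0) to 16-bit integers.

    TPDF dither: add triangular noise of about +/- 1 LSB before rounding,
    which turns truncation distortion into benign broadband noise."""
    lsb = 1.0 / 32768.0                       # one 16-bit step in float terms
    dither = (np.random.uniform(-0.5, 0.5, x.shape) +
              np.random.uniform(-0.5, 0.5, x.shape)) * lsb
    y = np.clip(x + dither, -1.0, 1.0 - lsb)
    return np.round(y * 32767).astype(np.int16)

# Example: a very quiet 1 kHz sine at 44.1 kHz
t = np.arange(44100) / 44100.0
quiet_sine = 0.001 * np.sin(2 * np.pi * 1000 * t)
print(to_int16_with_tpdf_dither(quiet_sine)[:10])
```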
i have a fair-sized CPU and storage, so i decided to use 88.2 kHz/24-bit... 24-bit really doesn't stress my storage or CPU, plus it seems to be the new DAW standard, albeit not for iTunes yet.
You should use a higher sample rate than that. 384 kHz at 32-bit floating-point precision is ideal, but requires a state-of-the-art production setup. If possible, use 192 kHz at 32-bit floating-point precision.
here's a big mystery..... now, i know you have to bump everything down to 44.1 kHz/16-bit for distribution. but i wonder if you can capture small bits of higher fidelity when you mix, comp and EQ at, say, 88.2: you bounce the track out inside the project, still at 88.2.... mix, master, maybe even bounce it all out again, STILL in the 88.2 project.. thus potentially, for lack of a better term, "gluing" everything together at 88.2.... AND THEN you bounce it out and "bump everything down" to 44.1 kHz/16-bit. i feel like the compression and EQ you did before the final bump down might have maintained more good artifacts, instead of just doing the bump down with all the EQs and compressors still active and "unglued"....
It is ideal to zero-phase frequency match the signal in the DAW, then route the signal out to the hardware domain where you apply hardware processing, and from there go straight to the final playback format; so if the final playback format is 192 kHz/24-bit, that is what you record. In practice, when you are done you are basically just recording at various quality levels so you can distribute the music at various quality levels. For instance, you create a CD 44.1 kHz/16-bit version, a 96 kHz/24-bit version, a 192 kHz/24-bit version, a 384 kHz/24-bit version, and a 384 kHz/24-bit version aimed at mp3 distribution (with a lower ceiling)...
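If you ever do that final down-conversion in software instead of a hardware print, here's a rough sketch (assuming Python with the soundfile and scipy packages, and a hypothetical mix_88k2.wav bounce) of turning an 88.2 kHz master into the 44.1 kHz/16-bit CD version:

```python
import soundfile as sf
from scipy.signal import resample_poly

# Hypothetical filename: the 88.2 kHz/24-bit master bounced from the DAW.
data, rate = sf.read("mix_88k2.wav")          # float array, shape (samples, channels)
assert rate == 88200

# 88.2 kHz -> 44.1 kHz is an exact 2:1 ratio, so polyphase resampling is clean.
cd_audio = resample_poly(data, up=1, down=2, axis=0)

# Write the CD version; soundfile performs the 16-bit conversion on write
# (dither, if wanted, would be applied before this step).
sf.write("mix_cd_44k1_16bit.wav", cd_audio, 44100, subtype="PCM_16")
```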
i use all synths and samples besides my voice, and my voice never really distorts if i mix it right... i don't record any external guitars or drums, so as for phase issues i would be surprised. latency, still not sure about that. i quantize my notes to the grid and control the attack. maybe it's EQ and monitoring and i need to sweep more. and/or maybe i just have shitty synths? the distortion was mostly coming from the EXS24 inside Logic Pro X, the electric '80s power chord preset with a little bit of altering..
You have latency issues; maybe 95% of people have that, even the mods in here.
i have InPhase by Waves. i don't really know about phase or mess with it that much. does it affect software synths? and how do i apply zero-phase frequency matching?
You can learn about zero phase filtering here:
https://www.youtube.com/watch?v=uPB2gdQtfvQ
You frequency match using the zero phase filtering technique.
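In DSP terms, zero-phase filtering is usually done offline by running a filter forward and then backward over the audio so the phase shifts cancel out. A minimal scipy sketch (a hypothetical 200 Hz high-pass at 44.1 kHz) looks like this; note it only works on already-rendered audio, not in real time:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 44100                                            # sample rate in Hz
b, a = butter(4, 200 / (fs / 2), btype="highpass")    # 4th-order 200 Hz high-pass

# Example signal: 50 Hz rumble plus a 1 kHz tone
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

# filtfilt runs the filter forward and then backward, so the net phase shift
# is zero; the trade-off is that it can only be applied to existing audio.
y = filtfilt(b, a, x)
```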
i do like the dynamics of volume automation, but i try my hardest to avoid doing it a ton, because the surprise of having a track get louder without a heads-up can be off-putting.
Yes, it can, but it is not an issue when you have the right peak characteristics and do it mildly, meaning when you have worked with the compressors in a good way during recording.
your second paragraph is confusing. I'm about to do a little more research on YouTube about dither, 24-bit vs 16-bit, but i have a Clarett 2Pre audio interface and a pair of HS7s, if that pertains to the D/A you speak of.
D/A means digital-to-analog signal conversion; I kind of just meant the unit that produces the output, including the D/A. You need multiple complete signal chains after the DAW sequencer for optimal monitoring.
i feel like your terminology is a little off base. a few questions if you get back: what are cans? what's D/A and what's A/B? i feel like in that second paragraph you were trying to say: mix on multiple sets of studio monitors?
Cans is just a popular term for headphones. D/A is a digital-to-analog converter. A/B is when you compare version A against version B.
Yes, use multiple complete, discrete output chains, both in parallel and solo, when you monitor.
does anyone here use Logic Pro X? does anyone know if the EXS24 is just not a quality synth and won't ever be found in a professional track? or am i just not finding a way to remove these frequencies that ruin phone speakers? it's really driving me insane. of all the places to find this solution, i feel like the internet would have been a good place to find it.
It's not about the DAW itself; it's the combination of the DAW sequencer, the sequencer version, the audio interface, the sample rate, the computing capacity and how you use the DAW that determines the performance.
The particular issue you are facing with the midrange really comes down to the question of how to make a production hit. That is what makes a commercial mix "pop" into a hit: the midrange is what has the impact, and that requires access. When you work the way you do, a lot of information is missing from the audio. You need access, and currently you don't have it, because of skills and because of production/recording/gear/setup/monitoring, but you can follow the advice in this post and my previous post to improve it quite dramatically.