My songs don't sound full, wide, or surround after panning, layering, EQ, stereo separation

ShaiBobble

All my songs end up sounding unprofessional after long mixing and mastering sessions. No matter what I do, the beat sounds the same. Almost as if it's mono (which it definitely is not) and stuck in the middle, while everyone else's is stereo and basically 360. I use tape saturation, exciters, imagers, EQs and all that, and it always sounds the same. I use Ozone 7 for mastering and FL Studio for mixing. I've watched videos for years and it still never works. I layer and everything. I use Nexus, Purity, Serum, Electra X, everything, to no success. IS IT MY COMPUTER? OR MY FL STUDIO? I DON'T THINK IT'S BAD MIXING, BECAUSE EVEN BAD MIXES SOUND DIFFERENT AND BETTER THAN MINE. LISTEN TO "Captin Crock" (MINE), THEN ANY OTHER SONG (I ADDED "Mad High" AS AN EXAMPLE).
(Attachments: 07. Mad High.mp3, captin Crock Master.mp3)

NOW COMPARE IT TO ANY OTHER SONG OUT THERE AND YOU WILL HEAR THE DIFFERENCE
 

I listened to both clips to get a sense of what you have done and what you want it to sound like.

The issue is that you have too little information about each sound source inside the mix compared to the song you are comparing against. So you have a lower-quality recording that has gone through lower-quality mixing. On top of that, you are missing an essential sound source in your version: vocals with reverb/ambience. That is not the core issue, but it adds to the frustration, to the point that you are writing this; at that point the difference in the total amount of information between the two mixes simply becomes overwhelming.

Look, you need to get a better audio interface that can handle a greater amount of voltage (what audio interface are you using?). You then need higher-quality sound sources, whether from samples or real instruments: you need more information in them, better resolution. Once you mix that amount of information with gear that can handle it, you will finally end up with an equal amount of information and be satisfied.

Much of what you like has to do with the level of information running into reverb; it becomes sweet in a different way then (because of the resonance).
 
I don't fully understand what you mean by information, but the settings on the plugins and the rendering settings were all at the highest level: Ultra on Nexus and 441 tuning, then rendering at 320 with 512-point sinc. The reverb was on a send, but the piano reverb was from the plugin. Mine just kind of sounds 2D while every other song in the world sounds 3D. I don't understand. I can hear the difference on headphones and on monitors.


But basically you're saying my music will always sound like the toilet, even if the arrangement is great and even if I send it to a professional.
 
You know what, never mind. Fxck it. Thank you very much for your help. This music bullshit just isn't for me.
 
But basically you're saying my music will always sound like the toilet, even if the arrangement is great and even if I send it to a professional.

Yes. You cannot make chicken salad out of chicken shit. I know that sounds simple, but it is this simple. It's not a big issue, but you have to correct it if you want the target audio quality. What audio interface are you using? Please give me the details of your audio interface. Let's start there.
 
Focusrite 2i2, TRS-to-XLR cables, KRK 5s sitting on Auralex MoPads, AKG K240 headphones, HP Slimline desktop, Intel Pentium J2900 processor at 2.41 GHz. Sample rate at 44.1 kHz, Focusrite ASIO driver in FL Studio, also set at 44.1 kHz. But only my songs sound shitty on it; other songs all sound different and great. And they said sound cards don't affect the final product, they are just for monitoring while you mix. I guess it's some type of sonic problem, but not reverse polarity.
 

Oh man, no wonder you are struggling; it's far worse than I was afraid of.

Your input: 7.74596669 volts
Your reference input (likely): 15.45523543 volts
Difference: -50%

Your output: 2.449489742 volts
Your reference output (likely): 15.45523543 volts
Difference: -84%
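
For reference, the percentage lines are just (yours - reference) / reference, and the voltages happen to land exactly on round dBu values via the standard conversion V = 0.7746 x 10^(dBu/20), namely +20, +10 and +26 dBu. A minimal Python sketch that reproduces the figures, assuming they are meant as RMS line levels (that reading is an assumption, not something stated here):

Code:
# Reproduces the figures above. Treating them as line levels in dBu is
# an assumption; 0 dBu is defined as 0.7746 V RMS.
def dbu_to_volts(dbu):
    return 0.7746 * 10 ** (dbu / 20)

def percent_diff(value, reference):
    return 100 * (value - reference) / reference

your_input = dbu_to_volts(20)     # ~7.746 V
your_output = dbu_to_volts(10)    # ~2.449 V
reference = dbu_to_volts(26)      # ~15.455 V

print(round(percent_diff(your_input, reference)))   # -50
print(round(percent_diff(your_output, reference)))  # -84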

To understand what this difference does, compare these two video qualities:

[two embedded video clips: the same footage in a high-quality and a low-quality version]

That is the difference. It's like you are tuning the contrast, brightness, hue, etc. on the second video, not knowing why it can never look like the version above.

The bottom line is this: audio quality differences are a bit more difficult to grasp than video quality differences. For that reason, work with it logically as if it were video quality you are trying to improve; that way it becomes easier to understand how to create a pro sound. You need a far better camera.

Start saving for an Apogee Symphony audio interface.
 
Why do I need a $3000 interface when others don't? And where did you get those numbers from? The top video was recorded with a 4K camera, the bottom one I don't know, but an interface just plays the sound, it doesn't manipulate it (whereas a better camera would obviously give a better video), so how would it really make a difference to the final product, which will be played back on devices without interfaces?
 

Imagine that the original recorded frequencies you get from somewhere else sum into a book, and your task is to highlight the contents of that book and present your best representation of it, but you can only do so by reading a fraction of its pages, because a great number of pages are unavailable to you. That is 50% of the issue. The other 50% is like representing the sound sources at the moment of capture: the same loss of information, and hence you can only achieve so much.

You need to understand the audio quality of a production from the point of view of what information is available and how much of it you have access to.

The amount of information you can access increases cumulatively as the quality of the audio interface goes up.
 
So how do I find out how much information I have, and all of this?
 
Okay, thanks. I quit. Did you pay $70k a year for music school? Because this shit is stupid af, and is it worth the loans?
 
I'm not questioning your knowledge or ability with mixing, but are you sure you're mixing correctly? I'm no expert, but I had this problem for a while and only recently got my mixes to sound decent (at least to me, lol). I have no fancy interface, my computer can barely handle FL at times, and I often just mix with a pair of earbuds and then let the stereo in my car be the judge of the rest, haha.

I set my audio settings to FL Studio ASIO and make sure there are no underruns. Then I write the song, EQ, compress (only if I need to), pan, and add a little reverb. Then I render a .wav, import the whole track into FL Studio as a new project, and use Ozone to thicken and add depth to the mix, along with the multiband compressor and a slight bit of reverb to glue it all together.

Like I said, I'm no expert, but when I listened to my tracks and thought they sounded terrible, it was because I was over-EQing and using compressors all the wrong way, and it ruined my mixes. I've learned that keeping it simple gives a much better outcome. I'm still learning a lot myself, so don't think I'm talking down to you; I'm just trying to help.
 
Your choice of audio interface is of almost no consequence as far as mixing or even production is concerned; a basic pro-level interface is probably adequate unless you are recording, and even then you can easily make do*. DarkRed spouts psychobabble and almost always criticizes people's monitoring despite having no way of gauging it from listening to a mix. He also has a penchant for shilling expensive gear for no constructive reason.

Maybe you should stop comparing your own music to other tracks, except as a general frame of reference. Instead, figure out what is important to you and build up from that, rather than trying to work backwards from someone else's work.

If making music is so unimportant to you that you are willing to give up on a whim because of something said on a forum, then why do you care so much about copying other people's results?



*I am willing to bet there are huge-selling records that were recorded in people's kitchens on gear as cheap as yours.
 

Well, I would not say that DarkRed talks 'psychobabble'. DarkRed obviously uses analogue desks and analogue audio hardware. 'In the box' producers are limited by what their DAW can do: how much headroom it handles, the algorithms and other stuff, which (digital) compressors, what sample rate, and whether it oversamples the 'information' it needs to translate...

Synths, for starters, are limited 'in the box'; some are far better than others. Some synths have only a few 'voices' of information, which is a 'weak signal'. In some synths you can raise the voices to 128 and then have them at 64 each to play two notes polyphonically, which would give you the 'voltage' DarkRed is talking about, but then your DAW will start to implode the CPU, as some VSTis are very CPU-intensive...

Kick drums, especially FL kick drums, and FL synths are shit; they have no guts, no matter what you do to them. Others, like the EKS by Synapse Audio, have a fat, oversampled sound, so it kicks like a motherfucker. Then you have unison, which gives those rich monophonic lead sounds...
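
To make the unison point concrete, here is a rough numpy sketch (a toy illustration, not any particular synth's engine): stacking slightly detuned copies of the same oscillator gives a thicker, slowly evolving sound compared with a single voice.

Code:
import numpy as np

SR = 48000   # sample rate in Hz
DUR = 1.0    # seconds
F0 = 110.0   # A2

def saw(freq, sr=SR, dur=DUR):
    # Naive (aliasing) sawtooth; fine for illustration.
    t = np.arange(int(sr * dur)) / sr
    return 2.0 * ((t * freq) % 1.0) - 1.0

# One voice versus seven unison voices detuned by up to +/-15 cents.
single = saw(F0)
detune_cents = np.linspace(-15, 15, 7)
unison = sum(saw(F0 * 2 ** (c / 1200)) for c in detune_cents) / 7

# Same pitch, but the stacked version has a fatter, chorused spectrum
# because the voices continuously drift in and out of phase.
print("single RMS:", round(float(np.sqrt(np.mean(single ** 2))), 3))
print("unison RMS:", round(float(np.sqrt(np.mean(unison ** 2))), 3))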

A compressor trying to process a few bits of audio signal will only amplify the empty space that is already streaming through it...

I have never used Nexus, and I find most VSTis to be raspy, with malignant frequencies in them and no really exciting timbres. You sometimes need to knife out a whole bunch of recalcitrant frequencies that mess with your head and your mix. A DAW like FL is not that good, but that's just me; try Logic or Pro Tools, though they won't help unless you sort out the issues you're having with everything else...

It takes years to master 'in the box' audio production, and even longer to master professional analogue hardware. You do it because you love it, don't you? Keep experimenting; it will come eventually...
 

Synapsis, you are touching on important topics; these are cans of worms in many studios out there, and they confuse a lot of engineers. The overall information density of a mix is the result of the information density of each of its elements. I will go a bit deeper to help show how all of this comes together.

There is a point in every engineer's path towards great mixes where the engineer realizes that the frequency range is shared among all of the sound sources in the mix, and that this naturally produces a race condition. When all sound sources have their fundamental frequencies very close to each other (they are tuned to sound the same, they play the same notes at the same time, they are panned the same way, and so on), they all race to stand out from the crowd. Some of the frequencies cancel, some don't, but the brain is desperately trying to perceive and separate these frequency streams inside the overall mix based on things like differences in gain.
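
The cancellation part, at least, is easy to demonstrate in isolation (a toy numpy sketch, not a claim about any particular mix): two identical tones in opposite phase null each other, while two tones a few hertz apart beat instead.

Code:
import numpy as np

sr = 48000
t = np.arange(sr) / sr  # one second

a = np.sin(2 * np.pi * 440 * t)
b_flipped = np.sin(2 * np.pi * 440 * t + np.pi)  # same tone, opposite phase
b_detuned = np.sin(2 * np.pi * 443 * t)          # 3 Hz away

print("440 Hz + flipped 440 Hz peak:", np.max(np.abs(a + b_flipped)))  # ~0 (cancels)
print("440 Hz + 443 Hz peak:        ", np.max(np.abs(a + b_detuned)))  # ~2 (beats at 3 Hz)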

One dimension where this frequency racing is largely decided is the information density of each sound source. With greater information density per sound source comes less frequency racing, because the signal of each track in the mix is naturally more unique at any given moment of the recording. And it is not only that: the brain can now perceive more of what is present in the signal, so in terms of perceived quality you already have two dimensions boosting it here. On top of that you have what I have discussed in other threads, something called resonance potential: for sound sources A, B and C to be near your desired resonance level, you might need A at voltage X, B at voltage Y and C at voltage Z, something that can be impossible both to capture and to dial in with emotion when you don't have gear that gives you access to that total amount of voltage to create the desired electromagnetic energy fields. The question then becomes how much resonance listeners out there will perceive, and how they will perceive it...

From here it gets far more advanced, and many more dimensions come into play in how the frequency racing and the information density of each sound source affect what you can do during mixing and sound design to create a certain perceived mix quality. This has to do with things like the fact that the perception of a sound source is not the result of that sound source's frequencies alone, but of that sound source's frequencies in their context.

Any object in creation, whether it mimics some other object or not, has a unique set of electromagnetic frequency states. When you combine these with different kinds of songs, genres, arrangements, playing styles, instruments and so on, you get, in combination with skill, taste and inner emotion, certain kinds of results. The mystery is demystified to some degree as you gain experience and broaden your perspective. One can, however, say that the general issue behind the lack of certain audio and musical qualities is the lack of access to certain electromagnetic energy fields, the lack of balance among them, and the lack of dynamic riding of them, since the art is to a great degree also about playing your emotions with these as a creative artist. So music creation is an incredibly big field for creativity, and it is important to bring hygiene, integrity, authenticity and creativity to what goes into creating and achieving the kind of music and audio that you love.
 

I know exactly what you're saying, mate... I think the issue with these 'producers' is that they think downloading a DAW and some VSTis, slapping it together and chucking a limiter on the master is enough, and then they go 'why does that sound shit', and when you turn it up it scratches the synaptic connections in your brain...

I had the privilege of using high-end gear at university. Our lecturer was a wealth of knowledge and showed us many great things. I am now 35, and I was the old boy at university...

These people don't have any real knowledge about how frequencies interact, or the ears to hear it. A good engineer will learn and make adjustments, though many will fall back into old habits...

Some in-the-box instruments are rubbish; samplers contain partials that can be very unsettling and need to be cut...

What these people are getting at is that they believe there are people out there producing cutting-edge tracks in DAWs like Fruity Loops...

Some can get a good sound from digital audio, though every track goes to the studio and gets properly boosted and warmed up by analogue. I saw a post about Vintage King and how 'digital compressors' are now better quality in the 'digital age'. That is absolute nonsense: the circuits and parts in analogue equipment introduce harmonic resonance, or 'distortion' as my lecturer said, though not digital distortion: zingy, warm distortion that brings out the harmonics in a nice way...
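
You can see the 'adds harmonics' part in a few lines (a toy example with a tanh soft-clipper, not a model of any real desk or compressor): push a pure sine through a nonlinearity and new harmonics appear above the fundamental.

Code:
import numpy as np

sr = 48000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 100 * t)   # pure 100 Hz sine
saturated = np.tanh(3.0 * tone)      # crude soft-clip "saturation"

# With one second of audio the FFT bin spacing is exactly 1 Hz.
spectrum = np.abs(np.fft.rfft(saturated)) / len(t)
for n in (1, 2, 3, 5, 7):
    print(f"{100 * n:4d} Hz level: {spectrum[100 * n]:.4f}")
# A clean sine would only show 100 Hz; the symmetric tanh curve adds
# the odd harmonics (300, 500, 700 Hz) while 200 Hz stays near zero.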

The information density from a virtual synth is proportional to the quality of the synth, its oscillators, how many voices it has, and whether those voices can be summed into unison to create a signal that is dense in information. Most digital synths do not have that signal density, and they have partials that are not very pleasing. Resolution plays a part too, as the Nyquist frequency can cause further issues; for instance, your sample rate should be at least 48,000 samples per second. The digital domain, and producers working in that domain, need to understand that you can only do so much in the box, and not to expect miracles...
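
On the Nyquist point, the arithmetic is simple: nothing above half the sample rate can be represented, and anything that would land above it folds back down. A small sketch with made-up example numbers:

Code:
def alias_frequency(f, sr):
    # Where a partial at f Hz actually lands after sampling at sr Hz.
    folded = f % sr
    return folded if folded <= sr / 2 else sr - folded

for sr in (44100, 48000):
    print(f"Nyquist at {sr} Hz: {sr / 2:.0f} Hz")
    # e.g. a 30 kHz partial from a bright, non-oversampled oscillator:
    print(f"  a 30 kHz partial aliases to {alias_frequency(30000, sr):.0f} Hz")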

I mean, if two frequencies are racing and masking, a simple cut on one of the instruments will let the other come through, and making small cuts across the whole spectrum can let frequencies through where they would otherwise be fighting...
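
As a toy version of those small cuts (an illustration with scipy, not a mixing recipe): carve a narrow dip in one part right where another part's fundamental sits, and the masked part has room to come through.

Code:
import numpy as np
from scipy.signal import iirnotch, lfilter

sr = 44100
t = np.arange(sr) / sr

lead = 0.3 * np.sin(2 * np.pi * 220 * t)                       # quiet lead at 220 Hz
pad = sum(np.sin(2 * np.pi * f * t) for f in (110, 220, 330))  # busy pad covering 220 Hz

# Narrow-ish cut in the pad at the lead's fundamental.
b, a = iirnotch(w0=220, Q=2, fs=sr)
pad_cut = lfilter(b, a, pad)

def level_at(signal, freq):
    # One second of audio, so FFT bins are 1 Hz apart.
    return np.abs(np.fft.rfft(signal))[int(freq)] / len(signal)

print("pad level at 220 Hz before cut:", round(level_at(pad, 220), 3))
print("pad level at 220 Hz after cut: ", round(level_at(pad_cut, 220), 4))
# lead + pad_cut leaves the lead's 220 Hz far less buried than lead + pad.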

I honestly hear some tracks on here where, before the slider is halfway up, my ears are screaming. It's all the partials and digital distortion introduced in order to 'make it loud', yet all those inharmonic partials clash and make a mess of everything. I am glad to have met you here; I know from what you post that you know what you're talking about. It's the same at university: it sounds like 'wtf', but then the penny drops and you go 'ahhh, that's what the pure harmonic resonance of E is, and its pure harmonic scale'...

Thanks for your input... always appreciated :)
 