FREQUENCIES ?????

Anybody clued up on 'Frequencies' and 'Sample Synthesis'?

This is something I really need to know about because:

When I'm trying to create a sound (e.g. a bass sound), I'll do it by layering more than one sound together. Sometimes this sounds OK; other times I get odd sounds creeping in and out, and sometimes it just gets muddy.

From what i have read in other places this is because some of the sounds i'm playing are using/sharing the same frequencies.

Does anybody know of a site, or would someone be good enough to explain to me, how to control frequencies to give the best sounding result?

(I've really got no idea on this whatsoever, so please keep it simple!)

Thanks

Rich.
 
mmmhhh... frequency..

Let me try to explain this.

Well, two sounds with the same frequency are basically tuned to the same note. That shouldn't be a problem at all. Furthermore, to sound good together they should come from the same SCALE of notes.

To make an arrangement correct musically, the sounds should share the same SCALE as much as possible.

SCALES are like "do, re, mi, fa, sol, la, si" (the standard MAJOR scale), but they can also be "do, re, re#, fa, sol, sol#, la#" (another scale, MINOR this time).
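If it helps to see it written out, here's a little sketch in plain Python (my own illustration, nothing official) showing that a SCALE is really just a repeating pattern of semitone steps from a root note:

    # A scale is a pattern of semitone steps from a root note.
    # Sharps are used here to match the examples above.
    NOTES = ["do", "do#", "re", "re#", "mi", "fa", "fa#",
             "sol", "sol#", "la", "la#", "si"]

    def scale(root, steps):
        out, i = [NOTES[root]], root
        for step in steps:
            i = (i + step) % 12
            out.append(NOTES[i])
        return out

    print(scale(0, [2, 2, 1, 2, 2, 2]))  # MAJOR: do, re, mi, fa, sol, la, si
    print(scale(0, [2, 1, 2, 2, 1, 2]))  # MINOR: do, re, re#, fa, sol, sol#, la#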



OK, to make things short: I tried to make you understand that notes can be different BUT go really well together if they are composed in the same SCALE of notes.


When two sounds don't sound good together, they are probably very close to each other tune-wise, or one of them is not tuned correctly (meaning it is not a note but one of those in-between quarter-tones you can hear in Gypsy or Arabic music), or they are not in the same scale.

Wow, all this is interesting.
 
Just to add a little more ....

Imagine you have a graph with x and y axes. The vertical (y) axis is the amplitude, or loudness; the horizontal (x) axis is the frequency, starting at 20 Hz and going to 20,000 Hz, which is the human range of hearing.

Do, re, mi, etc. are laid out left to right along the x axis. Each note corresponds to a certain number of hertz.

Sound is like an ocean, and this ocean has waves. Just as waves lapping against the shore have a certain rhythm or period (one every two seconds or something), so too does sound. The number of waves per second is known as hertz (Hz).

Concert pitch A has 440 wave cycles per second (440 Hz). One octave above will be 880 Hz, an octave above that 1760 Hz, and so on.
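If you want to check that arithmetic yourself, here's a quick Python sketch (just an illustration) of the standard equal-temperament formula: each semitone multiplies the frequency by the twelfth root of 2, so twelve semitones exactly double it.

    A4 = 440.0  # concert pitch A, in Hz

    def note_freq(semitones_from_a4):
        # frequency of the note this many semitones above (or below) A4
        return A4 * 2 ** (semitones_from_a4 / 12)

    print(note_freq(0))    # 440.0  Hz -> concert A
    print(note_freq(12))   # 880.0  Hz -> one octave up
    print(note_freq(24))   # 1760.0 Hz -> two octaves up
    print(note_freq(-24))  # 110.0  Hz -> the A down in bass territory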

Now, you are dealing with bass sounds, which live down below 200 Hz. However, the bite or edge of a bass is around 400 Hz.

A kick drum has its bottom end between 60 and 120 Hz, but the impact that feels like a fist in the chest is around 700 Hz.

When you want the bass and the kick to *sit* well together, and not crowd each other out or muddy each other up, what you have to do is cut a little space in the instruments with the EQ.

In our example above, you would cut a notch around 100 Hz in the bass, boost it at 150 Hz to compensate, and also boost around 400 Hz to bring it out more clearly.

The kick should be boosted around 100 Hz and 700 Hz, with a notch cut around 400 Hz.

If you were to draw this out on a graph, you would see the notches of one instrument fit nicely with the boosts of the other.
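Just as an illustration (not how you'd do it on a desk, obviously), here's a rough Python sketch of that complementary carve, assuming you have numpy and scipy installed. The "boost" is faked by adding a band-passed copy back in; a real EQ uses proper peaking filters, but the shape is the same idea:

    import numpy as np
    from scipy import signal

    fs = 44100  # sample rate in Hz

    def notch(x, freq, q=2.0):
        # cut a narrow notch at the given frequency
        b, a = signal.iirnotch(freq, q, fs=fs)
        return signal.lfilter(b, a, x)

    def boost(x, freq, amount=0.5, q=2.0):
        # crude boost: add back a band-passed copy centred on the frequency
        b, a = signal.iirpeak(freq, q, fs=fs)
        return x + amount * signal.lfilter(b, a, x)

    # noise stand-ins for one second of each instrument
    bass = np.random.randn(fs)
    kick = np.random.randn(fs)

    bass = boost(boost(notch(bass, 100), 150), 400)  # cut 100, boost 150 and 400
    kick = notch(boost(boost(kick, 100), 700), 400)  # boost 100 and 700, cut 400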

As you are layering bass sounds, try to choose ones which complement each other rather than conflict, and then use EQ to shape them further.

I know this is probably rather a headful, but there is a lot of information here. This is one of the foundations that mixing is built on. Don't worry if it takes time to get the hang of; it will, with lots of practice. It's something you begin to understand intuitively as you become more familiar with your tools and the medium of sound.

And it's very difficult to explain...
 
hehe, more physics :D
Well explained, Robin and Mano.
One thing I want to add: possibly when you're doubling pitches, they're not tuned to the same note. For example, "A" can be tuned to 440 Hz or 442 Hz; I'm not exactly sure why the difference. But if the two notes aren't EXACTLY the same, or even just close, there may be what is known as "phase cancellation", which is basically when the waves of two pitches cancel each other out, either partially or completely. That may lead to some muddiness in the sound.
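If you want to see it, here's a rough sketch in Python (assuming you have numpy) of two A's a couple of Hz apart drifting in and out of phase:

    import numpy as np

    fs = 44100
    t = np.arange(fs * 2) / fs  # two seconds of time
    a440 = np.sin(2 * np.pi * 440 * t)
    a442 = np.sin(2 * np.pi * 442 * t)

    mix = a440 + a442
    # 442 - 440 = 2 Hz, so the combined level pulses twice a second
    print(np.abs(mix[:100]).max())                   # ~2.0: in phase, loud
    print(np.abs(mix[fs // 4:fs // 4 + 100]).max())  # ~0.0: out of phase, cancelled

    # total cancellation: the same wave flipped in polarity sums to silence
    print(np.abs(a440 - a440).max())  # 0.0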
-Mike
 
FREQ'...

Thanks for that, very helpful.

So basically from what you say it's just a case of cutting and boosting in the right places until everything sits nicely together.

My only problem now is how to find the right places.

How can I find out what frequencies my samples use? Is there a method for doing this, or should it tell me somewhere on my sampler?

(I use a Yamaha A4000, if anybody is familiar with it)

Rich.
 
I use an A3000, and have noticed that if I have, say, 2 or 3 samples going thru the same effect patch (I have three), it's usually okay, but sometimes not (especially thru distortion FX), and I get glitching. Try sample-soloing one sound and doing a record take, then sample-solo the next sound and record it on another track. Or resample with the effect on, then send that sample thru the assignable outs. I reckon there is a good argument for using more than just the stereo outs, considering the amount of sound going out thru those two poor little jacks.
To check out your samples, save them onto a floppy or CD (not sure what facilities the A4000 has), drop them onto a PC and put them thru a spectrum analysis program. But first try just putting a notch-type filter on your bass sound at 100 Hz, if you can suss that out; then, since each sample has its own one-band EQ, boost that EQ at 150 Hz by a couple of dB or something.
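If you do get a sample onto the PC, a barebones spectrum check can even be done in a few lines of Python (numpy and scipy assumed; "bass.wav" is just a made-up filename):

    import numpy as np
    from scipy.io import wavfile

    fs, data = wavfile.read("bass.wav")  # your exported sample
    if data.ndim > 1:
        data = data.mean(axis=1)  # fold stereo down to mono

    spectrum = np.abs(np.fft.rfft(data))
    freqs = np.fft.rfftfreq(len(data), d=1.0 / fs)

    # the handful of strongest frequencies (the ones worth EQing around)
    for i in spectrum.argsort()[-5:][::-1]:
        print(f"{freqs[i]:8.1f} Hz   level {spectrum[i] / spectrum.max():.2f}")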
Like Robin said, sound is an ocean. If you listen very often it becomes Frequent Seas. (A word pun which I am considering using for an album, just in case you were wondering.)
Thanks to everyone for their mega informative offerings.
 
quote: "How can i find out what frequencies my samples use, is there a method for doing this, or should it tell me somewhere on my sampler ? "

There sure is a method, it's called listening. :) The trick is how to listen.

Well, I wasn't going to get into this, as my last post was already pretty long and I didn't want to give too much to swallow...

But as you ask ....

There are as many ways of mixing as there are mixes, but like all skills there are a few set methods that serve as good points from which to depart. Here is ONE...

For this I assume you have access to some decent EQ. A mixing board would be best, but it can be done in a computer, one file at a time. Every instrument has its resonant frequency; in fact every object, even the stars themselves, pulses with an inner resonance. In Japan they demolish buildings by setting up speakers and bombarding the structure with its resonant frequency.

This frequency is the sound of the instrument. If you have a parametric EQ (one which can sweep thru the frequencies with an adjustable width), sweep it thru the sound until you find the frequency at which it really jumps out. Boost this a little.

The second and third harmonics strengthen this fundamental. The second is an octave above and adds solidity to the sound; the third is an octave and a fifth above the root and adds colour to the original sound.

So if you found your original frequency to be 400 Hz, you could try boosting around 800 Hz or 1200 Hz. Then you want to cut away every other frequency, as these are muddying up the mix, providing energy in frequency areas that will be used by other sounds and their resonant frequencies.
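The harmonic arithmetic is simple enough to sketch in Python (following the 400 Hz example above):

    fundamental = 400.0  # Hz, whatever frequency your parametric sweep found

    second = 2 * fundamental  # one octave up: adds solidity
    third = 3 * fundamental   # an octave and a fifth up: adds colour

    print(second, third)  # 800.0 1200.0 -> candidate boost points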

I usually use 2 parametric EQs and 2 shelving EQs. I use the shelves to cut away either side of the parametrics' boosts.

The sounds will sound thin and weak on their own, but in the full mix everything will seem to sit nicely. This technique is really at its best when working on a desk, when you have all the sounds in front of you on individual channels.

Pre-EQing on a computer is a bit hit and miss, and I feel that truly exceptional mixes cannot be produced this way. By doing things like this you rely on a set of stock methods and settings. But every mix is unique; every sound has to be considered in relation to all the other sounds occurring at the same time AND all the other sounds occurring before and after that moment in time.

No structure can encompass that. There are no rules. Just use your ears and your personal taste.
 
PROPS.

Top advice, really appreciate it.

I can see the sun through the clouds again now.

Rich.
 
outputs aplenty

You can get a breakout box for the A3000 which gives you four sets of stereo assignable outs; I assume it is available, or even standard, for the A4000. If not, you have the main outs and at least one extra stereo pair of assignable outs. (Note: if you send a sample thru an assignable out, you have to turn off the main-out option or it will go out thru both, which can be useful sometimes.)

Send different sounds through different outs. A stereo output should not be seen as just one output. It has two buses, two leads coming out the back, right? So you can send a bass sound out thru assignables one and two, and send a hi-hat sound out thru assignables one and two as well, and the way you separate them is by panning the bass sound all the way left within your sampler's bass sample's settings, then panning your hi-hat sound hard right. Then make sure the lead from your assignable 1&2 left output goes to its own individual channel on your mixer, and take the lead from your assignable 1&2 right output to its own mixer channel.

Result: one stereo output, cut into two separate outputs. Then you can pan and mix them on your mixing board, which should give you three-band control, which is good to start with. Continue on until a near-perfect mix is achieved, hehe.

My question: what happens then? Do I record the whole lot onto a stereo track in Pro Tools and mix that in with the synth, guitar and vocal tracks, or should I record, say, eight different sampler channels onto separate tracks in Pro Tools? I know which sounds easier, but... I guess that whole parametric-EQ-with-two-extra-EQs trick would be more possible in software, seeing as I don't have that hardware. Huh, answered my own question.
Thanks to Robin for being so clear and comprehensive. I shall be looking at your advice seriously this weekend.
Separation helps integration?
 
There are different schools of thought. It is considered the "American Approach" to achieve integration through tight separation. The "British Method" is more concerned with the interaction of sounds in the mix spectrum.

I don't have much truck with these geographical stereotypes, but you could say the former method produces very *clean* mixes, whereas the latter method produces more *organic* mixes.

I think a careful application of both approaches, appropriate to the situation, will produce those mixes that can be described as *magical*.
 
RobinH, U R a legend! Good calls aplenty. Man, I'm gonna check this stuff all out in my own studio! Succinct clarity is givin' me a good vibe rush!
 
Very interesting thread, and RobinH, congrats on your advice!
I'm sure Mano would be interested in a tutorial on that subject, and your knowledge would be very helpful for the ODJ members! ;)
 
RobinH, I checked out a frogola page thru a link from your site, downloaded Beatnik and everything, and man, well, initially I was befuddled, and then I thought "Great!". It was very weird. Minimalist. Strange.
I assume the JavaScript can talk to the Beatnik player?
 
Also, Mr RH, you have a nice site, clean, although, er, maybe a bit of anti-aliased text could happen (it does take up a bit more memory for the GIFs, I guess). Cool site tho. Interested to see some JavaSound happen. I'm teaching myself JavaScript at the mo (in between learning Flash scripting), but when finished I intend to do some Java learning. Would be interested to see what sort of synthesis Java applets would be capable of on their own.
Forward soldiers.
 
I did a demonstration in a science class about this topic a couple of years ago. I drew out the waves produced by some notes, then I played the track. Science! Anyway, you can find pitch-to-frequency charts and apps all over the web; most of it's free too, because science is awesome.
 