Ok since we have this topic open let me blab on some more about my weekend adventures. I'm going to try and keep this on things I've already done rather than future plans, because discussing future plans always tends to sap my drive to actually do them.
My goal over the last two weeks was to set up a "live electronic production system". I chose Reaktor for its flexibility and because I could send pitch data between the various modules. That meant I could set a root note in my bass module and have all my other sequencer modules operate relative to it... so changing the bass root note automatically shifts everything else around it.
(Not 100% musically correct, but it's much easier to work with in a live set than risking the chance that everything will suddenly be 'off' with regards to the root)
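The root-note idea can be sketched in a few lines. This is just an illustration of the concept, not how Reaktor wires it internally: each sequencer stores offsets relative to a shared root, so transposing the root transposes everything. The names (`to_absolute`, `ROOT`) are my own, hypothetical ones.

```python
ROOT = 36  # MIDI note C2, the bass root shared by all sequencers

def to_absolute(root, offsets):
    """Turn root-relative offsets into absolute MIDI note numbers."""
    return [root + o for o in offsets]

# Each pattern only stores intervals; change ROOT and both follow.
bassline = to_absolute(ROOT, [0, 0, 7, 5])        # [36, 36, 43, 41]
lead = to_absolute(ROOT + 24, [0, 3, 7, 12])      # same shape, two octaves up
```

Shift `ROOT` up two semitones and every pattern moves with it, which is exactly why a live set can't suddenly go 'off' against the root.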
Initially I set up all my sequencers in Reaktor... BUT, that meant nothing was being processed through effect units yet, and I wanted a good mastering effect unit running on top of everything to give it all a bit of finish.
There was a problem however: a 1.5 GHz Athlon XP doesn't have enough processing power. With all the wiring and samplers together, running as a VST under Cubase, the system sat at about 70% raw... and maxed out at 99-100% with send effects and iZotope running on top. (In retrospect, iZotope is hardly the most efficient mastering VST.)
Anyway... I had a theory on why Reaktor was pulling so much processing power: it takes a whole load of audio signals rather than event signals for its sampler units.
Audio inputs run at the sampling rate (48000 Hz in my case), so even when you convert from event signals you're still doing a whole heap of conversions at 48 kHz.
Event signals run at a default rate of 400 Hz, which is a good 120 times lower than my sampling rate, and correspondingly cheaper to process.
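The arithmetic behind that "120 times" claim is just the ratio of the two update rates:

```python
SAMPLE_RATE = 48_000  # audio-rate signal: one value computed per sample
EVENT_RATE = 400      # Reaktor's default event/control rate

# How many times more updates per second an audio input costs
# compared with an event input.
ratio = SAMPLE_RATE / EVENT_RATE
print(ratio)  # 120.0
```

So every sampler input that takes audio instead of events is doing roughly 120x the per-second work for what is, in a sequencer, just slowly-changing control data.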
So I decided to build only sequencer instruments in Reaktor, using Event tables rather than Audio tables to store data, push it all out as MIDI, and then use Battery & Kontakt as my samplers.
It was a whole lot of work... but it paid off. I reduced my calculation overhead down to 4%, from what I estimated was around 30%. I also decided to dump iZotope for live production... it does sound great, but the quality wouldn't really come through on anything other than good monitors anyway.
I'm now just sticking with Ultrafunk's reverb unit for a bit of finalising reverb. You can add a nice bit of stereo widening and reverb depth just by using a very small room and low diffusion.
It's almost the same kind of effect you'd get from iZotope's verb and stereo widening, but using less than 5% of my processor rather than about 30%. (Actually I like the sound better than the iZotope deal... it's better centered, and Ultrafunk is the only software reverb I've found that can be set so it doesn't build up that annoying noisy feedback.)
So in the end I got it all running at under 50%, and what's more, I found that Kontakt's claims are true: when you aren't triggering a sound through one of its instruments, it doesn't take up any processor load. So it all hovers nicely between the 30% and 50% mark, even with the Cubase SX priority set to "Realtime" and my latency down to 7 or 10 ms.