I would sort of agree. Sort of.
Working at 20 bits (not an option in all hardware/software rigs) will increase your storage and processing overhead approximately 25%, and working at 24 bits instead of the "CD-standard" 16 bits will increase it 50%.
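If you want to put quick numbers on that overhead, here's a trivial Python sketch (the percentages just track the ratio of bit depths, assuming uncompressed PCM and an unchanged sample rate):

```python
# Storage/processing overhead relative to 16-bit audio.
# Uncompressed PCM size scales linearly with bit depth.
def overhead_vs_16bit(bits: int) -> float:
    """Fractional increase in data size over 16-bit audio."""
    return bits / 16 - 1

print(f"20-bit: +{overhead_vs_16bit(20):.0%}")  # 20-bit: +25%
print(f"24-bit: +{overhead_vs_16bit(24):.0%}")  # 24-bit: +50%
```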
But even though it increases your processing and storage load, and you'll ultimately be outputting to a 16-bit CD or mp3 (which we'll get to in a second) -- increasing your bit depth from 16 to 24 bits while you're recording and mixing increases the dynamic resolution of your signal by a factor of 256. And that translates into much greater detail and smoothness in the high end -- since, as a rule, high-frequency information in most recorded audio occurs over a much narrower effective dynamic range. [Look at a mix in a computer wave editor: the big undulations are bass info and the tiny little wiggles that "ride" the big waves are the high freqs -- imagine a hi-hat signal riding along with a bass signal.] The more mixing and processing you're doing, the more benefit you'll see from this enhanced resolution in the production stages.
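Here's where that "factor of 256" comes from -- each extra bit doubles the number of quantization levels, and the usual rule of thumb is roughly 6 dB of theoretical dynamic range per bit (a quick Python illustration, not tied to any particular rig):

```python
# Quantization levels and theoretical dynamic range per bit depth.
# Rule of thumb: each bit contributes ~6.02 dB of dynamic range.
def levels(bits: int) -> int:
    return 2 ** bits

def dynamic_range_db(bits: int) -> float:
    return 6.02 * bits

print(levels(24) // levels(16))          # 256 -- the "factor of 256"
print(f"{dynamic_range_db(16):.0f} dB")  # 96 dB  (16-bit)
print(f"{dynamic_range_db(24):.0f} dB")  # 144 dB (24-bit)
```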
Now... with regard to increased sample rates like 88.2 kHz, 96 kHz, 176.4 kHz, 192 kHz -- you're increasing your frequency resolution by a simple multiplicative (rather than exponential) factor. IOW, if you double your sample rate (say from 44.1 kHz to 88.2) you'll theoretically be able to digitize audio signals up to approximately 40 kHz -- the Nyquist limit is half the sample rate, minus a little headroom for the anti-alias filter. You'll also double your processing and storage overhead.
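The Nyquist arithmetic really is just "half the sample rate" -- a quick Python sketch (the figures here are theoretical ceilings; real converters roll off somewhat below them):

```python
# Nyquist limit: the highest representable frequency is half the
# sample rate. Practical converters roll off a bit below this.
def nyquist(sample_rate_hz: float) -> float:
    return sample_rate_hz / 2

for rate in (44_100, 88_200, 96_000, 176_400, 192_000):
    print(f"{rate / 1000:g} kHz -> signals up to {nyquist(rate) / 1000:g} kHz")
```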
However, numerous rigorous listening tests have shown that increasing the bit depth by even small amounts provides superior aural results compared to, say, doubling the sample rate. High-end converter designers Apogee have white papers on their web site showing that 20-bit 48 kHz resolution provides a superior sound to 16-bit 96 kHz. (Now, no one ever actually uses 16-bit 96 kHz, but it's an illustrative experiment/demonstration.)
Another problem with using elevated sample rates in the production stage is that -- unless you're working at an exact multiple of your target sample rate -- the downsample will introduce extra "alias error" as each sample is recalculated. (Whereas, if you downsample from, say, 88.2 down to its even multiple 44.1, the conversion can simply discard every other sample and there is no recalculation of the remaining samples.) There's no way for a 'ragged' downsample to retain its original fidelity -- but if you're doing a lot of mixing and processing at the higher rate, you may actually gain enough benefit there to make up for the loss when you subsequently downsample to your target rate.
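You can sketch the even-multiple idea in a few lines of Python. (Caveat: a real sample-rate converter low-pass filters before discarding samples so nothing above the new Nyquist limit aliases -- this only shows the ratio test and the discarding step, with made-up sample values.)

```python
from fractions import Fraction

def is_clean_ratio(src_hz: int, dst_hz: int) -> bool:
    """True when the source rate is an exact integer multiple of the target."""
    return src_hz % dst_hz == 0

def decimate(samples, factor):
    """Keep every `factor`-th sample. NOTE: a real converter low-pass
    filters first so content above the new Nyquist limit doesn't alias;
    this sketch shows only the sample-discarding step."""
    return samples[::factor]

print(is_clean_ratio(88_200, 44_100))  # True  (a factor of exactly 2)
print(is_clean_ratio(96_000, 44_100))  # False (every output sample
                                       #        must be recalculated)
print(Fraction(96_000, 44_100))        # 320/147 -- the 'ragged' ratio
print(decimate([0.1, 0.9, 0.2, 0.8, 0.3, 0.7], 2))  # [0.1, 0.2, 0.3]
```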
(It gets complicated if you're outputting to multiple formats. If you're outputting to a 96 kHz audio-only DVD, then yeah, by all means use 96 or even 192 for your production work. If, on the other hand, your primary target is conventional 44.1 kHz CDs or mp3s -- save the processing power and storage and record/mix at 44.1 kHz [or maybe 88.2] -- but with the highest bit depth you can 'afford.' Bit depth conversion does not involve any "ragged conversion" type issues, and using higher bit depths for production is close to a win-win situation.)
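And for the storage side of that trade-off, uncompressed PCM size is just sample rate x bytes-per-sample x channels -- a quick Python sketch assuming a stereo mix:

```python
# Uncompressed PCM storage: rate * (bits / 8) * channels bytes per second.
def mb_per_minute(rate_hz: int, bits: int, channels: int = 2) -> float:
    return rate_hz * (bits / 8) * channels * 60 / 1_000_000

print(f"{mb_per_minute(44_100, 16):.1f} MB/min")  # 10.6 MB/min (CD spec)
print(f"{mb_per_minute(44_100, 24):.1f} MB/min")  # 15.9 MB/min (+50% for bits)
print(f"{mb_per_minute(96_000, 24):.1f} MB/min")  # 34.6 MB/min (rate piles on)
```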
(And if you're outputting to the audio track of a DVD video, the target sample rate is 48 or 96 kHz -- but DVD video soundtracks typically use perceptually encoded data compression similar to mp3, which reduces fidelity enough that some authorities say it doesn't really matter what sample rate your source material was recorded at... since it will be degraded no matter what on the way to the data-compressed DVD audio track.)