the wizards beard
Apr 15, 2007
Reppin

4 LIFE 4 REAL
Slightly unusual request:

I'm planning on upgrading my interface to something with more I/O (looking like a Presonus Firestudio Mobile) and I want to run separate mic pres into the line inputs. I know very little about mic pre-amps, but I do know about designing and building low-noise electronics, so I'm considering a clone of a high-end pre-amp. Could someone recommend their favourite pre that doesn't rely on some esoteric, archaic component? I would prefer solid-state over vacuum tube and don't want to spend hours hand-wiring transformers.


Nelsocracy
Nov 25, 2004
Indubitably!
Could anyone recommend me a decent sound card/audio interface that will be compatible with a Lenovo? USB 2.0 would be preferred. I really wanted to go the FireWire route with PreSonus gear, but after reading into it there seem to be far too many issues with FireWire cards etc., and it seems the only way I could have it working is if I purchase a MacBook (I want a PC).

I'm looking at this one right here, what do you guys think?

M-Audio Fast Track
http://www.amazon.com/M-Audio-Fast-Track-Ultra-High-speed/dp/B000Z8U0IY/ref=sr_1_4?ie=UTF8&s=musical-instruments&qid=1278348789&sr=1-4

RivensBitch
Jul 25, 2002

WanderingKid posted:

Mics? Hell, that's even worse. You aren't going to eliminate ambient noise completely, even in a super quiet room

You lost me here. Have you ever recorded in a studio that had soundproofed, professionally designed rooms? Ambient noise can absolutely be eliminated.

quote:

and then there's noise generated by the electronics in the preamp etc., all of which is way more significant than the digital noise floor. I agree that good gain staging and elimination of common electrical problems can minimise the noise from analogue gear. But they are all designed to work best around 1.23 Volts RMS anyway, and when I stick that into the line ins on my soundcard there's still *loads* off the top of the fader before I'm touching 0dB full scale.

Are you talking about the fader on an analog mixer or in your DAW? I don't record with faders in my signal chain; I use direct outs if there's a mixer, but usually I'm using preamps built into an interface, or outboard preamps connected directly to the ADC.

Either way, if you still have appreciable noise introduced by your analog gear when tracking things like drums or vocals, you've got a problem. If you don't, then what are we arguing about? Technique? Style? Who cares? I never said my way was the only way.

RivensBitch fucked around with this message at 18:26 on Jul 5, 2010

WanderingKid
Feb 27, 2005

lives here...

RivensBitch posted:

Either way, if you still have appreciable noise introduced by your analog gear when tracking things like drums or vocals, you've got a problem. If you don't, then what are we arguing about?

I'm just saying that the whole advantage of going digital is that noise inherent in digital systems is insignificant compared to the kind of noise you get with analogue. So the idea of running a signal into a soundcard that's so hot it's about to clip is kind of pointless; it's not going to get you any more appreciable signal relative to the noise floor. Not when the noise floor of most prosumer ADCs is still less than -100dB. Basically everything you run into an analogue input on your soundcard is going to have more noise.

And it's even more pointless because you need to trim your levels anyway so you have some headroom to mix with.

WanderingKid fucked around with this message at 20:42 on Jul 5, 2010

ChristsDickWorship
Dec 7, 2004

Annihilate your demons



WanderingKid posted:

I get where you are coming from and it looks like it makes sense but it's actually insane.

Systematically maximizing your signal-to-noise ratio in all steps of the recording process is insane? What is the alternative procedure you're suggesting? Turn it up until you can hear it and then it's all good?

WanderingKid posted:

I'm just saying that the whole advantage of going digital is that noise inherent in digital systems is insignificant compared to the kind of noise you get with analogue. So the idea of running a signal into a soundcard that's so hot it's about to clip is kind of pointless; it's not going to get you any more appreciable signal relative to the noise floor. Not when the noise floor of most prosumer ADCs is still less than -100dB. Basically everything you run into an analogue input on your soundcard is going to have more noise.

The specs on most A/D converter chips will say you have 100dB before the noise floor, but they won't take into account the fact that they have been sandwiched on a circuit board between the power supply and a bunch of mic preamps in the audio interface, picking up noise. It doesn't take into account the 10-12ns of jitter you probably have with a prosumer interface, robbing you of another 10dB of headroom.

So let's say you have a usable -80dB digital noise floor. Let's say you recorded 16 tracks with an average level of -18dBFS, 62dB above the digital noise floor on each one. Since the noise floor is the same noise on each track, it sums like identical (correlated) signals: we add 6dB of noise every time the track count doubles, which adds 24dB in the case of 16 channels. So the final mix, once it's brought up to commercial CD levels, has a noise floor at -56dB.

Let's add a little reference: vinyl surface noise is about -50dB. In the above you only recorded each track with 12dB less noise than a vinyl record on playback. When you summed all the tracks together and mastered the album, you approached vinyl's playback hiss in terms of noise floor.
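The summing math above can be sketched in a few lines of Python. The -80dB per-track floor and the 16-track count are the assumed figures from the post, not measurements; the uncorrelated case is included for contrast:

```python
import math

# Assumed figures from the discussion: 16 tracks, each with a usable
# noise floor of -80 dBFS.
track_noise_db = -80.0
tracks = 16

# If the noise on every track is the same (correlated), amplitudes add:
# +6 dB per doubling of track count, i.e. 20*log10(N).
correlated_db = track_noise_db + 20 * math.log10(tracks)

# If the noise were independent (uncorrelated), powers add instead:
# +3 dB per doubling, i.e. 10*log10(N).
uncorrelated_db = track_noise_db + 10 * math.log10(tracks)

print(round(correlated_db, 1))    # -55.9, close to the -56 dB figure above
print(round(uncorrelated_db, 1))  # -68.0, the gentler uncorrelated case
```

In practice the floors on separate tracks are partly correlated (shared power supply, shared clock), so the real answer lands somewhere between the two.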

Are CD rereleases of old LPs generally quieter than playing back the vinyl? If so, I guess those analog setups they were originally recorded on have a noticeably lower noise floor than many modern digital recording setups.

WanderingKid
Feb 27, 2005

lives here...
Pretty much everything on the issue that needs to be said already has been here. That forum is mostly full of headcases but that thread gets the point across fairly well and many of the people doing the talking are not wing nuts (for a change).

For some more on noise in digital systems check out some of Dan Lavry's posts on the issue. He also posted a bunch of important stuff about AD noise floors here.

Here's some more from prosoundweb with some contributions from Jon Hodgson (the guy who wrote the G-Force plugins).

I'm not an expert on the subject since I'm just a dude who makes music. It's clear I'm not explaining the concepts very well, so it's probably better if you hear it straight from the horse's mouth.

WanderingKid fucked around with this message at 13:39 on Jul 6, 2010

Schlieren
Jan 7, 2005

LEZZZZZZZZZBIAN CRUSH
If someone could render the threads linked by WK into intelligible English, that would be great; pretty sure that stuff isn't intended for audiences of the experience level of the average ML poster. Or, certainly not for my level of experience :)

the wizards beard
Apr 15, 2007
Reppin

4 LIFE 4 REAL

wixard posted:

It doesn't take into account the 10-12ns of jitter you probably have with a prosumer interface, robbing you of another 10dB of headroom.

Can you explain this? These two concepts don't seem related at all (at least not intuitively).

WanderingKid
Feb 27, 2005

lives here...
RivensBitch et al. aren't doing anything that is inherently wrong. There's nothing wrong with recording as hot as you can and trimming the level before it hits a soundcard input or desk. The guy in the first link mentioned how he works Pro Tools sessions all the time where all the channels are recorded hot as hell, and when he wants to spread them out on an SSL he has to trim all the inputs because all his VU meters are pegged.

You can do that if you have to. But you aren't lowering SNR by any degree that matters by simply recording -12 to -20dB below full scale to begin with, and then you don't have to trim later. And all your hardware sounds better because it was designed to operate around 0 VU (volume units) anyway. Above 0 VU you have quite a lot of headroom before the signal turns to poo poo. In digital systems, the signal turns to poo poo precisely at 0dB full scale. So what most people are doing nowadays is calibrating their meters and gain structuring so that they have some headroom in their DAW before they clip.

It doesn't matter whether you do it pre-recording or post-recording. It only matters that you do it; otherwise how the hell are you gonna mix with no headroom? That's not the point of contention, however, and I'm pretty sure we all agree on these points. The point of contention arises from this idea that recording super hot gives you a higher signal to noise ratio, and that's not true, because the noise introduced by the digital part of the system (the converters) is insignificant next to the noise generated by the electronics in whatever box you are recording through.

If you are getting audible 60Hz hum and you record quiet and then later on try to squeeze +20dB out of it in your DAW, then yeah, of course you are going to get +20dB more 60Hz hum. That's not a case for recording stupidly hot to begin with. That's a case for trying to kill a ground loop which is causing an inordinate amount of 60Hz hum.

If you pump volume like a motherfucker then you will notice more noisy byproducts of your gear. Hell, if you pump volume like a motherfucker it is possible to hear quantization noise in a 16 bit recording under very specific conditions, and that level of noise is almost insignificant to begin with. Follow the iZotope dither guide if you want to check it out yourself, but it quickly becomes apparent that nobody actually listens to music like that, and you won't be able to resolve quantization noise from more significant sources of broadband noise or overwhelming signal.
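For scale, the quantization noise being talked about can be estimated with the textbook rule of thumb for an ideal N-bit converter driven by a full-scale sine: SNR ≈ 6.02·N + 1.76 dB. This is a sketch of the theoretical ceiling, not a measurement of any real converter (real ones do worse, and dither trades a little SNR for freedom from distortion):

```python
def quantization_snr_db(bits):
    """Theoretical SNR of an ideal N-bit quantizer for a full-scale sine:
    the standard 6.02*N + 1.76 dB rule of thumb."""
    return 6.02 * bits + 1.76

print(round(quantization_snr_db(16), 2))  # 98.08 dB for 16-bit
print(round(quantization_snr_db(24), 2))  # 146.24 dB for 24-bit
```

Either figure sits far below the hiss of any preamp chain, which is the point being made above.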

WanderingKid fucked around with this message at 16:21 on Jul 6, 2010

RivensBitch
Jul 25, 2002

This argument is getting pretty silly. For what it's worth, even if the tracks aren't recorded that hot I always bring all my faders down to -9 to -12dB when setting up a mix because headroom rules and there's nothing more painful than watching a green engineer push his faders to their limit and have no more to give.

I once attended an Ableton Live demo where the presenter was bitching and moaning about how Ableton's mix bus distorts and can't handle this and that and the entire time I'm watching him mix with all of his faders at 0 and all of his channel meters tickling red, and his master bus absolutely red red red. I just shook my head and went back to the bar.

Kids these days.

WanderingKid
Feb 27, 2005

lives here...
I used to do that too. When faced with DAW meters though your natural inclination is to go with green = go, yellow = brace yourself and red = stop. So noobs naturally find themselves mixing to the red line and squashing the poo poo out of everything with compressors because they got no headroom. You gotta create it somewhere I suppose.

Shame it took years before us bedroom warriors got the message from some guy who works an SSL all day long. You mean you have ~20dB of headroom over the STOP sign on your meters? Where you been all my life, baby?

Hogscraper
Nov 6, 2004

Audio master
The dBFS meter is flawed when it comes to recording. Like WanderingKid said, you have to create your own headroom, but who in their right mind knows this when they're starting out? I certainly didn't. I started with a 4-track cassette recorder that sounded really cool when you pushed it hard, so when I got a computer that was capable of recording I just naturally thought that 0 was where I was supposed to go.

The creators of DAW software just assume that everyone knows that you should create this headroom on your own I guess. Even though I'm sure 75% of their client base are bedroom artists that don't know anything about gain staging.

RivensBitch
Jul 25, 2002

I'm going to actually find some time to record some test tones at different levels and then normalize them and flip 180 degrees to really get down to what kind of noise might be introduced at the ADC. If WanderingKid and Hog are right, there shouldn't be any appreciable noise difference when recording at 0dB vs -20dB.

Laserjet 4P
Mar 28, 2005

What does it mean?
Fun Shoe

WanderingKid posted:

Shame it took years before us bedroom warriors got the message from some guy who works an SSL all day long.

http://www.gearslutz.com/board/so-much-gear-so-little-time/463010-reason-most-itb-mixes-don-t-sound-good-analog-mixes-restored.html

vvv edit: well gently caress

Laserjet 4P fucked around with this message at 22:46 on Jul 6, 2010

WanderingKid
Feb 27, 2005

lives here...

RivensBitch posted:

I'm going to actually find some time to record some test tones at different levels and then normalize them and flip 180 degrees to really get down to what kind of noise might be introduced at the ADC. If WanderingKid and Hog are right, there shouldn't be any appreciable noise difference when recording at 0dB vs -20dB.

Most of that has already been worked out for you. TC publish the noise figures for the ADC and DAC stage if you want to go have a look.

Emu claims their 0404 has 111dB SNR on the ADC side. That's pretty insane. It's just about impossible for any analog signal generator or signal processor to produce less noise, so it's unlikely that the converter stage is going to be the weak link in the chain, so to speak, unless you digitally create all your sounds and mix it all in a computer. Your preamps will put out a lot more noise than your ADC. Most soundcards these days have pretty ridiculous noise figures.


Beat you. :O

Hogscraper
Nov 6, 2004

Audio master

Schlieren posted:

If someone could render the threads linked by WK into intelligible English, that would be great; pretty sure that stuff isn't intended for audiences of the experience level of the average ML poster. Or, certainly not for my level of experience :)

They basically say to track at a lower level so you don't distort the analog section of your analog-to-digital converter (the part of your soundcard that turns sound into 1s and 0s). You want to hit your soundcard at 0 dBu. Where 0 dBu sits on the dBFS scale differs between manufacturers. See my post on the previous page that started a lot of this mess for some examples and for what to look for in your soundcard manual to find this value.
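That calibration amounts to a single offset. As a sketch (the +18 dBu figure below is a made-up example; your own interface's manual will state its actual value):

```python
# Hypothetical interface: suppose its manual says 0 dBFS corresponds
# to +18 dBu at the analog input. This number varies by manufacturer.
ZERO_DBFS_IN_DBU = 18.0

def dbu_to_dbfs(level_dbu, zero_dbfs_in_dbu=ZERO_DBFS_IN_DBU):
    """Where an analog level (in dBu) lands on this interface's digital meter."""
    return level_dbu - zero_dbfs_in_dbu

print(dbu_to_dbfs(0.0))  # 0 dBu lands at -18.0 dBFS on this example interface
print(dbu_to_dbfs(4.0))  # +4 dBu (pro nominal level) lands at -14.0 dBFS
```

So "hit the converter at 0 dBu" on this hypothetical box means aiming your meters around -18 dBFS, which is where the -18 figures quoted elsewhere in the thread come from.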

Hogscraper
Nov 6, 2004

Audio master

WanderingKid posted:

Pretty much everything on the issue that needs to be said already has been here. That forum is mostly full of headcases but that thread gets the point across fairly well and many of the people doing the talking are not wing nuts (for a change).

True, but don't let a forum full of 75% headcases discourage anyone from reading through that site. Some of the most intelligent minds in pro audio frequent that board and you can spend a lifetime there using the search function and just learning.

If someone posts some nonsense bullshit they usually get called out pretty quickly too!

RivensBitch
Jul 25, 2002

WanderingKid posted:

Most of that has already been worked out for you.

Well where's the fun of reading someone else's conclusions, folding your arms and convincing yourself you're right without even trying it out for yourself?

If you're right, then a simple experiment should yield a null waveform. Record a test tone whose peaks are at 0dB in your DAW. Record the same test tone where the peaks are at -20dB in the DAW. Then normalize the second waveform to 0dB.

By your argument, they should cancel to silence when one waveform is inverted. If not, then we can conclude that ONE of the levels is producing more noise than the other.

If you feel up to it, why don't you try this experiment with me and we can compare our results? I'll try to post my waveforms/results later tonight if time permits.

After all, if we're just going to reference Gearslutz and call the argument settled, then what's the point of this thread? I'd much rather we engage in some group :science: and try to come to some conclusions of our own, wouldn't you?
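The null test being proposed can be sketched numerically. This is an idealized, converter-free version in plain Python (the helper name is made up for illustration); with no analog stage in the loop, the two takes null down to floating-point noise, which is the baseline a hardware loopback result like the -40dB residual later in the thread should be compared against:

```python
import math

def null_test_residual_db(reference, quiet_take):
    """Normalize the quiet take to the reference's peak, invert it, sum,
    and report the residual peak in dB relative to the reference."""
    gain = max(abs(x) for x in reference) / max(abs(x) for x in quiet_take)
    residual = [a - b * gain for a, b in zip(reference, quiet_take)]
    peak = max(abs(x) for x in residual)
    ref_peak = max(abs(x) for x in reference)
    return 20 * math.log10(peak / ref_peak) if peak > 0 else float("-inf")

# One second of a 440 Hz tone at 48 kHz, and the same tone 20 dB quieter.
tone = [math.sin(2 * math.pi * 440.0 * i / 48000.0) for i in range(48000)]
quiet = [0.1 * s for s in tone]

residual = null_test_residual_db(tone, quiet)
print(residual < -100.0)  # True: purely digital gain changes null essentially perfectly
```

Any residual well above that baseline in the real experiment has to come from the analog and conversion stages, not from the gain change itself.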

RivensBitch fucked around with this message at 04:11 on Jul 7, 2010

Elder
Oct 19, 2004

It's the Evolution Revolution.
I'd love to see the results!

RivensBitch
Jul 25, 2002

I generated a sine wave test tone on my desktop PC, sampled it internally within Ableton (to ensure consistency) and looped it so that it was alternating with a short section of sine wave, and a short section of silence. I then took the output of my M-Audio interface on the desktop and connected it to a preamp input on my TC Konnekt 48 connected to my Macbook Pro.

My first pass was recorded at about -1.9dB according to the spectrum meter in Live. I verified that the waveform was not distorted.

My second pass was about -17.5dB below that figure.

I then lined up the two waveforms on their own tracks such that the sharp transients were exactly aligned, and added utility plugins to both channels. I engaged the DC offset correction, and inverted the waveform on the second channel. I then boosted the second, quieter waveform until I achieved the maximum phase cancellation, and sampled the result.

Here are the RAW wave files:
http://eric.frap.net/sa/ml/raw%20waves.zip

Here is the Ableton session (latest version of 8 with a Max for Live patch to fine-tune summing levels down to .00001 of a dB):
http://eric.frap.net/sa/ml/ML%20gain%20experiment.zip

I could not get the waveforms to cancel no matter what I tried. I had to create a Max for Live patch to allow me to adjust the gains down to a ten-thousandth of a dB, and still I couldn't get them to phase cancel. There was still sine wave present at about -40dB.

So who wants to take a guess as to what is going on here? Perhaps someone else could repeat the experiment to verify I'm not doing something wrong? Is it possible that noise and jitter introduced by recording at -20dB and boosting it to 0dB is what's preventing the waveforms from fully cancelling?

And before anyone asks, the DC offset correction increased the phase cancellation by an extra 10dB so I wasn't cheating there.

RivensBitch fucked around with this message at 09:45 on Jul 7, 2010

chippy
Aug 16, 2006

OK I DON'T GET IT
This conversation is completely blowing my mind :(

WanderingKid
Feb 27, 2005

lives here...
Isn't this just testing how noisy and non-linear preamps are at different amounts of gain? I mean, I'm just a dude with a couple of synthesizers and a guitar. I don't even have instruments sensitive enough to measure the noise of converter and preamp stages, nor do I have the electronics know-how to do it properly. The only spectrum analyser I have is SPAN and that only goes down to -72dB. I just know what engineers teach me for free in their own time, and one of the fundamental basics is that digital systems are extremely intolerant of noise and interference. Much more so than analog systems.

I mean, if you can explain how this is supposed to work and what it's attempting to prove, I'll help any which way I can, but yeah. I'm just a guy with a guitar. That's why I just linked to posts by Jon Hodgson and Dan Lavry, because evidently I'm not paraphrasing them very well.

WanderingKid
Feb 27, 2005

lives here...
I'm gonna try to sum up my position in a way that makes sense from a musician's perspective.

When you work on a nice desk with a bunch of VU meters there's a red line at 0 VU, and if you tick it it's not the end of the world because, depending on the desk, you've got maybe up to +25dB above that before your signal turns to dogshit.

In your DAW you can tick the red line and you are fine, but if you go over it you will clip, which is the point at which your signal instantly turns to dogshit. I understand why DAWs have full scale peaking meters instead of the old style VU averaging meters: in digital audio you can't use an averaging meter where there is no overload margin. It's either overloading (clipping) or it's not. If you use averaging meters to mix, your meters won't tell you when you clip.

So on a nice analog console you've got a good deal of headroom (maybe about 20dB) above 0 VU to mix with. In a DAW mixer you can have tonnes of headroom (potentially a lot more than 20dB) but it's all under 0dB full scale. I guess what people are doing now with digital mixing is making their DAW mixer a bit like a kick rear end desk: you take 20dB off the top of your peaking meters and call -20dB full scale the equivalent of 0 VU. There are obvious differences between averaging and peaking, so it's not the same and it won't get the same results, but you are worried more about clips in digital audio, so it makes sense.

The second bit is about gain structure. If you distort the ever living gently caress out of your guitar and then record it into your soundcard, it's going to sound distorted no matter how much you pull down the channel gain. That's obvious, right? If you get your mic and drive your preamp super hard so it's hissing all over the place, that's gonna show in the recording.

So if you want a super hot, distorted sound, that's fine and you know what to do. You drive your amp into its overload margin (saturation) or beyond it and you record it that way. When you mix it in Cubase or whatever, you can trim your channel inputs so you have your 20dB of headroom and you can mix it easy. That's fine.

But why would you record everything as hot as it can possibly go? The reason most people give is that you get more signal to noise ratio, or that you need to use all your 24 'bits', which is an idea that I don't buy, based on the information I'm getting from Paul Frindle and others.

WanderingKid fucked around with this message at 12:03 on Jul 7, 2010

RivensBitch
Jul 25, 2002

WanderingKid: For someone who has posted a lot of :words: on this subject, I'm really disappointed that you're now going to claim ignorance and just hide behind links to other people's opinions on the subject. I find it really surprising that you can spend so much time posting about this but couldn't find time to try the experiment yourself and see what results you could come up with.

I'm wondering if you even took the time to load the wav files from my experiment into your DAW... Did you notice anything interesting about the silent sections between the sine waveforms?

As I said earlier in the thread, where's the fun of reading someone else's conclusions, folding your arms and convincing yourself you're right without even trying it out for yourself? It seems you're content with that, and that's fine I guess, but it seems kind of disingenuous. So does the "from a musician's point of view" argument you bring up; we're all musicians here, so I don't really get your point unless you're trying to imply that this subject is some kind of rocket science that's beyond our ability to understand, and we might as well just read your links and accept what's said as gospel.

WanderingKid
Feb 27, 2005

lives here...
I'm not asking you to accept anything as gospel. Read it or don't. Take what you can away from it. Or don't. I've already said I'd listen to the wavs when I get home because I don't have speakers on my work computer. Reserving judgement on the test until I can hear it.

WanderingKid fucked around with this message at 16:56 on Jul 7, 2010

mike-
Jul 9, 2004

Phillipians 1:21
I'm no expert engineer but this seems like an interesting experiment to do, I'd like to see how it changes on different interfaces.

From RivensBitch's explanation I get the basics of what I'm supposed to do, but it would be nice if you could post a slightly more detailed set of instructions to duplicate this experiment. For instance, I don't know what DC Offset Correction even is!

I have Cubase 4 and Ableton Live 8, so if anyone wants to assist a little I'll try this too.

RivensBitch
Jul 25, 2002

mike- posted:

I'm no expert engineer but this seems like an interesting experiment to do, I'd like to see how it changes on different interfaces.

From RivensBitch's explanation I get the basics of what I'm supposed to do, but it would be nice if you could post a slightly more detailed set of instructions to duplicate this experiment. For instance, I don't know what DC Offset Correction even is!

I have Cubase 4 and Ableton Live 8, so if anyone wants to assist a little I'll try this too.

Download my Ableton set and the plugins will be loaded already; it will also have the waveforms loaded so you can see what I did.

ChristsDickWorship
Dec 7, 2004

Annihilate your demons



the wizards beard posted:

Can you explain this? These two concepts don't seem related at all (at least not intuitively).

Here's an image from the tech paper RivensBitch posted describing TC's new jitter reduction technology.



It's a recreation of a 20KHz tone from a DAC with no jitter suppression on its wordclock input. The blue one is using its internal clock, which is apparently very stable, and the pink one is it chasing a wordclock signal with 25ns peak jitter applied to it. The noisefloor is raised about 50dB. I admit I guessed at how the noise floor would increase with 10-12ns of jitter, but jitter = noise.

As far as I know, no one is a real expert on jitter; it is too hard to measure consistently. People like Bob Katz have spent a lot of time and effort trying to hear it and track it down, and with their own critical listening tests they conclude things that are often counterintuitive to the way most of us think about digital audio. For instance, he asserts here (most of the way down under "Can Compact Discs contain jitter?") that he can hear the difference between a standalone professional CD recording deck taking AES inputs and a SCSI-based CD recorder writing data from a hard drive. He also asserts that if you have a jittery CD or DAT copy of your music, you can write it to hard disk, burn it with that SCSI CDR drive and fix it, creating a dub that sounds better than the original. Julian Dunn is another guy to look up on the nuts and bolts of jitter if you're interested; I think he has a paper that attempts to quantify what amount of jitter is audible, which I see referenced a lot but have never read.

All I know is that in my practical experience with digital snakes, sending their output through thousands of watts of gain in PA systems, the noise floor difference between well-clocked and poorly clocked digital rigs is very noticeable, even on setups that cost 5 figures for 12 or 16 channels, and unfortunately figuring out the best clocking scheme to reduce it isn't always as easy as buying a $2000 master clock and feeding wordclock everywhere.
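The jitter-to-noise relationship being debated here has a standard textbook approximation: sampling a full-scale sine of frequency f with RMS clock jitter t_j caps the SNR at about -20·log10(2π·f·t_j). It assumes random jitter and a single tone, so real interfaces behave differently; treat the numbers as orientation, not measurement:

```python
import math

def jitter_limited_snr_db(freq_hz, jitter_seconds_rms):
    """Best-case SNR when sampling a full-scale sine of the given frequency
    with the given RMS clock jitter (textbook approximation for random jitter)."""
    return -20 * math.log10(2 * math.pi * freq_hz * jitter_seconds_rms)

# 10 ns of jitter caps a 20 kHz tone around 58 dB SNR;
# 100 ps (a good clock) allows roughly 98 dB.
print(round(jitter_limited_snr_db(20_000, 10e-9)))   # 58
print(round(jitter_limited_snr_db(20_000, 100e-12))) # 98
```

The formula also shows why jitter hurts high frequencies most: the SNR ceiling drops 6 dB for every doubling of either the tone frequency or the jitter.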

WanderingKid posted:

Pretty much everything on the issue that needs to be said already has been here. That forum is mostly full of headcases but that thread gets the point across fairly well and many of the people doing the talking are not wing nuts (for a change).

The only benefit to not pushing your levels closer to 0dBFS is that you can run your faders the way you used to back in the old days of analog. The only other justification for doing it is "it's probably not going to hurt you", and I don't see how you can vehemently get behind that (I also don't agree with it). Almost every engineer I know who went from analog to digital will tell you they got the hang of digital mixing when they started pushing their input gain and attenuating elsewhere if they had to. Yes, on ideal digital gear you can get away with treating it the same as analog, but I don't think it's a good idea to say no one will ever find the noisefloor of their digital gear. I have found the noisefloor of digital gear and processing many times on a lot of really expensive gear.

The problem with your argument is that if you follow your logic to its conclusion, you're saying that when you record with an SSL console, which your linked thread stated has a noisefloor 75dB below clipping, you will hear no degradation from tracking to final mix on your Emu 0404 until you get down to -36dBFS, 75dB above the published noisefloor of the interface. I can't agree that is true in practice.

Hogscraper posted:

Something I didn't mention in my blog post that I'll mention here is most of these plugins that emulate old gear like an LA2A, 1176, Pultec, etc are all calibrated to work better with a lower peak level. Check the manuals that come with each plugin to find out what their ideal level is. I think the Waves and UAD emulations are -18 dBfs.

This is because they are emulating the controls of analog gear and display dBu instead of dBFS for threshold, etc. Waves has a tool to scale it +/- 18dBFS to match the level of your recorded signal however you like. An SSL G series mixing console has a trim knob at the top of every channel to do this as well; you wouldn't have to watch the needle bang in the red and keep your faders at -12 with almost any analog piece of gear I can think of, if you chose to calibrate all your analog ins and outs to your digital gear instead of the other way around.

You've said multiple times dBFS is flawed, but you have yet to provide a scenario where maximizing dBFS doesn't also maximize your RMS and signal to noise. You asserted several pages ago and again on this page that ADCs have a nominal input gain below 0dBFS above which they react worse to signal. I haven't seen you actually support any of your assertions logically; you just quote manuals out of context. 0 dBu means literally nothing if you are mixing entirely in the box and printing a digital master. Manufacturers publish where 0dBu hits their ADCs in dBFS so that you can calibrate them to your existing analog gear, not because you should. All you are doing is telling people how to calibrate meters so they can record with the same amount of headroom they used to with analog gear, and I'm asserting that headroom is better spent driving the ADC in very many digital setups. Find a post by Bob Katz or Dan Lavry where they actually say their ADC sounds worse when they hit it harder, not where they say if you want to mix ITB with the same gain structure as an analog desk, you need to record around -18dBFS.

You realize the logical conclusion of your assertion is that by mastering a CD at the levels you usually do, you are running the DAC on everyone's playback devices way past its point of peak performance and making them sound worse, right?

RivensBitch - My analysis rig is tied up on the back of a truck until the weekend, bouncing from gig to gig. I will run this experiment this weekend (Sunday most likely, if another gig doesn't pop up) and I'll also happily see what FFT analysis in Smaart 7 can tell us exactly about the differences in your results and mine (and anybody else's).

Hogscraper
Nov 6, 2004

Audio master

wixard posted:

You realize the logical conclusion of your assertion is that by mastering a CD at the levels you usually do, you are running the DAC on everyone's playback devices way past its point of peak performance and making them sound worse, right?

Believe me, I'm aware. Do I hear audible distortion? No. Do I hear audible distortion when I track hot? No. But that doesn't mean it's not there, adding up as I stack track upon track.

I just figure that if the designer of an ADC/DAC system is kind enough to put an optimal 0 dBu point in the manual, I should follow it. I'm usually trimming the gain up or down before mixing after that point anyway, so tracking low to save an extra step isn't my argument for it.

To understand my view is to understand saturation plugins. The URS plug I have is extremely subtle and you can only just barely hear it working, but if you put it across 32 tracks and bypass them all simultaneously it's mind blowing. Little oddities definitely do add up. Some are sonically pleasant and others detrimental to the sound you're trying to achieve.
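
The stacking effect is easy to demonstrate numerically. A rough sketch, with a gentle tanh curve standing in for the saturation plug (purely illustrative, not the URS algorithm):

```python
import math

def soft_saturate(x, drive=0.05):
    """Very gentle tanh saturation, scaled for unity gain at small levels.
    (A stand-in for a subtle saturation plug, not any real product.)"""
    return math.tanh(drive * x) / drive

# One sine-wave "track" at moderate level
N = 1000
track = [0.5 * math.sin(2 * math.pi * 440 * n / 48000) for n in range(N)]

# Error introduced on one track vs. on a 32-track sum
one_err = max(abs(soft_saturate(s) - s) for s in track)
sum_clean = [32 * s for s in track]
sum_sat = [32 * soft_saturate(s) for s in track]
sum_err = max(abs(a - b) for a, b in zip(sum_sat, sum_clean))

print(f"per-track error: {one_err:.2e}, 32-track error: {sum_err:.2e}")
# The deviation on the sum is 32x the single-track deviation:
# individually inaudible nonlinearities add up across a mix.
```

Identical tracks add coherently here, so the errors sum at their fastest rate; uncorrelated real tracks accumulate more slowly, but they still accumulate.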

I try to hit the 0 dBu point because that's what the engineer who designed the piece of gear I'm using says is the cleanest/clearest point. More than that results in some distortion no matter how minutely audible. Less than that results in extraneous noise.

It's kind of a dumb argument when I can't hear the distortion on a $4,000 playback system. That, or I'm hearing it all the time so I can't A/B against the source and hear the difference anyway. I know. It's theoretical cosmic stuff I'm apparently worrying about.

Hogscraper fucked around with this message at 18:15 on Jul 8, 2010

the wizards beard
Apr 15, 2007
Reppin

4 LIFE 4 REAL

wixard posted:

Here's an image from the tech paper RivensBitch posted describing TC's new jitter reduction technology.

This is literally the most extreme example they could pick, and the image is misleading: the text mentions symmetric noise skirts around the peak, but they've cropped things to make it look like white noise. "Noise floor" to me suggests the background noise across the entire spectrum, not a specific band.

I'll take your word that it is a significant noise source, but that paper reads more like advertising than a peer-reviewed scientific article.

No. 9
Feb 8, 2005

by R. Guyovich
Sorry to interrupt, but I recently got a Desktop Konnekt 6. I've got my latency up all the way and I'm still getting skipping in my audio. The ASIO buffer is at 8192 samples, latency at 186.2 ms. I'm baffled because this didn't happen with my Fast Track Pro with lower latency settings. I've got everything up to date as far as software and firmware for the thing. This is just on basic playback on iTunes even.
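
As a sanity check on those numbers, buffer latency follows directly from buffer size and sample rate (assuming 44.1 kHz playback; drivers usually add a few samples of overhead on top):

```python
# Rough ASIO buffer latency check (one-way, buffer only; real round-trip
# latency adds converter and driver overhead on top of this).

def buffer_latency_ms(buffer_samples: int, sample_rate: int = 44100) -> float:
    """Time to fill one buffer, in milliseconds."""
    return buffer_samples / sample_rate * 1000.0

print(round(buffer_latency_ms(8192), 1))  # 185.8, close to the reported 186.2 ms
```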

Hogscraper
Nov 6, 2004

Audio master
Could be your firewire chipset causing an issue. Have you tried turning on safe mode in the TC control panel? Start with Safe Mode 3 and turn your latency down and see what you come up with.

No. 9
Feb 8, 2005

by R. Guyovich
I'm in Safe Mode 3 already, I had to switch to Legacy drivers because I kept getting BSODs -- but even before that switch I still had this.

Hogscraper
Nov 6, 2004

Audio master

No. 9 posted:

I'm in Safe Mode 3 already, I had to switch to Legacy drivers because I kept getting BSODs -- but even before that switch I still had this.
Are you running the release or beta drivers? I've had better stability with the beta drivers, which seems odd to me. Check the TC forums for links to the newest software.

I can run my 24D with about a 6 ms roundtrip latency. This is on a Mac, but I don't see why you couldn't hit 20-30 ms on Windows without any problems.

No. 9
Feb 8, 2005

by R. Guyovich
So, I feel dumb asking but I can't find the beta drivers. The sticky on the top of the forum just links the release page for 2.4.1 which I have.

No. 9 fucked around with this message at 02:57 on Jul 9, 2010

Hogscraper
Nov 6, 2004

Audio master

No. 9 posted:

So, I feel dumb asking but I can't find the beta drivers. The sticky on the top of the forum just links the release page for 2.4.1 which I have.
Hmm. I guess that's it. Try signing up for a forums account and posting your problem in there. They're usually pretty fast at responding.

ChristsDickWorship
Dec 7, 2004

Annihilate your demons



Hogscraper posted:

I try to hit the 0 dBu point because that's what the engineer who designed the piece of gear I'm using says is the cleanest/clearest point. More than that results in some distortion no matter how minutely audible. Less than that results in extraneous noise.
I really think you're mistaken here. Mixing with your outputs close to a nominal 0 dBu makes sense if you know you'll be hitting an analog piece at some point, but until you said it in this thread I had never heard that there is a point below clipping where an ADC or DAC starts falling off and distorting. Most live mixing consoles now have built-in features to digitally attenuate signal after the AD, so you can build your gain structure from very hot inputs. RME, for one, doesn't publish a 0 dBu figure for their interfaces, only 0 dBFS figures. I really don't think 0 dBu is relevant to how digital gear operates; I see the word nominal as a reference to an analog voltage.

the wizards beard posted:

This is literally the most extreme example they could pick and that image is misleading, the text mentions symmetric noise skirts around the peak but they've cropped things to make it look like white noise. "Noise floor" to me suggests the background noise across the entire spectrum, not a specific band.
Yeah, I assume they chose 20 kHz because it had a more extreme effect than 1 kHz would, but it's an illustration of the principle at work. Notice I didn't guess that half that jitter would result in half that charted noise floor; it was just the last image I recalled that actually showed an audio recreation of jitter's effects. By nature jitter will mostly affect the frequencies being recreated (although if you read a lot on the subject there is talk of jitter that creates phantom tones in the audio), but you can imagine that with something like a vocal track, which has frequency content all up and down the spectrum, the skirted noise is not centered around 20 kHz like that test tone; it is much more broadband.
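
The frequency dependence is easy to show in a quick simulation: the error from clock jitter scales with the signal's slew rate, so the same jitter puts roughly 20 times the noise on a 20 kHz tone as on a 1 kHz tone. A sketch (illustrative figures, not modeled on any real converter):

```python
import math, random

def jitter_error_rms(freq_hz, jitter_s=1e-9, sr=96000, n=20000, seed=1):
    """RMS error from sampling a unit sine with Gaussian clock jitter.
    The error is roughly slew rate x jitter, so it grows with frequency."""
    rng = random.Random(seed)
    total = 0.0
    for i in range(n):
        t = i / sr
        ideal = math.sin(2 * math.pi * freq_hz * t)
        jittered = math.sin(2 * math.pi * freq_hz * (t + rng.gauss(0, jitter_s)))
        total += (jittered - ideal) ** 2
    return math.sqrt(total / n)

low, high = jitter_error_rms(1000), jitter_error_rms(20000)
print(f"1 kHz: {low:.2e}  20 kHz: {high:.2e}  ratio: {high / low:.1f}")
# Same 1 ns clock jitter, roughly 20x the noise at 20 kHz, which is why a
# 20 kHz test tone makes the most dramatic illustration.
```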

WanderingKid
Feb 27, 2005

lives here...
I had to go bag the Wavelab 5 demo, and since it was a quiet day at work I toyed around with it at the office. No speakers again, but all I'm seeing is a noise floor that scales with peak signal. You have about 12 bits of useful signal and the rest is noise on both mono wavs. The summed wav is stereo for some reason, which I don't know what to make of yet. I suspect the summed file is botched, because the residual impulse gets higher in amplitude the further into the recording you go, which may indicate that you didn't line the two mono wavs up to the exact sample.

I still don't understand what this test is meant to prove. That if you add +17.5dB of gain you get (surprise) +17.5dB of gain?

RivensBitch
Jul 25, 2002

The waveforms were synced down to the sample. The summed wav is stereo because it's taken from the master bus.

What does this experiment prove? Well, it certainly DOESN'T prove that there is no difference in signal when recording hot vs. -20 dB from hot. How would you construct it differently to prove that there is no advantage to recording as hot as possible?
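
One way to construct it: gain-match the quiet take against the hot one and subtract sample by sample, so the residual shows only real differences (noise and distortion) rather than level. A sketch with synthetic "takes" standing in for real recordings:

```python
import math, random

def null_test(hot, quiet, gain_db):
    """Boost the quiet take by gain_db, subtract from the hot take,
    and return the residual power in dB (relative to full scale)."""
    g = 10 ** (gain_db / 20)
    n = min(len(hot), len(quiet))
    resid = sum((hot[i] - g * quiet[i]) ** 2 for i in range(n)) / n
    return 10 * math.log10(resid) if resid else float("-inf")

# Synthetic takes: the same tone, one 20 dB lower, each with its own noise floor
rng = random.Random(0)
sig = [math.sin(2 * math.pi * 440 * i / 48000) for i in range(48000)]
hot = [s + rng.gauss(0, 1e-4) for s in sig]
quiet = [0.1 * s + rng.gauss(0, 1e-4) for s in sig]

print(f"residual: {null_test(hot, quiet, 20.0):.1f} dB")
# The tones cancel; what's left is the quiet take's noise floor boosted by
# 20 dB plus the hot take's noise floor, i.e. the actual cost of tracking low.
```

The same subtraction on real, sample-aligned captures would show whether tracking 20 dB lower costs anything beyond the expected noise difference.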


WanderingKid
Feb 27, 2005

lives here...
But there is no advantage in recording as hot as possible, so that's a loaded question. The idea of recording super hot comes from the 16-bit days and is a defunct habit that some people still follow even though there is no longer any logical basis for it. Explanations are given by Paul Frindle and Skip Burrows here and here. I've quoted the most relevant passages below.

Skip Burrows posted:

fossaree posted:

I've got a question :

When people did come across the idea that you should get the highest level before its distortion when tracking to DAW ?

is it a total misunderstood or does it have some background of truth ?

Or it's ok and perfectly acceptable ?


No ofenses here , I'm just trying to get the " DAW's tracking idiosincracies " made clear
for everyone ;-) .

I'll try and tackle this before Paul explains it way better than I can. In the beginning, early digital recorders ran at 16 bits, or on some machines 14-bit companded. Let's stick with 16 bit, as the latter would take more energy than I have. 16 bit gives us a theoretical recorded dynamic range of 96 dB. That is 6 dB per bit, so 16 x 6 is 96. Now remember that that is 96 dB down from 0 dB full scale, or 0 dBFS.

Now remember that if we record at -20 like we do now, that gives 76 dB of dynamic range. Also remember that the lowest bit, the LSB (least significant bit), is not noise but an error that turns the audio into, again, not noise but digital garbage. So that LSB is 6 dB worth of error, and from our 76 dB of dynamic range we have to take away about 6 dB (assuming there is no dither present). That leaves us only 70 dB of usable dynamic range. Heck, a Behringer mixer will do better than that! Something to remember: with every bit we add, we double our resolution. The number of quantization steps we have in 16-bit recordings is 65536, that is, 2^16. If we go to 17 bits we get 131072 quantization steps, and so forth and so on. The difference between 16-bit and 24-bit recording is 256 times the resolution, and we go from 96 dB of dynamic range to 144 dB of dynamic range! So now that I have gone off on the third rail, why did we record so hot? Do this little experiment.

In Pro Tools, set up a vocal so it peaks at -20 like we talked about. Create a send to a bus and have that bus go to an aux track with a nice reverb on it. Make the send pre-fader. Bring up the send to just where you hear a bit of reverb. Not swimming, just a touch of ambience. Now mute the vocal so you're left with only reverb. Bounce what you are hearing, verb only, to a 16-bit version and to a 24-bit version. Create a new session and import those bounces into it. Listen to what's happening to your reverb tails in 16 bit: they turn to total crap. The 24-bit version still sounds perfect. That's the reason people still think they should slam the levels; it's from the 16-bit days, where if you didn't, things got bad real quick. OK, I'm sure I have bored you enough. Thanks for reading.
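
Skip's reverb-tail experiment can be approximated in a few lines: quantize a quiet signal (a tail sitting 60 dB below full scale) at 16 and 24 bits and compare the signal-to-error ratios. A sketch, undithered for simplicity:

```python
import math

def quantize(x, bits):
    """Round to a signed fixed-point grid with 2**(bits-1) steps (no dither)."""
    steps = 2 ** (bits - 1)
    return round(x * steps) / steps

def snr_db(signal, bits):
    """Ratio of signal power to quantization-error power, in dB."""
    err = sum((s - quantize(s, bits)) ** 2 for s in signal)
    sig = sum(s ** 2 for s in signal)
    return 10 * math.log10(sig / err)

# A quiet "reverb tail": a 300 Hz tone peaking 60 dB below full scale
sr = 48000
tail = [0.001 * math.sin(2 * math.pi * 300 * i / sr) for i in range(sr)]

print(f"16-bit SNR: {snr_db(tail, 16):.0f} dB   24-bit SNR: {snr_db(tail, 24):.0f} dB")
# At 16 bits the quiet tail keeps only a few dozen dB above the error floor
# (audibly grainy, as Skip describes); 24 bits buys it roughly 48 dB more.
```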

Paul Frindle posted:

Ok one final word about converters...

Whilst it seems logical to work converters up to their maximum dynamic range by pushing levels to near clipping, in fact for many designs (especially cheaper ones) the actual signal performance drops off considerably at high levels. There are several reasons for this, including partially compromised analogue input stages, behaviours of the ADC converter IC itself, and in some cases even deliberate built-in high-level non-linearities meant to ameliorate over-driving.

How much of an issue all these factors are will depend entirely on your gear.

Let's forget the rather strange idea that people seem to want to represent dynamic range as 'data bits'.

Given that the dynamic range of most converters these days is around 105 - 110dB, and that in most recording situations the mic and mic amp will only produce around 90 - 100dB effective dynamic range at the very best, I personally would rather avoid the top 6 - 10dB of ADC modulation level in the interests of avoiding these issues - and suffer absolutely NO reduction in real SNR in the recorded file :-)

In almost every case, losing 10dB of ADC modulation will make absolutely no difference to the dynamic range of the music recorded :-) There will be no 'loss of bits' and no increased quantisation distortion whatsoever (obviously, because the converter's noise floor sits some 30dB above that of the 24 bit data channel).

Whether one wants to argue about the dogma of 'bits of data', or understand what is really happening, is up to you...

You decide which is actually dogma :-)

Sorry, forgot to add: if you do operate this way whilst recording in the first place (if it's your recording and you have the luxury), you will automatically produce 6 - 10dB of overload margin in your mixer, and you won't need to insert a trim at the head of your channels.
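
Paul's dynamic-range argument is plain arithmetic: when the source chain is noisier than the converter, total recorded SNR barely moves if you back the level off. A sketch using the round figures from his post (a ~95 dB source chain into a ~110 dB converter; both are assumptions, not measurements):

```python
import math

def db_to_pow(db):
    return 10 ** (db / 10)

def recorded_snr(peak_dbfs, source_snr_db=95.0, adc_dr_db=110.0):
    """SNR of the recorded file: a source with its own noise floor,
    captured through an ADC with a fixed noise floor.
    (Round example figures per the post, not any specific hardware.)"""
    signal = db_to_pow(peak_dbfs)                     # signal power re full scale
    source_noise = signal / db_to_pow(source_snr_db)  # follows the signal level
    adc_noise = db_to_pow(-adc_dr_db)                 # fixed re full scale
    return 10 * math.log10(signal / (source_noise + adc_noise))

for peak in (0, -6, -10):
    print(f"peaks at {peak:>3} dBFS -> recorded SNR {recorded_snr(peak):.1f} dB")
# Dropping peaks from 0 to -10 dBFS costs about 1 dB of real SNR here,
# because the mic chain's own noise, not the converter's, dominates.
```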

There's a little bit more with Bob Katz chiming in here. I admit reading through the entire thread is a bit much, so I'll just highlight the most relevant posts addressing this issue.

But back to your example. I can't get them to null, but they are very slightly different lengths: the first impulse is 1 sample longer than the other and I have no idea why. Getting these files to null is completely tangential to what we are talking about in either case.

WanderingKid fucked around with this message at 20:29 on Jul 9, 2010
